Speaker 1: Hello, and welcome to the webinar. We're pleased to talk today about how to deliver better video quality, starting from the source. I am Carlos Hernandez, Chief Revenue Officer at SimWave, and I'm very pleased to be here today. The main thing is this: remember a film, a movie, where you really felt the emotion they wanted you to feel. What you remember could be any movie, or it could be a sports event. So picture this: you're at a football game, or whatever sport you love, and your team is winning, or losing, and it's the last minute, and all the emotions are there. Let's assume you're at home, watching the game. This moment comes, you see the expressions of the players, you are completely immersed in the content and in the game, and nothing else matters. And then this happens. And this is obviously really bad. It's bad for the viewer, because you would be dissatisfied and very annoyed. It's bad for whoever is distributing the content, the aggregator, the distributor, the MVPD, because you're probably going to call and complain, or you're going to be unhappy and post about it on social media. Whoever is advertising here is affected as well, and so is the content creator. So this is what we want to avoid, and this is what it really means to ensure that the quality of video you are delivering is the best possible. At the end of the day, the ultimate judge of quality, the one who tells you whether the video quality is good or not, is the viewer: the person who watches the show, the person who pays the bill. So what we all want is to make sure the viewer is happy.
And this is at the center of everything we all do: making sure the viewer has the best possible experience. And this is how SimWave can help. We can really bring home the tools for how you can tell, and how you can manage your workflow and your value chain, to distribute the best possible video quality. So let me share something with you. We have a very interesting video that shows how we measure video quality. Here you see a race car, and hopefully you are seeing the banding and the other issues that can happen on screen, along with the score at the bottom left and the line that shows how we detect those impairments. This basically tells you what people are seeing. You can see a very evident macro-blocking event here. And this is key: we help you detect all these issues before they get to the customer, and we also help you deliver the best possible quality at the lowest possible cost. So let's get back to our slides and see how we do this. The key is that we developed an algorithm that models how the human visual system behaves, and we model that in software. We included the biology piece, how your eyes see the content, and also the neuroscience piece, which is how your brain processes that information, how you remember it, and how much it bothers you when you see something that doesn't match the content you're expecting to see. This is obviously super complex, and the goal is to put all these complexities into one very simple score, so it can be actionable, so you can understand what's going on and make decisions on how to improve the service in the long run, but also make real-time decisions on how to solve problems, improve the service, and understand your customers' behavior. And what is very important is that this score must have a very, very high correlation with what a human would say.
So basically, our score has the highest correlation in the market, and what that means is that it represents what hundreds of thousands of humans would say about that quality. What is very important here is that we measure pixels and frames. We measure the content and how you would see the content. So it's very important not to be limited to measuring network conditions as a proxy for quality, but to actually measure how you perceive that video. You can include some of those network conditions to feed the models, and basically tell whoever is delivering that service how they're doing, what the customer is experiencing, and help them manage their workflow better. What is critical as well is that you can stop guessing at what good looks like. And this applies to any type of content, any device, any resolution, any frame rate: for instance, interlaced content that you need to stream and switch to progressive, dynamic ranges, encoder standards, bit rates, and both live and VOD content. So the score can match all these qualities, and it also has a very unique differentiation. Like most scores in the market, we can measure reference-based, where you score the difference between the video before and after processing. But most importantly, we can score the quality of a video without a reference. We call it no-reference. Basically, we can take a video that you receive from a source outside your organization and help you understand the quality of that source. That is very important because it helps you make well-informed decisions from the very beginning. This is why we've won the Emmy Award, and we have multiple recognitions that help us provide this service to service providers and help them make better decisions.
When you think about how the industry processes content today, you can see that when you have, as in this table, 20 content types, five resolutions, two frame rates, two dynamic ranges, a number of bit rates, and encoder standards, all these combinations add up to two and a half million. Theoretically, you should test all of these combinations to make sure the quality you deliver is good for all of them. But obviously, until very recently, this was not possible, because there was no way to measure how humans would see the content in a way that scales. You could say this slide is pretty busy, it has a lot of text, and that's probably what your life would look like if you had to do all this work. So what pretty much always ends up happening is that you run a real visual test on a very small subsample of what you would really need to measure to guarantee quality. So out of those two and a half million assets, you measure 120. The problem is that it is as easy to get it right as it is to get it wrong. So basically, you don't know what quality you may be delivering. This is where we can really help, and where the scores play a very important role. The score basically tells you, on a scale from zero to 100, what the quality is, and it helps you from the very beginning to prevent bad videos from spoiling your workflow, and to identify them when you receive them. Remember, we have the only no-reference score in the market. So we can recognize the artistic intent of the content in circumstances that would be hard for a traditional algorithm to differentiate: whether a different color or a different texture was intended by the content creator, or is just the result of compression or some other process.
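The combinatorial explosion described above can be sketched with a quick calculation. The transcript gives counts for content types, resolutions, frame rates, and dynamic ranges, but not for bit rates or encoder standards, so those two counts below are placeholder assumptions chosen only to illustrate how the product grows (the slide's own counts reach roughly 2.5 million):

```python
import math

# Dimensions named in the webinar slide; bit-rate and encoder counts
# are hypothetical placeholders, not values from the talk.
dimensions = {
    "content_types": 20,
    "resolutions": 5,
    "frame_rates": 2,
    "dynamic_ranges": 2,
    "bit_rates": 25,         # assumption
    "encoder_standards": 5,  # assumption
}

total = math.prod(dimensions.values())
print(f"{total:,} combinations to test")  # 50,000 even with modest counts

# A manual visual review covers only a tiny fraction of that:
reviewed = 120
print(f"coverage: {reviewed / total:.4%}")
```

Even with deliberately modest placeholder counts, manual review covers well under one percent of the space, which is the "as easy to get it right as to get it wrong" problem the talk describes.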
Then you can decide, based on a predetermined score that you select, what for you is a fail that needs to be sent back to the content provider or to the previous process, and what is a pass, meaning it complies with the quality you want. This way, you can identify the score you want to deliver to your customers. And remember, that score works across content, resolutions, and bit rates. So you can say, for instance, "I want to deliver 82 to all my customers." With that, you know that all your customers are going to be satisfied and you are not going to have any complaints or customer satisfaction issues. From there, you can add this level of intelligence to your workflow. This involves detecting the most difficult impairments or issues, and then taking the next step, which is avoiding them. That includes macro-blocking, banding, video freezes, audio-video sync, and distributed latency. So we cover the whole value chain, and even across locations that are not on the same site, we can detect audio-video sync and distributed latency. In this scenario, we can help you understand if you've got a source that scores 34, which means poor quality, and everything from there is going to go bad. You cannot make miracles: if you get a bad source, the result is going to be bad. If instead you get a good source, say an 89, and then while you process it the score goes down a couple of points, or several, and when you deliver the content it gets to 28, then you know where the problem is. Now you have the intelligence to know where the issue happens, why it happened, and how to correct it, and in several cases to resolve it before it gets to the viewer.
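The pass/fail decision described above can be sketched as a simple threshold gate. The 0-to-100 scale and the example scores come from the talk; the function name and the choice of 82 as the operator's target are illustrative assumptions:

```python
PASS_THRESHOLD = 82  # hypothetical target score chosen by the operator

def gate(score: float) -> str:
    """Return 'pass' if the asset meets the target viewer score,
    otherwise 'fail' so it can be sent back upstream."""
    return "pass" if score >= PASS_THRESHOLD else "fail"

# Scores mentioned in the webinar: a good source, a poor source,
# and a degraded delivery.
for name, score in [("good_source", 89), ("poor_source", 34), ("delivery", 28)]:
    print(name, score, gate(score))
```

The point of the single score is exactly this: one number, one comparison, and the decision can be made the same way for any content type, resolution, or bit rate.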
And you could automate the decision-making process: if an input scores 74, and 74 is good for you, it passes and proceeds to the rest of the workflow. But if at any point you get a 40 or a 58, you can determine that the asset doesn't comply with the quality you expect, and therefore it won't be further processed. We can help you determine what action you want to take, so the process can be completely automated. Now, what is also very interesting is that since we have both the no-reference and the full-reference algorithms, we can help you manage the quality of the content from the point where you get the source to any of your distribution paths. If you are a content creator, you can manage a distributed network of intelligence to track the quality that reaches your customers, your viewers. Or if you are a distributor or an aggregator, you can first measure the source and the quality you got, decide whether you want to distribute it or not, and then track what quality you are ultimately delivering. In this case, you would measure, for instance, that after you got a 91, you process the content and these are the quality levels you get. You can measure what you deliver on a set-top box or a streaming player, if that's applicable. And if you also have a direct-to-consumer path, an OTT service, you can compare those as well. So you can know whether your OTT service has better quality than your IPTV, linear, or any other of your services. Also, if you want to migrate from on-premise to cloud, we can help you compare the quality you are delivering on both, whether you are living up to your customers' expectations, and what the impact is of any of those transitions, as well as of the decisions you will be taking on behalf of the customer.
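The source-to-delivery tracking described above (a source that measures 91 but arrives at the viewer as a 28) amounts to scoring at each hand-off and finding where the quality drops. A minimal sketch, with made-up stage names and intermediate scores:

```python
# Score measured at each point in the chain (illustrative values;
# only the 91 source and 28 delivery figures come from the talk).
stages = [
    ("source", 91),
    ("transcode", 88),
    ("packaging", 86),
    ("delivery", 28),
]

# Compute the score drop at each hand-off and find the worst one.
drops = [
    (stages[i][0], stages[i + 1][0], stages[i][1] - stages[i + 1][1])
    for i in range(len(stages) - 1)
]
worst = max(drops, key=lambda d: d[2])
print(f"largest drop: {worst[0]} -> {worst[1]} ({worst[2]} points)")
# largest drop: packaging -> delivery (58 points)
```

This is the "intelligence to know where the issue happens" the talk refers to: with a comparable score at every stage, localizing the failure becomes a lookup rather than guesswork.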
So in this way, you can make well-informed decisions, knowing what source you get, what impact each of these processes has on your assets, and how they compare to one another. You can make decisions on any of these processes. You can select encoders or any of these functions. You can measure the quality of two distribution paths, again, legacy versus streaming, or on-premise versus cloud, and with this you can make decisions on how to configure your workflows. But also, in real time, you can track these scores, know when something happens, and resolve it immediately. So you can make real-time decisions, you can make planning decisions, and then you can also move to allocating resources on demand. And while it's not necessarily in the scope of this webinar, once you know everything you are seeing on the screen, you can move to the next level, which is using this score to find the bit rate and the cost you will have when distributing the content. In some cases, you may be distributing lower quality than you want, so you can make decisions to improve that content. But we can also help you reduce the cost of distribution for a given quality. Once you know the quality that is going to reach your customers, it is a matter of optimizing the resources for how you distribute it. And that can provide a significant amount of savings and better quality for your customers. When you deliver a given quality at a lower bit rate, you also start getting control of what you didn't have control over before: you have smaller files that are less likely to generate congestion, re-buffering, and distribution issues on the delivery side. And therefore, you are going to have an impact on the quality you are distributing to your customers. So at the end of the day, everybody wins.
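The cost-optimization step alluded to above, delivering a given quality at the lowest possible bit rate, can be sketched as picking the cheapest rung of a bit-rate ladder that still meets the target score. The ladder values and the per-rung scores below are made up for illustration; in practice each rung's score would come from measuring the encoded output:

```python
TARGET_SCORE = 82  # target viewer score from the talk's example

# Hypothetical (bit_rate_kbps, measured_viewer_score) ladder.
ladder = [(1500, 68), (2500, 76), (3500, 83), (5000, 87), (8000, 90)]

# Keep only rungs that meet the target, then take the lowest bit rate.
candidates = [(rate, score) for rate, score in ladder if score >= TARGET_SCORE]
best_rate, best_score = min(candidates)  # tuples sort by bit rate first
print(f"deliver at {best_rate} kbps (score {best_score})")
# deliver at 3500 kbps (score 83)
```

In this illustrative ladder, the 5000 and 8000 kbps rungs buy a few extra points the viewer was already satisfied without, which is exactly the bandwidth (and cost) the talk says can be reclaimed.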
When you think about that football game that went bad, now you can really remove the difficult trade-off that existed before, where if distributors wanted to deliver better quality, they had to spend more money. Now you can deliver the quality you want at the lowest cost. But again, everything starts from the source, and from the ability to keep track of quality across your workflow and your distribution. And with this, basically, we'll close and move to the questions. We hope you get in touch, and I would like to take any questions the audience may have. So thank you very much. We will take the questions now; let's check what's here in the chat.