Speaker 1: First up is Jeff Malkin from Encoding.com, and we go way back. We've been talking to each other ever since Encoding.com came on the scene, I believe, which was when again?
Speaker 2: 13 years ago.
Speaker 1: 13 years ago. Wow. And it's been amazing to watch how the company has grown and changed. Thank you, likewise. Last time we spoke, you were in Hawaii. I don't think you're still there.
Speaker 2: No, unfortunately, that was just a brief moment, not long enough.
Speaker 1: But you're in San Francisco, right?
Speaker 2: I am. We are in San Francisco. And the company has evolved, so we have offices in multiple locations, but we've been headquartered in San Francisco for the last decade.
Speaker 1: Right, right. And today you're going to be talking about optimizing complex media processing workflows in the cloud, more than just encoding. So with that, I will pass it over to you. I'll keep my eye out for any questions from the audience. Thank you. Take it away.
Speaker 2: All right. Thank you very much. I'm going to share my screen here. Let's get this started. Looks good. All right. Well, hello everybody. My name is Jeff Malkin from encoding.com, and thank you for giving me this opportunity to chat with you today. My role here at encoding.com is to manage all things revenue, and as such I am on the front lines working with significant customers, including a bunch of large media and entertainment companies. And while I know that this innovation hour is more of a tech talk, I'm not a video engineer, so my perspective will be more layman, which I think can be helpful. You know, there's no doubt that video processing workflows have already moved, or are planning to move, from on-premises infrastructure to the cloud. We have a quote on our conference booth that says, "We'll never invest another dollar in encoding infrastructure," says the CTO of every major media company. That said, it ain't easy. Powering mission-critical broadcast, media supply chain, and direct-to-consumer premium video workflows requires more than just sophisticated transcoding and packaging capabilities. And it was really only recently that broadcast video pipelines, historically managed on on-premises infrastructure, began migrating to the cloud as well. So with today's talk, I will share more about the complexities and challenges of successfully standing up and operating these critical video workflows. You know, when processing high-volume, complex video workflows, we at encoding.com tend to think of these pipelines in three buckets: there's direct-to-consumer, where you're working with AVOD and SVOD services and so forth; there's media supply chain; and then there's broadcast. I'm going to bucket media supply chain and broadcast together today, but the workflows, regardless, all require an ingest component, some type of validation, some transcoding and packaging along with many other processing requirements, and then, of course, delivery. And while there are some obvious overlaps in the capabilities between a direct-to-consumer pipeline and a broadcast pipeline, the devil's in the details.
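[For illustration: a minimal Python sketch of the common stages Jeff describes (ingest, validation, transcode/package, delivery) as a file-based pipeline. The stage names, Asset fields, and values are assumptions added for this write-up, not encoding.com's actual data model or API.]

```python
# Illustrative sketch only: a file-based VOD pipeline modeled as the four
# stages described above (ingest -> validation -> transcode/package -> delivery).
from dataclasses import dataclass, field


@dataclass
class Asset:
    source_url: str          # e.g. s3:// location or on-prem watch-folder path
    mezzanine_format: str    # ProRes, JPEG 2000, MXF, ...
    outputs: list = field(default_factory=list)
    errors: list = field(default_factory=list)


def ingest(asset: Asset) -> Asset:
    # Pull the high-bitrate source into working storage.
    print(f"ingesting {asset.source_url}")
    return asset


def validate(asset: Asset) -> Asset:
    # Basic checks: the container opens, duration and track layout match the order.
    if not asset.source_url:
        asset.errors.append("missing source")
    return asset


def transcode_and_package(asset: Asset) -> Asset:
    # Produce ABR renditions plus captions, audio layouts, DRM, etc.
    asset.outputs.append("hls/master.m3u8")
    return asset


def deliver(asset: Asset) -> Asset:
    # Push outputs to the destination (origin, MAM, affiliate endpoint, ...).
    print(f"delivering {asset.outputs}")
    return asset


def run_pipeline(asset: Asset) -> Asset:
    for stage in (ingest, validate, transcode_and_package, deliver):
        asset = stage(asset)
        if asset.errors:
            break  # stop on a validation or QC failure
    return asset
```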
Speaker 2: You know, for all of these pipelines and workflows (and these are premium VOD workflows) we're typically going to ingest high-bitrate, high-resolution video formats: it could be ProRes, it could be JPEG 2000 or MXF. The broadcast ingest formats are also going to include specific camera formats, DNxHD and XDCAM, et cetera. But from this point going forward, things can be quite different. For the direct-to-consumer pipelines, we're typically going to include requirements for supporting multi-language audio and multi-channel audio preparation; caption and subtitle conversions, where you'll have to extract captions and transform them, either muxing them back in or creating sidecar files to make sure you support your target platforms; segmenting content for adaptive bitrate delivery for HLS, for DASH, for CMAF; ingesting and inserting triggers and stream conditioning for dynamic ad insertion requirements; converting existing Nielsen audio watermarks to ID3 tags; and then running videos through a rigorous QC cycle that not only validates many audio and video checks but also checks that the manifests for your ABR formats are properly formed. And at the end, encrypting the output content in various DRM frameworks and registering those keys with different key management servers. So that's a mouthful; there are a lot of components that make up that DTC premium video workflow. Now, broadcast, on the other hand, can actually be even more complex. In addition to many of the components I just described, broadcast and media supply chains often start with a conformance and assembly step, where we will validate that the assets we're receiving are exactly what they're supposed to be, along with some automated assembly steps to ensure that we can remove tops and tails, or detect and remove black frames, and things of that nature. Beyond that, in the U.S. we're typically going to need to insert an original Nielsen watermark. The outputs for broadcast tend to be different, but they also need to run through an even more rigorous QC cycle, checking in the U.S. for CableLabs compliance and, more globally, making sure that output videos adhere to strict set-top box hardware requirements. So as you can see, these workflows are much more than simple transcoding and packaging.
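[For illustration: a hedged sketch of the ABR segmentation step mentioned above, using a plain ffmpeg command driven from Python to produce HLS renditions. The bitrates, paths, and ladder are assumptions; a production packager would also handle captions, multi-language audio, SCTE-35 conditioning, DRM, and the other requirements Jeff lists.]

```python
# Illustrative only: produce one HLS rendition per ladder rung with ffmpeg.
import subprocess


def package_hls_rendition(src: str, out_dir: str, height: int, v_kbps: int) -> None:
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-vf", f"scale=-2:{height}",          # keep aspect ratio, even width
        "-c:v", "libx264", "-b:v", f"{v_kbps}k",
        "-c:a", "aac", "-b:a", "128k",
        "-hls_time", "6",                     # ~6-second segments
        "-hls_playlist_type", "vod",
        "-hls_segment_filename", f"{out_dir}/{height}p_%05d.ts",
        f"{out_dir}/{height}p.m3u8",
    ]
    subprocess.run(cmd, check=True)


# Example ladder; a real spec would come from the customer's job request.
for h, kbps in [(1080, 6000), (720, 3500), (480, 1500)]:
    package_hls_rendition("mezzanine.mxf", "hls_out", h, kbps)
```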
Speaker 2: To support all of these requirements and overcome the challenges of managing this in the cloud, we built a sophisticated job orchestration platform that drives over 50 engines under the hood, which power a growing suite of microservices. And all of these features are accessible via an API. When Eric and I first met, as you brought up earlier, over a decade ago, we may have had five engines, right? So we continue to add more and more engines as the requirements change. By the way, what's on the screen here is what we call our periodic table of engines. We employ a combination of open source, commercial, and proprietary engines under our hood so that we can always use the best-of-breed technology to meet the specific workflow requirement. To do so, we also believe it's important to be engine agnostic: whatever engine works best for that specific workflow requirement. In the end, most of our customers don't care how the sausage is made; they just want it to be delicious and made as cost-effectively as possible. Because requirements change so often, implementing an agile development process is also critical. We've been doing weekly production releases at encoding.com for many, many years now. And when customers come to us with new requirements, which is very often, we're going to survey all of our existing engines to see if there are features we haven't yet exposed. If so, great, we'll expose them and make them available in the API. If not, we're going to find another engine and add it to the job orchestration platform, whether it's open source, commercial, or something that we build in house. And by integrating with this API, the net result for customers is that we can future-proof their workflows: they only need to modify the JSON or XML based job requests they're sending us to take advantage of the latest and greatest capabilities.
Speaker 2: Even with this flexible engine architecture and job orchestration platform, there are challenges. Let's start with massive files. Working with bitrate-intensive formats like Dolby Vision, HDR10, HLG, and 4K formats, and various camera formats like JPEG 2000, ProRes, and others, the source video can average up to 250 gigabytes in size. And this makes it very difficult to move from one location to another. Either source videos are located on-prem and we need to bring them into the cloud, or they're already stored in the cloud, in which case you need to look at what the cost requirements are for storing that heavy content in the cloud. As I mentioned earlier, changing workflow requirements are something we experience on a daily basis, and I could probably speak for 30 minutes on just recent feature requests we've had from our media and entertainment customers. It could be anything: supporting new video and audio codecs, more languages to support, unique subtitle formats, changes to QC test plans, new dynamic assembly requirements, et cetera. So this can be challenging. And speed. Speed is challenging and critical, right? Processing thousands of dense assets per day while adhering to rigid SLA job turnaround requirements is challenging, and we know that speed to market is a critical factor for monetizing VOD content. You could think of speed as the time it takes one file to be ingested, queued, processed, and delivered, or you can think of speed as how many files you can ingest and process in parallel. Working on the speed of our platform has been, from day one, and still is today, a critical priority for us, because we feel it's a competitive advantage. We now employ a number of technologies to accelerate an entire video pipeline from ingest through queue time all the way through delivery. Working with premium video assets takes on a whole new level of security requirements, especially when working for large media and entertainment companies. We've taken multiple steps to ensure the highest levels of security throughout a VOD workflow. As examples: job API calls and notifications being sent over 256-bit SSL encryption, ingestion support for assets that are already encrypted, media processing and temporary storage for assets never leaving a particular data center, content encryption with DRM frameworks, et cetera. These are all things that are critical in powering these workflows. And supporting this vast set of evolving requirements means utilizing and maintaining many different tools and engines, and sometimes integrating with third-party service providers as well. So how you manage your third-party dependencies can be critical. As an example, Apple ProRes is a critical format for the pipelines we support, and we worked with Apple to develop a solution using Apple's 64-bit Linux library in the cloud to encode and decode all ProRes flavors. We were working with Nielsen years ago when we implemented Nielsen in our capability set, with a rigorous certification process just like Apple's. And at the time we started working with Nielsen, they still had a dongle requirement, so not really cloud-ready, right? So keep in mind that tools and engines you might be using in your on-premises infrastructure may not yet be available in the cloud, and you may need to find an alternative solution.
And the third-party dependencies also add complexity when you're operating. For example, we had a recent issue with a customer where a particular component in their workflow, which I won't mention, stopped working because all of a sudden there was a license server issue. So over the last few weeks, we've had to build redundancy to ensure that doesn't happen again. This is just something to keep in mind when relying on third-party dependencies. And then there's scale, right? Mission-critical pipelines need to be able to support ingesting 50 or 500 or 5,000 assets in parallel. To date, we've processed over a billion assets and have certainly taken it on the chin during those phases of our company lifecycle when we were scaling up.
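[For illustration: a generic sketch of the redundancy idea mentioned above, retrying a third-party component (for example, one that depends on a license server) and falling back to a secondary instance or alternate engine. The function names are placeholders, not a real vendor SDK.]

```python
# Generic retry-then-fallback wrapper for a flaky third-party dependency.
import time


def run_with_fallback(primary, fallback, attempts: int = 3, delay_s: float = 5.0):
    for i in range(attempts):
        try:
            return primary()
        except RuntimeError as err:          # e.g. "license server unreachable"
            print(f"primary failed (attempt {i + 1}): {err}")
            time.sleep(delay_s)
    print("primary exhausted, switching to fallback engine")
    return fallback()

# usage: run_with_fallback(lambda: licensed_engine(job), lambda: backup_engine(job))
```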
Speaker 2: I've shared a bunch of information on what's required to power and support mission-critical video workflows in the cloud, but I also wanted to share some thoughts on why I believe encoding.com has had some success over the past decade, as this, too, might be relevant for you, whether you're potential customers or entrepreneurs in our space or even competitors. As you can probably, and hopefully, grok from this presentation, powering premium VOD workflows in the cloud for D2C or broadcast distribution is really complicated. And as a business, we decided that the only way we could be truly successful was to focus on file-based workflows in the cloud. So we're not providing services for live linear, we're not offering a player or streaming analytics, and we're not a CMS provider, because we feel it's complicated enough to try to be the best in the world at powering file-based workflows in the cloud. For our largest customers, we generally find that our platform, which already has a huge suite of capabilities, may only support about 97% of their requirements at that time. So it's been critical for us to have a process and a model that supports quickly adding new features to cover that missing 3%, and we think of product development and adding new capabilities in days and weeks, not months and years. If you were to ask our customers what they find compelling about us, and hopefully they find something compelling about us, how quickly we support new requirements is, I believe, going to be very high on their list. And finally, it's just experience. You don't know what you don't know, but with VOD processing in the cloud, we might know what you don't know. With over a billion videos processed, as I mentioned, and I think over a trillion API requests ingested last year, we've now powered many, many thousands of different workflows in the cloud. We've seen many different kinds of requirements and come across many challenges and issues over the years that we could never have forecasted and had to overcome. So I think this is what's been critical for us. And I really want to thank you for your time. I know you've been sitting through probably lots of presentations over the last few days, and I hope you were able to take something away from today's presentation that's helpful. Please feel free to reach out to me with any questions: jeff at encoding.com or sales at encoding.com. Thanks again.
Speaker 1: Thank you very much, Jeff. And just in case anybody's wondering, you still do plenty of straight encoding and transcoding, right?
Speaker 2: Yes, we do. You know, you can get lost in all these complex pipelines, but transforming one format to another is still the heart and soul of what we do. Yes.
Speaker 1: Right, exactly. And we did have one attendee ask if it was possible to get a copy of your presentation. They said they kind of want to spend a little more time with it. I don't know if that's something you can share or not.
Speaker 2: Yeah, we're happy to, of course.
Speaker 1: Okay. So yeah, jeff at encoding.com. AV Trainer, who should win a medal or something for being at almost every one of our sessions, is the one wondering about that. So yeah, jeff at encoding.com will get you that information. Thanks so much, Jeff. Really, thank you for taking the time, and hopefully we'll see you in person soon.
Speaker 2: Thanks, everyone. Cheers.
Speaker 1: Oh, actually, Jeff, someone is alerting me to the fact that there are a couple of questions in the Q&A that I somehow missed. Okay: how much latency does all the QC checking add to the broadcast workflow?
Speaker 2: Well, keep in mind that we're not doing live linear, right? This is VOD, so it's not really an issue.
Speaker 1: Okay. But what about ingest technologies? I don't know if this is something that, you know, applies, but what ingest technologies are you using to deliver faster ingest? Is this something you've built?
Speaker 2: There's a combination, but we do support Aspera, and I think that's probably, for our large media and entertainment customers, the technology used most often for that.
Speaker 1: Okay. And, again, being VOD, ultra-low-latency video delivery is not something that you're getting involved in?
Speaker 2: Correct, that's not an issue for us.
Speaker 1: All right. Well, thanks so much. Thank you, Tucker, for pointing out that there were questions in the Q&A. It's been a long day here, so I really appreciate the heads up. So once again, thanks to Jeff Malkin from encoding.com.
Speaker 2: Thank you, guys. Cheers. Thanks.