VP Land

Strada: Creating a Road to an AI Powered Media Workflow

January 24, 2024 · New Territory Media · Season 3, Episode 8

In this wide-ranging conversation with Michael Cioni, CEO of Strada, we talk about the future of media workflows, how AI will help us work faster, and why he decided to build his company in public.

πŸ“§ GET THE VP LAND NEWSLETTER 
Subscribe for free for the latest news and BTS insights on video creation 2-3x a week: 
https://ntm.link/vp_land


Connect with Michael & Strada:
Strada - https://strada.tech
YouTube -  https://www.youtube.com/@Strada-Tech
Instagram - https://www.instagram.com/strada.tech
LinkedIn - https://www.linkedin.com/company/strada-tech
TikTok - https://www.tiktok.com/@strada.tech
Michael @ Linkedin - https://www.linkedin.com/in/michaelcioni/


#############

πŸ“Ί WATCH MORE VP LAND EPISODES

Inside Coffeezilla's '$10,000,000' Virtual Production Studio
https://youtu.be/FjAkmqJCbJY

Imaginario: Using AI to speed up video search and editing
https://youtu.be/4WOb5Y1Qcp0

Fully Remote: Exploring PostHero's Blackmagic Cloud Editing Workflow
https://youtu.be/L0S9sewH61E

🎧 LISTEN IN YOUR FAVORITE PODCAST APP
Apple Podcast: https://ntm.link/vs-apple
Spotify: https://ntm.link/vs-spotify
Overcast: https://ntm.link/vp-overcast

#############

πŸ“ SHOW NOTES

Will we be filming less in a world with AI? A chat with Frame.io's Michael Cioni [NAB 2023]
https://www.youtube.com/watch?v=qjM1HO3Xghg

REDucation
https://www.red.com/reducation

Light Iron: A Panavision Company
https://www.panavision.com/post-production

HuggingFace
https://huggingface.co

Y Combinator
https://www.ycombinator.com

Remini
https://remini.ai

Evoto
https://www.evoto.ai

eMastered
https://emastered.com

RODE Mics
https://rode.com/en/microphones

#############

⏱ CHAPTERS

00:00 Introduction
01:40 Gen AI vs. Utility AI
02:25 Overview of Strada
03:15 Four Ts of Strada (Transfer, Transcode, Transcribe, Tanalyze) 
08:50 Strada Stacks
09:50 Private Beta Launch Event
11:00 Consumer Technology vs. Professional Technology
14:00 Advanced AI Search and Virtual Folders
15:30 Bring-Your-Own Storage and Cloud Flexibility
26:50 Indexing and Duplicate Assets
28:55 Orchestration Layer and Timeline Visualization
30:30 Transcribe and Tanalyze (Tag & Analyze)
32:50 Unlimited Users and Pricing Flexibility
38:00 Strada as a Marketplace for AI Models
41:40 Integrating Strada with NLEs
45:35 Documenting Strada's Journey on YouTube
53:15 Michael's Recommended AI Tools
53:35 Remini AI
54:40 Evoto
56:15 eMastered
57:15 Filling Skills Gap with AI
01:00:00 Michael's Thoughts on AI (NAB vs. now)
01:02:00 Limitations of Generative AI
01:04:40 AI as a Utility in Real-Life Productions
01:07:55 Potential of Mobile Filmmaking
01:12:45 Why Apple's Camera Technology is Impressive
01:19:20 Upcoming Launch and Roadmap 
01:20:10 Outro  

#############

TRANSCRIPT

There's this gen AI, which everybody's talking about, but I think the stuff that's most valuable to our community is going to be UtilityAI, the AI that actually makes the work that we do in real life a little easier. Welcome to VP Land, the podcast where we explore how making media is getting faster, easier, and less expensive to create from virtual production to AI and everything in between. I am Joey Daoud, your host. You just heard from Michael Cioni, CEO of Strada. Strada is a new AI-powered cloud platform, aimed at speeding up post production workflows. Michael is no stranger to reinventing workflows, from his first startup, Light Iron, to overseeing Frame.io's camera-to-cloud technology. In this wide-ranging conversation that we have, we explore what a 100% cloud-based workflow looks like. So it's really about a workflow acceleration technique. Over time, we're all going to be working in the cloud exclusively. So we want to be that brand and that product that helps people get there. How Strada is rethinking how we think about assets when making movies. And sometimes, we just think of each asset as a one dimensional thing, but it's really three dimensional. And we'll revisit our conversation from NAB 2023 and how Michael's views on AI have shifted since then. This is going to be a bloodbath was the word that we used, right? This idea that AI is not going to just unilaterally eliminate creative jobs. It's not that simple. Plus, we've got a little bit of an exclusive preview of what's coming in Strada's private beta launch. I'll tell you something I haven't told anybody yet as sort of a preview. Links for everything we talk about are available in the YouTube description or in the podcast show notes. And be sure to subscribe to the VP Land Newsletter to stay updated of all of the tech that is changing the way we are making movies. Just go to vp-land.com. And now enjoy my conversation with Michael Cioni. Well, hello, Michael. Good to see you. Thanks for joining. Excited to talk about Strada. Uh, but first off, can you just tell me about your two definitions of AI? Oh, my two definitions are, there's this gen AI, which everybody's talking about. It creates photos, it creates videos. Runway, Midjourney, DALLΒ·E. Super cool stuff. But I think the stuff that's most valuable to our community is going to be. The AI that actually makes the work And so my focus is on AI that is a utilitarian nature. And I think when it comes to creating- if you want to create real life, you'll always do that. So gen AI has its place, but it's, it's really not for me in most cases. So going with this utility AI, I feel like that feeds into what you're doing with, uh, Strada. So can you just give me the high-level overview of Strada and then we got a lot to, uh, dive into on everything that you're doing. Yeah. Strada is a AI-powered cloud platform that allows people to use AI models that are in the market and concatenate them together, which is a big, ugly word for like stitch together multiple models. And then you could process in the cloud and you can get the cloud to do these automations for you. So it's really about a workflow acceleration technique. And over time we're all going to be working in the cloud exclusively. So we want to be that brand and that product that helps people get there. I don't think I've ever heard, um, the word concatenate used outside of an Excel formula. So, uh, that's . It's a very good formula. That's right. That's right. Um, so, yeah. 
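To make the idea of concatenating models a little more concrete, here is a minimal hypothetical sketch of chaining several AI steps into one automated pass over an asset. None of these function names are Strada's actual API; they stand in for whatever transcription or tagging models a platform might stitch together.

```python
from typing import Callable

# Each step reads an asset's metadata dict and returns it enriched.
Step = Callable[[dict], dict]

def transcribe(asset: dict) -> dict:
    # Placeholder for a speech-to-text model.
    asset["transcript"] = f"<speech-to-text for {asset['path']}>"
    return asset

def tag_objects(asset: dict) -> dict:
    # Placeholder for object / shot-type detection.
    asset["tags"] = ["kitchen", "close-up"]
    return asset

def run_pipeline(asset: dict, steps: list[Step]) -> dict:
    # "Concatenating" models here just means feeding each model's output
    # into the next one, so the whole chain can run unattended in the cloud.
    for step in steps:
        asset = step(asset)
    return asset

print(run_pipeline({"path": "A012_C004.mov"}, [transcribe, tag_objects]))
```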
So let's dive into just sort of like a bit how this looks. And I know you've got the 4 Ts, which is what you're out with as your foundation. So can you list out the 4 Ts and then we could jump into each one at a time. And I kinda got some like more follow-up questions for each one. But, uh, let's start with the 4 Ts overview. Yeah. What's important for people to understand is when you're starting a new product, you have all these challenges where you have these grandiose ideas. Like, let's not talk about Strada for one second. If you're an entrepreneur, you're a filmmaker, you're creative, you want to do and create something, you have like the moment where you feel like you're finishing- you're crossing the finish line, and everybody's cheering. And you know what? That is in your head. But of course, nobody sees getting up at 4 o'clock in the morning and practicing and doing all your rehearsals and stuff like that. Nobody sees that. That's the dirty, ugly part of what really success is built on. So for us at Strada, we had to really focus on the foundational aspects before we could get to the really juicy, exciting, groundbreaking stuff. And what we realized, and it sounds kind of basic, but I think it's actually pretty novel. There are four things that every market segment needs, whether you're narrative or nonfiction or, uh, sports or news, journalism, whatever you are, we all have trouble _Transferring_ files. This whole drive shuttle thing is just not really good. It's just a mess. Right? Then we all have trouble _Transcoding_. And most of the problems with transcoding is when you're transcoding, whether it's you or someone else, you get locked out of your product. You're kind of like I gotta wait till this is over. Maybe that's only an hour, maybe that's overnight, who knows? But there's this transcoding hiccup, which includes syncing sound for a lot of us. And then you have to, _Transcribe_. And I think we only transcribe in the nonfiction space. And I think transcription in narrative is going to unlock a lot because it's the best search tool because you could just search by the language people say, even if that's a narrative. And so that's a big thing. And then the last one is what we call Tag or Analyze. And we combine that to the word _Tanalayze_. And what that means is we can use AI to do object detection, facial emotional detection, location detection, even shot composition detection, what's a medium. Give me a medium shot of Robert De Niro in the kitchen. If you just type that out.'cause that's how your brain thinks as opposed to knowing that's Scene 12 and the closeups were 12, uh, delta. That's harder to find and it gets slower, right? But if you say De Niro, kitchen, closeup. Boom. And now they're all there and they're already synced. And so the idea of Transfer, Transcode, Translate or Transcribe, and then Tanalyze is something that we think everyone can use a boost on. And if we could build that at the foundational level of Strada, we can help people get value quickly by using cloud enablers, accelerators. And then we can start building all the crazy AI stuff that will make things a lot better in the future. I mean the first version is sort of these- um, I know you're not calling it a digital asset manager, but in that realm where it's like a central spot where you can manage store well store-ish. I guess we could talk about the Transfer. First off, what kind of storage and stuff are you rolling out with and what's the idea behind the storage? 
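As a rough illustration of the tag-driven search Michael describes ("De Niro, kitchen, closeup"), here is a toy example; the clip index and tag names are invented, and in a real system they would be filled in by the AI analysis rather than by hand.

```python
# A tiny in-memory index of clips and the descriptive tags attached to them.
clips = [
    {"file": "A012_C004.mov", "tags": {"de niro", "kitchen", "close-up"}},
    {"file": "A012_C005.mov", "tags": {"de niro", "kitchen", "wide"}},
    {"file": "B003_C001.mov", "tags": {"street", "night", "medium"}},
]

def search(query_terms: set[str]) -> list[str]:
    # Return every clip whose tags contain all of the query terms.
    return [c["file"] for c in clips if query_terms <= c["tags"]]

print(search({"de niro", "kitchen", "close-up"}))  # -> ['A012_C004.mov']
```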
That's a great question because you, you said this digital asset manager or a media asset manager, or an asset manager. So are they DAMs, MAMs, AMs? There's the first part of the problem who, who knows? Who knows what they're called, right? Second of all, what these asset management systems are at their core is robust, tight, organization, that's what they're about. I don't think I'm alone. When you're a very left, you know, a very like left-handed, right-brained person, the last thing you want to do is be like constricted to these rules and be super organized. It, it doesn't come naturally to us. When we are in the most creative of the creative process, we make a mess. And I think these asset management systems are actually designed For non-creative people, they're good for maybe archiving and organizing things when things are done. But during the active workspace of creating, creating, they're a mess. They're restrictive. They, they make you- I feel like they're built by engineers 'cause they,''cause they are. In a lot of cases, yes. Yeah. So we are not building an asset manager. I don't like asset managers because I don't have the patience when I'm creating to follow the rules. And I'd rather do everything. When I'm creating I'm, I, my house is a mess. My mixing room, there's cables everywhere. When I'm editing, my desktop is insane, And I got cables all over the table. But that's what it's, that's okay. That's what it's like'cause we're creating. Then when it's done, I want to bring someone else into vacuum and sweep and all that stuff. That's where the asset managers come in. So what Strada does is it allows us to give people the ability to have the asset management value of instant search and quick response to questions of where things are without having to follow any rules because we've analyzed everything. We've analyzed the language, we've analyzed the visuals, we know the slates, we know the scenes, we know the clips. So you could search by anything. We even know the perspectives. And what a perspective is if you shoot A camera, but you're also shooting a B camera, we know there's a B when you find A. If there's A and B, and then there's audio, we know where the audio is even when you look for A camera. If there's a BTS photographer or BTS videographer, we know where that stuff is, too.'cause those are all perspectives of the same instance in time. And sometimes we just think of each asset as a one dimensional thing or maybe two dimensional with like picture and sound, but it's really three dimensional because there's, there's LUTs, there's script notes, there's CDLs, there's BTS. Those are all additional dimensions of perspective. And that's what we're trying to do is associate those together and we call them Strada Stacks. And it's like a sandwich. And your stack is just sandwiching together all the perspectives of that aspect of that moment in time. So when you find one, you can find them all. And you can use Advanced AI Search, which is basically like, imagine googling your own movie or your own TV show, your own news story. That's actually what you want to be able to do because we are all good at Google, we all are satisfied with the results, and we all know how to use it. But when it comes to finding our needles in haystacks, we don't really have a good way to do that in today's world, and we're going to fix that. I feel like it's something that we are sort of used to on the consumer level. 
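Here is a rough sketch of the stack idea just described: every perspective of the same moment in time (A cam, B cam, audio, BTS stills, LUTs) is grouped so that finding one surfaces all of them. The class and field names are illustrative, not Strada's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Stack:
    """All perspectives of one moment in time, 'sandwiched' together."""
    moment_id: str                              # e.g. scene/slate/timecode identity
    perspectives: dict[str, str] = field(default_factory=dict)

    def add(self, kind: str, path: str) -> None:
        self.perspectives[kind] = path

stack = Stack("scene12_take3")
stack.add("a_cam", "A012_C004.mov")
stack.add("b_cam", "B012_C004.mov")
stack.add("audio", "sc12_t3.wav")
stack.add("bts_photo", "IMG_4410.jpg")
stack.add("lut", "show_lut.cube")

# Finding the moment by any one perspective returns the whole stack.
print(stack.perspectives)
```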
Uh, Apple, uh, Photos, you know, now you can just search for, show me, you know, pictures of the dog, my dog. Uh, you tag someone that you know, and it's like, hey, show me all the photos of, uh, you know, Rebecca. We've gotten used to that on the consumer end, but we haven't really had anything in the professional end that is as seamless as that. I've, I've seen some attempts and stuff, but it's just been a little, uh, clunky. Jumping around a little bit. One thing I did like a lot in the demo and going into, transcribe was I think Austin was like, oh, if you label the person once we remember who they are. So when they're in other video files, and that is an issue I've run into again and again with like even just the programs where their entire job is to just transcribe and I have to add the same person's name over and over again. And it's like, why? Why don't you remember? You know, their voice. Um,the stuff we just talked about, is this, this is already going to be in the first version? Yeah, the private beta. So the private beta is our way to start. I would love to give this to everyone that signed up, which is a huge number, but, we can't. We need to roll it out in phases because we need to load test the system 'cause we've never hit it with, you know, several thousand renders at once. We haven't hit that yet. And the only way to do that, it's hard to simulate. You gotta just get people to hit it, you know? So we're going to roll it out starting February 1st. And if people are interested, they could sign up and we could talk about getting in the private beta. And we'll just roll it out to make sure our load testing works so we can expand the amount of servers required to keep the load even. But you're right about being able to, uh, use like the consumer technology, uh, is really, really advanced. And then we go to the pro technology, that's not a new thing. And it's always bothered me because about 10 years ago, maybe 10, 12 years ago, I used to give these presentations mainly at REDucation, which was the RED camera company's like education branch. And they were always forward thinking. They were always pushing the boundaries. But all these people would come in to learn how to use RED cameras 'cause 10, 12 years ago, it was pretty uncommon still. Digital was still pretty uncommon. We would speak and I had this slide. And I showed the quality of media when you consume it. And on the low end, it was the internet. In the middle, it was television. And the high end was cinema. And that was what 12 years ago. People said the highest quality content was in the movies, uh, in terms of the quality of the image. Middle was television. The low end, which was like 720p was the web. And I said, here's what's going to happen. My prediction 12 years ago was slots one and three were going to transpose. And what would happen was television would stay in the middle, and the internet would improve in quality, and cinema would be lower quality than television. And that's exactly what happened. That's exactly what happened. The consumer technologies of the internet coupled with television actually accelerated. And the professionals who were so gung-ho on like blocking innovation and making sure it met all these crazy specs of luminance values and all this stuff with DLP projections. And they- I was like, you guys are going to stifle this and you're going to regulate yourselves below the quality threshold. And that's exactly what happened. 
Right now, when you go to the cinema, I'm not saying the images aren't shot well, and I'm not saying projectors aren't good, but you get a better picture at home on a 75-inch LG streaming from the internet than you do in a theater. And someone out there is going to challenge me on that. But go ahead. I've experienced this. Yeah, going back to the theaters and stuff, I have like, um, even- I'll throw them under the bus, sort of, IPIC, you know, where they sort of like brand themselves as like, yes, you know, better projection. And like both times, it's either been a, uh, a subpar sound experience or a subpar like even just the masking on the projector was off and uh, you know, stuff's bleeding onto the black. And it was just like this is supposed to be, you know, I'm paying the premium to like, get the, the supreme experience, but I can just pop in headphones and watch it on my TV at a much better quality. Well, not to mention, while there's always a marquee opportunity with a laser projector on a great screen that's well curated, the average person is not seeing 4K, they're not seeing laser.

They're not seeing more than a 2000:1 contrast ratio, where we're able to see more than a 1,000,000:1 at home. The brightest image on a projector is often like 20 lumens, you know, and we could get, you know, 10 times that at home now. The criticism is not of the work or the shows or the stories, the criticism is of the opportunity and the technology. And so what I'm saying is sometimes the consumer market runs away and because of competition, they're able to like explore these new avenues. And so that's why Apple, you mentioned, and this happens on Galaxy and, uh, Pixel as well, but you can search your photos, you can find things because it's AI tagged automatically in the background. And the reason that is, is they're, they're innovating that way, they realize there's a value. This is interesting. They learned that nobody makes folders on their phones. The folder technology has been there for many, many, many years, but nobody creates a new folder and curates their own albums. So they said, well, how do people find things if they're not going to create folders? And so what you're actually doing when you're searching is searching virtual folders, because when you search dog, it's creating a folder in a way, you just didn't make it. You can't see it, but it gives you the essence of a virtual folder. The reason that doesn't exist in the pro space for, you know, Avid and Final Cut and Premiere and Resolve and Pro Tools and Nuendo is because those are not cloud applications, so it's hard for them. They're, they're not even always online, like they're not always aware of the internet. And so they can't leverage a lot of this technology, and so it's not being built that way. You have to have a cloud system. You have to be online all the time for things like this to materialize. So Strada will act like your iPhone. You can search by dog. It'll show you a picture of a dog, it'll show you a dog barking 'cause it knows what a dog sounds like. And then if someone says the word dog, it'll bring up the video of them being interviewed, talking about dogs. And then you have a virtual folder of dogs. And that type of speed is going to really change how everyone's going to want to start to edit and produce and write. Going to where the media's stored and stuff. And I sort of threw this in at the end of one of the other questions, 'cause you are not providing storage, it is bring your own storage, so walk me through like what are some of the initial storage options and where you see that going in the future? Yeah, I'll, I'll answer that and I'll, I'll tell you something I haven't told anybody yet as sort of a preview. So first of all, we all have cloud subscriptions, and all of us have subscription fatigue. And so what I didn't want to do is invent a new product that required more sub fatigue. So we are allowing you to bring your own existing storage. That storage has to abide by a certain type of technological rule, which is called OAuth, but if they support OAuth with a REST API, we're able to connect to that. So Strada can do things on existing storage like Dropbox or Google Drive or Frame.io that maybe those products don't do, or Lightroom even. So if you want to leverage some of our, uh, search or some of our transcoding or some of our AI analysis, you could do that even though Dropbox may not have the same function. So you don't pay any more for storage 'cause you're already paying for it. There's not even an egress charge, so you're just using, 'cause it's just reading off your own storage.
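A minimal sketch of what bring-your-own storage could look like in practice, assuming a generic OAuth-protected REST storage API. The endpoint and token below are placeholders; Dropbox, Google Drive, and Frame.io each have their own real API shapes, and this is not Strada's actual integration code.

```python
import requests

def list_files(api_base: str, access_token: str) -> list[dict]:
    # Read the user's existing files with the OAuth bearer token they granted,
    # rather than re-uploading anything to a new storage bucket.
    resp = requests.get(
        f"{api_base}/files",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("files", [])

# Usage with placeholder values:
# for f in list_files("https://storage.example.com/v1", "oauth-access-token"):
#     print(f["name"], f["size"])
```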
So that's, that's important to me that we make this a low barrier to entry so people can just connect storage. Plus, if you are good at managing your own clouds, then you know where your stuff is. And so Strada just looks familiar'cause you're like, oh, these are my directories and I could do that. But I want to take it a step further and what we haven't told anyone, and what we'll talk about this more in the future, but what I haven't told anyone is we also don't want the cloud to be a barrier to entry at all. Meaning some people cannot afford to store their assets in the cloud. They just don't have the money to do that because once you're putting assets in the cloud, you pay that forever, 20, 30 years. Most of us don't think about that. When we first signed up for email, depending on what age you were, you probably didn't realize you'd have this email address for decades and decades and is why- yeah. We didn't know that. That's why a lot of us, our first email address was like, like some guy in the frontyard@, right- Hotmail. Yeah. We didn't- we didn't like, we just say- Two megabytes, you know? Then you can switch to Yahoo and get the whopping four megabytes. Yeah, we didn't, we didn't think it through. But now people are starting to realize, wow, when I have a cloud subscription, it's forever. So that sounds expensive. It's like worse than a mortgage.'cause you don't ever pay it off. So it's like, oh my gosh, a fixed expense that doesn't come down on principle. Like what? It's only going to go go up as you keep adding more media. Yeah, so what I want to do is make sure that we give people an additional alternative, because I'm willing to bet everybody, at least listening to this, has a closet somewhere where there's lots and lots of old drives. Some of them work, some of them probably don't. But we have this closet of drives. We're not going to throw 'em away. So what are you going to do with them? I also think people also have an extra computer or two, an old laptop, an old tower, whatever it is, and you used to make it your primary, then it became your secondary. Now maybe it's in the closet. Well, what if there was a technology with an orchestration layer called Strada, and Strada could put those machines or those drives online and you actually made your own cloud. Because if you are connected to the internet on an old computer and you put your drives there, what is a cloud? A cloud is a drive on the internet. That's all it is. So why does it have to be Amazon's or Azure's or Google's or Apple's? It doesn't, it doesn't. Obviously there are major advantages for, uh, enterprise level clouds. But if all you need is an internet connection and a hard drive, what if we could make that work for you? Imagine you're being a photographer and you do a shoot and you have your photo retouch, and all you do is download your photos to your drive, which you do anyway, and your drive is connected into Strada, and your retouch or can pull files off your drive using your internet, and there's no cloud used at all. That's, that's a democratized future. And that's, that's the type of thing that we're starting to build that should lower the barrier to entry and allow you to make use of things you have today. Then when you need additional acceleration or more capacity or speed, then the cloud is going to outperform your local hard drive. But when you don't need that,why not make use of it? And, and I think one of the hardest parts about getting into the cloud is getting stuff up to it. 
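As a toy illustration of "a cloud is a drive on the internet," the sketch below puts a local folder online over plain HTTP so someone else could pull files from it. It only exists to make the concept concrete; it has none of the authentication or orchestration a real product would need, and the path is a placeholder.

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

MEDIA_DIR = "/Volumes/old-lacie/footage"   # placeholder path to an old drive

# Serve the folder on port 8080; it stays reachable only while this machine
# is on and connected, which is exactly the trade-off described above.
handler = partial(SimpleHTTPRequestHandler, directory=MEDIA_DIR)
HTTPServer(("0.0.0.0", 8080), handler).serve_forever()
```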
We're super aware of that. When I worked at Frame io, the hardest part is to get assets to the cloud so we had to make these little proxy files. And what I want to do is make sure that we can generate proxy files even easier for any asset coming off of any technology and, uh, make sure that it gets to the cloud and, and it could be done on a very poor internet bandwidth so that people could get working regardless of what internet resources they have. That's awesome. That is exciting to hear, too, just from, uh, my own personal, uh, workflow stuff where, yeah, I bought a NAS and then learned after the fact where it's like, it's really hard to have people remotely access it without involving the cloud in some way. And it's where it's like, oh, the media's gotta go up to an AWS or Backblaze or something before they can pull it. And it is the same paradox where it's just like, well then why did I buy NAS if I have to store it in the cloud anyways? Uh, so yes, this exactly is, uh, very excited to hear. So is this this a, uh, like a local application that's running on your desktop or whatever that you on the computer that you're sort of designating as your, your, your, your, your your Strada PC? Not even, uh, because, um, Strada is bring your own storage. When some companies say bring your own storage, they kind of mean like a shallow version. So you gotta check, you gotta read the fine print. When people say bring your own storage, what do they actually mean by that? When I say it, I'm talking about your own storage, like a thumb drive. That's how granular I'm going for your own storage is turn that in into storage. So bring your own storage means you connect it to Strada, the cloud app. So you need an internet connection for it to run the traffic, right? So those are, those are mainly kilobytes. They're not super large to manage the traffic. Now, moving the asset, you need a decent internet connection. Decent. But again, you don't need to run a special app. Strada is going to allow you to connect that drive and, uh, allow other people- From, from the web app, from the web interface. So like on the computer, I can just load up, whatever Chrome and, and plug in my drives and That's right. Just like because- Point to Strada- point Strada to it. Yeah. Yeah.'cause to me you don't want there to be a different experience for the user between connecting Dropbox and connecting your LaCie drive or your NAS. Like It shouldn't be any different. And then you just realize that. Now if you turn your computer off or disconnect your NAS, whoever's reading it, it goes away just like when AWS goes down, which happens once in a very long while, but it can happen. So people have to understand there's a responsibility for serving assets through the cloud. But I think people can wrap their heads around that and be like, okay, my computer's turned off, that's the end of that asset. And so I'm going to have to reboot and, but it'll reconnect just like a drive that is on a computer that turns off. It reconnects, and, and then you're good from there. So, you know, it's a little more advanced than just having a disc locally on your computer, but it's not so much more advanced than having to spin up your own AWS. Instance, which 99% of the consumer world will never do, and creatives that have teams fewer than 10 almost never do that. And we want them to have the same access that perhaps an Amazon, uh, disparate server has for them without the cost. 
If you've invested in a NAS, why can't you share that NAS across your team? And I'll give you one more about this, if you are on the same network, so let's say you're in an office with a NAS, which a lot of people have. If you're in an office with a NAS, you could connect that NAS through Strada and it will use your LAN to move the assets around. Strada will still be the interface of the orchestration, but the assets will actually move through your routers, so they won't go through the internet, they just use the routers. And if you have Wi-FI 6 routers, you're going to be able to move those assets on a local area network like a SAN, but you only have a NAS. And for people that know what, those two acronyms are different, that's a big breakthrough and, uh, a very, very powerful workflow advantage. So people working locally in the same office with the NAS are going to be able to access the media off their- it does not have to go up to the internet and come back down. They could just pull it as if they're wired right into the NAS to work that media. Yeah. That's not going to be in the beta. That's not gonna be in the beta. Uh, but those are the things we're working on because again, the beta, the 4 T's, um, is all about foundational. We need to get a foundation going so people need the product and use it. And then we'll start putting in these workflow accelerators. I, I've been working in post-production since I was 18 years old. Um, I got a job at PBS at a PBS station called WSIU, at 18. I started working at post. I didn't even know what post was. And so for 25 years, I've been in post-production. And I became a workaholic way back then at 18 'cause I just fell in love and just got so excited, and I haven't slept since. So across that world, when I started, I was on 1-inch tape machines, VPR 80, Ampex 1-inch tape machine. And I learned Beta, and then DigiBeta and then DV cam. And then we moved to P2 with Panasonic, then RED files, and, and on and on. And throughout each of those, I watched these transitions happen and I kept running into these brick walls'cause I was an early adopter in every single one of these technologies. And in those technologies, I just wanted to use workflow, uh, knowledge to make it a little bit easier and better. And that always gave me an, an accelerant. It always gave me an advantage compared to other people in the space, and so I was able to move faster. I want those advantages to go back, uh, into being able to be distributed at a larger level. I've always tried to be transparent. When we built Light Iron, we put out a lot of like how-tos, all the time and I showed people, you know, what's going on behind the curtain. And I was very transparent about that. I hate the idea that post houses have called secret sauce. I couldn't stand that because I, every time I learned what the secret sauce was, I'm like, this is a lie. It's not even sauce, it's just, it's just like Someone working the night shift or something? It, it, it really, it really seemed like, yeah, like this is our automated VFX pull list and it would print out a, a job and someone would read it and do it, and then print it back. I mean, oh, we switched the Phillips head screws for flathead screws. That's our custom. I mean, come on. I'm exaggerating a little bit, but I, I didn't like the idea of secret sauce. I thought it's a better world if we could be transparent. And 19 out of 20 people that you tell them how to cook a great meal are still going to pay you to cook it for them, right? 
Because they're like, well, you're an expert. I, I'd rather go to your restaurant and have you cook it even though you gave me the cookbook, right? And then there's that 1 out of 20 that says, oh, thanks for this, I'm going to do it myself. Okay, fine. But 19 others become fans, right? And they, they respond to the transparency. And so what Strada is, is the ultimate version of me being able to take the workflows that I've learned and pioneered and try to make them even better and make them unilaterally available. Um, where something like a fingerprint ID is going to give you access to them. And hopefully we're building a tool simple enough that it's not built for editors, it's built for everyone, writers, producers, directors, post sups, vfx sups, certainly editors. Um, but we're trying to make a tool that doesn't require you to need to go to YouTube and learn how to use it before you can figure out how to take advantage of it. Yeah. Um, and I do want to just jump into some more of the use cases, but I had a, I had a specific question that came up and now it's even more relevant that you just told me that you're going to be able to connect any hard drive you want to, um, Strada. But, uh, a, a frequent case is like, okay, you have your media, you copied to your hard drive, your two hard drives, and then you copy it to maybe you copied some stuff to Google Drive and then you copied it to another hard drive that you're working off of. If you have the same file, is there some sort of checksum or something when the stuff gets indexed by Strada? Where it would be like, oh, hey, we, we, we realize that these are the same exact files, and so we're not going to like create duplicate assets. Yeah, the good news is, um, when you go to cloud storage, checksums are part of that process. And so they're not surfaced the same way that we do when we like do on set, but they all have an xxHash associated with the, with the UUID. So that becomes kind of in invisible in a way. However, because Strada today isn't a storage itself, it only reads what you've already uploaded to it. So you have to get it there in your Dropbox or your Google Drive or whatever, um, because you can't upload it. You're not really like uploading in the Strada. You know, that's to your advantage right now 'cause it saves you money, but you've gotta upload it to Google and it has to be there on the Google side before Strada can read it. Well, I guess what I'm saying is like if I had the same file on my Google Drive and I had it on a hard drive that was connected to Strada, would it realize, oh, these are the same files and not be like, here's two different assets or, or would it be like, hey, this is one asset and it's stored in your Google Drive and it's stored on your LaCie Drive. I see what you mean. Yes. Sorry about that. Um, going back to the idea of Strada stacks, a stack can actually be a duplicate of the same asset, right? Or if you shot a ProRes file and you have that as the original on your drive, and then you put it into Google Drive, then you have two ProRes files. Well, those just go in the same stack and it shows you, well, one is located here and the other's located there. Great. Now, if you took that ProRes and converted it into an H.265, even though it's not a ProRes, it's the same thing happening again. So it's like, here's the HEVC, here's a ProRes located here, here's a ProRes located there. Again, that's another perspective. The version on a local drive versus a cloud drive is just another perspective. 
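For reference, here is a rough sketch of how duplicate copies could be recognized by checksum using xxHash, the hash mentioned above. The grouping logic is illustrative only and is not Strada's internal implementation.

```python
from collections import defaultdict
from pathlib import Path

import xxhash  # pip install xxhash

def file_hash(path: Path) -> str:
    # Hash the file in 1 MB chunks so large camera files don't load into RAM.
    h = xxhash.xxh64()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def group_duplicates(paths: list[Path]) -> dict[str, list[Path]]:
    # Files with the same hash are the same asset stored in different places,
    # so they can be displayed as one stack entry with multiple locations.
    groups: dict[str, list[Path]] = defaultdict(list)
    for p in paths:
        groups[file_hash(p)].append(p)
    return dict(groups)
```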
And that's why the stacks keeps track of all that, um, to ensure it's- it's a really clever, this orchestration layer is very clever. Most review and approval tools, uh, I would say almost none, have an orchestration layer. This is one of like the secret things about Strada that we built that's really complicated- This is like the timeline kind of view of like pairing up? Like I've seen some of these like kind of like timeline previews in your videos where it's like pairing up the A cam and B cam, and BTS photos, like in a synchronized view. That's the orchestration view? That's how we display it so you can see it. But the orchestration view, think of it more like the air traffic controllers running an airport. You never see those people. You know, there's a tower somewhere on the property and they're doing all this stuff, but they're invisible. But their jobs are really important to make sure nothing runs into each other. That's what the orchestration layer does, and then it just surfaces it again with an airport analogy on the board, and you see this is the gate, this is the terminal, and this is the time it's leaving. All that stuff is what's ultimately shared. All right, real quick. If you are enjoying my conversation with Michael, then you will enjoy the VP Land Newsletter. It is a twice weekly newsletter where we cover all sorts of news and updates in the latest technology that is changing the way we are making movies, from the latest virtual production updates, the latest AI stuff that relates to making movies, not just all sorts of AI stuff, stuff that matters to you, and all sorts of stuff in between, behind the scenes episodes, videos, interviews, stuff like that. So if that sounds interesting, be sure to subscribe. You can go to vp-land.com, or the link will be in the description on YouTube or in the show notes wherever you are listening or watching this. All right, now back to the episode. I did want to clarify or something that was impressive and that you've mentioned, and I know you're still figuring out whatever the sort of model and pricing is, but right now you were saying everything that you do upload is going to get transcribed and tagged/analyzed. Uh, so is that still the case? That is very impressive and very cool because normally it's like, ah, you only gotta pay, you know, per minute every time. And then you gotta decide do I transcribe it or do I not? So is that still the case? And um, you know, what was the thought behind that? Yeah. And again, what's so important to us is to lower the barrier to entry. So if you have time and you have equipment, you can elect to do a lot of the processing on your local machine. So, for example, if you, um, let's say you're doing a job and it's a low budge job and it's, it's small enough that you can afford to just do the transcodes yourself, which is how you do it today, you could have Strada, still do the Strada technique, but it would pull all the processing on your local, drive, your local computer. So it would do all that, even transcode it and then upload it that way. So then you're not incurring any expense for cloud processing, um, of your assets. But let's say you have a job and it's a fast turnaround and you've got more money coming in for the job and you've gotten a lot more data. Okay, now I'm gonna have the cloud do the rendering. And so you can elect those variations. And again, it's all about shape shifting. 
So many of us in the creative world, our jobs change, the scope of the job changes, the size of the team changes. Even the season that we work, there's seasons that pay more than other seasons. It's, it, there's a rhythm to all this stuff. And what I don't like about cloud subscription tools is they, they either don't know that that's how it works, or they don't care. And so they just charge you a flat rate no matter what. And the problem is you, you might have a product that has 20 users and in a month where you are really active, that feels really valuable. But when you're in the downtime or like what we just came out of, you know, the Christmas season, a lot of people don't shoot in December, right? And it's like, I still had to pay for the 20 users and I only had two people. Right. I don't like- Very inactive time. Yeah. It's a very inactive time. And sometimes, you know, the summer, like, you know, in the, in the northeast, uh, July is kind of a- people leave town, right? And so it's, it's very common out there. And so it's like when people are traveling, it's a different type of rhythm. So we wanna make sure that Strada can allow you to process where you want. And one thing we're not gonna be charging for is users. We will not charge for users. Strada will be unlimited users for the main use case, if there are functions that are specific to like a user need that that I reserve the right to explore what that could mean. But in terms of like how many people you can invite into a traditional Strada project, that's unlimited. We make money if you create more content. So I don't wanna make money based on how many people it requires for that content to be created. And if it's one person or a hundred people, what's the difference? I get paid if you're using the services to move these assets around and transcribe and translate and transcode and stuff like that. So I actually want more people to be on the platform, so I'm not gonna make it a penalty to invite them. And what a lot of people do, and I even do this because of the cost of some of the platforms I use, I have to share email addresses and passwords, or people use one like bulk password for a division or a department. That is a risk to security, and we do it to save money. I don't want that to be a Strada tenant. So we're allowing people to have unlimited users, so you're not penalized for inviting someone in and then you don't have to risk security issues. Or when someone leaves or is let go or moves on to another job, they like could still access the footage. That happens all the time with other platforms and it's because they're user-based subscriptions, and I think that is wrong for our marketplace. User seats can become a big issue of like, who do I give a seat to? Who do I not give a seat to? Uh, share password. Uh, and I know you've got a big thing on being able to have your identity- being able to have your virtual identity tied to like one Strada account across multiple email addresses. So then when you do figure out- so is the pricing sort gonna be based on like how much processing or how much media you're like having to do things to? Is that gonna end up being where pricing goes of like how this turns out? Yeah. Um, if you want to use the cloud to enhance or accelerate processing, you're gonna pay more for that. And what, what larger clients that are constantly rendering and moving files may do is reserve a certain amount of computers for them. 
So if you're a larger client, you could say, I want 50 machines for our account at all times, and then that's gonna cost more. But anytime someone on your team hits render, there's 50 machines computing together on that job, or you could have 50 people rendering at the same time, something like that. Um, and for people that don't want that, they can render locally or just render one machine at a time and you could sort of govern what your pain tolerance is, right? Another way we make money is if you, once we get the marketplace going, where you can start searching for custom AI models, those models, uh, just like the app store or like Airbnb, we just get a cut as a customer acquisition because we're lowering the customer acquisition cost. The CAC goes down. People can search for I need noise reduction for traffic, right? Because today there's- every NLE has a noise reduction tool, but it's usually just a slider that's just kind of like more or less. But in the AI space, people will make noise reduction- this is just one example, but they'll make noise reduction for traffic versus crowds versus fan noise versus an audio mic, um, you know, malfunction, uh, versus, uh, music in the background. AI models are gonna be able to be trained to do all those- ideally for each of those situations versus just unilateral noise reduction, which is always gonna be okay, but not perfect depending on the situation. So to do that, Strada allows you to search for auto noise reduction and then say traffic or fan noise or music, and then it will give you that model and you'll be able to find it, test it, try it, then buy it, and then I get a cut when you make that purchase, because I've been able to connect a user to that particular model way cheaper than the model having to set up a paywall and a website and an e-commerce system and advertise it. So they won't have to do all that stuff. If they just list on the App Store, just like the App Store works today. None of those companies have to advertise'cause their search is the advertisement and Apple just brings people, or, or, uh, Android to the Google Play Store. It brings people to the store or Airbnb. It brings people to the store. And then you can find the app you want, the address you want, the song you want, whatever. In our case, the AI model you want. This type of marketplace has worked in every other market. Um, it works in music, it works in lodging, it works in travel, it works in apps. Why doesn't it work in workflow? I, I think it could, no one's done it before. It's totally new. But we think that when people are looking for a specific type of thing, there'll be a model designed for that. And if there isn't the AI model community, they will build it.'cause they'll say, oh, there's a need for that. Okay, then let's make it right. And so this is the type of thing that's gonna get really exciting 'cause we'll be able to connect customers with developers so that we can give them the information on how the model should actually work to serve the creative community better. So let's dive a little bit more,'cause I know this is sort of the second part or the other part of like this marketplace and being able to, uh, bring in models and used custom workflows for whatever your needs may be. Um, but also just take a step back with just AI, the AI models in general.'cause I feel like a lot of the attention's been on ChatGPT, OpenAI, and just a lot of the AI tools out there are some sort of wrapper or something around the OpenAI model. 
But this is more in line with like HuggingFace being a directory of people who built their own AI models using I believe they disclose what the data sets are, right? And HuggingFace, is that gonna be something similar here on Strada, where it's just like a marketplace of people building custom models that solve specific problems? Yes, and we think the HuggingFace community is the best community to actually build this stuff because a HuggingFace community has the talent and the drive and the experience. But what they don't have is the insight on what to build and how it needs to work because the HuggingFace community doesn't connect with cinematographers and editors very easily. And, uh, cinematographers and editors and post supers and visual effects supers and, uh, and directors, they're not going to know how to navigate through the HuggingFace world to find what they're looking for. the HuggingFace world is very specific. One, one of the problems with AI is it is not codec-agnostic. It will never be codec-agnostic. It is very unlikely that you could take a, a Sony, you know, X-OCN file and run it through an audio AI noise reduction tool. It's very unlikely that will happen. So you have to create a mezzanine format. If you create a mezzanine format, well, does the quality go down? Is that a problem? How do you keep track of that mezzanine, right? Like it gets really nasty. It gets really hard. And then all of a sudden, you're like this isn't a workflow. This is really just a one-off thing. Maybe I could use it for a one-off problem, but I can't do it in batches. I can't do it with, uh, scale. I can't do it with a team. I can't do it securely. I can't do it and manage how much it costs because I don't wanna put my credit card if I'm working on a movie and use a model. I don't wanna give 'em my credit card. And I don't have Universal's credit card, right? So all these are problems that the HuggingFace community isn't really dialed in to solving. It's not their job, it's not their role, it's not their product. So Strada becomes that again, it's like that air traffic controller that can connect the passengers to the airlines, connect the users to the models, and, and be able to do that safely with security, with access, with information. And we should be able to- we should, like, this is all like should because we- none of this has been done before. So, uh, it's theoretical truly at this point. This is, what are we, January 2024. Uh, this is theoretical. But we believe it makes sense. We know technologically, it's possible 'cause we've, we've done some stuff in-house that proves it. And we think there's a market demand for this, but the market has to be invented because before smartphones, there was no proof that people wanted smartphones. Before Airbnb, there was no proof people would want to sleep in someone else's bed. Before, Uber, there was no proof that someone would want to get into a stranger's car. But all those barriers were overcome and there was a discovery that the people actually do want this type of control, this self-serve control that didn't exist prior. And we think that the creative community is no different. It would prefer to have AI models more specific or bespoke to their needs than have to just find sliders and get the best they can get out of that. Mm-Hmm. and cobbling together whatever tools or whatever is out there. Yeah. Yeah. We've talked a lot, so we've talked a lot about working in Strada and sort of what Strada can do. 
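To make the mezzanine idea concrete, here is a hedged sketch of transcoding a camera original into a common format an AI model can actually read, by shelling out to ffmpeg. It assumes ffmpeg is installed, and the codec settings are just one reasonable choice, not a prescribed workflow.

```python
import subprocess

def make_mezzanine(src: str, dst: str) -> None:
    # Transcode to H.264 purely for AI analysis; the camera original stays
    # untouched as the finishing source.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-c:v", "libx264", "-crf", "23",
         "-c:a", "aac", dst],
        check=True,
    )

# make_mezzanine("A012_C004.mxf", "A012_C004_mezz.mp4")
```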
Where do you go from Strada to your NLE to your Premiere, Resolve, Final Cut? So like, what's the next kind of like going out of Strada, how does that connect to the other tools that that people use? Right, so that's a really great question because the simplest way is to use XML and ALE exchange. And so when you have all this work in Strada, you send it to the editor that way. And what Strada will do for editors today is it'll already be synced, it'll already be multicam, it'll already be tagged, it'll already be transcribed. So you don't have to do those things. I understand how much value there is in transcribing in an NLE, but I think the NLE is the wrong place to transcribe 'cause what's happening is that the post-production community and the production community is expecting the editors to do the transcription and it's not their job. Now, if you are the director, cinematographer, editor, colorist, okay, everything's your job, so I'm not talking to you. But if you have departments, you have different people. I don't think it's fair to dump transcription on the cutting room'cause they got other problems there. And they need to be cutting, not transcribing. So I think it's important that we transcribe upstream of the cutting room. That way, uh, two benefits. One, you're not taxing the cutting room to produce all that on a daily basis when there's new material. And two, a director or a producer or a story editor doesn't need to know how to use an NLE just to read the transcript. Or three, Or build out a paper edit. Or build out a, yeah, right. Build out a paper. Or three, they don't have to watch it independent of the video. See, if I say I'll just, I'll export an SRT file or a text document. Yeah. But then I can't see it at the same time. Today you have to have an NLE to see a transcript and see a video and hear the audio altogether, except in Strada. Strada makes these virtual files, right, these Strada stacks, and it'll have the text, the language, the video, the audio, the other cameras, right? And you don't have to render anything'cause it's happening in virtual space. The picture and sound and the transcript aren't being baked in. They're just there stapled together, right? Just like a sandwich. A sandwich doesn't become one.. Mass. It's all those layers. You could take it apart if you wanted to, right? Till you eat it, but before you eat it, it's all ible. And the same thing should be with video. What we've been taught for, for, for 50 years is that when you put things together, you then wrap them up and you transcode it, you bake it in, you flatten it, and then you consume it. And, and that wrong. That's limiting and that we have to break that behavior and be like, don't do that. There are, there are still ways that, that will certainly be useful, but if I can see and hear, read transcript, burn in, LUT, change, and I don't have to render anything, then you shouldn't have to. And that's where Strada's Virtual Stack System should help alleviate some of this stuff. So it's, it's a different way of working. I hope people get it. Yeah, I mean, I feel like things have been shifting in that direction. I mean, us personally, I've mean I've, uh, I've got some videos about it, but we've been using Descript just because it was a good combo of easy to use, transcribes everything and you can edit the text, uh, and building out rough cuts and stuff in there, and then sending the XML off to, you know, Premiere and stuff. 
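To make the XML/ALE handoff concrete, here is a simplified sketch that writes a minimal ALE (Avid Log Exchange) file with the already-analyzed metadata attached as comments. Real exports carry many more columns, and the exact fields should be checked against the target NLE's import spec.

```python
def write_ale(path: str, rows: list[dict], fps: str = "24") -> None:
    # Minimal ALE: a header block, a tab-separated column list, then data rows.
    columns = ["Name", "Tracks", "Start", "End", "Comments"]
    with open(path, "w", newline="") as f:
        f.write("Heading\nFIELD_DELIM\tTABS\nVIDEO_FORMAT\t1080\n")
        f.write(f"FPS\t{fps}\n\nColumn\n")
        f.write("\t".join(columns) + "\n\nData\n")
        for row in rows:
            f.write("\t".join(row.get(c, "") for c in columns) + "\n")

write_ale("dailies.ale", [
    {"Name": "A012_C004", "Tracks": "VA1A2",
     "Start": "01:00:10:00", "End": "01:00:42:12",
     "Comments": "synced; tags: kitchen, close-up"},
])
```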
Um, but it's easier to work with clients and that all the issues you just listed where it's like Premiere, like, yeah, I'm not gonna try to ask someone to work in that that's not familiar with it. I know some of your videos have mentioned, I've heard mentions of like paper edit in, uh, Strada. Is that gonna be in the first version or is that in the pipeline for being able to build out, um, assemblies and stuff, uh, inside Strada before sending it off as an XML? Yeah, not, not in the first beta. The, the, the private beta will have the ability to see the transcripts, read it and connect it all to the video, but it will not allow you to do an editing, um, of that yet. So that, that's on the roadmap for later in '24. Nice. Um, now I wanna shift a little bit. Uh, you've had the YouTube channel going on for a few months, um, to have documenting the journey. The first part wasn't even , it didn't even explain what Strada was. It was like a hardcore just like this is how you do a startup, and talking about like equity and all sorts of stuff where I was like, Where's the film stuff? But this was also very fascinating in just the behind the scenes of building the startup. Um, I think this was also maybe the first a record for the first YouTube channel shot on a Panavision DXL2. I don't know if you have a record for that. And um, the first time I was getting PR emails too of like, hey, Michael's got a new YouTube video up. Uh, you should check it out. Uh, so that was, I think, some, some records. Um, but anyways, I wanted to know what was the idea behind launching the YouTube channel and sort of documenting this process as you're building Strada? Yeah. Well, you made a funny point at the beginning is we don't talk about the filmmaking world because we didn't know- we didn't have Strada yet. When we started Strada, we didn't actually know what we are starting. When I was at NAB, actually, being interviewed by you was truthfully one of these pivotal moments. I did a lot of interviews back in April of 2023. And I was standing there and you were one of the people that we had a great conversation with. I bet there's some clues in that interview if we re-watched it, honestly. But I realized when I was looking out at the NAB floor, there was no AI technology there. None. Almost none. And I said, wow, five years from now, this thing is gonna be crawling with AI. And I looked at brands, I'm not gonna say them, but I looked at brands. I'm like, AI could replace them. AI could replace them, AI could replace them. Whoa. This is gonna be a, a bloodbath was a word that we used, right? And we said, this is gonna scary 'cause there's gonna be a massive transformation. And some of the companies that I'm sort of pointing at have been here for decades. Decades. These are very, very experienced long-form companies. I just realized driving back from Vegas, I usually drive to Vegas.'cause often I even have to fly so much when I can drive somewhere. I find that LA to Vegas- It is a great drive too. It is a great drive. 00 AM on a Sunday or a Saturday. The desert and the mountains. It's a beautiful- Really beautiful. It's beautiful. And when you're by yourself, you could just think. And there's no phones, it's really nice. So on my drive back, I'm like, I think, I think I, I, there's gonna be a new change and I want to be, I think, I know there's gonna be a change and I wanna be a part of it, and I don't know what it is. I don't know what I could do. 
I didn't have an idea, but I drove back from NAB, uh, in, in April of '23, and I just thought, I think I need to start something new in order to discover what that will be. My friend Haley Royal, um, she told me about this idea. It's a term called Build in Public. And a few companies have done this- not many, but a few have done it. And she says, you should build in public. And you just, just, you just capture what you're doing and you just share it. And that's just what it is. And each episode is a step. So it started out as just how to start a company, and we talk about websites and investors and, and how to raise money and how to, uh, set things up, how to hire people, how to build job descriptions- really things that film schools are really bad at teaching. They don't teach those, right? And everybody, of course, everybody wants to get to the lenses and the cables and the- I know that. We all wanna see that stuff. But really, none of that matters if you can't get a proper budgeting system up, a proper payroll system set up, a proper website stood up, a name that's trademarked. Like, you gotta do all that stuff. And right-brained people, again, creative people, are bad at that. It's not a criticism, it's just the truth. I'm bad at it. And so, going through it, I wanted to document it to help people understand that being a proper business owner is necessary if you wanna really be a great creator. Because the greatest of all creators are great business people. If you look underneath that, you find out they're great at business. Jim Cameron started Digital Domain, right? Like Lucas started ILM, right? Like Peter Jackson helped start Weta. Like, these are people that are building businesses that are very, very organized inside of their ability to create. Then they can create whatever they want, 'cause they have this infrastructure of post-production and workflow and business, uh, tied to it. And then that business can do work for other companies, right? And those are just a few examples, but these are all really, really important, um, attributes that I think are missed in the scholastic sense of creatives. And, um, so I wanted to document that. It's also true, like, this is what we were doing, so if you watch the show from stage one- uh, next week is episode 20. So starting with episode one, I Quit My Job. I was kind of debating, was that a good idea? And then we just worked the problem out. And then we discovered Strada along the way, trying to solve problems and figure out what the problems were. And then an episode coming up will be the launch of the beta, and you can watch the whole thing. Someone, some- I don't know who they are, but someone said, like, this is like a Netflix show. And they said they've never seen a Netflix series on YouTube, and they called us that. I'm not, I don't know if that's- I should take that as- That's a good compliment. I mean, I think, you know, you bring, uh, the high-quality, uh, production value to it as well, which makes it feel, uh, different. Um, yeah. It's interesting you talk about the creator and business thing, because I feel like, you know, you're giving examples from the creators at, like, the pinnacle peak of, like, Hollywood production and creativity. But I feel like we see the other end now a lot more in the creator economy and creator space, of people who, like, make stuff on YouTube and then have success and then realize, like, I need to build the business aspect around what I'm already doing, from the other direction.
Yeah, I think that's, uh, definitely an important trend that you've been, uh, talking about and documenting. And I learned a lot about just the startup ecosystem in general and, uh, your, your videos about equity versus, uh, salary compensation and, uh, just figuring out the balance behind that and building teams. Yeah, that's what we tried to do. It turns out, what I calculated this week is I've written 60,000 words, um, across this series, which is basically a novel. So someday I wanna write a book. I kind of wrote one, and now I know I can do it, because I wrote these scripts. Every Saturday I sit down and I just kind of chronicle what happened this week, and I write it. And then on Sundays I record it. And then on Monday, Tuesday, Wednesday, we edit it. And then Thursday we polish, sound mix, and then we cut some promos. And then we do it again every week. And it's like a little mini TV show. What's really important to us- the, um, creator of the investment group, uh, Y Combinator, Paul Graham, he has a phrase that we use a lot around here. And he says, "It's better to have a hundred people love you than a million people that like you." We're not after millions of likes. We're after a few people that love it. And we're trying to help people at, at where they are- meet them where they are and help them. And, and if we could build a product that people love, a few people that love it is really what we're after. And, uh, in our YouTube world, we have some wonderful people that follow us and are using it and giving us the feedback that they're able to actually take this information and use it for themselves to do better work and set themselves up. That is so awesome. It's so exciting that it's actually useful, because this is the hidden- these are the secrets of business that nobody wants to talk about. Setting up a payroll system, buying a website domain. Not only does that sound boring- so I tried my best to make it entertaining- but it's also stuff that nobody even talks about, 'cause you just think it's like a box you just check. Oh, just get the website made. Wait, that's a whole- that's like a month of work right there. How do you really pull that off? And I wanted to, like, tell that story, because everybody needs to go through it. And when we told that, everybody came outta the woodwork saying, oh, it's the worst. I can't stand it. GoDaddy's a scam. People, like, all had the same experience we did. I had no idea, 'cause nobody talked about it. I never knew everybody has the same woes. And so it's like, by putting it out there, I learned, oh, everybody's feeling the way I do. Okay, we're not alone here. It's a, it's a real nasty part of the business, but we all gotta do it. Can you walk me through- you mentioned, in some of your videos, you talked about some of the tools you use. You already mentioned sort of your production schedule, but, uh, some of the AI tools you used. Um, some of those I, I hadn't even heard of, and I thought they were very interesting in, um, how you put together some of your archival footage and up-res'd, uh, various shots and stuff. Yeah, my, one of my favorite tools is Remini. Remini is owned by the business Bending Spoons. Bending Spoons has great AI tech. They, they invest in a lot of places. They're really powerful. But Remini does AI photo enhancements. You wouldn't say sharpening- Topaz is good for sharpening, right? But, but Remini actually rebuilds, like, skin and hair and eyebrows and things like that. My only problem with Remini is it goes too far.
So you usually have to back it off separately. Um, but what it can do automatically is amazing. So I use Remini now constantly. I have the phone app, I have it on my desktop, and I'm constantly running photos through Remini. And then what I do is I bring it into, um, Photoshop, or sometimes even into Final Cut, and I just mask out the part I really want the enhanced part to be in, 'cause it, it, it over-enhances everything. But I use this all the time. And then, in the middle of us making the show, Remini added video. You could start doing a video. Now, the problem is the video enhancement they have is short. So you can only do, like, smaller clips, but you can run small clips through it and it will start to do it in motion. It's slow and it is short, but it gives us video solutions. So we have a, a way to start doing that. Uh, another tool I use is Evoto. Evoto is like Photoshop retouching, but it doesn't require any training. And if you wanna get really good at, like, retouching wrinkles and skin and hair, and even wrinkles in shirts versus eyes and stuff, you have to be really good when you're messing with somebody's face. But in Evoto, you do not. Evoto basically takes an image and it carves it up invisibly into, like, 50 different layers, and those layers are now sliders, and a layer is just, like, eye wrinkle or smile line or forehead or, or this. And in Photoshop you have to carve all those out, label all those layers, and you have to work all your tricks to do that. In Evoto, you do not. Now, if someone says, I can get Photoshop to do better, that's fine. I would say Evoto will take you 80% of the way there with 10% effort. And that's why it's powerful. Not because it can do everything Photoshop can- it can't- but it can get you 80% of the way with a fraction of the effort. And so I run everything through Remini first, and then I back it off, and then Evoto. And what I'm able to get in, like, 60 seconds is, like, what used to take me, like, 20 minutes. And that's a big difference when you're doing batches. Those are a couple of AI things that I am using pretty much daily, and those subscriptions I'm consuming a lot. And hopefully, they'll be able to be plugged into that orchestration layer in Strada so that you could start doing it in batches across assets in a video workflow, hopefully. Right. Yeah. You're describing sort of the round-tripping you were doing, and building out these things was sort of like the lead-in to this is what you're trying to solve with Strada, where you can just do it all inside, uh, the platform, before having to round-trip it back and forth to Final Cut. Um, and another interesting one that you had talked about was eMastered, where you were mixing, sending out your final audio mix. That one I hadn't heard of, and you were just throwing your entire YouTube mix to eMastered and waiting for it to just mix your, mix your levels. Yeah, like, I've been using eMastered for years. It was way ahead of this AI train that we see in the zeitgeist today. eMastered has been around for many years. It's just a model that allows you to upload a premix and it'll mix it. Now, a real mixer- I talk about this in episode 13, um- but a real mixer is gonna do a better job. But when you don't have the proficiency, skills, or knowledge, or time to do a proper super mix, or the money, then eMastered is gonna get you 80% of the way with 10% of the effort, and that's where this AI as a utility is really powerful. You commented where it's like AI is not really replacing the high-skill-level professional.
But when the option is I'm just not gonna do anything, because I don't have the money or resources or time to figure out how to do it myself and I can't hire someone to do it, that's the gap that these, like, utility AIs are gonna fill- not replacing the craft professional who is a pro at this. That's exactly right. And, and what you're talking about, this is the hot sticking point that creative people need to understand. They gotta separate this battle in their mind. And you actually used the phrase gap, and, and what we call it is the skills gap principle. This idea that AI is not going to just unilaterally eliminate creative jobs. It's not that simple. When there is a skills gap, like for me, mixing, or a time gap for mixing, or a money gap for mixing, then AI can go in and fill that gap to some degree, and it's gonna be better than nothing, right? And in some cases it might be good. In a few cases it might be excellent, who knows, right? When I look at Evoto and Remini together, the results I get are excellent. And I am not a photo retouching expert. But when I do those two together, you wouldn't know that. You wouldn't know that I, I could do a pretty good job with those two. And my mixing- I'm a musician. I know how to record. I can write and play songs, but I'm not really good at the final mix. I don't know if it's- I can't hear those little differences. I don't know which order to put the filters in. I just don't have the discipline, and I'm maybe not patient enough. But when I use eMastered, it warms the voice. It separates the layers. It makes it feel wider. And it, it, it just levels everything, and it, it, it comes back really great. So I use it for the YouTube show. We just get a premix. I mean, literally, the editor puts in the sound effects, music, and the dialogue, and she cuts and cuts and cuts. Andrea cuts all my stuff, and then I just send it to eMastered and I get it back. I get a little toast notification, it's done. And I put it in, and very rarely do I do anything else. Like, literally, I do nothing else. I get that, I tuck it in, I, I just keep going with whatever else I'm doing. And it sounds pretty good. Someone might say, oh, I could make it sound better. Good, fine. Are you gonna do it for free for me? Because if not, it's, it's a non-starter, because I don't have- it's not an option. And so it's like, this is fine. Now, if you're a professional sound mixer, you are- see, the funny thing is, like, oh, so you're trying to eliminate sound mixers? No, the professional sound mixers are gonna mix the same way. But these professional sound mixers would do the same version of eMastered for color correction, 'cause they're not colorists. So they want the auto color 'cause they love mixing. The colorists want the auto mixing because they love the color, right? And that's the skills gap principle. Move the AI into the slot that is not what you're great at, and it will elevate your project, and you might be able to increase the quality with limited resources. Um, with our NAB conversation, which was a highlight for me, uh, at NAB- I really enjoyed that conversation, and it has stuck with me, uh, a long time since. Um, but in that conversation, too, uh, you were definitely more gung-ho about generative AI. I think you said something like more images, more videos were going to be generated than shot. And that was one of the lines, and I know some people kind of got hung up on that.
The percentage of things that are photographed or recorded in the world for productions is going to go down. But then in the Strada demo, you said, uh, I think generative is a little, little bit overhyped and, you know, it's more about utility: "But the fact of the matter is, I think gen AI is overhyped. There's purposes for it. And again, I'm a user. But really, the AI that I think is going to serve the creative community better is utility." So I just wanna know- eight months in AI land is like a million years in the real-life world- but, like, what, uh, you know, what, what shifted in your perception of, like, uh, the generative tools and stuff? I know exactly what you're talking about. And so what I was referring to at the very beginning of that conversation, um, eight months ago, is basically the quality of the results. The quality of the results was gonna hit a level where they could replace photography, right? Because they look real enough, they look proper enough, and everything in it is, is there. What I learned over time is that while they can make the quality look perfect and they could get the things that you're trying to get, they cannot repeat it. Large models are bad at repetition. And fine-tuned models might be able to repeat things, but they're not as good at being able to create, um, the, the wider array of things. And so it, it's really interesting. That's what changed: the consistency. We- and that's why, when I, when you watch companies that make generative tools, one of the things- you gotta really pay attention to this- this isn't a criticism. It really isn't a criticism. It's sort of like reporting it's gonna rain tomorrow. That's not a criticism, it's just bad news. It's just the news. I wish it didn't rain, but it's gonna rain. Okay? So it's not a criticism, but when you watch these tools that make generative stuff, what they do is they make these amazing reels. They're like a, a show reel, a montage, right? Those montages are amazing. They're amazing. You're like, oh my gosh, photography, video, motion, all this stuff looks amazing. But if you really think about it, that's all it can make is a montage. That's all it can make. Because if you put two characters in a car driving down the Sunset Strip and then you regenerate the next shot, it's a different car, it's different characters. The sun is now in the north instead of the west. And, like, all sorts of things are problematic, right? There's no continuity across that. And unfortunately, our minds pick up on that stuff. If somebody's jacket changes, we might not say his jacket changed. We might be like, what? What just happened? Like, we might not even know it consciously, but subconsciously we're gonna be like, wait, what just happened? Or if somebody's eyes change, but the rest of their body's the same, we're like, what just happened? We pick up on that stuff. And it starts to break down our ability to suspend disbelief. And that's why continuity is important, 'cause when the candles move, we're like, oh, the candles moved. And then we lose suspension of disbelief. Well, if the person changes or the car is a different car- a Cadillac became a, a Bentley- it's like, wait, that's not right, right? And that's what these models don't show when they do montages, because the montages don't have to worry about continuity. So when you're doing montage stuff or snippets or clips or stuff, this stuff is amazing. It's great.
What we have to work on, which is gonna take a long time, is to be able to stretch that continuity. Now they're starting to try to do that, where you can circle things and say, save this, keep this. But it is not at the level that creative people need to retain. It's certainly making a step towards that, but it is nowhere close. So when I think of replacing content, it's just gonna take a long time before the continuity is totally in control. And this is what it's gonna require. For Hollywood-level productions to use generative AI to make something, you will have a production designer, a wardrobe person, hair and makeup. You'll have, you'll have, uh, certainly a cinematographer. They will all be there, but they will be prompting the material to look the way they want it to, and it will be locked in and consistent, shot over shot, scene over scene, take over take. And that's a long way away, long way away. So it's not something, it's not something encroaching on us right now, which is why in-real-life will still be the preferred way to create, but we can augment in real life with AI as a utility. I think that's the bridge where AI should be useful to the in-real-life community. Mm-hmm. Yeah. Speeding up everything we're doing. Speeding up our workflows. Making us faster, better, more efficient. And, and real quick, just, like, you know, even, even mistakes- like, we always have a C-stand in the shot, a boom mic in the shot. Those are problems. Now they're easy to fix with AI, right? A logo that shows up. If you're doing non-fiction, what do they do in non-fiction? They have someone do a logo pass. First, it's a lawyer, right? After the lawyer, they log it all. Then they need someone to go in and they gotta track it all. Then what do they do? They blur it. AI can not only fix all that and, and, and remove this goofy, kind of annoying blur floating around the shot, it can actually replace it with either no logo or a logo that is approved or your logo, right? And so, all of a sudden, it gets better. And now there's no more blurring of something. There's actually control, and it's actually working towards the content or the story intentionally. That's the type of stuff where generative AI rebuilding a shirt that has a, you know, a soda logo, and it's now gonna be a different logo- like, that is really helpful and practical. The lawyers are happy, the creatives are happy. If you get a partnership and you go to some company and say, hey, Pepsi, do you wanna sponsor the show? We'll put a bunch of Pepsi logos in the show. Sure. Now there's a revenue opportunity there, and it's gonna look natural, right? These are opportunities I think are really, really cool. Plus, a, a directing pair, friends of mine, John Requa and Glenn Ficarra, they're like, it's always a struggle to get great audio on location. And the best place to place the mic would be in the shot, 'cause that's like, put the mic right there, 'cause it'll sound the best. But it's like, well, then there's a mic right there. He's like, I just wanna do a table scene at a restaurant. Put the microphone on the table and just paint it out. And, and he's like, I, that's what I want. I don't wanna, like, have the boom and have to, like- people don't always want, you know, wired mics up their body and stuff like that. And so it's like, just put the thing on the table and just paint it out. Like, that's the easy stuff. Also, if you're low budget, that's easy.
Just put the mic on the table, 'cause then it sounds good, and just paint it out. Like, that stuff's gonna be so awesome. And, you know, you just tell the actors, don't touch the mic. Okay, fine. And then it's fine, 'cause it'll sound great and it'll look great later, and it'll be so easy to paint this stuff out. That's the type of stuff that I think this generative AI is gonna be used for, for us- not to just create the whole world, but to actually augment the in-real-life world to the way we want it to be. Yeah, I feel like the first case is gonna be these invisible effects, uh, where we just don't even realize that it was used or that it just changed something, where it's not calling attention to itself, it's just making things look more natural, more realistic. It's funny that you mentioned the mic thing, though, 'cause I feel like, just as an aesthetic look, um, since the RODE mics came out, the clip-on ones, like, I feel like that- everything with microphones has been like, we gotta hide the mic, hide the mic. And then somehow, I mean, probably with the YouTube and TikTok generation, it's, like, now become completely acceptable to just clip the huge big pack on your shirt. And, um, it's fine. I mean, as an aesthetic look, it's become a thing. I wouldn't, I would, I would say don't use that for, uh, for narrative filmmaking, but, uh, yeah, it's just been, it's just been an interesting observation where no one- I've made that observation. I think it's, it's funny too, I'm like, okay, I guess we're doing this, but- We went to great extents to hide lavs, and now you just clip the big box on your shirt. Yeah. Speaking of the mobile filmmaking space, you've done a handful of videos diving in depth into the iPhone, the iPhone 15 Pro, Log, getting it to look more cinematic. What kind of prompted doing these deep dives into the iPhone, especially your most recent video as we're recording this, of creating these depth maps to, like, really push the limit of how you can adjust the, uh, the depth of field and what the iPhone can do? Well, at its core, democratization is always at my core, so I wanna discover what people are doing. Plus, the creator economy has 50 million people in it, and that's expected to double to 100 million people. So there's so many people that need access to technology and techniques and workflows that it's just ever-growing. And so, um, this is an opportunity for me to experiment with what are the pain points, what are the opportunities, what are the ways to make things look and sound better? Because as more people are going to create content, which is good, we don't wanna lower the bar of what that content looks like, and we don't wanna make it harder for people to make it look and sound good. So there's opportunities. And one good thing about creative people- musicians are a good example. Musicians cannot help but go to Guitar Center and spend money. They love it, right? And filmmakers are the same. We love dropping money on gear. We- You go to B&H, you go to Adorama. Yeah. Yeah. Absolutely. Absolutely. Like, oh my gosh, I gotta, I gotta spend money, right? And you go into these places and you're like, I, I can't believe how much I just spent, but you love it, right? So people like to buy gear. But there's so much gear out there. What is the best deal? What is the best fit for me? It gets kind of confusing and hairy. And so the iPhone to me is just a way to show, if we can make an iPhone move upscale, then everything else- you have a place to orient.
We need to have a bottom to have a top, right? And it's always funny, 'cause at the conference people are like, why don't you shoot on Alexa? I'm like, on some continents there might only be 50 Alexas- in all of Africa, I don't know. But there's probably 50 in the zip code that I'm in in Los Angeles, right? So it's like, you gotta think about accessibility. People don't have access to even, even, like, FS7s. People don't have access to those. So it's important for us, especially when we have the knowledge and experience and research and resources, we need to help other people understand what they can have access to. And I find people have a real hard time understanding that we're spoiled. If you have an Alexa or a RED or a Sony near you, you're lucky. And it's like, not everybody has that, so what can they have? And so I just wanna push the iPhone, because I like showing how far it can go. And Apple has certainly got some great talent working there that are trying to push the iPhone into the cinema space. And the fact that it's even conceivable that you can compare it at all is incredible. And, and even though the people are nitpicking it and stuff, they've fallen for my trap, because they're proving to me this is a product, this is an opportunity here. So if I could add AI depth of field as an automatic button to iPhone footage, I'm pretty sure, after the last video I did, people would push that button and use it, because it's like, yeah, that made the iPhone look better. If I shoot ProRes Log, add a good LUT, make some depth of field, it looks better. Then what do you want? That's it. Now, if you can afford an FS7 or an FX9 or a, a 1D C or a VENICE or whatever, like, then go for it. That's great. But if you can't, here's some available tools with some AI to make you at least a little bit closer. For me, watching it, it's been a bit of just, like, blurring the line of what is acquisition- where is the camera and where is the final image. Because even with the iPhone itself and taking the photos and the portrait mode, where it's doing all of this AI processing, uh, and machine learning, like identifying faces and stuff and processing it in camera, uh, and sort of extracting some of the steps into what you did in your last video of building this depth map, where you're capturing on the camera, but then you're still doing some processing- which I imagine the future of that is, like, a step where Strada could come into play, of, like, how you are handling the footage and then what you are doing with it afterwards. And even, uh, shooting on a RED, or debayering- it's like, you shoot raw and you still have to take these steps to make it something that you can use. And it's just sort of, like, this blurring line for myself of, like, where from, like, acquisition to, like, when you're actually done with it, uh, kind of seems to shift. And also, pulling in these tools, it reminds me, you know, of going back to when, uh, DSLRs started being able to shoot video, on the 5D, and then seeing features adapted into other cameras. 'Cause it was like, okay, this could shoot video, but there are a lot of issues. Like, there's no sound input, there's no, uh- it overheats in, like, two seconds. You can only record for, uh, whatever the limit was at the time, 15 minutes. Technology from some of these "lower" tools, quote-unquote, um, then being adapted into things that we might see five, 10 years from now in cameras.
And I forgot which Sony model- it was a consumer camera- but even that had some AI features built into it, like face recognition and reframing the video based on, uh, identifying the face and cropping it so that the face stays in shot. I guess the question would be, like, are you diving into the iPhone so much because, like, you feel like this is a hint at, like, where we're gonna see how we capture images in higher-end models, like, in the future? I think the reason I'm fascinated by the iPhone is because I believe Apple is making intentional steps towards moving into a more professional cinema space, which would be an entirely new camera technology. And the reason I'm fascinated by it is because every time a company has moved into the camera space, it's always met with some resistance, because, like, it's not good enough, it doesn't have this or it doesn't have that. Those are always true at the beginning, but they have to have people pioneer it through and make it work. Like, I'm not gonna do this, but right now, based on all my tests, I could probably start up a company that builds a cage. Inside the cage is a dongle that has the USB-C, and that USB-C gives you an HDMI, converts it to SDI. It also sends that USB-C to a hard drive, so a little SSD that you could pull out, right, and it also has a little, bigger screen on it. And you could basically build a product that you just clamp the iPhone to, and it has all these features- wireless video, external video, hard drive, wireless lens focus- like, it would be all there. That's so cool, 'cause you just build that product and people would absolutely use it. I think it would be big in news, in sports- like, journalists would do that, 'cause they don't wanna have to bring around more than they need, and this would be really powerful. So there's a lot of value here. I just find that using it and pushing it in the cinema sense is not- people never get this, but I don't push this technology in the cinema world to show that it would replace the cinema world. I have arguably the best cameras in the world and the best lenses in the world in my living room. I have access to that stuff, so I understand what- and if people look up my IMDB, I understand what it takes to make a high-caliber, Oscar-nominated film. What I'm trying to show is, if you can stretch things all the way, it means it will relax into a nice spot for the average person. And that's what it's all about- pushing things too far so that you find out the sweet spot. It's why most cars can go 130, 140 miles an hour. You need them to be able to do that so they can drive normally at 65, right? And so a lot of people don't realize that that's how all technology works. If you just make the highest-performing element of it- the redline, burying the needle- then it's not gonna run very well in normal situations, right? The average situation. It also reminds me that the only way to make the iPhone look good is with a good crew. Makes perfect sense, but it's like, I need to have a gaffer. I have, I have all these Aputure and Creamsource lights, and you can't see all that. Well, kind of- in the behind the scenes, you could see there's lights everywhere. There's no magic here. There's no, like, shortcut. You basically have a ProRes recorder- that's what an Alexa is, a ProRes recorder- and you gotta have a bunch of lights, a bunch of stands, a bunch of good people, a tripod and a slider, and, and Apple's just a ProRes recorder. It's a sensor.
And that's actually how everything- actually, how everything works. Everything works that way, and there's no magic here. But the fact that the sensor is good, the quality of the recording is good. And a tip: if you're gonna try to use the iPhone, you gotta really just use the 24-millimeter lens on the iPhone 15. The 24 is the sweet spot. Once you go to the other lenses, it doesn't really have the same- it's, it's very different. And I would say it's, it's not as good. Um, so the 24 is, what I learned, is, like, the sweet spot. Just stay on that. But of course the 24 has, like, infinite depth of field, so that's what the whole depth of field assignment was. It's like, can we automate, with AI, a way to simulate a shallower depth of field? And, and we did. And, and it looks okay. It looks better than I expected. Uh, some people would say, I can't even tell the difference. Okay, that's a win. And if, if it's even 20% better, well, that's, that's a lot, right? And I would say it's at least, at least 50% better. Plus a good colorist- Nick Lareau is my colorist. You gotta have a good colorist to make this stuff work. But ProRes 10-bit files on an iPhone, on the 24-millimeter lens, are amazing. Put some lights, put a good-looking person in front of it, have a good colorist tune it at the end. You will, you'll blow your own mind. There's a lot of great YouTube videos of a lot of colorists doing that, and they're like, wow, this stuff looks really good. I'm, I'm impressed with my own footage. And I haven't had that feeling since, like, the RED ONE days, where I was like, wow. Or before that, the DVX100 days. I was like, wow, I'm so excited by the footage I shot. I love that feeling. Who doesn't love that feeling, right? And so the iPhone is just the craziest way to get that feeling and open up democratization in a way never before possible. Yeah, and I think, to what you said, it just highlights- the, the, the shot-on-iPhone, uh, keynote that they did, when they revealed that it was shot on an iPhone, just highlights more the importance of lighting and crew and coloring and, and, and everything else around the device that has the sensor that you are filming with. Yep. Yep. Someday cameras will just become sensors. They just will. Someday, the entire system will just become a plate with a sensor and a lens collar, and, and all that technology will just fit into a probably 21-millimeter-thick system. And we'll just be shooting on sensors. It's not that hard to believe if you think about it, but that's, that's where it's gonna go. And so people need to realize that what Apple's doing is just an early version of, of that, and the gack that you have to build around it will continue to be more and more wireless, require less and less stuff, become more and more concise. Um, but talent and crew are always gonna be required. No, there's no magic way to do that. Now, color correction in post is replacing some of that, because you can, you could do a lot more in color correction. So average lighting can look better with good color correction. That's true. That's been true for a while. But you can make good, good lighting exceptional with color correction, right? That should be nothing new. But the iPhone is now in a league where it gets to compete in that space. ProRes 10-bit Log recording is the differentiator. If there's anything, it's like, well, what changed? ProRes Log 10-bit recording to an external drive. That's what's new about the iPhone 15 that is going to change the narrative.
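(A minimal sketch of the depth-map trick described above, assuming a frame exported as frame.png and a grayscale depth map exported as depth_map.png; the file names, the single-blur blend, and the parameter values are illustrative assumptions, not Michael's or Apple's actual pipeline. The idea: blur each pixel in proportion to how far its depth value sits from a chosen focal plane, which simulates a shallower depth of field on the wide 24 mm footage.)

```python
# Simulated shallow depth of field driven by a depth map (illustrative only).
import cv2
import numpy as np

frame = cv2.imread("frame.png")  # one frame of the 24 mm iPhone footage (assumed file)
depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE)  # depth map from an ML model or the phone (assumed file)
depth = cv2.resize(depth, (frame.shape[1], frame.shape[0])).astype(np.float32) / 255.0

focal_plane = 0.4  # normalized depth you want to keep sharp (0 = near, 1 = far)
max_blur = 31      # Gaussian kernel size for the most out-of-focus pixels (odd number)

# Per-pixel defocus weight: 0 at the focal plane, 1 at maximum distance from it.
defocus = np.abs(depth - focal_plane) / max(focal_plane, 1.0 - focal_plane)
defocus = np.clip(defocus, 0.0, 1.0)[..., None]  # broadcast across the color channels

# Blend the sharp frame with a heavily blurred copy, weighted by the defocus map.
blurred = cv2.GaussianBlur(frame, (max_blur, max_blur), 0)
result = (frame.astype(np.float32) * (1.0 - defocus)
          + blurred.astype(np.float32) * defocus).astype(np.uint8)

cv2.imwrite("frame_dof.png", result)
```

(A production version would use a variable-radius blur per depth band and a proper monocular depth model to generate the map, but this blend is the basic mechanism behind faking a shallower depth of field after the fact.)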
I think if Apple made a camera with an RF mount that was 20 to 30 millimeters thick and wasn't a phone, I think it would be a big hit. Yeah. Maybe after they launch the Vision Pro, that'll be their next product. So. Yep. Maybe. Well, I really appreciate the time. Uh, do you wanna run through real quick again? Uh, we've got the private beta launching, and then you have the NAB and IBC roadmap, uh, and then where people can sign up. Yeah, you could find us at strada.tech. You could sign up for the app. Uh, the private beta will come out February 1st, which- this is the first time I've said that, so I'm so excited to share that- uh, February 1st. It'll be a private beta. We need that feedback to grow from there, to the public beta, to version 1. It's gonna be a crazy 2024. Uh, but I'm so excited about trying to revolutionize workflow with the cloud. And that is it for this episode. Thanks a lot for watching. And thanks to Michael for coming on and sharing a lot of insights on Strada and just how he's rethinking the future of workflows and post-production. If you enjoyed this episode, please give it a thumbs up, a like, subscribe to the channel, wherever you are listening to this. And once again, if you listened this far, you'll probably like the newsletter that we've got twice a week, VP Land. Head to vp-land.com to subscribe, or just check out the links in the show notes below. Thanks a lot for watching. I'll catch you in the next episode.