How I AI

How an Emmy Award–Winning Director Continues to Create and Evolve with AI

Brooke Gramer Season 1 Episode 31

In this episode of How I AI, I sit down with Michaela A. Ternasky-Holland. She is a Peabody-nominated, Emmy Award–winning director using immersive technology, animation, and AI to tell deeply human stories.

Michaela has built a career at the intersection of creativity and innovation. She was also among the first filmmakers to premiere a short film created with OpenAI’s Sora at Tribeca Festival. 

We talk about how she’s evolving with new tools while staying grounded in story, using AI to expand what’s possible for independent creators and small studios without replacing human artistry. From worldbuilding and animation to ethical storytelling and cultural preservation, Michaela is redefining what it means to direct in the age of AI.

Topics We Cover:
• How AI is transforming animation and immersive storytelling
• The economics of creative production in an AI-assisted workflow
• The evolving relationship between artists, technologists, and AI tools 
• The making of Kapwa and how cultural storytelling meets technology

Tools Mentioned:

AI Creation & Production
Runway, Midjourney, OpenAI Sora, ElevenLabs

Development & Worldbuilding
Unity, Unreal Engine

Voice, Sound & Post-Production
Descript, ElevenLabs Music Studio

Machine Learning & Open Models
Hugging Face

AI Research & Writing
ChatGPT

Projects Referenced
• Kapwa
• Sun Still Rises (AI-assisted short premiered at Tribeca Festival)
• Reimagined Vol. III: Blue (animated series with 2.2M+ YouTube views)

Connect with Michaela
Instagram: michaela.ternaskyholland
michaelaternaskyholland.com

Ready to cut through the overwhelm?

  • Explore all my resources in one place → https://stan.store/BRXSTUDIO
    Free AI Guide • 45-Minute AI Audit • AI Community • Custom GPT
  • Step into The Collective’s 7-Day Free Trial and unlock instant access to our AI Mastery Challenge, a vault of high-level trainings, and recordings from our most mind-shifting webinars.

More About Brooke:

Website: brookex.com

LinkedIn: Brooke Gramer

More About the Podcast:

Instagram: howiai.podcast

Website: howiaipodcast.com

"How I AI" is a concept and podcast series created and produced by Brooke Gramer of EmpowerFlow Strategies LLC. All rights reserved.

Michaela:

And as a filmmaker and a director, having these AI tools has allowed me to tell stories using the art of animation, which is a very expensive medium. It's allowed me to still use my high-level creative art directors, my high-level animators, my high-level score composers, sound designers, but empower them to tell a bigger, longer story for less of a budget, because they are assisted by these AI tools. They're not replaced, but they're assisted.

Brooke:

Welcome to How I AI, the podcast featuring real people, real stories, and real AI in action. I'm Brooke Gramer, your host and guide on this journey into the real-world impact of artificial intelligence. For over 15 years, I've worked in creative marketing, events, and business strategy, wearing all the hats. I know the struggle of trying to scale and manage all things without burning out, but here's the game changer: AI. This isn't just a podcast. How I AI is a community, a space where curious minds like you come together and share ideas. I'll also bring you exclusive discounts and insider resources, because AI isn't just a trend, it's a shift, and the sooner we embrace it, the more freedom, creativity, and opportunities we'll unlock. Have you just started exploring AI and feel a bit overwhelmed? Don't worry, I've got you. Jump on a quick-start audit call with me so you can walk away with a clear and personalized plan to move forward with more confidence and ease. Join my community of AI adopters like yourself. Plus, grab my free resources, including the AI Get Started Guide, or try my How I AI companion GPT. It pulls insights from my guest interviews along with global reports, so you can stay ahead of the curve. Follow the link in the description below to get started. Storytelling in creative spaces is evolving faster than ever, and today's guest has been right in front of that wave her entire career. Michaela Ternasky-Holland is an Emmy Award–winning director whose work blends cutting-edge technology with timeless human stories. From pioneering one of the first films ever created with OpenAI's Sora to designing interactive installations that reimagine Filipino American history, her projects don't just entertain. They spark dialogue and transformation. In this episode, Michaela shares how she approaches new technology with intention, why she believes AI should enhance creativity rather than replace it,
and what it takes to craft immersive experiences that stay with people long after they leave the screen. If you're curious about the future of storytelling and how to wield AI without losing the human touch, this conversation will inspire you to think bigger about what's possible. Alright, let's dive into today's episode. Hello everyone, and welcome to another episode of How I AI. I'm your host, Brooke Gramer. This week's guest is pioneering the future of storytelling with immersive tech. Welcome, Michaela. So happy to have you.

Michaela:

Hi, Brooke. Thank you so much for having me. I'm so excited to be here.

Brooke:

Absolutely. And before we get started, please, I'd love for you to share with listeners today a little bit about yourself and how you ended up where you are now.

Michaela:

That's a great question. So my origin story is really non-traditional. I actually went to school for journalism, but I dropped out a few semesters in so I could pursue my career dancing and performing on Disney Cruise Line, and then came back to finish my degree and continued to perform at theme parks in Southern California. During this time, I was really in awe of the power of immersive and interactive storytelling for the audience to have a really beautiful experience and even a transformative experience. And so I decided to apply some of these immersive, interactive techniques to the format of journalism and nonfiction storytelling. And so I went on to work with TIME Magazine, People Magazine, Sports Illustrated. From there I went on to work in social impact for the United Nations, Games for Change, the Nobel Peace Center. And then even from there I was like, I love social impact, I love journalism. What if we just expand this to full narrative now? And so I did work with Meta and I did some consulting for some nonprofits around how to tell their stories from a more fictionalized standpoint, and now I have continued that process to lead me to the use of generative AI. It's been a very natural part of my career. I've seen many hype cycles of technology come and go. I've seen use case scenarios that have been really powerful for emerging technology. I've seen use case scenarios that feel a little bit more like, we're just gonna show off the technology, but we're not gonna really think about the audience, or we're not gonna really think about the impact that we wanna have from a storytelling perspective. And so what I really like to do with my work and my artistic practice is to take these emerging technologies, or to take familiar technologies, and really think about how we can tell a really strong story for the audience and how we can create a really strong impact or experience for the audience as well.

Brooke:

Wow. I want you to take me back to when you saw that surge and explosiveness into AI and generative AI. When did that shift start? When did you see the industry changing a bit and shifting more into this digital landscape?

Michaela:

Yeah, I mean, I remember in 2022 having lunch with my uncle, who works sort of on the cutting edge of Silicon Valley doing different types of things, and he's just a big nerd. He's one of the original Silicon Valley, eighties and nineties kind of folk. And he was telling me, you know, Michaela, you should really check out what they're doing right now with AI. It's really powerful. I had done some algorithmic AI, had done some machine learning projects, but nothing that was generative, right? And so he was explaining to me what generative AI was, some of the early Stable Diffusion models and what they were doing; he was reading some of the research papers. And I remember taking note of it, being like, that's so fascinating. That's so interesting. But it wasn't until 2023, when my really good colleague and collaborator, Aaron Santiago, premiered a project called Taltamanster at Venice Festival, a project that took your answers, created a script, and created a 360 world all in real time, that generative AI really started to come into the art scene for me and my community, the digital artist community. And so I remember in 2023 thinking, okay, generative AI is here. It's not just this thing that researchers are doing. It's not just this thing that Silicon Valley nerds are doing; it's actually being used now for creativity and artistry. And I remember thinking, I won't just use this for the sake of using it. It was around late 2023, 2024 that ChatGPT really came online. I wasn't just using ChatGPT every day for my personal or my professional tasks. I was really wanting to be specifically intentional and conscientious of how I used the generative AI technology, both in my personal life as well as in my professional life.

Brooke:

Cool. And so when you started playing around with creativity and artistry, mapping and merging these tools and software systems, what did you start to lean into first? What was the toolbox you were playing around in?

Michaela:

Yeah, great question. So, I mentioned Aaron Santiago before on this podcast. I have to give him his flowers. He and I came together as Filipino American creative technologist and artist and asked ourselves, what kind of story do we wanna tell? And I remember being very clear with Aaron. I was like, look, I know you're the generative AI guy. I know everybody comes to you for generative AI. I'm not interested in doing it unless it makes sense for the project we wanna do. And so we really thought about the story of how we wanted to portray the Filipino American experience. And something we kept coming back to was this loss of culture, this loss of connection, from being both Filipino, a culture that was colonized by the Spanish, and also being American, and having sort of both these big disconnects. And we came to this idea of, well, what if we showed Filipinos throughout American history? 'Cause we know Filipinos were here in America through the forties and the fifties and the sixties. What if we also showed what Filipino people could look like pre-contact with Spanish people? And we started saying, okay, cool. This is a really interesting way of talking about speculative past, speculative future, speculative present. And one good way we could do that is using generative AI as a metaphor for memory, as this metaphor for something that we know exists but that we actually don't have any real, quote unquote, verified human data for. So we can use AI to fill in these cultural gaps that we have as Filipino Americans. And we decided to pair that with social media videos of Filipino Americans making fun of themselves being Filipino, or talking about being Filipino. And so it became really this dialogue between the present-day zeitgeist of people online and some of these generative pieces that we were doing with Stable Diffusion at the time.
So that was really my first big creative foray into generative AI with my collaborator, Aaron Santiago. And we've continued to think about ways, not just to make, quote unquote, a film, which is one part of the practice I have, but to actually make an interaction, an immersive piece, something that audiences engage with, and about the ability that generative AI has to make that special.

Brooke:

Wow. And I know in our first initial call you talked a lot about Sora. What were the other tools, and how did you bring this to life, exactly? Did you have a team, or were you working by yourself on the creative direction, the storytelling, and mapping out the design? Did you have an engineer at hand? Take me through the process of bringing this to reality and how you had that come to fruition.

Michaela:

So it's different for each project, slightly. But to give the audience a rundown, I've done three interactive installations that utilize generative AI flows and generative AI technology. I've also made three generative AI-assisted animated films. And so when I'm staffing for either the installations or the films, it's two very different teams. So with Kapwa, it was really me and Aaron and a score composer. And we really created this gorgeous installation, which was about Filipino American erasure, and we called it Kapwa. Stable Diffusion was the generative AI tool Aaron and I really used for that piece. For our interactivity, we utilized a depth camera and TouchDesigner, which is a really awesome, almost QLab-like tool for independent installation creators and independent theater makers. And then I did all the editing from an audio perspective, the video editing. Aaron did a lot of the TouchDesigner work. It wasn't exactly engineering, but he basically worked in TouchDesigner and built the mechanism of the installation. And then our biggest gap as collaborators was music. And we wanted it to be original music from a Filipino American score composer. So our third teammate was that score composer. Her name's Anna Luisa Patrisco. So that was Kapwa. Now, every time we scale one of these installations, or every time we do a new installation, it becomes bigger and better. So our second installation was called Morninglight, which was an installation that utilized generative AI to read your tea leaves. And so we worked with CHICHA San Chen, which is a tea brand. We had a black box space that we were able to convert using gorgeous projections. We had a VFX designer create projections based on tea fields in Taiwan and a mountainous kind of environment. So none of those were generated. The generative part was really the tea leaf reading.
Then we worked with a score composer for that one as well, who's also a sound designer. And so on the backend of the generative AI tea leaf reading, we were using ElevenLabs, Hugging Face, and our own custom GPT. So ElevenLabs gave us the ability to have multiple voiceovers be the oracle, quote unquote, which is the being that's giving you your reading. So we could have a more female-sounding oracle, a more male-sounding oracle. And then we would also use Hugging Face to copy, basically, what was coming out of the custom GPT as the astrological tea leaf reading. Then we would use the ElevenLabs voiceover so that Hugging Face could copy the words and the voice and have you hear the voice say those words. So that's really the high-level backend system. There was also a whole other backend system for the music, because the music was specific to the time of day. It was specific to what was happening in the actual installation. And all of that was being run, visually and audibly, via TouchDesigner. So every time we do an installation, it's different. It changes. The tools tend to be both open source and closed source, because we are creating things where certain tools have those quick, out-of-the-box mechanisms, and other tools we need to build ourselves. There's not really a custom GPT out there that can give bespoke 20-to-30-second tea leaf readings based on each tea, based on tasseography, and based on astrology. So we trained our own custom GPT to do that. I think there's something very similar with the film part of that, and I can go into the film part if that's interesting. But the team looks very different on the film side than it does on the installation side.
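The backend flow Michaela describes, a custom GPT drafting a short reading and a voice model speaking it, can be sketched as a simple two-stage pipeline. This is a hypothetical illustration only: the function names are invented, and the two stand-in functions fake what would really be API calls (a custom GPT for the reading text, ElevenLabs for the voice).

```python
# Hypothetical sketch of the tea-leaf-reading pipeline described above.
# The two "model" functions are stand-ins for real API calls; they
# return canned text and fake audio bytes so the flow is visible.

def generate_reading(leaf_pattern: str, time_of_day: str) -> str:
    """Stand-in for the custom GPT: returns a short bespoke reading."""
    return (f"The {leaf_pattern} shapes in your cup suggest change; "
            f"this {time_of_day}, trust what you already know.")

def synthesize_voice(text: str, oracle_voice: str) -> bytes:
    """Stand-in for a text-to-speech call: returns fake audio bytes."""
    return f"[{oracle_voice} voice] {text}".encode("utf-8")

def tea_leaf_oracle(leaf_pattern: str, time_of_day: str,
                    oracle_voice: str) -> bytes:
    """Full pipeline: leaf interpretation -> reading text -> spoken audio."""
    reading = generate_reading(leaf_pattern, time_of_day)
    return synthesize_voice(reading, oracle_voice)

audio = tea_leaf_oracle("spiral", "morning", "female")
```

In the actual installation these stages were orchestrated through TouchDesigner alongside the music system; the sketch only shows the text-to-speech hand-off.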

Brooke:

Before we jump into the film side, going back to the interactive exhibits: first of all, I wanna attend one of these. This just sounds so incredible to witness. But my first initial thought, just as someone who has a marketing background and has worked in production and events before: when you have an idea or are crafting these projects, are you approached by brands with a budget for funding? Are you raising your own money to create these installations? How does that work? Is it always surrounding a specific event? Walk me through that process.

Michaela:

Yeah, so Aaron and I work as independent artists. I think both of us are open to commercial work, but we have yet to find a brand that wants to do something like this with us. So, the first project, Kapwa, that I mentioned, that was funded by Arizona State University through a residency program they had associated with the conference Worlds in Play. And then the second project I mentioned, Morninglight, was part of an inaugural tea festival. You could think of our client as WSA. Water Street, here in New York City, is doing cutting-edge technology and art and is sort of becoming a scene for influencers and tastemakers. Their resident curator, Karen Wong, curates and creates a lot of the shows at WSA for their nonprofit, which is Water Street Projects. And so our client was basically Water Street Projects, but instead of being treated like a vendor or a production company, we were technically considered commissioned artists. So we were artists who were commissioned to do this work alongside this tea vendor. And we were also told that the aspects of our quote unquote tea house that we were making were around immersive design and emerging technology. And so we sort of took, I guess you could call them, the aspects of the RFP, and we created the installation. The third installation I'm currently working on, which is in development, is called The Great Debate, and it's all about having your classic famous LLMs live-debate as political candidates. And you, the audience, are a debate moderator wanting to hear the sides of the candidates' stories. So you get to actually create the candidate, generate the candidate, select the candidate's leanings, and then listen to the three candidates live-debate each other in real time. And that's still a work in progress.
But, for example, a project like that right now is just in early-stage development, so I'm currently looking to raise money or to find a funder, to find a brand that would wanna work with me to continue the development of the installation.

Brooke:

Wow, that's so creative. What a fun industry you're in right now, and I'm surprised you haven't had any major brands reach out to you. I'm sure it's only a matter of time. I know in New York City they have really cool product launches and interactive spaces all the time. So I'm really interested to see what happens for you in 2026 and the future of that industry when it comes to generative AI and interactive exhibitions. I wanna use this as a little bit of a learning moment, in your own words, because maybe someone doesn't even know what the term generative AI means, for starters. And then secondly, what do open source versus closed source mean, in your own words? Just to have a little bit of a mini learning moment for someone who might be a little lost at this point in the conversation.

Michaela:

Sure, of course. So, I have to give credit to my colleague Rachel Joy Victor. I've heard her speak and lecture on AI as a whole, so a lot of what I'm sharing right now is the philosophy and ideology she's putting forward. But basically, AI is nothing new. Even as early as the 1600s, there was this idea of machines and robots, this idea of technology talking and responding to humans in the way that humans talk and respond to each other, right? So that is sort of the idea of an artificial intelligence. And what we've seen over the course of the development of AI, right, is this idea of being able to do the Turing test, which is: if I ask a machine a question, how will it answer me in a way where I know whether or not it's a machine or a person? And so oftentimes computers have failed the Turing test over and over again. Unless you had very specific data that you were training that computer on, so it was able to talk back to you in a way where you're like, oh, that answer could have been a human or an AI, the computer was often only able to succeed at very specific things. That changed over the course of time, because what used to require very intensive training of the computer, kind of getting it to do exactly what you need it to do to sound more human, they were then able to do by training the computer on massive amounts of data, because of the internet. Right. And the reality is, they were training computers already in this way, machine learning, but it was that classic issue of: they just didn't have enough data for the computer to be able to differentiate between saying something that sounded intellectually smart versus just saying something that was completely out of context. Because what computers are really missing is context, right? When they're answering like a human, humans always speak to some sort of contextual language around them.
Computers can only speak to the data that they have been trained on, and so, because the internet has really opened up and allowed us to train computers on vast amounts of data, these computers can now, quote unquote, take context and fill in the blank. So what generative AI is really doing is a very similar thing to what algorithmic AI is doing, but at a much bigger level. So I would say that traditional AI is like: here's my training data; based on my question, find me the right answer. But the answer is an answer I've already trained you on. For example, if I'm interviewing somebody and the interview answer is, I grew up in 1928, I might send that to a computer and say, hey, computer, tell me when this person was born, and the computer will say she was born in 1928. Right? So it's retrieving the data based on what I'm asking it. That's all natural language processing, right? That's the ability for it to recognize my question and then give me an answer. Where this has really exploded now is that what generative AI does, instead of just taking big answers that you've trained it on, is actually take the context of every single pixel for an image, every single letter for a word or an answer. So you would say, Mary had a little blank. You would probably say lamb. At one point the AI did not know it would have been lamb; it might have said, Mary had a little schoolbook. But because of the internet and the training data, it can now say Mary had a little lamb. Where these generative AI models start to really disintegrate, and I think most people see this, is when there's so much contextual data it could be anything, and the mathematical formulas this computer has to do to give you the, quote unquote, right answer get huge. Right? So let's see: Mary was in her classroom. Mary had a little pencil. Mary had a little pen. Mary had a little crayon.
So the illusion that this thing is intelligent is because it can just take more contextual data and spit out an answer it thinks is right. This is the same issue we see with image generation and video generation, right? If there's not a lot of data around how fingers move, or if there's not a lot of data on how people and physics move, then every time it's trying to create a human, or the fingers of a human, it's not gonna get it quite right, because it's not looking at the human as an overall anatomical human. It's literally looking at it pixel by pixel. So if the pixels start to warp, or if the AI starts to hallucinate, it's actually not the AI, quote unquote, hallucinating, because the AI doesn't know it's hallucinating. It's actually us interpreting the answer as a hallucination. It's just the AI struggling to fill in the context of the data it has. So generative AI, what I like to say is, it's a big illusion of intellect, because it's working word by word, letter by letter, pixel by pixel. A good example of this would be: you pick up your phone and you call your friend, your friend's phone rings, they pick up their phone, you start talking to each other. That is, quote unquote, intelligent technology, where it's very seamless. It's very easy. I call, they pick up, they answer. What generative AI is doing, instead of you calling your friend, is what Lord of the Rings did, where it's lighting a fire, and then once that fire's lit, it lights another fire, and once that fire's lit, it lights another fire. And so it's just sending these fire signals to itself over and over and over again until it has what it thinks is the right answer. And then your friend's like, oh, I guess Michaela's calling me, 'cause I see her smoke signal, I see her fire signal way off in the distance, so then I need to send a fire signal back to her, over and over. It's a very inelegant technology.
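The fill-in-the-blank behavior Michaela describes can be illustrated with a toy next-word predictor: count which word follows each context in a tiny corpus, then pick the most frequent continuation. This is a deliberately simplified sketch; real models learn probabilities over tokens with neural networks, not raw counts, but the "predict the likeliest continuation" idea is the same.

```python
from collections import Counter, defaultdict

# Toy "Mary had a little ___" predictor: tally which word follows
# each three-word context in a tiny corpus, then pick the most
# frequent continuation. The corpus here is invented for illustration.
corpus = (
    "mary had a little lamb . "
    "mary had a little lamb whose fleece was white as snow . "
    "mary had a little schoolbook ."
).split()

# Map each three-word context to a counter of the words that follow it.
follows = defaultdict(Counter)
for i in range(len(corpus) - 3):
    context = tuple(corpus[i:i + 3])
    follows[context][corpus[i + 3]] += 1

def predict(context):
    """Return the most frequent word seen after this context."""
    return follows[context].most_common(1)[0][0]

print(predict(("had", "a", "little")))  # prints "lamb" (seen 2x vs 1x)
```

With more training data the counts sharpen toward "lamb"; with ambiguous contexts (pencil, pen, crayon all equally plausible) the prediction becomes a guess, which is the disintegration Michaela describes.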
It's very brute-force computing, and this is why generative AI takes so much power, right? If you look at the backend system, one Google search, for the computer to gather everything that search needs, versus one ChatGPT search: two very different power mechanisms. One is like when you give your friend a quick call, and the other is like when you light a fire, and the fire has to get lit in seven different places, one right after the other, until your friend sees the fire on the other side of Manhattan. Right? So the technology of generative AI is still very much brute-force computing. Yeah, I know I went really in-depth there, computer science-y. But that is also why, when you think about video, and you think about image, or even when you think about words, and you're looking at this thing and you're like, something's not quite right, it's because it's very limited in certain ways. It seems like it does things really well, but it's looking at something pixel by pixel. It's diffusing the data it's learned and re-fusing it in the way you ask it to. That's why, maybe, when you ask ChatGPT to make an image and then you ask it to change the image, but you just ask it to change one thing, it actually changes six things. Because it's not changing one thing based on your context; it's actually diffusing the whole image it already gave you into noise and then re-fusing it based on your feedback. So there are multiple ways and areas where generative AI fails, and that's often because it's contextual. Open source versus closed source: basically, open source models versus closed source models. This goes back to this kind of theory of technology; it goes back to the theory of the internet, right? When the internet first came on the scene, there were a lot of people who believed that the internet should be open source. What does that mean? It means the knowledge of the internet, the knowledge of coding, should be shared amongst everybody.
It should be democratized. It should not be paywalled. Then people started creating websites that were paywalled. They were like, hey, if you want access to these products and these goods, such as software, such as hardware, you have to pay. You had to pay to use WordPerfect. You had to pay to use Excel. And supposedly, all that Excel and WordPerfect are is just software and coding and data. There's a whole group of people who think that should be open source and you should just have access to it, as a way to empower everybody to be a technologist. And then the closed source community basically believes, no, what we do as technologists, or what we do with coding, is a magic power that people should pay for. It should be commoditized. So if we make products that are easy for people to use, like ChatGPT, like Google Drive, then people don't have to make their own, or they don't have to go to the open source community to learn how to make their own. They can just use our closed source products. We take on all of the labor, we take on all of the design, but we also gain all of the profit. So that's really the differentiating idea of open source versus closed source. And it gets stickier when it comes to AI, 'cause open source models means everyone knows what the training data is. Everybody has access to what the training data is. Closed source is like a closed door: it's a black box. You don't know where the training data's coming from. Some of it's ethical, some of it's unethical. So these are sort of the ties of open source versus closed source. But it's a debate that expands beyond AI. It's a debate that's been happening ever since the birth of modern-day computer science technology, especially with the birth of the internet. A great example of this would be Wikipedia, which is an open source model of knowledge sharing and information.
People can add to Wikipedia, and people can take away from Wikipedia, and people can access Wikipedia for free. That's a really great example of open source. Versus a closed source technology, which would be something like a university library, where you have to pay to go to that university. You have to pay to be able to check out that library book, because you pay tuition to the university. It's a closed source system. So they're just two very different approaches, and both are valid in very specific ways. Right? There isn't a good versus a bad here. Neither is

Brooke:

perfect. How I AI is brought to you in partnership with the Collective AI, a space designed to accelerate your learning and AI adoption. I joined the Collective and it's completely catapulted my learning, expanded my network, and showed me what's possible with AI. Whether you're just starting out or want done-for-you solutions, the Collective gives you the resources to grow your business with AI. So stay tuned to learn more at the end of this episode, or check my show notes for my exclusive invite link. Okay, thank you for diving into that. Those are terms that are thrown out here and there in a lot of these conversations, and I try to accommodate listeners that are both seasoned and new to AI. So thank you for diving deeper into that. And it's so interesting to hear everybody describe it in different ways. Especially with your background, you have a different way, and you also mix storytelling into your educational moments, so I love it. My next question for you is: how do you feel AI has changed you as a creator and a leader and a thinker? Maybe you can share about projects before and after. How does your brain rewire and restructure with this technology, versus in the past, when we didn't have these tools at our disposal?

Michaela:

Hmm. That's a, yeah. Well, okay. So I think I have to break this up into buckets, 'cause I have so many buckets. So the first bucket would be my day-to-day personal use. I think when it comes to utilizing AI, I try to be very intentional about how and when I use it. I don't just use it for the sake of using it. I still want to work with an administrative assistant. I still wanna work with people, right? So I'm not so keen on just saying, oh, that AI tool can replace someone on my team, or that AI tool can make my life easier by helping me manage my inbox, right? I'm very conscientious about how I employ it in my day-to-day work. That being said, I think under the bucket of installation and exhibition design, I'm very excited by AI's potential to give personalized moments to audience members. I think a lot of what we try to do when we try to make something impactful or immersive or transformative is we're trying to give an audience member the moment where they go, wow, or a moment where they go, I am in awe, or I'm in shock. And I think when you have the ability to personalize something, it allows you to cut through some of the more kitschy ideas or some of the catchier elements, whether you do it with AI or not. Like, I've done it with forms, I've done it with surveys, I've done it with design, right? But AI's ability to get right to that personalized moment, I think, is really exciting for an audience member, within the correct story. So the tasseography example I gave earlier in the interview, with the teacup leaf reading, I think that's a perfect example. No one was giving away their personal data. They were just coming with a very specific tea that had tea leaves at the bottom that were a different color, a different pattern, and just based on that image that our multimodal installation saw.
it was then generating ideas and content. Some people walked away like, wow, how does this AI know I'm going through a divorce? Some people walked away going, wow, this has nothing to do with me. But the fact that they were even talking about it and responding to it, especially in this era of people wanting to be very disconnected or not wanting to engage in conversation, the fact that the installation was getting a reaction from people, or making people think, that's kind of the work I like to do with installations. So that's one example. And the last bucket really is the export process. When I say that, I mean images and text and video. As a filmmaker and a director, having these AI tools has allowed me to tell stories using the art of animation, which is a very expensive medium. It's allowed me to still use my high-level creative art directors, my high-level animators, my high-level score composers and sound designers, but empower them to tell a bigger, longer story for less of a budget, because they are assisted by these AI tools. They're not replaced; they're assisted. And that has been really interesting, because as creators and directors, we have a very traditional way of approaching animation, and we have to understand how some of these AI tools change, evolve, or transform that approach. That's been a really interesting production-process conversation, and a creative problem-solving discussion, for me as a director and a creator.

Brooke:

I think that's a great way to transition. You shared a lot about your process for these interactive exhibits. Let's shift more into your work with VR, animation, and AI-generated filmmaking. Can you share a bit about the projects you've already done in that space? Maybe you can tease what you're working on now, but next I want to go into your whole process.

Michaela:

Yeah. Well, I fell in love with the art of animation when I was working with VR for Good on a project. I realized that showing live action is very powerful, but there is a process in animation, where you're creating the world from scratch, that allows people to suspend their disbelief in a lot of ways. This is especially true when we're talking about issues like poverty and children. I think animation allows us to do that in a really innovative way that opens our imagination, versus creating poverty porn or creating this sense of disconnect of, oh, these poor children, or, oh, poverty. So I fell in love with the art of animation, also from a worldbuilding perspective. I pitched Meta this idea of doing an animated series with my co-creator, Julie Cavalier. We created three volumes of VR animation retelling mythologies, fairy tales, and folktales from all over the world. The first project I worked on with Meta VR for Good went to a lot of festivals and won a lot of awards. Same for the Reimagined series. We showed at Venice, we showed at Tribeca, we showed at South by Southwest. We were Peabody nominated, we were Webby nominated. We won best directing in XR at the Collision Awards, the inaugural motion graphics and animation awards. So we've been recognized in a lot of spaces, and I think a big part of that recognition isn't just the way we're using the technology; it's also the storytelling we're doing, and the script and the characters. I was really thinking about this when Tribeca and Sora approached me to do a short film. Last year, I was one of the five filmmakers from the Tribeca Alumni program asked to make a film using Sora, before Sora launched to the public. And it posed a big question for me as a creator: okay, am I ready to jump in and embrace these generative tools?
Not just from an installation standpoint, but also from a filmmaking standpoint, because that's another part of my practice. And the way I approached that was the same way I approach any project: I called people and hired them. So I brought in two animators. I brought in a score composer who was also a sound designer. I brought in a voiceover actress. I said, I'm not sure exactly what film we're making yet, 'cause I don't know what this tool can do, but I know that I wanna work with you all, because I want there to be human animation, human hand-drawn graphics, in communication with the generative graphics. So we ended up creating a paper-craft-style story called Thank You Mom that was inspired by my journal entries growing up. And it's a really interesting way for us to say, okay, the way we use the generative tools is not necessarily with the intention to replace people. It's not with the intention to cut people out of the process. It's with the intention of saying, we're a small indie team with a small indie budget; how can we get the biggest bang for our buck? And we've really taken that approach into every other project I've done with AI as well. So beyond just using Sora, we've used Hedra for lip sync, we've used ComfyUI for character design, we've used Midjourney, we've used Kling, we've used Veo, we've used ElevenLabs again for voiceover. Anytime we're using these tools, it's really just a way of saying, okay, we only have this much budget, we wanna make this story come to life, and we have this team. How do we make something within those parameters? Animation is just a very expensive, time-consuming medium, right? It's one thing to go out and shoot a quick live-action piece, have an editor, have a pass at sound design and audio mixing. With animation, you are starting from scratch. There's no capture of the world you get to start with; it's all created. And I love that about animation.
And what AI allows us to do is take things like concept art and plus it up to be animation-ready. It allows us to take still images and animate them, so that the in-betweening, as we would call it, is a much faster process. That's not to say the animator isn't still involved; the animator is still drawing over, inpainting, maybe making adjustments to the acting. But it's to say, how can we take a process that would normally take two and a half plus years and do it in the course of six months? And that's been really exciting to see, because it allows more stories to be told. The biggest hurdle around animation is just the institutionalization of time, energy, and budget. If you don't have a lot of budget, you can't put in a lot of time and energy, and that's often the difference between an amazing animation and a pretty good animation.

Brooke:

I love how you touched on time and money saved and really leaned into the positive lens of AI. It almost seemed like a bit of an oxymoron: how can we use AI to bring people together, when a lot of people see AI-generated content as just removing us from humanity, the creator, and the brand, right? It feels like a further step in between. But through your lens and your interactive exhibits, you're able to make sure people leave talking and having a conversation. And I know what you're saying; when you're walking down the street, people are less likely to even talk to you now than they were 10 years ago, and a lot of people blame technology for that. But I love that you're flipping the script and using technology to bring people together.

Michaela:

Well, and I think, too, with the work I'm doing, I really value the idea that if someone walks away and doesn't even know AI was used in the film, or nobody knows that AI was used for the installation, then I've done my job well. My job is not to make the AI look good. My job is to make AI and technology disappear into the background, so that the story, the audience interactivity, and the artwork we're showing come to the foreground. That's just my personal approach to it, and I think that is often why the work I do is centered around people coming together and connecting, or centered around people being entertained. The most recent generative AI project I did is an independent, AI-assisted animated series. Each episode is between one and a half and two minutes long, and it's nine episodes. It's a completely original IP. It had little to no marketing budget. We posted it on YouTube, and it has received over 2.2 million views, and it received that level of views within the first month of launching. I'm sure some people could see it was made with AI, but I can't help but think not that many people would've watched all nine of these episodes if they weren't at least entertained. Whether or not they were watching because it's AI, whether or not they were seeing the AI, I think, is beside the point, because it's not like we had a hundred thousand views on one episode and then 10,000 views on another. All the episodes across the board have something like 200,000, 220,000, 202,000 views; one episode has 510,000. These episodes are getting thousands and hundreds of thousands of views. So yeah, I think it comes back to the story you're trying to tell, and the art and the craft. The projects we made weren't easier because we made them with AI.
Just 'cause we saved time and budget doesn't mean they were easier. They were still very difficult. They were still very exhausting. We were just able to do it a bit faster and a bit cheaper because we had some of these tools, but just because we used the tools doesn't mean it was easier. That's the thing I always tell people: I was more burnt out making the generative AI animated series than I was making the VR animated series. There's this huge misconception that, oh, because you're using AI, you can have a cheaper budget, you can have a faster timeline. Creativity is still creativity. Creativity still takes time and energy. And if you came back to me and asked, what would you have done differently? I wouldn't have changed the tools we used. I wouldn't have changed what we did. I would've just wanted more time and money. Our team would've been way less burnt out and way less overworked. But we did it because we knew this was the money we had and this was the time we had to do it. So yeah, I think it's hard when the misconception around AI is that the creative process or the creative form is suddenly not as important. That's not true. If anything, because of AI, it's even more important.

Brooke:

Do you see that getting better? Do you think that has a lot to do with upskilling and learning new ways of producing the output that you or your team haven't used before? Or do you think it'll still be harder to use AI?

Michaela:

That's a good question. I mean, I don't advocate for it getting better or worse, personally, because as a creative, I'm not looking for a magic button. I don't think any creative is looking for a magic button. I talk to engineers and they're like, what do you want the tool to do better? I don't want the tool to do anything creative better, but I would like the tool to talk to other tools so that my production time or my admin time is a lot faster. I would like to not have to download and upload one asset to seven different platforms, but if that's what I have to do, that's what I have to do. I'm not looking for the platform to get better at one or two creative things. I like that the characters aren't always super consistent. I like that the environment isn't always consistent. Does that mean we have to work harder? Yes. But that also means we have to problem-solve in a creative way. How do I make the audience believe that that shot of the mountain is the reverse shot of the other mountain, even though they're two totally different mountains? We use color, we use design, we use our thinking caps. It's the same things we deal with in animation. You could watch an animated film and think, oh, this is all one world, but the reality is that the world you're seeing was multiple shots put together to create the illusion of a world. From a creative standpoint, it would be cool to see more platforms that utilize 3D renders and 3D models, 'cause that's another super expensive area that a lot of people can't make work, because it's time-consuming and hard to learn 3D modeling, 3D tools, and game engines.
So if there are generative AI models out there that can help make 3D assets, that can help generate 3D worlds or 3D environments, or that can help with game engine debugging, all of that would be really cool to see as an expansion of the toolkit. My filmmaking community is really focused on the 2D film, but because I come from a more expansive community, I have my game engine folks, the people who are constantly working in Unity and Unreal and slaving away in those platforms, and I think it would be great if there were some sort of generative platform that helped make their lives a little bit easier. Again, not to replace the developer, not to replace the engineer, but so my friends don't have to spend three days debugging one build. It's the same thing as my animators not having to spend three days animating one minute. So it's those kinds of technologies: this idea of automation, or this idea of expansion of cognition, which is something my colleague Rachel Joy Victor talks about a lot. These are the things that I like and prefer, 'cause the idea of automation isn't always the idea of replacement. As humans we connect the two together, but they're not the same. Replacement really comes out of a skill no longer being needed, and I don't think your skill as a puppeteer or your skill as an animator is no longer needed. It's that your skill is being supported and automated by a system, but you still need you in there to make that happen. And the differentiator is: are you easy to work with? Are you creative and thoughtful about the way you work, or are you just walking in and doing everything haphazardly anyway? The idea of AI in the creative field is very scary, and that's very valid.
But I go back to: well, then we need to make the case for why humans really matter, and that is emotional intelligence, that is creative expression. Again, this is why I don't advocate for these tools to get quote-unquote better at any one thing, because that, to me, is then replacement. If you're creating a tool purely to replace somebody, that is not okay. But if you're creating a tool to help automate somebody's process, that's great. We've been doing that since the dawn of time. The idea of writing with pen and paper is the idea of automating what our brains already do: oh, if I can write it down and set it aside, I can always come back to it. So yeah, I have a lot of philosophical thoughts on that, and maybe I just blurted it all out and it doesn't always make sense. But I think this is a big differentiator, and upskilling, I think, is a good word, but to me it's less about upskilling and more about just familiarizing yourself with what these things can or can't do, 'cause the technology companies are marketing them to you as the best, most amazing thing.

Brooke:

Mm-hmm.

Michaela:

The minute you get under the hood in any of these creative platforms, and I'm sure this is true from other, non-creative industry standpoints too, you quickly see where these platforms or LLMs fall apart, where they're not viable solutions, and where they're not good. So you could upskill, but why don't you just start by educating yourself, so you can advocate for what these things can or can't do, so you can advocate for why they still need you? Versus upskilling in the sense that you have to dedicate your time and energy to totally learn them inside out; I don't necessarily know if that's viable, because I don't know how much longer all of these current creative AI platforms are gonna be sticking around. They're super expensive to run. Most of the people using them are under a free subscription, because they're artists who have an alpha license or some sort of creator-program license. The people who are using them just for fun aren't necessarily the people buying the highest-level license. So I can't help but think there might not be as many as we think there will be in 10 to 15 years, because these closed-source platforms are looking to make a profit. And I know I'm lingering on and on this question, but I do think there is a sense in the communities right now, especially among people coming up through the ranks, of this deep nostalgia and desire to connect to the analog. They want to take their photos on a camera where they have to go get the film developed. They want to be back in spaces talking to each other and playing records. They're so digitally fatigued. So it's a question of the audience, too.
If there isn't an audience to buy into generative AI creative tools, from both an audience and a consumer standpoint, that could also be the end of creator tools powered by generative AI. I think there will still be a few, but I don't think they'll be synonymous with creativity. I think that's actually a very false way of thinking about this.

Brooke:

Thank you for that. It seemed like one very long answer, but your mind went in so many different directions, touching on key points I usually like to ask everybody about, so you did me a favor; you just went through a lot of the questions I typically ask. I feel like, as you were describing your personal process and experience, it mirrors so much of what the general workforce is experiencing, right? A lot of people assume it's some sort of light switch, and really it's getting us to that next point where all of these AIs are talking to each other and there's better interconnectivity, which is definitely what I'm excited to see next. You chatted a bit about tools, platforms, and next-level features you want to have in your space, and talked about the landscapes and environments you're hoping will really ease the workload for your team. It gave a little insight into where your industry is headed in the next year or so. But to wrap up and close out our conversation, I would love to first hear: what's next for you? What are you working on, if you're able to share? I'm sure there are some really creative, exciting things in your pipeline, if you don't mind touching on that.

Michaela:

Yeah. I can't speak to details right now, but I am trying to continue to expand my practice as a director. I'm looking to do more commercial work, so I'm currently in the midst of being signed by an agency, which is also a production company, that would represent me for commercial work. I'm pitching my own original IP to major studios as a way of getting my voice as a writer and director in the animation space seen on a larger, more mainstream platform, in a larger, more mainstream Hollywood sort of way. And I'm continuing to consult for really amazing nonprofits and organizations here in New York. One of my favorites is the Museum of the Moving Image. I continue to think about not just the emerging technology programming they're doing, but also the community programming and the digital artist community programming, and to think about it as a really beautiful space and institution where we can do the things that people with a more narrow curatorial vision can't necessarily do. The Museum of the Moving Image has such a big, broad idea of how the moving image connects us all, beyond just films. I'm really excited to continue to collaborate with that team, be a part of that team, and think about ways that mission, that statement, and that curatorial vision can embrace not just what people think it embraces, but a much larger scope of the moving image and a much larger scope of community.

Brooke:

Well, I'm looking forward to following along on your journey and to what's next for you. Thank you so much for taking the time to connect and share today. How can listeners reach out to you? What's the best way to connect?

Michaela:

Yeah, I would say through my website, just my name, michaelaternaskyholland.com. There's a contact form there. There's also a way you can book time with me directly to consult; if you have a project, an institution, or even an idea that you would like help germinating or strategizing around, or you just want to ask me questions, you can literally get on my calendar right away to start having those discussions. You can also, of course, follow me on Instagram or LinkedIn; both are just my name as the handle. And if you would like to stay in touch but not necessarily engage, there's also a newsletter you can subscribe to on my website.

Brooke:

Wonderful. I'll be sure to link all of that in the show notes and description as well. Thank you so much, Michaela.

Michaela:

Thank you, Brooke. Thank you for having me, and I hope you have a wonderful rest of your podcast season. It's great to be here. Thank you so much. I appreciate it.

Brooke:

Wow, I hope today's episode opened your mind to what's possible with AI. Do you have a cool use case for how you're using AI and wanna share it? DM me; I'd love to hear more and feature you on my next podcast. Until next time, here's to working smarter, not harder. See you on the next episode of How I AI. This episode was made possible in partnership with the Collective AI, a community designed to help entrepreneurs, creators, and professionals seamlessly integrate AI into their workflows. One of the biggest game changers in my own AI journey was joining this space. It's where I learned, connected, and truly enhanced my understanding of what's possible with AI. And the best part? They offer multiple membership levels to meet you where you are. Whether you want to DIY your AI learning or work with a personalized AI consultant for your business, the Collective has you covered. Learn more and sign up using my exclusive link in the show notes.