MEDIASCAPE: Insights From Digital Changemakers

From Clickbait To Copilots: How Media Professionals Thrive With AI

Hosted by Joseph Itaya & Anika Jackson | Episode 89



The ground is moving in media, but not the way the headlines scream. We sat down with journalist-turned-AI strategist Pete Pachal to map the real shifts: where models help, where they harm, and how to build resilient workflows that amplify human judgment instead of replacing it. Pete draws on years across top newsrooms to explain why reporting still hinges on trust and access, while research, formatting, and distribution are ripe for automation.

We compare models by job-to-be-done: ChatGPT for deep, iterative research with memory and web access; Claude for quick, clean drafts with minimal prompting. Then we get into policy: how to define “AI slop,” why many outlets draw a red line on machine-written copy, and what mature co-authoring looks like in practice. Think sports recaps, earnings briefs, and tightly scoped beats—edited, verified, and disclosed. The goal isn’t free words; it’s freeing reporters to chase consequential stories.

Beyond the newsroom, we explore the business model hiding in plain sight: own your corpus. Vetted archives can power branded agents, internal research tools, and licensing deals without handing over raw IP. We also tackle creative domains. Music is nearing a viable licensing marketplace for synthetic style and voice, while video remains an assistive tool for b-roll, packaging, and multi-format distribution. For PR teams and small businesses, Pete lays out a playbook to build a critical, not flattering, AI thought partner that scores guests, audits competitors, and stress tests strategy at a fraction of old costs.

If you’re serious about journalism, comms, or digital strategy, this conversation gives you a clear map: where to insert smart human oversight, which tools to use for which tasks, and how to turn your content into a durable asset. Subscribe, share with a colleague, and leave a review to help more media pros find practical guidance that actually works.

This podcast is proudly sponsored by USC Annenberg’s Master of Science in Digital Media Management (MSDMM) program. An online master’s designed to prepare practitioners to understand the evolving media landscape, make data-driven and ethical decisions, and build a more equitable future by leading diverse teams with the technical, artistic, analytical, and production skills needed to create engaging content and technologies for the global marketplace. Learn more or apply today at https://dmm.usc.edu.

 

Setting The Stage: Media Meets AI

SPEAKER_01

Welcome to Mediascape, Insights from Digital Changemakers, a speaker series and podcast brought to you by USC Annenberg's Digital Media Management Program. Join us as we unlock the secrets to success in an increasingly digital world.

SPEAKER_00

I am privileged to have Pete Pachal on the show today. Pete, you have a long background in journalism at many great institutions, and you've taken the age we're in right now and tried to help media professionals, publicists, and people across comms understand how they can move forward with AI. Because, as we were talking about a little before this, it's changing every day. New tools are being rolled out. Publications are either in lawsuits with some of the LLM makers or trying to figure out a way to make the models work for them. You've been a consultant for so many great publications and organizations, from the BBC to the Washington Post and many more. So thank you for being here and offering us a fresh perspective on these topics.

SPEAKER_02

Yeah, it's my pleasure. First, thanks so much for having me, Anika. I'm really excited to get into all this with you. And yeah, 100%, this is a strange time to be in media or journalism, or PR even. The way people get information is changing very rapidly, and the tools underlying that are changing even faster, I think. So naturally there's a lot of consternation, a lot of not really understanding which direction you should go. And then, of course, there's the existential part of it: a lot of what these tools do are tasks that used to be exclusively human, specifically writing and research and related tasks. So there's naturally a lot of: which way do I go? What should I adopt? What's my strategy? What I try to do through my company, The Media Copilot, is help on a couple of different levels. One is the ground level: if you're a journalist, an editor, a PR professional or in comms, or even in adjacent industries, I have very specific thoughts and ideas and tools, and I offer these in my newsletter, my courses, and so on. But at the same time, I'm very interested in the broader ecosystem, as you can probably tell. I want to know how media strategy changes when millions of people shift their habits from looking at feeds or going to websites to doing things through these AI interfaces. What does that do? And then, as the person who's creating that information and content, how do you even think about that? Before you even attack the idea of clicks or distribution, what are even the rewards of succeeding in this arena, let alone how do you do it? So I write about that quite a bit.

Pete’s Path From Newsrooms To AI Strategy

SPEAKER_00

Well, I'd love to take a step back and talk about what prompted you to realize that this is an area you're really engaged in and interested in, and a subject matter expertise you can share with the world of comms, with media professionals, with PR professionals.

SPEAKER_02

Yeah. So when AI first came out in its current form, with ChatGPT in November 2022, I was working at CoinDesk. It's basically a global newsroom because it covers crypto, and it was one of the first sites to cover it and did it the longest. People hear crypto and they immediately bring their biases now, I think, but believe me, CoinDesk was very legit. Some of the best journalists I've ever worked with have worked at CoinDesk. I was in charge of editorial strategy there, and I have to credit some of my staff who were going to conferences that fall, even before ChatGPT, saying, "Oh, I hear about this AI thing." AI was a big point of conversation at some media conference, and I was like, that's interesting, I wonder what that means. And then, sure enough, a couple of months later: oh, that's what it means. Large language models that can actually produce content. So when that bomb dropped, I knew things were going to change. I had been a tech journalist for 20-plus years at that point, so I knew AI and the tech scene very well. And once I saw this level of interest, and this level of concern, from the media industry, I knew it was going to be a huge area of interest. So I started mapping out my next move then. At CoinDesk, we obviously had to figure out an AI strategy; we prototyped use cases and guidelines, and I wrote those and they're still on the site. That's great, but it was only a small part of my job there, and when I left CoinDesk I knew I wanted to do that kind of work 100% of the time. So I essentially gave myself a job: okay, there aren't a lot of these AI-focused jobs in media right now, I'm just going to make my own.
I launched my newsletter, then courses pretty quickly after that. And luckily I was able to get a pretty big audience pretty quickly on Substack. Now I'm still just trying to navigate this, and luckily I've been able to focus on it for a couple of years, so hopefully I've learned a few things. But honestly, in this space, you feel like you've learned something and it's already obsolete. So you just try to keep up and hope that everything you've built up until now doesn't go obsolete, and that you can adapt it as you go forward. And I've generally found that it does adapt, even though the ways I teach prompting in my courses, for example, have evolved a lot. I don't think I could have gotten to this point without doing all the hard work of figuring out how these LLMs work, the best ways to talk to them, and how to integrate your tools and workflows with them.

SPEAKER_00

Yeah. So talk about the courses. I have so many questions for you, but you mentioned that you teach how to prompt. Do you also talk about which LLM might be best for different use cases, which one's better for research, deep research versus surface research, versus writing? Have you seen a big difference among the different LLMs that we have?

Choosing Models For Research And Writing

SPEAKER_02

Yeah, so I definitely have my opinions on this. And as you might expect, it's not like one LLM is the best for everything, or even for everything within a single use case. You mentioned writing, and obviously it depends what exactly you mean by that. Is the LLM actually writing for you? Is it a writing coach? Is it editing? I've got lessons around all of that, by the way, and different prompts for each. So it depends on exactly what you're trying to get out of it. And the things a journalist might want to use it for are going to be similar to, but in sometimes subtle and sometimes big ways different from, what a PR professional might use it for, even though it's often just changing the lens on the same tasks. So that's my big caveat at the start: there's a lot of subtlety to this. But I will say some things work better than others. These are my opinions, and your experience might be different, but you mentioned research. I definitely think ChatGPT's deep research is the most thorough of all the major ones. Google's is pretty good, pretty close, but whenever I get something out of ChatGPT, it just seems a little more robust, a little more checked. A lot of this is vibing, to be honest with you, and that's okay. The more you use a tool, the more you get a sense of its strengths and weaknesses, even if you can't necessarily articulate them. That said, for writing, if you actually just want text written by the LLM raw (and you never take raw text and just use it in some application), in terms of the state of the raw text with the most minimal prompting: Claude.
In other words, if you just want to go to something and get a draft written real quick, you don't want to spend a lot of time crafting the prompt, just give me the thing closest to what I want and make it kind of original and unique-ish, Claude can do that better than most. That said, as we're recording this, we've just had a model drop, GPT-5.1, and I haven't used it much. Maybe it's better. But even with the qualification that model drops might change things entirely, I have a feel for ChatGPT's model drops. They might bring some improvements, especially for writing, but it's usually "maybe a little better here, a little better there," and nothing ever blows me away. So I'm pretty sure we're not going to see everyone suddenly switch because ChatGPT finally got it right or pulled ahead of Claude. I just don't think it's going to happen. That said, everything is remarkably better than it was a couple of years ago. What we thought of as AI text, the em-dashes, the repeated sentence structures, and so on, I'm not saying that still doesn't happen, and maybe in a lot of people's experience it happens to an unacceptable degree, but I do think it has gotten better. If you even just prompt it a little, "don't use that predictable AI phrasing," you'll get something maybe slightly more vanilla, but at least without the dead giveaways of marketing-speak that were so common six months to a year ago. So those are a couple of quick thoughts and recommendations on using AI specifically for writing and research.

SPEAKER_00

I will co-sign on that. I love using Claude, although it's had some hacks and it's been down a lot more the past few days. But I love it for writing, where I can do a data dump: here's something I wrote that needs to be refreshed, here's some of the new data, and I want to add in this experience I just had, this meeting or conference or trip where I learned about XYZ, and then have it mash it all together. Then I can keep refining. But for my podcast, I built out a tool to better vet guests, and I'm testing it with some other people. For that, I've found ChatGPT is a much better researcher: it can quickly find everything I need to know about a potential guest and give me back real information, justified, with scores for each of the four things I might want a guest to speak about, or the vibe they'd bring to my podcast. So I've found something similar to you, maybe in different use cases. I appreciate you sharing that.

Memory, Web Search, And Habits In ChatGPT

SPEAKER_02

Yeah, I think there are a couple of things that, whatever you think of the company or its outputs, feature-wise ChatGPT tends to hold over others. One is memory. They've had memory for a while, but way back you used to have to deliberately use it and clean it out. There were specific memories you could ask it to keep, "remember my style" or "remember this," and it would also sort of randomly pick things up sometimes. So if you were looking for a gift for your spouse, it might keep bringing up those gift ideas later, probably because that got trapped in memory. It's gotten better about that, and now there's persistent memory: it can remember things from chat to chat, and you can even make all of your chats available to the memory. That isn't to say it knows everything about all your chats in every interaction; it's just that they're accessible in a chat if you want to refer back to something. Those are helpful features I don't really see the same corollaries to in other services. So that's one. The other is its ability to search the web, which is not unique anymore; Claude has it, almost all LLMs have web search now. But I think for most people who use AI there's an unlock, to use an overused word, when you start collaborating with it. You're not just going to it for outputs; you're having longer conversations, back-and-forths, doing real problem solving. In those conversations, it's really helpful to ask it to go check something, to bring external data into the conversation quickly, and then talk about it. ChatGPT had that ability to search the web right as a large number of people, including me, were making this unlock.
And it just became part of our habit while we were talking to ChatGPT. The combination of those things, I think, is really sticky. And I don't want to sound like a marketer for ChatGPT; this is just the reality of how the product has evolved. I'll gladly talk about its weaknesses and the things I think others do better, including writing. But because of that habit, it has really stuck with a lot of people, and you can see why it has millions of users and is almost synonymous with consumer AI. Whereas something like Claude, while it's a great consumer product now with almost feature parity, their focus has shifted more to the enterprise, because I think they haven't quite captured the same momentum that OpenAI has.

SPEAKER_00

Right. Yeah. And now we have Chinese models that are supposed to be even better than DeepSeek and better than GPT, and that are open source. There are other concerns, perhaps, with using models from China and how much information different models have access to. But I want to ask how you see people really using AI. I think we've seen a shift from talking about it as a tool to talking about it as a partner. And I'm sure things have changed a lot from when people first started using ChatGPT or any other LLM to write things. We've seen some major publications now saying "this article is written with AI." Perhaps if you're looking at forecasting, real estate data, financial forecasting, a lot of that can be predictive analytics, so that's easy. But do you see a lot of AI slop in the organizations you're working with, or in the market in general? Or do you see that people are using it to fine-tune, and that the human element is, of course, still needed?

Defining AI Slop And Editorial Red Lines

Co‑Authoring News: Processes At Fortune And ESPN

SPEAKER_02

Great question. So I almost have to define slop first, right? How I define it: it's obviously AI generated, but the purpose of why it's AI generated also matters. If you're using it the way content farms were, and still are, using content just to attract eyeballs for ad impressions, literal clickbait where they get you there and it's not even what was promised, that's obviously slop. I don't want to set too low a bar for slop, though. When you have what you might call good-faith content that is entirely AI generated, I wouldn't say it's super valuable; maybe it's just better-tasting slop. But if you have a content operation that is mostly podcast driven, say, and they want some content derived from their podcast and they use an AI assist, something like NotebookLM, I understand that. While that has aspects of slop and is arguably slop, it's not quite the same thing as someone doing it purely for SEO clicks. That said, you can do better than whatever NotebookLM spits out; you can go all the way to human written, or have it human touched or human added to. And obviously the equation reverses too. The way I teach this, and I know you were asking what people are doing, but just as context, I look at it as a spectrum. In the same way self-driving cars go from level zero, where a human drives the car, to level four or five, where it's entirely the AI, like Waymo, there are levels in between: adaptive cruise control and so on.
So level one is kind of what we're doing now, where AI is in the background as an advisor, a coach, a research assistant. Level two is where some AI-generated text enters the actual copy, but it's still mostly human written. And level three is where it's mostly AI written and then the human adds to it. What I'm seeing people do is a lot of level one and a lot of level three. As I've just defined it, at a media company the red line is between level one and level two, which is to say: if anything that has come out of an AI gets into your article, there must be very clear guidelines about that. A lot of outlets outright forbid it. Those that don't are being more realistic, to be honest, because I'm sure even in the ones that outright forbid it, a few AI-written lines have slipped in from some writers. But they have very strict restrictions around it. Obviously, a human in the loop is very important in almost any AI process. Then it's a question of which parts: is it the headline? The background information? And what percentage, which I think is very important too. Then there's level three, which is the more interesting part, and where a lot of publications got into trouble years ago. When AI was still very new, a few publications, CNET very famously, and Sports Illustrated was another case, even though that one was a little more inadvertent, published AI-written content, ostensibly checked by humans. But it was not good; it was error-ridden and had problems. They were figuring out the process, but they were figuring it out very publicly and ran into trouble. Now we have publications doing this with a much better process.
A couple of places. ESPN covers at least two sports with AI because they don't have reporters to cover them, lacrosse and things like that. But the write-ups are checked by editors, and it lets them do game recaps they couldn't otherwise have done. The other one I'm starting to cite more is Fortune. As Fortune covers the news, they pair an editor with an AI author, and to some extent they co-author the piece. I don't know the proportion; I imagine they're doing their best to keep it around 50-50. But I also think once you open the door to this process, you're going to want to let the AI do more and more, frankly, even if it's uncomfortable. Just from a pragmatic standpoint, you start to realize: oh, it can write, it can do this, and if I'm allowing this level, why not more, as long as I'm checking it? This gets into the existential part of things. For some organizations in the media, I think this arrangement makes sense for news, and for very specific types of news. On a lot of beats, there's a lot of news, and you have to be selective about what you cover. I used to cover tech; there was so much news every day you could literally write a hundred articles and still not cover everything. So you had to be selective, and you'd write whatever five or six or twelve or twenty, depending on the scale of your publication. And even within your narrow focus, there are probably a couple of stories you'd want in order to tell the complete story of, say, just Google. If you want those stories, now you can have them.
Basically, something that would have taken you half an hour or more to write, not a lot of time, but significant in someone's day, you can now do in minutes. So that's an option now. How much value it has will probably decline as more and more publications do it, but where we are today, there's probably still some value in it. The other reason you might want to do it, beyond the audience part, is if you want a corpus of your own. Again, using the example of covering Google: if you want a corpus that includes all the stories about Google that your AI, assuming you build one, would need, you're going to want that content in there. And this is an emerging business model in media: the more comprehensive your corpus, the more valuable it is. I'm not sure how much this applies to smaller media, but you're seeing publications do it. Notably, Time just debuted an agent that accesses their archive. It's a chatbot, but the point isn't really the chatbot. Chatbots have some value, but I don't think they're going to save the media. What might save the media is having a big corpus that you can make AI-ready and then sell to whoever you want. Because the value of a media corpus, as opposed to the whole internet, is that it's all vetted, all human-generated, all factual; it has already had that human layer of checking. So if you're a big publication with archives that go back a hundred years, as some of our venerable newspapers have, that could be valuable. We're already seeing the first steps toward this with the licensing deals. But what I'm talking about is less "take my website and do what you want with it in your AI service."
It's more: no, I'm going to create the AI layer on top of this and then sell that, so you're not giving up your IP. It's a subtle difference, but it's important. It puts the control back with the media companies. So that's something new that could emerge as the way some media companies, anyway, end up adapting to this AI world.

SPEAKER_00

Yeah. Well, that's such an important distinction, knowing about all the lawsuits: LLMs have taken from YouTube creators, from authors of books, from publications we're all familiar with. So the fact that ownership can go back to the organization itself is, I think, a step in the right direction. I did want to ask, because this is not about writing, but it is a little bit about writing. The number one country song on Spotify right now is from an AI-generated band. A few months ago, another AI band, the Velvet Sundown, went really big, viral, millions of streams, and people liked the music. We're seeing big actors license their voices to ElevenLabs. We're seeing AI influencers on social media. So it seems like we're getting to almost a tipping point, right, where AI can do some things better, perhaps, or as well as humans. What have you seen in the industry? How do you think people will be able to shift their mindset, knowing that journalists, PR professionals, and comms folks might be losing their jobs? What can we do to salvage work and keep riding this wave of AI getting increasingly better?

Build Your Corpus: The New Media Business Model

Music, Video, And Where AI Fits Creatively

SPEAKER_02

Yeah, okay, a lot of ways to take that. So 100%, it's a concern. Yes, certain tasks are being automated, taken over by AI, so to speak. But not all of them. If you look at the entirety of what a journalist does, a reporter is the best example: there are human things an AI simply can't do. Even if it could physically do them, which it can't right now and is probably decades away from, even if you could give an AI reporter the goal of covering a beat, making calls, getting information from sources, absorbing all of it, essentially a reporter agent, conceptually I can see it, but it would slam right into the real world: who the hell is going to talk to a bot? Nobody. Not only is it unable, the way the world is structured today, to cultivate trust with any source, it actually has the opposite effect, because you're thinking: now I'm not just talking to a person, I'm talking to OpenAI or whatever model they're using, and what are the privacy implications of that? Who would ever talk to that? So there's no way in the near, or even mid, possibly even long-term future that an AI could really be a reporter. It just can't happen. Good news, reporters: you're still valuable. But there is an obvious layer of reporting, the drudgery of it, the filing, the research, the promotion, the distribution, all the things reporters have been asked to do more of for the last 20 years, that can now be taken over by AI. So in a sense, if you're one of the storied true journalists that still exist, you're fine. AI is just going to do more and more for you and give you superpowers for things like investigations, which we're already seeing at some of the big publications.
What I think is healthy now is that for folks who weren't in that class and were doing more rote things, those jobs won't necessarily exist anymore. If your job was to make clickbait, well, we have bots for that. Even if you didn't think of it as clickbait, if your job was to write the "what are people on Twitter saying?" post, a bot can write that. That was never real journalism. I hate to break it to you; I've written them myself, I'm sorry, but we weren't really doing important things. This is exactly why I say it's kind of healthy: yes, the profession might be smaller to some extent, but AI has a filtering effect that, given the right incentives, and I should caveat all of this with that, because it depends on who's buying it and what people are typing into it, is ultimately going to be healthy for that part of the business. Now, you touched on some of the creative industries in different media: audio and music, and video to some extent. I think these are very different, even though similar forces are at work, so let me take them one at a time. For songs and songwriting and what we listen to, I think it's ultimately kind of an audience decision, and audiences aren't monolithic. Think about how most people listen to music most of the time: you're less concerned with precise songs and artists and more concerned with just having something on that fits your vibe. In that sense, I think AI is kind of ideal.
Now, there's also the factor that you want to hear familiar tunes, which radio has filled for a long time; that's why it got so predictable and programmed, and still is. I think there's a use case for AI there, and it's a significant percentage of listening, not necessarily the majority. You could have AI-created music that fulfills that market need, and I think for most people's ears you probably can't discern too well what's AI generated and what isn't. That's an important, unique aspect of songs, especially something you've never heard before. If you hear an AI-generated song and someone tells you it's the new, I don't know, Drake song, which was the thing that went viral a couple of years ago, I don't think you'd be able to tell. Most people would say, "I couldn't really tell." Because, let's be honest, most music is overproduced anyway. So I see a lot of value in creating that music as long as there's a good compensation mechanism. And I'll say this: the RIAA has been very good at protecting the IP of its industry, so I have every confidence they'll figure this out. Artists will license their voices and their styles to specific services, and we're already starting to see those deals happen. Then, as a user, I can pay a subscription to that service, go in, and prompt it: give me a song in the style of this. If I listen to what's produced, they get a cut of that; if I distribute it, they get a cut of that too, and those will be different. And why wouldn't you, if there's money to be made? So that'll be interesting.
Now, obviously there will be restrictions and guardrails: you won't be able to use certain lyrics, some things will be out of bounds entirely, and a lot of artists won't participate for many reasons. But I can definitely see a model there. For visual media, I think the technology isn't quite there yet, as much hype as there is around it. You can always sort of tell when it's AI, even today. The shorter the clip, the harder it is to tell; but if we're talking about storytelling, you're going to need longer clips, and then it's just going to be easier and easier to tell. So I don't think video will move beyond being a very helpful assistive technology quite yet. Now, let's get back to the news media really quickly. Video is a huge, huge area of interest, and I do think we'll probably see more AI-generated, general-purpose video. Remember when social video was really popular in the 2010s, with Facebook video and Facebook Live? The media industry adapted really quickly to that; there were easy ways to cut video that was general, not quite generic. With AI it's actually the opposite: specific videos for every story you write will be very easy to create, but they'll have generic imagery, because you don't have a studio and you're probably just writing about whatever, and maybe you'll feed it a picture of something. And given the ethics of news, creating actual video from static images raises ethical problems. That said, there's always b-roll, and the magic of AI is you can create b-roll essentially on the fly. You still need a human in the loop, but the whole point is that the production cost is lower and the time it takes is lower.
So you'll be able to distribute your content in video form much, much quicker. There are already services that help with this. One is called Channel 1, which is more of a production back end; another company, Stringer, essentially lets you feed your content into it and gives you back a video for each piece that you can then vet and publish.

SPEAKER_00

Interesting. This has all been really helpful, thank you. I did want to go back to the other side, because you teach journalists, but you also have a course for PR and media professionals. I'm imagining that even if you're not a media professional, say you're a small business owner who wants to understand that ecosystem, you could also take the course.

SPEAKER_02

Oh, absolutely. The thing is, every business is in the business of content now to some extent, because everyone's selling, and to sell you've got to distribute in the places where people have their attention. You could advertise on someone else's content, and give me a call if you're trying to reach media and PR professionals, but you're going to want to put out your own stuff too. The cost of doing that used to mean hiring someone or an agency and trusting their content strategy. Now, what seemed like a thing you needed to outsource for a very high amount of money, you could probably do by hiring someone at a much lower rate, maybe even a fractional person, and using these AI tools to build your own content and distribute it at a much lower cost. I teach a lot of what goes into that, particularly on the content-creation side, but also some of the distribution strategy. I also do engagements that are more like consultations, which involve some training on those tools, but also advising on the strategy around them. So yes, there's a lot you can do with a fairly modest monthly budget and the determination to reach a customer, or a certain type of customer.

SPEAKER_00

Yeah, amazing. There are so many other questions I could ask; obviously this is an unlimited category, especially as things are changing every single day. I will say I'm very grateful that the people who've tried to just do AI podcasts and game the system that way have not been as successful. Fingers crossed, this is still a medium that needs engagement. And it's so funny.

SPEAKER_02

There were a couple of podcasts that tried to essentially export NotebookLM audio overviews as a feed, and I thought, okay, go for it. But every now and then I would run into one, and the thing about Google and the way they built that product is that the voices never change. It's NotebookLM guy and NotebookLM girl, and they're tired; you know who they are. So if you're an AI product designer, actual audio platforms like ElevenLabs maybe notwithstanding, that's kind of an interesting, almost fail-safe design: you can't really use it to create a podcast, because everyone can tell. Everyone goes, oh, it's NotebookLM, I see what you did there. You've got to get a little more involved: actually go to ElevenLabs, create your voices, and pay some money if you really want to go for it.

Small Teams, Big Reach: PR And SMB Playbooks

SPEAKER_00

Yeah, and there are a lot of checks and balances there to make sure you really are who you say you are and that you have the right to use that name, image, and likeness, the voice, all of that. It's a pretty interesting world. So, you have webinars, you have classes, you have a newsletter people can subscribe to, and you do consulting. I'm wondering, because things are changing so quickly, how often do you feel you have to update everything you're doing? And with your extensive tech background, are there publications you like to read or podcasts you like to listen to to get your information?

SPEAKER_02

Sure, so many. And I also do pretty good barbecue, just to add that to the list. But yeah, absolutely. For AI, there are a number. I love The AI Daily Brief, and I know the guy who does it, NLW. He did The Breakdown before that, which we distributed at CoinDesk, so I got to know him a little. He's amazing, such a workhorse, and he's still on top of this.

SPEAKER_00

One of my must-listen podcasts.

SPEAKER_02

Yeah, that's always my first recommendation, partly because it's daily and he's just always putting out interesting stuff. For more AI stuff, I really like This Day in AI. It's a couple of guys out of Australia with a really good sense of humor; they're on top of all the developments, and they also really know their stuff. They're very smart about it.

SPEAKER_00

I'll have to subscribe to that.

SPEAKER_02

On the media side, I really like the stuff Brian Morrissey is doing with The Rebooting Show and People vs Algorithms; both of those are good. AI For Humans is fun; sometimes it's a little high-octane, so you've got to be ready for that, but they're also very smart guys. For publications, I really like Jacob Donnelly's A Media Operator. They're not so much into AI, but it's very smart about the business of media, and they certainly touch on AI fairly often. And a friend of mine, Mark Riley, does something called Hana, spelled H-A-N-A. It's a platform where you can create your own AI-driven newsletter on any field you like, and his showcase is his own newsletter, also called Hana, which is all about AI in the media. It's a good weekly news digest of the most important stories. So if you ever miss my newsletter, which is The Media Copilot, by the way, please subscribe at mediacopilot.ai, Hana is pretty good for a breakdown of the news. I do a breakdown of the news too, but on Thursdays, and typically a newsletter of mine is mostly my take on what I think is the most interesting AI trend affecting the media right now.

SPEAKER_00

Nice. Well, we will have your website in the show notes so people can click through to all of the things you do and engage with your newsletter and your courses. I would like to ask: what is one piece of advice you'd give to a student or a young professional coming up in media, PR, comms, or journalism?

Smart Human In The Loop: Where To Intervene

SPEAKER_02

It would be dumb to just say "use AI," and it would be equally dumb to just say "human in the loop." What I'm trying to get at is that the human in the loop isn't as obvious as you think most of the time. The piece of advice I would give is this: when you are using AI, the most important thing you can do is think about where in the process you're needed. And that's usually not just at the beginning and the end, which I think is what most people do. If you're looking at bigger things beyond the immediate task, you want to layer in the points where you need to vet things. The question I often get in courses is, "couldn't I have just done this whole process with one step at the beginning and just asked for it?" And you could, but then you're missing several key points along the way where you need to make a judgment. For example, you might ask for a source list as a deep research task. Beyond just saying "give me all the data and phone numbers and whatever else you can find on all these people," maybe start by refining the profile you're looking for, going back and forth with the AI on that; then get a clean list, without any of the additional information, to pick out the people you know and see which areas it's overemphasizing; and then do the full pass. Break it down into three stages so you're not just accepting its process as it goes. Because if you're not in there, you're trusting it to make decisions you probably should have been a part of, and you end up with an output you trust less. You'll be asking, "why was this guy on the list? It shouldn't have happened.
I reached out to someone, and now I look like an idiot." Well, if you had been in there, treated it like the intern it is, and guided it along, you wouldn't have run into that problem. A lot of AI processes are like this. When you see the disasters in the media, it's probably because someone was rushing; but even if they weren't, they weren't inserting themselves in the right places. So: smart human in the loop, considered human in the loop.
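[Editor's note] The staged workflow Pete describes can be sketched in code. This is a minimal illustration, not any specific tool's API: the stage names, prompt wording, and function names are all invented for the example, and the model call and human review step are passed in as plain callables so the checkpoint structure is explicit.

```python
# Sketch of a three-stage, human-in-the-loop research task.
# Each stage sends one prompt, then pauses for human judgment
# before its output feeds the next stage.

STAGES = [
    ("refine_profile",
     "Here is the kind of source I'm looking for: {criteria}. "
     "Ask me clarifying questions before building any list."),
    ("draft_list",
     "Based on our refined profile, give me a plain list of candidate "
     "sources. Names only, no contact details yet."),
    ("enrich_vetted",
     "For only these vetted names: {approved}, gather background and "
     "publicly available contact information."),
]

def run_stage(name, prompt, ask_model, human_review):
    """One stage: get a draft from the model, then let the human
    vet or trim it. The review checkpoint is the whole point."""
    draft = ask_model(prompt)
    return human_review(name, draft)

def staged_research(criteria, ask_model, human_review):
    """Run all stages; the human-approved list from stage two is the
    only thing stage three is allowed to enrich."""
    approved = None
    results = {}
    for name, template in STAGES:
        prompt = template.format(criteria=criteria, approved=approved)
        results[name] = run_stage(name, prompt, ask_model, human_review)
        if name == "draft_list":
            approved = results[name]  # human-trimmed list feeds stage 3
    return results
```

In practice `ask_model` would wrap whatever chat tool you use and `human_review` would be you, reading the draft; the structure just guarantees you are "in there" between stages rather than only at the end.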

SPEAKER_00

I like that phrasing. We're going to have to use it to shift some of the language around this. And what about the small business owner or entrepreneur who's trying to figure out how to best use AI tools, or AI as a thought partner? I know there are your courses, but is there a little tidbit you can offer for somebody trying to figure out how to navigate, and how to pitch to journalists, pitch to podcasts, pitch to media?

SPEAKER_02

So again, we talked about AI as a thought partner, and I think it's underutilized in that way. You might think you're using it as a thought partner, but I would challenge you, as a business owner, to get in the habit of literally looping AI into almost any problem you have. Whatever you're doing. If you think you need some software, say a password manager for your small business, go ask it: what's the best password manager? Can I do this? Or: I think I need this employee or this contractor. Do I? What are some alternatives? The more you do this, the more you realize: I could give it some very specific prompting around being this kind of strategic thought partner, and then I don't have to start from zero every time. That's essentially building an assistant, either a custom GPT or a Claude Project or something like that, with your custom prompting around being this thought partner. And when you do that, make sure there is some outlet for critical thinking, which is another thing a lot of people overlook. AIs are basically designed to be suck-ups: to love your ideas and give you a lot of praise about whatever you're doing. Tell it not to do that, and you will get better results, particularly in this use case, where you're trying to think strategically. You really want to stress test your ideas and get it to course correct you when you're in the wrong place. Because as a small business owner, you're almost by definition out of your comfort zone more often than not. And you're no longer alone out there. Again, I'm not saying always take the AI's advice and just go with it; that would be dumb.
But you're often in areas you're unfamiliar with, and you at least now have this interpretive layer over the world's information about an area you might not know well. But also get advice from your network.
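[Editor's note] The "critical thought partner" setup Pete describes, custom prompting that tells the AI not to flatter you, can be sketched as a reusable system prompt you would paste into a custom GPT or a Claude Project. The wording and function name below are illustrative, not an official template from any vendor.

```python
# Build a reusable system prompt for a strategic thought partner
# that is instructed to push back instead of praising by default.

def thought_partner_prompt(business, focus_areas):
    """Return custom instructions for an AI assistant: context about
    the business plus explicit anti-flattery ground rules."""
    rules = [
        "Do not flatter me or praise my ideas by default.",
        "Lead with the strongest objection or risk you can find.",
        "When I'm outside my expertise, say so and explain the basics.",
        "Always offer at least one alternative I haven't mentioned.",
    ]
    header = (f"You are a strategic thought partner for {business}. "
              f"Areas of focus: {', '.join(focus_areas)}.")
    return header + "\n" + "\n".join(f"- {r}" for r in rules)

prompt = thought_partner_prompt(
    "a small PR consultancy",
    ["media pitching", "vendor decisions", "hiring"],
)
```

The point of writing it once is exactly what Pete says: you stop starting from zero every time, and the stress-testing behavior is baked into every conversation.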

SPEAKER_00

Fantastic. And Pete, I always ask one last question: do you have a favorite quote, mantra, verse, poem, or family motto?

SPEAKER_02

"Everything's impossible until it's not." It's from my buddy Captain Jean-Luc Picard, a quote from Star Trek: The Next Generation, and I think about it a lot. It was said in the context of science or engineering, but for individuals, particularly entrepreneurs trying to do new and interesting things with AI, a lot of things are possible now. You might think someone else has already thought of the thing you want to do, particularly if it's an AI-powered thing. Obviously do a search, but there's a good chance they haven't. You might be the first to think of doing, I don't know, media monitoring in a specific way. That's a good use case that comes up a lot in PR: a lot of things get bucketed under media monitoring, whether it's daily reports or alerts on brands and journalists and publications, and the way you specifically do it is probably very different from a hundred other people. The great thing about AI is that now you have an outlet and a way to do it, and you don't even have to know how to do it; AI will tell you. Just keep talking to it and collaborating with it, and if you have access to the right tools, which are very, very cheap now, you should be able to build the thing you need very quickly without knowing a lick of code. It's an incredible time to work in the business of information, because there is so much out there that can get it, interpret it, process it, and deliver it in exactly the form you want. So keep going at it. It's not impossible.

SPEAKER_00

Fantastic. Pete Pachal, mediacopilot.ai. Thank you so much for joining me today.

SPEAKER_02

My pleasure. Thanks, Anika.

SPEAKER_00

Yeah, and thank you to everybody who's watching this episode or listening on your favorite platform. Be sure to follow, subscribe, and leave us a rating and review so there's better discoverability for all of us. And definitely check out the show notes to learn how to engage with our friend Pete and how to use AI most effectively in journalism, PR, media, and many other things.

SPEAKER_01

To learn more about the Master of Science in Digital Media Management program, visit us on the web at dmm.usc.edu.