Talk To Me Petey D

Ep. 55: AI Literacy – Marketing

Petey D Season 1 Episode 55

What does AI literacy really mean—and why does it start with marketing?

In Episode 55 of the Talk To Me Petey D Podcast, I kick off a new AI Literacy series by exploring how the term artificial intelligence was born, how it has been marketed since 1956, and why that history still shapes AI hype, fear, investment, and policy decisions today.

AI literacy isn’t just about learning how models work. It’s about understanding:

  • How AI claims are framed and marketed
  • Who benefits from bold AI promises
  • Why hype and “AI doom” narratives often reinforce each other
  • How marketing influences funding, media coverage, and public perception
  • Why society has agency in shaping AI’s future

🧠 Key takeaway:
When someone makes a bold promise about AI, always ask: who benefits from this claim?

📚 Book

People Management from the Ground Up
👉 https://www.amazon.com/People-Management-Ground-Up-Aspiring/dp/B0DBGQ57XT

🎧 Listen to the Podcast

  • Apple Podcasts: https://podcasts.apple.com/us/podcast/talk-to-me-petey-d/id1745885025
  • Spotify: https://open.spotify.com/show/4NrlsWzansuCfuApMCZzj0
  • YouTube Channel: http://www.youtube.com/@TalkToMePeteyD

🌐 Connect & Follow

  • Website: https://peterdempseywrites.com/
  • Newsletter: https://peterdempseywrites.com/newsletter/
  • LinkedIn: https://www.linkedin.com/in/pete-dempsey/
  • Bluesky: https://bsky.app/profile/petedempsey.bsky.social

👍 If you found this useful, like and subscribe.
💬 Drop a comment with AI topics you’d like covered next in the AI Literacy series.

SPEAKER_00

What is AI literacy? We hear this term thrown around a lot and see it treated as a core requirement for people to learn, but it's not always really defined what exactly it means. So we're going to start diving into that today with the first part of what will probably be a long series, just because I think there's a whole bunch of different things I would put under the umbrella of AI literacy. It spans a variety of domains, and a lot of what I would consider AI literacy falls outside the technical or engineering space and covers many different disciplines. It's really about understanding how artificial intelligence and technology impact our world, our lives, and our society, so that we can advocate for different outcomes, understand how to use these tools in certain ways, and recognize how there might be harms in others. I'm going to try to organize all these episodes under a single AI Literacy playlist, so you can check that out as more come out. This is our first one today. So welcome to the Talk To Me Petey D Podcast. I'm your host, Petey D. This is where I talk about all things knowledge work, technology, society, and, as often these days, AI. If you like the podcast, please like and subscribe. You can listen to the audio version, follow me on LinkedIn, and check out the newsletter, which you can get to through my website at peterdempseywrites.com. I'm looking forward to sharing lots of content and my thoughts about AI literacy. So today is Episode 55: AI Literacy – Marketing. Marketing may seem like a strange place to start in the world of AI and AI literacy, but the term artificial intelligence, and the umbrella it covers, was born out of marketing and a marketing perspective.
So I thought it was an appropriate place to start, and it's a theme we continue to see throughout the artificial intelligence industry, so it's important to understand. It all started back in 1956 at a summer program organized at Dartmouth, which brought together a group of researchers and professors in the computer science space. This is where the term artificial intelligence was first coined in a way that was captured and is memorable. Different parts of the field had been working under different umbrella terms at that time, and this brought them all together. Their goal was, in roughly eight weeks, however long the summer program was, to simulate human intelligence. So again, there are these really broad claims about what can happen in a really short period of time, and a lot of them involve capturing things that we do as human beings and duplicating them in a synthetic, digital, computer-driven format. This is a theme we see throughout artificial intelligence, continuing to this day. So what were some of the reasons they might want to frame it that way? Well, this is a good way to get funding to pay for these professors and researchers to come together. So a common thing to look at when you see large claims about what artificial intelligence can do and how quickly it will happen is: who is benefiting from those claims? Are they being used to generate funds that will allow those people to go and do something? And is there a reason behind why it's being presented that way? Artificial intelligence isn't one thing; it's an umbrella of different technologies that try to do different types of quote-unquote intelligent tasks in an automated fashion.
Right now everybody is probably most aware of large language models and generative AI, and those are certainly one part of artificial intelligence, but they're certainly not the only part, or even the only one that is pervasive in our society. Before the emergence of generative AI, recommendation engines on social media, or on content platforms like Netflix, were a form of artificial intelligence where you try to match up and categorize types of content against particular user behaviors you want to incentivize. That uses a different approach, or a different algorithm as you'll hear it called, than generative AI and large language models use. I believe it's sometimes called a two-tower model: you have one tower where you categorize the content you have, and another tower where you categorize what a particular user would want to see, and you put those two together to drive a particular outcome. So that's one that's been around, and that people have likely interacted with, for quite some time now. Even things like search fall under the umbrella of artificial intelligence, which people may not think of now in the current generative AI boom. But there are a lot of similar techniques there: how you represent data, how you find similarity between what a user is searching for and what's in a document, how you do different types of natural language processing, looking at words and trying to associate meaning with them, and combining all of that. So there are a lot of different areas of artificial intelligence; it's a really broad umbrella. A lot of it can be thought of as taking data, often a very large amount of data, and putting it through various processes to optimize for a particular outcome.
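[Editor's note: the two-tower idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any real platform's system; the item names and embedding vectors are entirely made up. The key point is that one "tower" maps items into a shared vector space, the other maps a user's behavior into the same space, and recommendations come from scoring the two against each other.]

```python
import numpy as np

# Toy "two-tower" recommendation sketch. In a real system, each tower is a
# learned neural network; here we just hard-code illustrative vectors.
# Item tower output: each piece of content mapped to a vector.
item_embeddings = {
    "cooking_show": np.array([0.9, 0.1, 0.0]),
    "sci_fi_movie": np.array([0.0, 0.2, 0.9]),
    "news_clip":    np.array([0.1, 0.8, 0.1]),
}

# User tower output: a user who mostly watches sci-fi ends up near the
# sci-fi region of the same vector space.
user_embedding = np.array([0.1, 0.1, 0.95])

# Score each item with a dot product and recommend the highest-scoring one.
scores = {name: float(user_embedding @ vec) for name, vec in item_embeddings.items()}
best = max(scores, key=scores.get)
print(best)  # the sci-fi item scores highest for this user
```

The design choice worth noticing is that user and item live in the same space, so scoring is just a dot product; that is what makes this approach cheap to run over huge catalogs.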
So with recommendation engines, you're often trying to optimize for user engagement, keeping a user on the platform, and figuring out through trial and error in the system what type of content is going to keep them there. With search, you're trying to find ways to match up the right documents from a very large set with what a user is intending to find, and figuring out how to do that best. With large language models and generative AI, you're looking at a huge set of text and figuring out ways the model can take part of that text and guess what comes next, then using that to build a system that can do the same thing in a more general sense with text it hasn't seen before, producing an output that is useful or makes sense to the user. So, a lot of different things there. Then, thinking about how the marketing is relevant today, there's this same theme of securing funding. It was maybe very small in 1956, when you were trying to get seven or eight professors funded for eight weeks in the summer, versus what you see some larger AI research labs doing now: building up funding from private investors, from other larger companies, and so on. And there's a similar theme of claiming to simulate human capabilities with machines and promising to deliver that in a very short time frame. With all of these types of promises and the marketing around them, it's always good to ask who is benefiting from those claims and why they might be making them. It's also good, I think, to put a critical eye to what's happening today. If you were an investor in these technologies, do those claims influence how you would look at your investment?
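[Editor's note: the "guess what comes next" objective can be illustrated with a toy bigram model. Real LLMs do this over tokens with huge neural networks and vastly more data, but the training objective, predicting the next token from what came before, is the same in spirit. The tiny corpus below is made up for illustration.]

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then "predict" the
# next word as the most frequent follower seen in training.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Return the word most often seen right after `word` in the corpus.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```

This also shows the limit the episode mentions: the model can always produce *an* output for familiar input, but everything it "knows" comes from patterns in the text it was trained on.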
Would you be willing to put in the same valuation or the same amount of funding for one of these companies based purely on their financial returns? Versus, if you believe they're going to be able to replace or simulate a significant part of human behavior, does that change the financial calculus you would apply as an investor? And if so, does that go back to why someone might choose to market or present artificial intelligence, or these products, in a particular way, so that they could gain more investment than they could based purely on their financials alone? Alongside AI hype and the marketing around it, there's also the side of the doomers: people who look at this technology and say it's going to have negative outcomes, or that we'll never be able to align these systems and they'll destroy us. On the surface, you might think that sounds like someone working against the interests of the AI industry, but in a lot of ways it actually contributes to the marketing story that AI is incredibly powerful and is going to replace or destroy human beings. I think that's why you see the doomer community in some ways supported by the same people who are running the AI research labs and building the technology. At a surface level, they should be enemies: one is building this technology, the other says it's going to destroy all of humanity. But the doomers who say this is so powerful and so dangerous are actually contributing to the marketing story that AI is incredibly powerful, and that these labs deserve the investments and financial support they're getting, regardless of whether their financials justify it or not. Something else to bear in mind with the marketing: there have been plenty of AI products over the years that promised they could do things that just aren't possible.
For example, claiming to predict outcomes from certain data points that simply can't guarantee a particular result in the future. Something to bear in mind is that when AI systems or data science tools analyze all of this data, they can find patterns, correlations, and connections between different data points. But whether those data points actually mean that something is going to happen is another question. There are certain types of things that we can't, or shouldn't, try to predict with data, or where it's not easy to separate out all of the different variables. One example that comes to mind is companies that claim to use facial recognition to do mood analysis and things like that, looking at students' faces, say, to determine what's going on inside their heads, which seems dangerous for a lot of reasons. I think it's fair to argue that there's not really science backing that up as a valid approach. That doesn't mean you can't build an AI system that will quote-unquote do it, right? It can analyze the data, it can analyze different features and signals in people's facial expressions, and it can produce some output: you give it an input and it generates an output. But that doesn't mean it's going to make an accurate prediction. So again, along the marketing lines, always be wary of those claims. Just because you can build a system with data and AI tools and it can give you an output, you need to question how that output is being created and whether it's valid and appropriate for the situation. A lot of AI marketing also shows up in how AI is being reported on these days, which can be a little frustrating. You would expect, or hope, for journalists to be a bit more critical in some of their analysis.
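[Editor's note: the point that a system can find patterns that predict nothing is easy to demonstrate. In the sketch below, both the target and every "feature" are pure random noise, so no feature has any real relationship to the target; yet with enough features, at least one will correlate with it noticeably just by chance. All data here is synthetic.]

```python
import random

random.seed(0)
n, num_features = 30, 200

# A target with no real drivers: pure noise.
target = [random.random() for _ in range(n)]

def corr(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Generate many unrelated noise features and keep the strongest correlation.
best = max(
    abs(corr([random.random() for _ in range(n)], target))
    for _ in range(num_features)
)
print(f"strongest spurious correlation: {best:.2f}")
```

With a small sample and many candidate features, the strongest of these meaningless correlations is usually substantial, which is exactly why "we found a pattern in the data" is not, by itself, evidence that a system can predict anything.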
And you'll see companies doing things that can be termed AI washing. If they have layoffs because they're in a bad financial place or they've overhired, it's often convenient to frame that as forward-looking: "we know AI tools and systems are going to allow us to run with fewer people and be more efficient, so we're just taking a proactive step to get ready for that future of work," as opposed to "we made a mistake" or "we didn't run our company that well." And you often won't see many journalists digging into those details and separating the two. Those two things may be correlated in time, in that they are laying people off and they are expecting some future efficiencies, but that doesn't mean the relationship is causal. So in the same way we can get into trouble training AI systems that find correlation instead of causation, sometimes the reporting takes correlation and reports it as causation. I'd love to see more critical analysis there, but as consumers of reporting and of technology in general, it's also on us to be critical consumers and take a close look. Because that reporting shapes the perception of how powerful the technology is, where it's going, and how we should respond. And if our view of what's possible isn't aligned with reality, we may not make the best decisions about how to run our society and use technology within it. Along those lines, there can be a specific vision of where technology is going to take our society, some of it coming from leaders of these large labs or people in the tech industry: this idea of techno-determinism, that our job is just to accept things as they come and follow along. But these tools are all being developed by people at the end of the day.
And there are decisions, particular outcomes, and design choices that absolutely influence what this technology will look like: who it will benefit, who will gain from it, who will lose, and all of these things. They're not predetermined, and this isn't the natural discovery of a resource. It's being shaped by humans with particular views and designs. So as consumers of the technology, we need to recognize that there are decisions to be made, and that we should be shaping those decisions, as opposed to just accepting the marketing stories we're told about this technology and how it will progress, and the idea that we don't have any say or input into it. So hopefully that's helpful: some basis on artificial intelligence literally being born as a marketing term, how we see that showing up today in a number of different areas, always stopping to consider who is benefiting from these claims and why they are being made, and understanding that we need to be critical consumers, that we have a say in this technology and how it's shaped, and that we should exercise that influence. So thanks for sticking with me here. Hopefully you enjoyed the content; this is hopefully the first in the series. If you like it, please drop me a comment and let me know, like and subscribe, and support the channel. If there are areas of AI you're curious about and want me to cover in this series, please let me know. I have a long list I want to get to, but I'm always happy to hear more. So until next time, when you're presented with a promise of what artificial intelligence can deliver, remember to always ask who benefits from this statement. Thanks and take care.