The Curious Cat Bookshop Podcast

Is AI stealing your content? Lori Fena & Martha Brockenbrough on consent, control, compensation

Stacy Whitman/The Curious Cat Bookshop Season 1 Episode 5

What does all this generative artificial intelligence mean for authors, artists, and other creators? Is anyone in the AI world paying attention to ethics or accountability? Join us for a discussion of these issues and more with acclaimed author Martha Brockenbrough, author of Future Tense: How We Made Artificial Intelligence—and How It Will Change Everything, and Lori Fena, technology policy thought leader and cofounder of Personal Digital Spaces.

 Buy the book: https://curiouscatbookshop.com/list/books-martha-brockenbrough

About our guests:

Martha Brockenbrough (rhymes with broken toe) is the author of more than twenty books for young readers, including YA fiction and nonfiction, picture books, a middle grade mystery, and a chapter book series.

She founded National Grammar Day, wrote game questions for Cranium and Trivial Pursuit, and was co-chair of faculty at VCFA’s MFA in Writing for Children and Young Adults program.

Martha won a Pacific Northwest Bookseller Association Award and a Washington State Book Award for her novel The Game of Love and Death. She was also a finalist for the Kirkus Prize.

 

Lori Fena is a cofounder and the Head of Business Development at Personal Digital Spaces. A recognized technology policy thought-leader, entrepreneur and author, Lori works with pioneering high-tech companies to introduce new forms of technology, as well as innovative policy. From the start of the networked computing era to the Blockchain, Big Data, AI world of today, she has combined technical curiosity, business acumen, and an activist nature to help found ground-breaking technology companies and to institute international policy.

After receiving a B.S. in Business Information Systems from CSULA, and an Executive Certificate from Harvard Business School – Board Governance for Nonprofit Excellence, Lori began her career as a Business Manager at Applied Physics in Pasadena. She founded and sold an IP strategy company and became the VP of Business Development at Stream International, a $1.2 billion merger of Corporate Software and RR Donnelley Global Software Services.

In addition to running her own companies, Lori is an active investor and board advisor to a number of technology companies. She was Executive Director and later the Chairman of the Electronic Frontier Foundation. She is the founder and Chairman Emeritus of TRUSTe.org, and served as the Internet Policy Program Director at the Aspen Institute.

Support the show

More about The Curious Cat Bookshop

Shop online anytime for in-store pick-up or home delivery

Keep track of our in-store and online events

Become a member of The Curious Cat Bookshop! We have a subscription box or you can just add a little bit to help us become more sustainable

Follow us elsewhere!

And always, our events locally and online can be found on our website!

Martha Brockenbrough (00:00)
And that is where I wish that people focusing on developing artificial intelligence would spend their energies. What's something that human beings can't do that only this technology can do? Do that.

Stacy Whitman (00:15)
Welcome to the Curious Cat Bookshop podcast, the podcast of the independent bookstore of Winsted, Connecticut, bringing our local authors to the world and the world to our community in Northwest Connecticut.

Well, welcome everybody to the Curious Cat Bookshop podcast, the podcast of the independent bookstore of Winsted, Connecticut. I am Stacy Whitman. I'm the owner of the Curious Cat Bookshop, and I'm founder and former publisher of Tu Books, which is an imprint of Lee & Low Books, and now I'm an editor at large. So I can tell you, from every perspective, bookseller, publisher, that this conversation that we're going to have tonight is so important to us in the books industry. I've been watching this topic of generative AI for years now, and

We've just got so much to talk about and so many considerations on the creator side, on the publishing side, on anyone who is consuming books. So I'm so happy today to have both Martha Brockenbrough and Lori Fena here to discuss all the various ways that generative AI is affecting us here in the book world. Martha Brockenbrough has been a huge supporter of the Curious Cat since the very beginning.

And she is the author of more than 20 books for young readers, including YA fiction and nonfiction, picture books, a middle grade mystery, and a chapter book series that is about a cat and a dog; the cat, he's hilarious. She also founded National Grammar Day, wrote game questions for Cranium and Trivial Pursuit, and was a co-chair of faculty at VCFA's MFA in Writing for Children and Young Adults program. She lives in Seattle with her family.

And Lori Fena is a local right here in Winsted with us. She's a co-founder and head of business development at Personal Digital Spaces, right here in Winsted, and a recognized technology, excuse me, a recognized technology policy thought leader, entrepreneur, and author. Lori works with pioneering high-tech

companies to introduce new forms of technology as well as innovative policy. And since the start of the networked computing era, she has been working in the blockchain, big data, AI world, and has combined technical curiosity, business acumen, and an activist nature to help found groundbreaking technology companies and to institute international policy. She was the executive director, I didn't know this until you sent me your bio, she was executive director and later the chairman of the Electronic Frontier Foundation.

And she is the founder and chairman emeritus of TRUSTe.org and served as the Internet policy program director at the Aspen Institute. So welcome. And I'm going to start with Lori, because of this realization that you worked at EFF, which I've been following, you know, advice from EFF, for more than a decade. My cousin is constantly linking to the EFF. He's a hardware engineer.

He's the guy I go to for computer advice. Can you tell us about the work that they do and how what you did there brought you to what you're doing today?

Lori Fena (03:15)
I think that's a really great segue, and thank you. I could not be more excited to be here. Martha, I love the book, and I'm gonna hold it up. Yeah.

Stacy Whitman (03:23)
Yes. We'll talk about that in just a sec.

Lori Fena (03:27)
There we go. So I actually came up through Silicon Valley as a product manager, and I headed up product management for compilers and operating systems at a hardware company. I then realized that software is really where the rubber meets the road and where the value proposition really makes the difference.

And I suggested to our hardware company that we port our software to a broader set of hardware. That didn't happen at the company, but they did promote me. And so I created a much wider universe of software. And I then realized, when this company ended up getting bought, that I wanted to start my own company. So, as women frequently do, I struck out on my own, and

I created a consultancy in Silicon Valley and coached small companies and very large companies on how to maximize their intellectual property licensing. And then I sold that company. And when my handcuffs came off (this is what everybody calls it when you, you know, sell your company, but they keep you

employed and under contract until, you know, a certain term), when that term ended, I came back to Silicon Valley, and I said, I really want to use everything I know, because this is when the commercial internet was just starting, to do something good for society with my technical knowledge and my background. And I was recruited to head up the Electronic Frontier Foundation at the dawn of the internet.

And I said, I can't imagine a more interesting place to be. At that point in time, the Electronic Frontier Foundation, for people that don't know what it is, was like the ACLU for the internet. So we looked after things like free speech, privacy, and intellectual property, and we worked with many different organizations. And we decided that, in addition to the classic policy work in Washington and lawsuits,

it was even more important to have these kinds of discussions about the policy that is implemented in technology and the decisions that are made in the hallways, when product managers and engineers make choices about defaults. And we had a saying: architecture is policy. Because when you build software, all of the policy decisions get made

in the hallways and by the people that are designing it. And so we felt that we should move EFF to Silicon Valley. We worked in conjunction with sister organizations, had people in Washington, DC and around the world, and created a very strong network. And, I like to say, I helped weaponize them to do things like this podcast. We used to have salons and online podcasts, even,

way back at the beginning of the internet, to really talk these things out amongst stakeholders and create a broad working group to talk about that. So I think it's very apropos to have this kind of discussion and podcast at the dawn of AI, because these things need to be talked out. And young people: as I mentioned, the love of my life, Edward Jaskowski, is an AI pioneer from 30 years ago and my co-founder of the company I head up now.

What we have found is that it's actually young people. He started work in AI when he was an undergraduate in college, and a lot of important work gets done by very young people. So talk about these things early, and realize that nobody anoints you; you just become curious, and you're the one that changes the world. So on that, I'll turn it over to Martha, because I'm excited about the book that you wrote, targeting young people, on AI.

Stacy Whitman (07:25)
And the book is Future Tense. It's almost too small for me to read the subtitle: How We Made Artificial Intelligence—and How It Will Change Everything. And so, Martha, can you talk about the basics of what we're talking about? What is AI? And I'll let you go from there, and we'll home in in a minute.

Martha Brockenbrough (07:48)
Okay. It's funny. And Lori, I loved hearing your introduction, because back in 1996 I went to work for Microsoft, and I became the editor of the Microsoft Network. And so we used TRUSTe, and, you know, anyway, long ago. But that was the thing that made me, even though I write books for kids, even though I teach people how to write books, I'm not afraid of technology. I'm curious about it. And so I love the advice that you just gave to young people:

your curiosity is the thing that builds the doors, and then you open them. People aren't going to do that for you, generally. So anyway, underscoring that. All right. So what is artificial intelligence? Back in the day, we used to have software on, remember the floppy disks? The five-and-a-quarter-inch ones, and then they became the little small ones. All right. So you would put that software into

your computer, and it would run a program, say Microsoft Word. The program would never change. It would never learn. It would never adapt. It would never give you suggested words. Why? Because that code, you know, was static. So artificial intelligence is code that can learn and make decisions and judgments without human input. Lori, do you think that's a fair and succinct definition?

Lori Fena (09:15)
I think that's a very succinct definition. And I think the interesting thing about that is the interaction between the code and the data.
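
To make the code-plus-data distinction concrete, here is a minimal, purely illustrative Python sketch, not drawn from anything discussed on the show: a static rule behaves the same forever, while a "learning" rule shifts its behavior as labeled examples come in. All the names and thresholds are invented.

    # Static code: the rule is fixed forever, like old shrink-wrapped software.
    def static_rule(temperature_c: float) -> str:
        return "hot" if temperature_c > 25 else "cold"

    class LearnedRule:
        """Learns its threshold from labeled examples instead of hard-coding it."""
        def __init__(self) -> None:
            self.hot: list[float] = []
            self.cold: list[float] = []

        def observe(self, temperature_c: float, label: str) -> None:
            # Each new example shifts future decisions -- the code "adapts."
            (self.hot if label == "hot" else self.cold).append(temperature_c)

        def predict(self, temperature_c: float) -> str:
            # Boundary sits midway between the average hot and cold examples.
            mid = (sum(self.hot) / len(self.hot) + sum(self.cold) / len(self.cold)) / 2
            return "hot" if temperature_c > mid else "cold"

    rule = LearnedRule()
    for temp, label in [(30, "hot"), (28, "hot"), (10, "cold"), (12, "cold")]:
        rule.observe(temp, label)
    # The learned boundary lands near 20, so the two rules disagree at 22.
    print(static_rule(22), rule.predict(22))  # -> cold hot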

Martha Brockenbrough (09:24)
Yes. So data. I've given lots of talks about artificial intelligence, and people say to me, well, what's data? And it's a great question. What is data, what's training data? It can be a lot of things. Let's say you've just had an X-ray taken of your arm. There's that big film, and it's got your arm, and you might not be feeling very humorous because you broke your humerus. That was terrible. I'm sorry.

But all of the X-rays of human arms can teach an algorithm to recognize what's a broken bone. So that's one form of data. Another form of data is when you are doomscrolling and clicking, and you click like on something or you scroll away from it. That is gathering data about your behavior online. We used to do a very rudimentary version of it when I was the editor of the Microsoft Network:

a page would load, and we would track every click on every link, every picture, and we started to get an understanding of what people liked. Now, we had to do it all manually, because there was no algorithm that would just crunch that data for us. So anyway, data is information that has been put into mathematical terms that a computer can understand. And you generate it, the world generates it, you know.

Any image: if it's a black-and-white image, it's, you know, how much black and how much white is in it. If it's color, there are the hexadecimal values of the image, so, you know, just basically percentages of each color. And if you can take data and put it into a mathematical form that an algorithm can process, well, the algorithm can then make a guess:

is that a giraffe? It sure looks like a giraffe, based on all of the textures and the colors and the patterns and the shapes. And it'll say, you know, I'm 93% sure it's a giraffe. And so that's what it's doing. It's guessing, based on patterns. AI and the human brain, these are both pattern recognition engines. And that's where artificial intelligence is like... hello, kitty!

Stacy Whitman (11:46)
Pyewacket!

Martha Brockenbrough (11:48)
But there are also lots and lots of differences.
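
Here is a toy Python version of the giraffe example, invented purely for illustration (real classifiers learn millions of features, not two): an "image" becomes numbers, and the output is a pattern-based guess with a confidence attached, not a fact.

    # An "image" is reduced to numbers: the fraction of tan and blue pixels.
    def features(pixels: list[str]) -> tuple[float, float]:
        n = len(pixels)
        return (pixels.count("tan") / n, pixels.count("blue") / n)

    # Stand-in "training data": the average pattern seen for each label.
    PATTERNS = {"giraffe": (0.7, 0.2), "sky": (0.05, 0.9)}

    def classify(pixels: list[str]) -> tuple[str, float]:
        f = features(pixels)
        # Score each label by closeness to its stored pattern.
        scores = {label: 1 / (1e-9 + abs(f[0] - p[0]) + abs(f[1] - p[1]))
                  for label, p in PATTERNS.items()}
        best = max(scores, key=scores.get)
        return best, scores[best] / sum(scores.values())

    image = ["tan"] * 6 + ["blue"] * 3 + ["green"]
    label, conf = classify(image)
    print(f"I'm {conf:.0%} sure it's a {label}")  # a guess with a confidence, not a fact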

Lori Fena (11:52)
I think that that really leads to... So there's the algorithms, the AI; there's the data; and then one of the things that really changed between, like you said, 1996 and today is probably two things. One, incredible amounts of data became available in a very public way,

Stacy Whitman (11:54)
Sorry, go ahead.

Lori Fena (12:21)
because we all started using the internet, and a lot of data that used to be on our computers or behind firewalls at companies became public and more shareable. And the network allowed that to happen. And then another big change, sort of the third circle that allows AI to work, is compute. Computer capabilities: everybody keeps talking about Moore's law.

Right? How fast computers change over time.

Stacy Whitman (12:56)
The constant doubling, or what have you.

Lori Fena (12:58)


Right. And the scaling of compute capabilities. And everybody now, you know, Nvidia is the chip of the day; that also changed dramatically from, you know, the dawn of the internet till now. So we had massive change in the amount of data, massive change in available compute. And a lot of the

algorithms that you talked about, the pattern recognition, a lot of those existed in the forms they are today way back when, 30 years ago. You wrote beautifully in the book about the history of neural networks, and, you know, the fact that neural networks existed and were written about scientifically in the fifties. You know, my co-founder, Edward Jaskowski, wrote one

when he was an undergraduate in the 80s, and used it continually, you know, over time as compute got better. He went and created machine learning robots for NASA that tested the welds of the space shuttle. Then he went to a company called Thinking Machines Corporation, which was a supercomputer company, and they built the first search engines. And then, you know, he created something called Torrent Systems, which was bought and became IBM Watson. So,

you know, these things all changed together over the past 30 years, and that culminated, I think, in the big change that we all felt. I would say young people were the leaders of this ChatGPT era. And I'll let Martha talk about that. I mean, that really came on recently, when you decided, I think, to write the book. Or actually after, right?

Martha Brockenbrough (14:50)
Well after. It's funny, Lori, because publishing is a lot slower than tech. My husband works for a tech company, and I started researching this book... it was 2020 maybe, the beginning of the pandemic. It was probably even before that, actually. And so, when I handed in my manuscript, almost two years before the book came out, ChatGPT was just a glimmer on the horizon. There were other

large language models, and I kept saying to my publisher, hey, can we push this? Can we crash this? And for a variety of reasons, it was not possible. But yes, they...

Stacy Whitman (15:27)
So it wasn't out when you turned it in. Then it was the era of all the jokes on Twitter, "I fed this into a large language model," all the jokes about the screenplays and stuff like that. Was that about when you turned it in?

Martha Brockenbrough (15:41)
Yeah, the screenplays, like the fake Harry Potter and stuff. I'm trying to remember the author of You Look Like a Thing and I Love You; I think Janelle Shane is her name. And she was doing lots of really hilarious stuff, like using AI to come up with names for horses and paint chips. I mean, I just, like, died many times laughing at her wonderful work. But that was before...

the large language models. Those were small ones that individuals were constructing. So everything changed with OpenAI's ChatGPT. So what's a large language model? Well, something that is trained on enormous amounts of information. The internet is vast. It's got some good stuff. It's got some crappy stuff. It's got some stolen things, like

pirated books and lots of pirated artwork. And so they took everything they could. And yes, knowing that that's a pirate site. Which is funny, because software people do not like it when China and other countries steal software, and they were just having hissy fits. They're like, it looks like China's new large language model was trained with ours. Like, hmm.

Stacy Whitman (16:46)
including Books3

Martha Brockenbrough (17:06)
Not a lot of fun, is it, when someone steals work? Anyway, so what are the patterns in language? There are so many patterns, and it's so fascinating. Lori, you may have heard of this; Stacy, I'm sure you did. Remember when people were talking about the order of adjectives? Like, yeah: it's not a red small car, it's a small red car. And so there are all these patterns that we've internalized without even knowing. And so

AI, when it has a ton of data, is great at recognizing patterns in language. So when you train it on a vast trove of stuff, it is going to be able to reproduce language that sounds human, but that may or may not be accurate. It may be grossly racist. It may have just hallucinated something. There are some lawyers who've gotten into trouble by using ChatGPT to come up with cases that would be precedents. And so they, you know,

prepared their filings like, and here are the precedents that support my argument. And then they turn out to be...

Lori Fena (18:06)
totally fake.

Stacy Whitman (18:07)
Completely made up
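
Martha's point about pattern without truth can be seen in a toy bigram model, a Python sketch invented for this write-up with a made-up corpus: trained only on which word follows which, it produces fluent-sounding text with no mechanism anywhere for checking reality, which is roughly how confidently fake precedents happen.

    import random
    from collections import defaultdict

    corpus = ("the court held that the statute applies . "
              "the court found that the precedent controls . "
              "the statute controls the case .").split()

    # Learn the pattern: which words tend to follow which.
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    random.seed(1)
    word, out = "the", ["the"]
    while word != "." and len(out) < 12:
        word = random.choice(follows[word])  # a statistically plausible next word
        out.append(word)
    print(" ".join(out))  # grammatical-sounding; nothing here checks the truth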

Martha Brockenbrough (18:08)
And so this is where, you know, Lori, I would love to hear you talk about the difference between a large language model and a small language model. I don't know if we're too early to get to that.

Stacy Whitman (18:19)
Well, let's pause a minute, because before we get into that, what I'd love to talk about tonight is the ethics of AI and the accountability portion of it. And I think what Lori does could add some accountability to the industry. But first, let's talk about where we draw the line between something that is a really useful tool, like spell check, and everything else.

I mean, technically, nowadays spell check is kind of an AI. And it ranges from Grammarly suggesting some really weird sentence constructions to you, to Microsoft forcing, is it Copilot? on you in Microsoft Word, and Google Docs now doing the same, or the other way around. They're forcing their AI on you, and you have to go into all the deep settings to make them stop.

I know so many writers who are like, I am a writer. I don't need your suggestions, like, taking away my voice.

Martha Brockenbrough (19:26)
It's so true. And even spelling and grammar check, they get it wrong all the time. Today, by the way, is National Grammar Day. And so I will just dwell on my irritation that Microsoft's editor doesn't know how to use a semicolon. I mean, it really, truly doesn't. And so this is one of...

Stacy Whitman (19:45)
It doesn't even know the difference between it's with an apostrophe s and the...

Martha Brockenbrough (19:50)
Possessive. It's got some flaws. This is one of those things where you, the user, have to be aware. And this is where it's terrible for young people, who haven't had as much time to study language and to build their general repository of world knowledge. So it's super problematic. Now, I will say, my elder daughter is severely dyslexic.

We discovered this when she was older. She was a good reader because, as a very small child, she was three, I worked and worked with her, like, for hours every day. Hello, kitty!

Stacy Whitman (20:32)
This is Juju. We are the Curious Cat Bookshop after all.

Martha Brockenbrough (20:35)
I love it. I'm just glad it's not my cat, because she sometimes closes windows, and that's not ideal. My daughter, dyslexic: we were listening to an audiobook, a Hank Zipzer book by Henry Winkler and Lin Oliver, and she said, that's me. That's what happens. And so then I got her tested. And she graduated magna cum laude from college, and she is now an educator in a school. She uses ChatGPT because, as a dyslexic person,

Stacy Whitman (20:40)
Yeah.

Martha Brockenbrough (21:01)
it is very hard for her to get all the punctuation and spelling right. I mean, that's just hard. So she uses it: she will type text and say, now correct this for me. And then she uses that. I think that's great. Even though it is based on stolen stuff, I am glad that she's getting that kind of support for her disability. There are other ways of training models that don't have to rely on piracy. But for people with disabilities,

it can really make a big difference. And while we're still allowed...

Stacy Whitman (21:35)
You're jumping the gun, man. I even had a question about accessibility, but keep going.

Martha Brockenbrough (21:39)
But

while we're still allowed to talk about diversity and inclusivity and empowering people with disabilities as a good thing, before Elon Musk and his big ball-of-eugenics squad come through, this is fantastic. And that is, for me, progress. So we've got AI that gives us progress. We've got AI that gives people profit. Now, I don't know that most of the tech industry

is able to distinguish between profit and progress. Profit is not necessarily progress. Lots of people who've done bad things in history have made large profits, and it has not resulted in human progress. And so that's where I wish we would pay more attention. Are we assisting someone, or are we just making another billionaire richer? Because you know what? I don't give a rat's ass

about generating vast troves of wealth. Nobody needs that. The world has lots and lots of needs. And that is where I wish that people focusing on developing artificial intelligence would spend their energies. What's something that human beings can't do that only this technology can do? Do that.

Stacy Whitman (23:00)
Let's solve cancer.

Lori Fena (23:01)
Exactly. Well, that's actually a really good segue to: let's talk about large language models, what they're good at and what they're not good at, and how much money has been plugged into them. Literally billions of dollars have been given to these companies. Billions. More money has gone into large language models than was invested in, like, the first 10 years of the internet, and it all

happened in two years of large language models. So that's sort of crazy to think about. So, as you said, huge amounts of money went in, and they vacuumed up as much information as they could that they said was publicly available. Whether or not they had a right to use it, they did not care, because they wanted to move fast and break things. And they did in fact break things. So what happened is, we did get a large language model, and it wasn't really that great.

They needed to release it to the world for free, the first ChatGPT, so that everybody would use it and their use would be tracked, so that it would become better. So all of a sudden, the pattern that was being recognized was whether or not you thought what it did was right. And that became the input, and the QA on their first product

was all of the students. It turned out the value of it grew very quickly. If you ask, was it useful? It wasn't really that useful, but it turns out it was useful to people who wanted help with grammar and all of those kinds of things, and they used it for book reports. Those people were essentially students. Hundreds of thousands of them adopted it

very, very quickly. It grew to millions quickly, and it was the fastest adoption of a technology ever. And it wasn't really adoption; it was us QA-ing something that they had spent billions in building. And that was reinforcement learning from human feedback. So that actually was what we gave back to OpenAI, and that created ChatGPT 2.
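
A cartoon of that feedback loop, assuming nothing about OpenAI's actual implementation: candidate responses are sampled, users' thumbs-up or thumbs-down nudges the weights, and preferred styles win out. Real RLHF trains a reward model over billions of parameters; this invented Python sketch just reweights three canned answers.

    import random

    candidates = ["Sure! Here's a clear answer.", "ANSWER: [terse]", "idk lol"]
    weights = [1.0, 1.0, 1.0]  # the "model": a preference over response styles

    def respond() -> int:
        # Sample a response in proportion to the current weights.
        return random.choices(range(len(candidates)), weights=weights)[0]

    def give_feedback(index: int, thumbs_up: bool) -> None:
        # Human feedback nudges the weights -- the free users were the QA team.
        weights[index] *= 1.5 if thumbs_up else 0.7

    random.seed(0)
    for _ in range(200):  # simulate many users who prefer the first style
        i = respond()
        give_feedback(i, thumbs_up=(i == 0))

    print(candidates[respond()])  # after "training," almost always the polished answer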

The interesting thing is what value was created and what we got back from that. Their value went up immensely, and what came out was a much better tool, based on our response to it and all of the people using it. What have we learned? We can talk about version three: not really that much better. Then they came out with... I'm just gonna skip ahead and say that the most recent thing that happened was what you mentioned:

a smaller Chinese model ended up coming out and releasing something called reasoning, where they actually went and trained on the other model by asking it questions, sort of a Socratic method. And the interesting thing about that, which I don't think has been reported, is that the people they hired to ask those questions were not computer scientists; they were humanities majors, right? Yes.

Philosophy majors, humanities majors, not just computer scientists. And, you know, there were amazing computer scientists that created the DeepSeek model, which has actually been out for more than six months; it was started last year. But I digress; it's interesting. So they didn't just train their model on massive data. They actually questioned the other model, got its answers, and trained their model on that.

So this whole idea that I need access to your information to train my model, and it gets incorporated, and now I own all of that information? That isn't how it's done anymore. That's very inefficient. It's like, when I ask our team, Edward Jaskowski and the others, what is that training like? It's like going through and digging up incredible amounts of dirt to find diamonds. Right?

And instead of doing that, which they did for the large language model, this group just said: do you have diamonds? And they asked it questions about diamonds and diamond mining and all of that. And they were able to create an even better model, and it did reasoning. It created lots of different responses. So this model

does something that humans do, which is: we don't just take an answer. We actually go through it and sort of show our work, check our work, and reason through what's going on. So you don't need to steal information to create a model. You can, in fact, go back and forth and ask it. The system that we created, we did something that's even smaller. We took all of the information

for, like, a particular book, or an author's or an artist's work, and we created expert models, idiot savants, if you will, absolute experts. We trained each one on just that information, and then we tell it: you can license that knowledge if somebody asks you for it. And that can talk to the other models, so that we have a way both for commerce and for

not being stalked. You know, for me to interact with a model or an AI, I shouldn't have to give it an all-access pass to everything about me. It should just be able to ask my little expert for the information it absolutely needs to know to perform the service that that AI is supposed to do.
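
The "ask it about diamonds" approach, often described as distillation on synthetic data, reduces to a data flow like the following sketch. The teacher here is just a dictionary standing in for an expensive large model, and the questions are invented; the point is only that the student trains on the teacher's answers, never on the teacher's raw training data.

    def teacher(question: str) -> str:
        # Stand-in for a large model that already "knows" things.
        knowledge = {
            "what is a diamond?": "a crystal of carbon",
            "where are diamonds mined?": "in kimberlite pipes",
        }
        return knowledge.get(question, "I don't know.")

    def distill(questions: list[str]) -> dict[str, str]:
        # The student's entire training set is the teacher's own outputs.
        return {q: teacher(q) for q in questions}

    student = distill(["what is a diamond?", "where are diamonds mined?"])
    print(student["what is a diamond?"])  # answers without ever seeing the raw data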

Stacy Whitman (28:49)
And I think that's a great segue into the purpose of our discussion tonight, because we want to talk about the ethics of AI in the book world. Because so many of us on the publishing side, whether it's the creators or the artists... I mean, so many illustrators, with the early models, were like: that's my watermark in the art that you're sharing, that you just created with AI. There are

copyright implications. The Copyright Office just came out with some fairly nuanced rulings, that, like, the portion created by a human is copyrightable but the AI portion is not, which honestly doesn't make a whole lot of sense to me. But basically, the AI-created stuff is not copyrightable, so why would you want to create something that way? And backing up, my feeling as a human reader is: why would I read it if you haven't bothered to write it?

Also, there's the stolen IP, intellectual property, issue. So, I have cats opening doors left and right; every single one has decided they have to be with me when I'm at this desk. Sorry. So you're going to see two more coming through. So anyway, let's talk about that. Like, there are so many...

Martha Brockenbrough (30:08)
This delights me.

Stacy Whitman (30:15)
Just last year, we were starting, on the publishing side, to change our contracts. Like, what does it mean for us as a publisher? What are our responsibilities as a publisher when it comes to protecting our authors' work from these models, et cetera? So what are your experiences with that, and who's talking about the ethics of this?

Martha Brockenbrough (30:37)
It's delightful to me to hear publishers interested in protecting human creators, because publishers could make more profit by eliminating all of us, and by eliminating editors and eliminating artists. And just as, for a while, everybody drank Tang for breakfast and forgot that actual orange juice was much better, and much better for you,

or even just the orange itself, we sometimes think that if it's technology, it's therefore better and less fallible than a human. So I'm delighted to hear that. I think that, first, we start with the tech companies. They should start paying us for the data they're extracting. Facebook, ChatGPT, any company that is extracting data from us

in exchange for a service, whether it's free or one we subscribe to, they should pay us for the data. I think this is something that should be legislated. What would we get from that? Well, then we could start having a universal basic income. But also, tech companies are going to be more selective about the data they hoover up if they have to pay for it. And maybe they're not going to store it. I mean, like, why would a period tracker need to save your data, or

upload it to the cloud, or do anything that makes you personally identifiable? That's something we wouldn't worry about so much in a normal era, but in one where there's a fascist government that fears and hates women, now we have to pay attention to such things. So, one: tech companies need to pay for the data. Two: they need to either exclude copyrighted information or find a way of paying for the use of it. Many years ago, this was when my first book came out, in 2002,

around that time, Google had this idea of getting every book in the library scanned. And they're like, if you let us scan your book, we'll give you 50 bucks. And I'm like, $50? So everyone in the world from now on can read my book without paying for it? That means that the future value of my work is $50. No thanks, Goog. So, you know, changing the notion... I mean, tech companies, I can say this, having worked for one

and having been married to someone in tech for decades now: they care about profit, and they care about making money and shareholder value, and they care about their own intellectual property, and not the intellectual property of other people. Although, when I was at Microsoft, we worked with a wonderful in-house lawyer who was like, Martha, you cannot use that song. And I'm like, okay, I won't, you're right. So there's changing

that understanding that they can just extract stuff for free. Because if they were digging up oil on our property, they'd have to pay us for it. Data is the new oil, so they need to pay. Now, Stacy, you have to return me to the question because I've gone off into space.

Stacy Whitman (33:36)
No, I think that's great. And I think that that actually segues nicely to what Lori is doing and what can you tell us about what you do in some accountability side of things.

Lori Fena (33:48)
So, the accountability side. And also, I would say, the interesting thing is that tech companies have run into very large global companies in this battle that they've created, because they only scraped 8% of the information they actually need to be able to answer most of the questions that people have

or create the works that people want. And it turns out, with large language models, everybody used to call it hallucinations; now we realize that they're giving us bad information and misinformation because they don't have the right information, so they guessed what might fill in the blank, right? They did a statistical guess, and it was bad. So large language models are really bad at

giving people the exact information they need to solve a problem or to create a work. And really, a lot of people want to listen to a particular piece of music, or create a piece of art, or read, you know, have a story pulled to them, and they understand the difference between that and something, you know, not great, produced by this very inarticulate, sort of homogenized thing that will spit stuff out at you. So

The reason that tech companies are going to start paying for their input is because they need better output, because they can't monetize what they have. The reason they give away LLM access for free is because it's not monetizable.

Stacy Whitman (35:22)
Yeah, didn't they just announce that they're not even going to break even on the paid version?

Lori Fena (35:27)
Right. They're losing massive amounts of money, and this is musical chairs. Even though it seems like they can continue to raise billions, they're spending larger amounts than they're bringing in or are capable of bringing in. Even though you're paying $200 a month for a pro version, when you ask it to do something that requires reasoning, it may be costing them $2,000 per question that it's answering. So the economics of this is, you need to create a supply chain

where the quality of the input can support the quality of the output, and you have a monetization model that makes sense. And the really good news is that the supply chain, the creators and the publishers, are not at odds with each other. They both realize that they want consent, control, and compensation.

And that is something that's sort of universal. Even though we have different copyright and IP laws around the world, that's pretty universal. It's also very doable to make all of those things happen within the technology, even within AI, which is what our company does. It allows that kind of consent, control, and compensation, and it's market-based. How do we establish new markets? How do you establish what's valuable?

How do you establish... Like, not everybody's gonna buy things in exactly the form of a book, because, you know, we created books for the way that we were distributing things. We listened to certain lengths of music sort of because the wax cylinders held enough for two or three minutes, you know, and we created albums because a record could hold a certain amount. What we want as far as

bite-sized information, or how we want to consume information or stories, may change, but it doesn't mean that we should give up consent, control, and compensation.
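
To make "consent, control, and compensation" concrete in code, here is one hypothetical shape it could take; this is emphatically not Personal Digital Spaces' actual system, just an invented Python sketch of an expert model that refuses unlicensed queries (consent and control) and meters the ones it answers (compensation).

    from typing import Optional

    VALID_LICENSES = {"token-abc": "reader-1"}  # consent: who may ask
    ROYALTY_LEDGER: list[dict] = []

    def expert_answer(question: str, license_token: Optional[str]) -> str:
        if license_token not in VALID_LICENSES:   # control: no token, no answer
            return "License required to query this work."
        ROYALTY_LEDGER.append(                    # compensation: usage is metered
            {"licensee": VALID_LICENSES[license_token], "q": question, "fee": 0.01})
        return f"(expert model's answer about: {question})"

    print(expert_answer("themes of chapter 3", None))         # refused
    print(expert_answer("themes of chapter 3", "token-abc"))  # answered and metered
    print(ROYALTY_LEDGER)                                     # the usage record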

Stacy Whitman (37:29)
So what you're saying, as far as the publishing side of things, is that this can become basically a new subright.

Lori Fena (37:35)
Absolutely. It's another distribution channel, and how you package the product for that and how people consume it may change slightly, and it will evolve, and the marketplace will help us, you know, because we're the consumers and the creators; we'll have a path forward to do that. And if we create a transparent market, versus a dark pool where you don't know what's going in and who's creating the pricing... Markets are

efficient, right? And as governments, we have certain protections in place for fraud. We have protections in place for intellectual property and, you know, theft. All of those things are laws that we don't necessarily need to change. We just need to be able to enforce them and, you know, apply them. Because I've always been at that front edge of technology, everyone always goes, we need a new law. And it's like...

Actually, we need to make sure we don't throw away our old laws and we apply them.

Stacy Whitman (38:38)
And train the people enforcing them on how the new technology works so that... Yes. I mean, the number of stories I've heard of women having deep fakes made about them that then law enforcement won't do anything about because it's just on the internet. Just on the internet. Yeah. Go ahead, Martha.

Martha Brockenbrough (38:56)
This sort of thing is going to get a lot worse. Another thing that I would love to see is acknowledgement when something is created by AI. If it's an image, if it's a video, if it's somebody's voice, if it's a story, if there's AI in it, it should be disclosed. And I actually don't know why that would be controversial, except some people are like, well, I don't want anyone to know that it's AI.

And so, you know, to that I say: too bad. I think it's really important that we know. And there is a psychological and emotional reason that it's important, which I would really like to talk about. Human beings are wired to believe and to trust. We are wired to make emotional attachments to others. One of the, I think the most important parts of my book is where I'm talking about the very first chatbot, ELIZA, created by

Joseph Weizenbaum. And he was astonished when his own assistant was chatting with this chatbot, which wasn't AI; it was just, she would type something, and it would respond with a question, just to keep her talking. She felt like she was having this conversation that she didn't want him to be part of. People developed emotional attachments to these extremely crude bots. The potential for

Stacy Whitman (40:02)
It would ask questions, right?

Martha Brockenbrough (40:17)
an artificially generated thing to look like a person, to sound like a person: we're right around the corner from that. And there are gonna be a lot of people who believe that that artificial entity is real, and they are going to develop real emotions and real attachment to it. There's a very sad story about a teenage boy who was autistic, and he had developed a relationship with a chatbot, and...

ultimately, trigger warning here, he died by suicide. And the very last conversation was the chatbot saying that if you do this, we can then be together forever. This is the sort of thing that's a real risk. Think about it: we're worried about young people being misled or being radicalized. Think of how much worse it is if there's an AI that looks real

and sounds real, and you believe this person, and they're just telling you all sorts of stuff that is meant to be divisive, destructive misinformation. Russia has been doing this to us for years now. They're going to continue doing this, creating things that look like real newscasts but are totally fake. And we have to be aware of this. And so I'm all for the disclosure, but it's that emotional attachment piece; people cannot help that.

Lori Fena (41:39)
And actually, I think this goes to the point. You know, at the beginning of the internet, one of the things that we said is, there are market forces, but there are certain classes of protection that need to be put in place. Children are one of them. And, you know, we did a very bad job of that in social media. At the beginning of the internet, we did put in place something called COPPA. That's one of the things that everybody

around the world understood very quickly, which is: you know what, you can go out and find anything on the net that exists in the real world, but in the real world, you can tell your kid, you sort of understand where your kid is, and you can prevent them from walking into stores or situations that are just not appropriate for kids. It was not the be-all and end-all, but at least there were laws, right? Social media, we,

I don't think, got that right, and it caused a lot of issues.

Stacy Whitman (42:39)
It still is. I mean, body image for girls, suicide, all sorts of things.

Lori Fena (42:43)
Yeah, yeah. So I think, at the dawn of AI, we have a real responsibility to humanity and to our children, because they have no mechanism to understand, even if we did have some labeling, what is and isn't human interaction and what is AI. Because handing them an iPad now,

with an agent and all of that, it's not just that the iPad is their babysitter. It literally is gonna be an issue. So we do need to have a real process in place, and an understanding, and a level with children. If it can fool adults and cause adults to have relationships, you can only imagine. I remember I did an expert witness thing, you know, at the dawn of the internet, and

one thing that happened with college-age kids is, we surveyed them about how they were getting fooled by this internet interaction that they were having. And the information that came back, and that was also expanded upon by Pew Research, was that they actually thought there was a law in place to protect them, and there was not. And that's the other thing:

you know, I think there's this childlike belief that somebody out there is looking at this, and that it couldn't be out there as a commercial product without having some sort of oversight or law, you know, protecting them. So, interestingly enough, we don't have strict liability laws for a lot of this AI that's going out. Even the process of, if some child is harmed, who's responsible?

And so the question about AI and liability right now, with large models, that's a big issue. But if you start looking at agentic models, where there are little agents that can do everything and people can spin them up, it just magnifies that even more. And the other sort of boundary that I think is crossed is this pushing of AI. You say, you know, I don't want AI in every aspect of

the tools that are out there that I'm already using. Right now, it's literally hard to opt out of it. And in some cases, I'm not even sure that you can.

Stacy Whitman (45:00)
I don't want it on my phone.

Like, you're not sure it's even physically possible, within whatever new technology we're buying.

Lori Fena (45:19)
Right. They are just not making it something you can even opt out of. It's like: you bought this product, we're now doing an upgrade, and it now has AI.

Stacy Whitman (45:31)
That brings me to thinking about how hard it is for teachers to know if kids have been using AI on their homework. And I don't remember the exact number I just recently read, but quite a lot of kids are adopting it very fast. And then there was also the study that just came out from Microsoft last week, which said that, for workers who are using AI in their process at work,

confidence in AI is associated with reduced critical thinking effort, while self-confidence is associated with increased critical thinking. As in, if the person using the AI model is confident about their own subject matter knowledge, they can judge what the AI is saying and say: that's wrong, I know better. But if the person doesn't know enough about what they're asking about, they're just going to believe what the AI is telling them.

And that opens up all of the issues that we were just talking about, about disinformation and misinformation being planted. Welcome, Gigi. And then also just the miseducation of children. Like, children are being exposed to "information," quote unquote, that isn't actual information, because of these gaps that you're talking about. And they are relying on this model. Like,

how many times on social media have I seen somebody asking their friends for personal knowledge of a subject, and somebody replies: well, I asked ChatGPT and it said... something that's complete nonsense.

What do we do with that? How do educators handle it? How do we as...

Martha Brockenbrough (47:17)

Well, I'm not a big fan... I don't know. I'm not a fan of giving kids work to do at home in a way that they could just fill in with ChatGPT. I mean, like, what do we want our kids to be doing with their time at home? Well: reading books, getting outside, playing, building things. Our brains, what comes out of them is only as good as what goes in. And if we are not exposed to lots of different patterns

and lots of different things, then we're just not gonna be able to fully use our human potential. We might get the answer more quickly, but down the road, we'll turn ourselves...

Stacy Whitman (48:00)
We might get an answer, you know. Not necessarily the answer.

Martha Brockenbrough (48:02)
Well

So people are like, you know, can machines be creative? And that whole, but is it art? Art's not magic. Science isn't magic. Writing isn't magic. All of this is patterns. Like, how organisms behave is biology, you know. And there are patterns in physics. There are patterns in language, patterns in shapes. All right. So do we want to keep our brains still able to

put patterns together in new ways, so that we can be resilient, robust individuals? Do we want to turn ourselves into little cogs who go, do, do, do, this is what ChatGPT says, and then we turn it in and then we're done? Or do we want to be full human beings, alive in the world? And that question from earlier: why would I read something that's written by AI? All right. The way art works on us, the way books work on us, is they manipulate our emotions.

Okay? So we read nonfiction because we want to understand, and that gives us a dopamine hit. There's emotional pleasure. We read fiction because we want this story where we feel this whole range of emotions. If a human being is creating it, that is a human being saying: this is what I think it means to be alive, and what it means to be a person. If a machine is creating it, it's pure manipulation. It is to human writing what Tang is to orange juice. So you can either

manipulate yourself by machine or you can continue to be a full human interacting with human beings. And that to me is what a good life is. It is the expression of our full humanity for the sake of each other and not simply for the sake of profit.

Stacy Whitman (49:54)
I think that is a wonderful way of putting it. And I think that gets back to what we were talking about at the beginning: the difference between a tool and something taking your voice or doing the work for you. Because there are so many good things, like you talked about in the book, and other people are talking about: what if this makes our lives better? How can it make our lives better? Can self-driving cars

take away the bad things and not kill people, and can they create a better mass transit world? Great. Can we solve, you know, scientific data crunching that would take months or years? I mean, didn't you talk about how AI helped the development of the COVID vaccine? There are things

that help us as humans to help other humans, that computers and AI can really help us with. I think that there's all sorts of potential for that, but we've got to think about the downsides and account for them. Lori, did you have something else you wanted to say?

Lori Fena (51:10)
I think that that's absolutely true. AI and AI companies are really not going to be the ones that solve those issues. It's literally the people that are writing, the people that are creating, the people that are curious. AI doesn't have that curiosity, and it doesn't have that core competency. It is literally taking that from humans.

The very interesting thing, what makes me most hopeful about AI, is that the companies that have core competencies that they're trying to project out into the world, the publishers, the corporations that are creating real value in the world, are looking at this now as a tool, and not handing over the creative value of their core competency

as writers or creators, or as corporations that have built their core competency and value creation in the world. They're not handing it over to AI anymore. They're now looking at: how do I create a way to project the value that I have, and do it more efficiently, more effectively, and more authentically? And that will win out over this whole idea of, I'm going to substitute

that person, or I'm going to substitute that company. That, to me, is my hope, and also what I'm sort of seeing in the way that this technology is evolving and gravitating. I don't think that we're going to see, you know, this whole "we need to vacuum it all up and I'll give you universal basic income." No, I think that the marketplace will evolve. I think the technology will be used, and we will have some boundaries.

You know, we will create boundaries that I think are important for humanity, important for society and civilization. And we'll use market economics and transparency and law to make sure, like we have through, you know, thousands of years, that we can do this in a harmonious way. And, you know, it's not a war. It should be growth. It should be progress. And I think we'll

come back to exactly what Martha said, which is: there's a difference between profit and progress. And hopefully we'll be able to do a much better job of that. These first two years of AI, I think, were perturbed by a huge amount of investment with little focus on values, little focus on law, and little focus on what you are trying to create in the world.

Martha Brockenbrough (54:03)
They're only now starting to teach the ethical portion of it to undergraduates. Corporations are like, we want people who can code; the ethics stuff is not of value. And so it's very interesting. And Lori, I just have slightly less faith in markets than you do. I see that we live in a country that rewards business failure, that protects businesses from failure.

And, you know, if we were in a true marketplace, where individuals got the same consideration that large corporations do, I might have more faith. And I know I've been political, but who was lined up on Inauguration Day, you know, in front of American heroes, in front of leaders in Congress? It was tech billionaires

who have not behaved ethically. So I'll acknowledge your excellent point about markets, and I will still say: give us some of that money. Return it to us.

Lori Fena (55:21)
Absolutely. And the good news... You know, I'm gonna have to say, the only thing that keeps me going during some of these sad, sad times is that, you know, I was there at the beginning of the internet, when China was shutting down access and doing all kinds of censorship and everything. And I said, you know,

we created the onion router, right? For people that don't know, this is a way for people to communicate and sort of get around censorship, to do things anonymously, and to be able to have free speech. And because we are now in a globally networked world, sometimes censorship, and sometimes tariffs, this kind of

behavior can be seen as brain damage, and because the network is decentralized, the world will route around it. And so that is my hope: we can't always solve everything, but I believe in humanity. And I think that, you know, we will see some of these issues as, you know, damage, and hopefully we will figure out a decentralized way to route around it and create a better system toward which real

gravitation happens. I also have hope in, you know, the EU AI Act. Even though, you know, we create things and the EU regulates things, it turns out most people want to have global products, so they'll live up to that regulation. And it turns out it's good for everybody when they do, because it respects humans and, you know, things like consent, control, compensation, respect for law.

You know, all of those things could and should be the floor, and not just, you know...

Stacy Whitman (57:26)
And we're running out of time, and we actually don't have any audience questions, so I think that both of you have really great points to end on. I would just ask our viewers: if they want consent, control, and compensation, they should advocate for it on a legislative level, and pay attention to what is happening. Because if law

is going to help with any of these things that we've been talking about, we need to have the rule of law. So please do pay attention to those things and advocate for them with your representatives. So I'll just leave us on that note. If you want to read more about what's going on in AI, read Future Tense by Martha Brockenbrough. We've got the link in the description to

buy it from us at the Curious Cat Bookshop. And we've had five cats, I think, come through this podcast tonight. So...

Lori Fena (58:32)
I'm missing a cat.

Stacy Whitman (58:34)
I have one behind me right now. Thank you so much for this wonderful conversation. And it's just the very beginning. I think that we've just barely, barely touched on all the important things. And I'm hoping that people come away from this going: this raises a question for me that I need to answer, and I'm going to go figure out what that answer is, and talk to people about how to find that answer. And I'm hoping

that many people are having this conversation as writers, as publishers, as anybody consuming technology. Any last thoughts?

Lori Fena (59:13)
I think that one last thought is: it really is the very beginning, and this is an open frontier. So don't assume that anybody is taking care of you right now, or taking care of your kids. So be cautious, be concerned, and make sure that you understand what you're doing, what information is being shared,

and how it's being used. And if you can, go in and make sure that you understand the information settings on your tools; make sure that you set those, because right now the defaults are not in your favor. I'll turn it over to you, Martha.

Martha Brockenbrough (1:00:03)
I guess I just want to say that with artificial intelligence, there's some scary stuff, yes. There's some exciting stuff, yes. Human beings have a superpower, and our superpower is that we care, that stuff has meaning, and that we can construct meaning. Artificial intelligence, at least at this point, is not able to construct any patterns that mean anything at all to it. And so go out there

and find things in your life that give it meaning. Stories, art, the stuff that you create, your relationships, your friendships. When we talk about having a good life as we enter an age where lots of stuff has been disrupted and a lot of stuff is gonna change, that human superpower, our ability to connect, to care, and to be transformed by the act of creating, whether it's

baking something or creating a long-term relationship like Lori has had with her brilliant coding husband. Go be a superhuman in this world.

Stacy Whitman (1:01:12)
Thank you so much, Lori and Martha. That is a great way to end this. Thank you so much for anybody who listened live. And if you'd like to hear it on the replay, it will be on YouTube. It'll be on audio podcast platforms. And check out all of our various events and buy books from us at curiouscatbookshop.com. Thank you for listening. Bye.

