Understanding IP Matters

3-D Chess: AI's Race for Market Share and IP Supremacy

The Center for Intellectual Property Understanding, Season 5, Episode 5

Allison Gaul, senior counsel at BCG X, an invention development and commercialization company, discusses the evolving AI landscape, where intellectual property awareness meets real-world strategy.

As both a former patent examiner and litigator with a Harvard graduate degree in business analytics, she offers insider perspectives on how companies secure IP rights, why investors now prioritize AI risk policies, and how open source licensing drives market adoption.

The conversation explores copyrighted training data challenges, how small language models compete with foundational LLMs, and why publicly available doesn't mean free to use. Gaul shares practical IP protection strategies for startups and established companies navigating content attribution, energy-efficient blockchain solutions, and the misconceptions engineers hold about software patents.

Key Takeaways:
• Small language models (SLMs) trained on specific, targeted-use datasets are gaining traction because of their relevance and efficiency
• Investors are now scrutinizing AI startups' risk and compliance policies more carefully
• Open source licensing has become a significant tool for capturing market share
• Publicly available content is not automatically free to use, though not all LLM developers subscribe to this view
• Blockchain offers potentially reliable solutions for IP tracking despite energy concerns
• IP and AI strategy require balancing innovation with responsible ethics
• Gen AI adoption began with easy productivity wins across industries
• Businesses that are mindful of AI risk are in a better position to attract capital

Subscribe to Understanding IP Matters on your preferred platform or visit understandingip.org for more episodes exploring intellectual property with leading innovators and experts.

00:00 - Introduction to Allison Gaul
01:07 - AI race and investor expectations
02:36 - Risk policies investors demand
03:01 - How companies leverage Gen AI
04:21 - Working with foundational model providers
05:34 - Day in the life of a product attorney
06:40 - Multi-dimensional AI competition
08:34 - Open source as market strategy
09:10 - Small language models vs LLMs
11:02 - Copyright challenges in AI training
13:29 - Content attribution and data rights
15:41 - Licensing deals and fair use debate
17:34 - Legal frameworks catching up
19:20 - Transparency in AI systems
21:25 - Attribution standards discussion
23:38 - Geographic variations in AI law
25:44 - EU regulations and global impact
27:50 - Cross-border compliance challenges
29:33 - Energy concerns in AI development
31:18 - IP education for engineers
33:11 - Patents in software development
35:27 - Ethical IP strategy and responsibility
37:54 - Patent troll misconceptions
39:50 - Attribution vs permission clarified
40:54 - Blockchain solutions and limitations
42:08 - First exposure to IP rights

[Allison Gaul] (0:00 - 0:10)
I think we are in like a multi-dimensional chess version of an AI race right now. It's kind of a Wild West right now of data acquisition and everybody's trying to approach the problem in a slightly different way.

[Bruce Berman] (0:14 - 1:06)
Hello, I'm Bruce Berman, host of Understanding IP Matters, the acclaimed series that provides leading innovators and experts the space to share their IP story: the good, the bad, and the incredible. Allison Gaul is a former Ms. District of Columbia, but don't let that fool you. She is also a leading AI and IP expert who has helped dozens of companies and investors feel their way through the current AI maze.

Allison holds a graduate degree in business analytics from Harvard. She's a registered patent attorney, former litigator, and patent examiner. As senior counsel to BCG X, a leading design and build firm, she is responsible for evaluating digital products with an eye towards intellectual property strategy and value creation.

Hello, Allison. Good morning. It's great to meet you, and thanks for joining us today.

[Allison Gaul] (1:07 - 1:16)
Thanks so much, Bruce. I'm glad to be here and looking forward to talking about IP, AI, and hopefully not too much about being Ms. District of Columbia. We can touch on that too if you want.

[Bruce Berman] (1:17 - 1:26)
Okay. Well, you're close to the front lines of AI and IP. Are AI businesses attracting more or less capital these days?

[Allison Gaul] (1:27 - 2:36)
I would say more, but I do think investors are becoming a little more discerning. I think two years ago, if you were applying for an AI venture capital raise, then you were probably going to get it, because venture capital firms didn't necessarily have all the tools at their disposal to really get a good sense of how to evaluate the quality, and what the market space might be, for AI startups. I do think it's a little bit harder now.

One thing I think is really interesting is that we're seeing some of the distinguishing factors from startups really relate to their risk and compliance policies. One of the things that venture capitalists do want to see, and angel investors, anyone who's trying to invest in AI startups, is that they want to know you have a maturity level with respect to your AI risk understanding, your compliance policies, and so you're not going to create substantial risk down the road if you turn on whatever your app is, whether it's a wrapper on a LLM or if you've done something really unique.

We want some kind of assurance, I think, that people know what they're doing and aren't going to create risk.

[Bruce Berman] (2:36 - 3:01)
There are a lot of risk-related issues with AI, obviously. It's so new; it's a Wild West out there in terms of law and precedent. Allison, we work with many young companies.

How are they securing and using IP rights? Is it mostly patents on infrastructure or copyrights on content or trade secrets? What are you seeing these days?

[Allison Gaul] (3:01 - 4:06)
At Boston Consulting Group, we advise companies, large and small, on a variety of business issues. I would say one of the super cool things about being in the management consulting space at this time is that you get to see all the different ways people are leveraging Gen AI across industries to solve all kinds of business problems. The early wins in Gen AI adoption, I think, relate to productivity.

That's where I think most C-suite execs were really trying to get their budget in place with respect to AI and upskilling on it, getting people opportunities to do things more efficiently and faster. I think that's also a great way to introduce people to these tools, getting your workforce upskilled on AI. We're now starting to see people moving a little bit more into some of the logistics, some of the back office kind of work.

I think we're going to continue to see that evolution as comfort and adoption in AI literacy grows. We're going to start to see it permeate more and more aspects of the business.

[Bruce Berman] (4:06 - 4:20)
So you're working both with AI companies, companies that have LLMs or small language models, and with the companies they may be selling to, I assume. Am I right?

[Allison Gaul] (4:21 - 5:17)
Yeah, so we have a lot of tech partnerships with the foundational model providers themselves, and we do have research institute arms that help do research in the AI space, help grow those models, their capabilities. But we also do a lot of work for, again, sort of corporations across various industries, helping them figure out how to leverage those tools strategically so that they can actually solve the business problems they have. And it's a really interesting kind of place to be as a lawyer because you're getting to learn about the technology, but you're also getting to see how people are using it and how clients want to address those problems.

And so there's kind of different perspectives on risk that you get to see, different perspectives on the value of IP, and where both the IP owner and the IP user see that value line.

[Bruce Berman] (5:18 - 5:33)
Yeah, and your role at BCG X, I know you work with companies on value creation and strategy, but what's your day like? Who's a typical kind of client? You don't have to name names, just to give us a sense.

Yeah.

[Allison Gaul] (5:34 - 6:26)
A lot of my work as in-house counsel really focuses on my internal stakeholders. So a lot of my clients are actually our engineering teams, our product teams. And one of the coolest things about being a product attorney, I think, and getting to be a tech attorney at this time is that I really get to be integrated with the product teams, talk with them about what they're doing, see the cool stuff that they're coming up with, and the asks that our clients have, and how our engineers want to solve those problems.

And then I get to look at that and say, okay, well, is there something that we've already monetized here, something we want to protect, something that we're concerned about, something that we want to go to market with a client over? And then also, do we have an opportunity here to try and help mitigate risk for ourself or our client moving forward to make sure that what we are building, we're doing as responsibly and ethically as possible?

[Bruce Berman] (6:27 - 6:40)
Interesting. It feels like we're in an AI race with the deepest pockets achieving as much brand recognition as first mover advantage will allow. What's your take on this?

[Allison Gaul] (6:40 - 8:33)
I think we are in a multi-dimensional chess version of an AI race right now, because on the one hand, we're all racing for market share. And then you also have this added component of this nationalistic cross-border competition to see which countries can really have the strongest AI models out, which ones can really corner the most of the global market. And I think that's created this super interesting moment in time where we are being driven in a lot of ways, not necessarily just by what people say in their marketing, but by user adoption.

And the way that a lot of companies are getting that user adoption is through open sourcing and open weighting parts of their models. So we're seeing these super interesting licensing schemes pop up, because all these different foundational model creators want people to just jump on their tool and use it as fast as they can.

They want to get as many users as possible, so they get network effects going and don't have to worry as much about moving people onto new models while they're ramping up. And I think that's a really fascinating thing that we haven't seen as much since maybe the 90s, where open source is really a driver of market share and one of the primary tools people are using to try to capture it.

But it's created a lot of really interesting jurisdictional differences. The EU recently had to back off and redefine some of their open source safe harbors because too many companies were trying to take advantage of it. I won't name any names, some of them have been in the news.

And so I think it's a super interesting time for IP and that IP licensing is really a driver of what's happening or what's helping a lot of this take shape.

[Bruce Berman] (8:34 - 9:09)
Yeah. Yeah. It's interesting.

Open source, I've spoken to some investors, venture investors and others, and for their portfolio companies, they embrace open source. They feel it's too expensive and too limiting to focus on a single proprietary LLM, like OpenAI's. The world, the technology, is moving so quickly that they don't want to align with that, nor do they necessarily need to.

If they have a more limited database, for example, a small language model might be more appropriate for them, and a lot cheaper.

[Allison Gaul] (9:10 - 10:18)
I think so. And I think that we're starting to see a transition there. This is a projection that I've been making that I think as AI continues to evolve, we're going to see people drifting more and more to these targeted use models that have been trained on highly specific datasets.

One, it's more cost effective for companies to build those rather than the ChatGPTs and the Anthropics, sorry, the Claudes of the world. And two, I think it gives us better results, right? When we're looking at highly specific trained datasets, it also provides us, selfishly as an IP person, an interesting opportunity to have better control over the IP ownership component of the data it's being trained on, right?

So if it's a smaller application, it's a smaller model, then it's a lot easier for IP owners to make the argument of like, okay, well, we really should be entitled to something here because you came very specifically after our IP because it met the need, right? That you're looking for. And you didn't just do a broad sweep of the internet.

And I think that that's going to be a lot more useful for users, but I think it's also going to help some of us as IP attorneys sleep a little better at night.

[Bruce Berman] (10:20 - 11:41)
In an excellent discussion, Allison, you had with Gene Quinn from IPWatchdog, you told him it seems there are two classes of Gen AI, LLMs and SLMs. The big companies, OpenAI, Microsoft, Gemini, Alphabet, Llama, et cetera, and Anthropic, which Google is putting billions of dollars into, have the servers, NVIDIA chips, capital, and quickly established market share. The other class is much more modest in their aspirations.

And that's what we had been talking about. But do you think there'll be more shakeout a la internet 1999? I mean, the big guys are going to be around, they'll make mistakes, but they can afford to absorb them.

But the smaller guys? I was in San Francisco for the IP Awareness Summit in April, and I spent quite a bit of time there, so I got to see what's going on. And the AI folks there, the smaller groups, were kind of paranoid. They were nervous.

They didn't want to come to an IP conference because IP and AI often don't mix. It was really kind of interesting. I didn't feel it was like an exciting, forward-looking culture.

I felt it was more of a defensive mindset, despite all the activity in San Francisco.

[Allison Gaul] (11:42 - 12:08)
Yeah, I agree. And that's such an interesting perspective. I think it goes back to what you were saying with the sort of race to market, where we're really trying to be the first out of the gate, trying to get the most users so that we can leverage that for potential investment.

And I think that we're seeing those smaller companies really kind of have a fear of telling anybody about what they're doing.

[Bruce Berman] (12:09 - 12:09)
Exactly.

[Allison Gaul] (12:11 - 13:58)
I'm seeing people leverage trade secret more rather than going for patenting. I think because a lot of the magic is in your dataset, and that's not individually protectable, people really do not want to disclose how they did it, what they trained it on, or how they're leveraging it until they've got enough users in the tank that there's a good chance they'll be able to get some money to help the company go forward. But even if it does take off, one of the things I think is so interesting right now, again, about this moment in IP, is that we're seeing the small companies not care about intellectual property.

And I've been asked this question, and it's a valid question, I think: what is the value of patents to a small startup if somebody else using Codex or Windsurf can come along and generate a very similar competing product within an hour? And on top of that, you can have people in jurisdictions that don't have the same IP reciprocity or the same IP protections that we have in the US doing that work, making that competing product within an hour or two and without repercussion. And so if I'm a startup founder right now, I think there's a valid question about why I should spend $10,000, $20,000, $50,000 on IP protection when what's really going to be most important for my business is just not telling anybody what's in the sauce until we can get up and running.

[Bruce Berman] (13:59 - 15:04)
Absolutely. Yeah, that's going to be my next question. But that's a really good point.

I think of Efrat Kasznik, who runs Foresight Valuation in Palo Alto and teaches part-time at Stanford. She's really astute about valuation and what companies like unicorns and others are doing. And if they're internet, e-commerce, or software companies, they're not likely to rely on patents for very much. It's just not a priority.

And the speed at which they're moving and the technology is moving also makes it very hard to put a stake in the ground and say, we have this great patent. It's not like pharma. It's different.

Data is the fuel for AI, I think everyone agrees, and while it may feel infinite, it's not. Currently, data is not protected by IP law.

We talked about this a bit. It's considered a collection of facts. Protection can occur through creative configurations of data, such as databases under copyright, but copyright can be easily designed around.

How are companies dealing with this? I mean, you mentioned trade secrets, but what are some other things you're doing?

[Allison Gaul] (15:04 - 16:21)
Yeah. So trade secret, I think, is one of the biggest ones, and we're seeing an increase in licensing of data. And very recently I've been seeing these interesting trends; we're hearing these whispers about compulsory licensing of data.

Some of the countries in the EU are really pressing hard for that. And there's a delineation that I've been noticing amongst the countries who are really ahead in the AI race and then the countries who have more startups and are trying to catch up. And some of those countries who, they have the scientists, they have the engineering, they have the will, but traditionally maybe they've had more rigorous data protection standards or more of a sense of IP ownership without necessarily the safe harbors that we have.

They're interestingly leaning more towards compulsory licensing in order to ensure that those startups have the ability to actually get the data they need to compete with some of these bigger foundational models. There's kind of this divide happening amongst countries who say, no, no, I think we're fine kind of as we are. And then countries who say, no, no, no, no, we really need to kind of like consider whether or not IP issues are actually preventing us from getting ahead in the AI race.

[Bruce Berman] (16:22 - 16:27)
Now, Alison, what would they be licensing? The database? Are they licensing the technology?

[Allison Gaul] (16:27 - 16:39)
Yeah, databases, web data, copyright protected content. I think a lot of it probably starts initially around web data, but then also, you know, maybe large research data sets.

[Bruce Berman] (16:40 - 16:48)
Interesting. And is licensing like based on like a FRAND model or how do they determine what the royalty is?

[Allison Gaul] (16:49 - 19:19)
That's a great question. And one of the things that's really concerning is that, from what I've seen, the countries proposing this haven't got it figured out yet. Some of the suggestions have related to governments providing, you know, a sort of mandatory fee for using this much data. And that creates a question of, well, okay, but if you say my dataset is worth five cents or something, what if through individual licensing I'd be able to get 20 cents on it?

Now for some people that may be advantageous, because if you're getting nothing, then that five cents is great. But if you could otherwise leverage that, then that's going to be a bit of a challenge for IP owners. I think there's a really big tension that we're seeing, not just with gen AI, but I think it's really brought it to the forefront is with any AI, we need data to make it run.

But at the same time, globally, we're seeing consumers and individuals and even companies want to push back a little bit more and be more protective of their data, keep more as trade secret, keep more as proprietary, because that is the value in so much of business in so many models and so many tools. And we're getting this tension where startups and model builders want access to everything. And the actual target audience is like, no, no, no, I don't want to give you that.

And it creates a really interesting opportunity, I think, for licensing structures. But right now there's no global standard for it, and we're just seeing different ways and different approaches to handling it pop up all over the place.

As someone who works for a global company, as a lawyer, it kind of gives me a little bit of pause, because I think, well, okay, that means we almost need a different data acquisition scheme for every jurisdiction in which we sit. And if we use a model, let's say ChatGPT, we'll just throw them out there, that is trained on data in one country, and they have a compulsory license, do we then have to pay that country, even if OpenAI sits within the US?

How is that going to work? And how are you going to get that attribution? There are, I should say, some startups popping up that actually do tracking through websites, in much the same way that robots.txt does, to try and figure out when those things are scraped by AI crawlers, so that the website can then charge for use of its data, which I think is a great solution. It's kind of a Wild West right now of data acquisition, and everybody's trying to approach the problem in a slightly different way.

[Bruce Berman] (19:19 - 19:48)
Compulsory licensing, especially in patents, is sort of a dirty word. No one wants to hear it, but at the same time, if it generates some income, as opposed to no income, for a lot of creators, which could be companies and/or individuals, it's meaningful. Neither side will be happy, but with compulsory patent licensing there'll be some movement in the direction of fairness, perhaps.

[Allison Gaul] (19:49 - 20:40)
And you could imagine, I think, that we'll start adopting these broader technical solutions, which are going to get there before the regulation does, for sure: technical solutions where you can put a marker on your website, and anytime your website gets scraped, it charges the scraper that much for use of your website. If those technical solutions can be tiered, so that I know the value of my content, maybe it's a little bit higher than something else, maybe I can charge 10 cents every time it's scraped and those people can charge 5 cents, then whoever is doing the crawling can have their crawler take a look at the cost for that website and make a decision about whether or not it's worth doing.

And you could see where that might introduce some equity and some fairness, even if we are compelling people to make their things available, we can allow them to dictate the price.
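The tiered pay-per-scrape marker described above could be sketched as a robots.txt-style pricing manifest that a crawler reads before deciding whether to scrape. No such standard exists today; the "ai-pricing" file format, field names, and prices below are invented purely for illustration.

```python
# Hypothetical sketch of a tiered "pay-per-scrape" marker. The manifest
# format and all field names are invented for illustration; no such
# standard currently exists.

from dataclasses import dataclass

@dataclass
class PricingPolicy:
    allow_ai_crawlers: bool
    price_per_page_usd: float  # site owner's self-declared tier

def parse_pricing_file(text: str) -> PricingPolicy:
    """Parse a robots.txt-style pricing manifest into a policy object."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments, as robots.txt parsers do
        key, _, value = line.partition(":")
        fields[key.strip().lower()] = value.strip()
    return PricingPolicy(
        allow_ai_crawlers=fields.get("allow-ai-crawlers", "no") == "yes",
        price_per_page_usd=float(fields.get("price-per-page-usd", "0")),
    )

def should_scrape(policy: PricingPolicy, max_price_usd: float) -> bool:
    """Crawler-side decision: scrape only if allowed and within budget."""
    return policy.allow_ai_crawlers and policy.price_per_page_usd <= max_price_usd

manifest = """
# Example site-owner manifest (hypothetical format)
Allow-AI-Crawlers: yes
Price-Per-Page-USD: 0.10
"""

policy = parse_pricing_file(manifest)
print(should_scrape(policy, max_price_usd=0.05))  # priced above our budget
print(should_scrape(policy, max_price_usd=0.25))  # within budget
```

The design choice mirrors what Gaul describes: the site owner dictates the price, and the crawler, not a regulator, makes the economic decision per site.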

[Bruce Berman] (20:40 - 21:39)
It's somewhat analogous to ASCAP and BMI for music royalties. They get pennies for a performance, or for something heard in a supermarket over the loudspeaker, but they get paid; they get something. And there's someone monitoring it too, which is also important.

But content is different, obviously, from patent world, so it's not as easy. Regulation, Alison, we've gone in a couple of different directions, sometimes simultaneously. California seems to be really aggressive on regulation, and ironically, that's where a lot of the big players are located.

But other people say we need to allow the AI world, the business of AI, to kind of regulate itself, let the market determine what it is, what it isn't, what's dangerous, and we should just wait and leave it alone. What's your take on that?

[Allison Gaul] (21:39 - 23:53)
I think regulation is always important. I think we are in a time period and probably have been for about the past 50 years, definitely the last 25, in which the pace of innovation definitely outpaces the ability of regulators to keep up with and stay knowledgeable about it. I have a challenge sometimes finding outside counsel who are up to date on these technologies, because they're just evolving so fast.

And I have great outside counsel, so it's not a knock against them. Things are just happening so quickly that I think it's unrealistic to ask regulators to really be on top of these things. So while I think that their role in helping guide us is extremely important, I don't think you can sit around and wait for them to tell you what to do in these situations.

So one of the things that I think is super important is actually companies having an AI code of conduct to help govern their own internal behavior. I think it's super important these days for companies to have sort of their own moral compass, if you will, and have a sense of like, how do we want to behave? How do we want to move in the world?

How do we want to engage with our partners, be they customers or other businesses? So that we know when something is kind of right for us and when it's not the right opportunity. And then I think it's also really helpful to have a public facing version of your AI code of conduct.

For us, it's a responsible AI policy. And that kind of gives you something that you can share with your partners, with your customers. It's a little bit of a marketing tool, but it also helps people understand where you're coming from and what those safe lanes might be for you.

So they can just determine, is this somebody I want to do business with? Is this somebody whose values align with mine when it comes to what it means to implement AI in an ethical or responsible manner? Because I think we're all aware that AI can be leveraged in a way that's really not good for humanity, customers, the environment.

And so on some level, industry does have to lead on that because the technology is just going way too fast. And if you wait for regulation, you're still waiting. ChatGPT has been out since 2022.

[Bruce Berman] (23:54 - 23:57)
You'll be waiting. We're still waiting for internet regulation.

[Allison Gaul] (23:57 - 24:24)
Exactly. Exactly. So if you were sitting around waiting, you would be using typewriters right now.

And so I think, well, I never ever want to say you shouldn't follow regulation. We absolutely should; that's the most important thing.

But we can't wait for it. We have to kind of find our own way, take our best guess on some level at what regulators are going to say and do, and then be able to course correct as needed.

[Bruce Berman] (24:25 - 24:34)
Is there a model out there that you find attractive or viable, perhaps? Something, some state or agency?

[Allison Gaul] (24:34 - 25:49)
Yeah, I think in general, California actually does a pretty good job. I'm really interested to see where the EU is going to land now that they're kind of backing off a little bit. I think the EU AI Act had some really great frameworks with respect to how it was classifying different types of models.

Didn't always agree with it, but the fact that they were trying to minimize the distinction and just say, okay, we have sort of high risk use cases. We have sort of general use cases. And I think that that was kind of a great way to go grouping things according to sort of the social harm that could be done by them.

But they didn't really fully flesh out how to actually proceed with a course of action against that, or how they were really going to monitor that, or do people need to file things? And so now that they are backing off a little bit, I'm really interested to see where they land, because I think they were kind of headed in a good direction. But it was maybe a bit overly burdensome at first because it just wasn't clear.

So we'll see, I think, where that goes. I wish maybe we had a little bit more of a uniform framework in the US, but that's not the way we do things.

[Bruce Berman] (25:49 - 26:41)
No, no. It's a free market. It's interesting that some of the bigger players are actually, because they're so well funded, as we all know, their capital is chasing them, they can afford to pay for licenses.

And I think ChatGPT and some of the other players, I mean, Anthropic just had this huge settlement, $1.5 billion, $3,000 per novel, which if you think about it over time is really very little money. But it's a stake in the ground. Maybe it's a settlement that will be financially attractive down the road.

It doesn't look that way now. But there seems to be an effort to recognize content and pay for it. It seems to be going in that direction among the biggest players.

The smaller players may not be able to afford to do that.

[Allison Gaul] (26:41 - 28:53)
I think that's absolutely right. I do see us going in that direction. I think it's the right direction to go in.

And I kind of love seeing some of the model providers get slapped down a little bit for making decisions that they probably knew at the time were not in the interest of the IP owners. One prediction that I would make is, I think we're going to see more and more sort of almost independent consortiums of content providers coming together and saying, okay, we're all going to band together under the Allison Aggregate Dataset or whatever. And if you purchase the Allison Aggregate Dataset as training material for your model, or you license that, then you can use the Allison Dataset.

And everybody who contributes to it gets some percentage for what they're contributing, so that these smaller businesses can actually utilize the training data they need. And I think this goes back to what I was mentioning earlier about how these smaller models are actually maybe going to give us a little bit more control, because they need more targeted datasets. So I think that by identifying those specific types of data they need, the quality, the scope, it gives us a better ability to say, okay, you need these 10 things.

Great. We'll bundle it together and we'll license it to you under the Allison Dataset. And it's this much.

And every time you upgrade or use it, they get a percentage of something. And I think that's going to give us a little better control over licensing opportunities, particularly for small content creators. Take graphic artists: I see this interesting distinction happening where some artists using AI are now starting to generate things specifically for AI.

So even though AI is in some ways destroying part of the customer base, or part of your ability to serve your customers, you're still able to serve some component of them. And now it's opening up this other opportunity, where you can start to generate things specifically as input for AI, because they need actual, real, human-generated content.

[Bruce Berman] (28:54 - 29:14)
Yes. Interesting. Sort of like the recording industry, every time there's a technological advance, they get a hit and then they learn how to incorporate it and then they make good use of it or they try to.

The Content Authenticity Initiative, CAI, you may or may not be familiar with them, Allison?

[Allison Gaul] (29:14 - 29:15)
Not off the top of my head.

[Bruce Berman] (29:15 - 31:11)
It's so funny. They have about 4,000 members, and they use open source technology. Actually, Adobe and The New York Times, I believe, were the original founders, and the open source tech was developed by Microsoft.

And it embeds a signature in images; it's more than a watermark, so it stays with the image forever.

You can't take it out. It's actually created with that technology. So you can monitor unfair use.

They don't talk about how to deal with infringement. That's for an outsider or another company. But you certainly can monitor use.

And their concern, it seems to me, is more abuse, manipulation. One of the folks involved, Santiago Lyon, is an interesting guy. He's a war photographer.

So he's very concerned if war photography images are manipulated or not totally authentic, that's a problem. That's a very dangerous problem. But a lot of content creators and distributors have signed up for this Content Authenticity Initiative.

But I think that notion of tracking is really, really important. Tracking how LLMs are scraping, tracking how they're compensating for it, or are they using the data they collect, or are they just scraping it and not using it? Who knows?

To me, it seems like, and I've said this before, we need bots to monitor the bots. We're not good enough to do that. We're just not smart enough.

So we need tools or weapons, if you will, to be on top of it. Yeah.

[Allison Gaul] (31:12 - 31:58)
I really love the idea of the watermarking content that's used as training data. I advised a startup a while back that was working on that same problem. There are a couple of technical players in this space.

And I think it's such a great thing to do. And also because I think the technical solution is going to get there long before the IP legal community has figured out the licensing scheme. But no matter what we do with respect to licensing, I think we need an ability to attribute.

And having digital watermarking, salting bits, however you want to do it, so that we have a sense of how much of what went into creating something, I think is going to be absolutely important for infringement.

[Bruce Berman] (31:59 - 32:00)
Enormously, enormously important.

[Allison Gaul] (32:00 - 32:14)
And assigning liability as well, because if somebody creates deep fakes and then goes and creates harm based on that as a result of what they've released, we need to know kind of who did that and where do they get it from.

[Bruce Berman] (32:15 - 32:50)
And also for creation. Does the AI, the LLM, play a role, a small role, in the creation of a new invention, say 10%? Yes, it was helpful. It pointed us in the right direction.

We found data. Or is it 80 or 90% of what was created? So when the person files the patent, it's really, well, 10% my work, but 80 or 90% the AI's.

So those kind of delineations are very difficult, obviously, to make. But I think you're going to have to try, because we're not going to know what goes into what.

[Allison Gaul] (32:51 - 33:19)
Yeah. And it's creating a couple of interesting situations. From the copyright perspective, you can actually now have, thanks to LLMs, what could potentially be multi-party infringement, depending on how much of which image you take. If I take 35% of this and 35% of that and 25% of this, then all of a sudden, maybe there's a substantial contribution from each of those, such that I'm actually infringing on three different photos.

[Bruce Berman] (33:20 - 33:20)
Exactly.

[Allison Gaul] (33:21 - 33:54)
No, I think you're right on. The inventorship issue is kind of where I see it with patents, because we have this interesting problem right now where the USPTO has really never examined for inventorship. We know that AI can't be an inventor, but how do we submit evidence around that? How do we submit evidence of what we did use it for?

How do we know, you know, what it was capable of at the time of invention and get a sense of what that contribution is and actually attribute to human actions? So I completely agree with you. Attribution and tracking is going to help solve a lot of problems for us.

[Bruce Berman] (33:55 - 34:45)
Yeah, I couldn't agree with you more, but that's going to have to happen. And we have the technology for that. You know, someone has said that Facebook and Google, you know, they're very paranoid.

Well, not paranoid. They should be concerned about pornography. They don't want anything to do with pornography because they'll be shut down.

So they filter for that sort of thing really, really well. They say if you try to put in certain search terms, it will spit out "We don't do that," or something.

But they can apply that technology, or some of it, I'm told, to tracking their use of data or of content, tracking other kinds of content. You know, they choose not to because they'd have to wind up paying for it. But the technology to do that is not that advanced, from my understanding.

[Allison Gaul] (34:45 - 34:45)
Yeah.

[Bruce Berman] (34:46 - 36:07)
IP behavior is often suspect. Now, we're talking a lot about AI behavior, but what about IP behavior? I don't have a good sense, and I ask this of almost all my guests, of what good IP behavior is on the part of a company, a business, or a consumer.

I think it's become so befuddled, and it's so acceptable to grab stuff, whether it be content off the internet or inventions, that if you're a large company, everything that's invented not by you is suspect. You know, it's not patentable. Only what you have is patentable.

That's right. As you know. Who's stepping up to show some IP behavior that is, you know, actionable and credible?

You know, at one time, companies like IBM, which was an operating company with a lot of patents, took patents seriously and respectfully, and they didn't sue folks. You know, they were a pretty interesting company. You don't really have that anymore.

You know, if you're an operating company, you don't like patents other than yours, usually. And with content, there are other issues. So what's your thinking about?

What is good IP behavior? Who embodies that in your mind?

[Allison Gaul] (36:08 - 38:31)
You know, I'd love to say that as IP lawyers, we're kind of responsible for helping carry that flag. But I think on some level, that behavior is kind of molded by the goals of our clients. And in the 90s, and certainly in the first two decades of the 2000s, we were in this large-scale aggregation race, you know, standards-essential patents, people building these massive portfolios of who knows what, and then just kind of gently forcing them on people, or not so gently sometimes.

And that was the standard operating procedure amongst a lot of the large patent holders. We're seeing a little bit of a trend away from that, partly because of cost, but also partly because the innovation pace is so fast now that the value of those patents, if they're not truly standards-essential, is a lot lower than it used to be. And so you can kind of design around and build around them more.

I think as IP attorneys, it is a little bit incumbent upon us to try and maintain the boundaries of, if nothing else, good sportsmanship within business, if you will. I think we've all heard stories about corporate C-suite executives, or in some cases lawyers, who said, oh, just file on everything because it stops our competitors from getting it, right? And while that's certainly a valid strategy, it maybe feels a little bit icky to some of us. Like, is that really in the best competitive spirit?

So I think, for me, a lot of good behavior in general, and I say this in my own corporate practice, is, quite frankly, just sort of thinking through the golden rule. It really comes back to what we were taught as kids. Like, is this the way you would want an IP lawyer, or a partner you're dealing with, or someone in a merger and acquisition, to treat you?

You know, is this a way that you would want somebody to try to leverage assets against you or collect assets to prevent others from doing things? And while we all certainly want to make more money for our shareholders, I think we also do have a responsibility to try to do things ethically and honestly. And I think that's the best way that I can model good IP behavior.

[Bruce Berman] (38:31 - 38:44)
Well, in the patent world, the tech world, licensors tend to get demonized. You know, any licensor, it doesn't matter if you're licensing on good terms with great patents, it's like, it's going to cost me money, so I'm going to call you a patent troll.

[Allison Gaul] (38:45 - 38:46)
Yeah, absolutely.

[Bruce Berman] (38:46 - 38:52)
And that's really unhealthy and it's bad for innovation. Forget about everything else, it's bad for innovation.

[Allison Gaul] (38:52 - 39:49)
I think there's also a lot of misconception. I think we do need better IP education. I think one of the things I'm constantly trying to convey to engineers, whether I'm giving a talk somewhere or sometimes when I'm talking to our junior engineers and they're upskilling, is just because something is publicly available does not mean it is free to use.

And I think that's honestly a common misconception. I think there's another common misconception in the software world: that if you are trying to patent something, it is because you don't appreciate software on a philosophical level, because you are not open to innovation and letting other people build off what you have. And I don't think that's true.

Well, there are certainly companies that have embodied that philosophy. I think that in general, people are trying to use patents and IP to help protect the base of what they have, so they can build from it.

[Bruce Berman] (39:50 - 40:11)
Crediting content or an image is not the same as paying for it, just because you credited the image and showed the source. What you're doing in that case, if the owner really wants to be paid, is making it obvious that you're using it. So I guess that's somewhat mitigating.

But it's not the same as getting permission.

[Allison Gaul] (40:12 - 40:13)
Right. Absolutely.

[Bruce Berman] (40:14 - 40:53)
It seems like it should be, but it's not. Yeah, it's interesting. The future of what we're talking about, I think, is the speed of technology and of AI.

And things are moving so quickly, and really in some very good ways, great ways, but we're also sort of losing control of it. Blockchain transactions have been talked about for decades. If they can really work, and these mini transactions can take place transparently and quickly, do you think that will help with a lot of this confusion?

[Allison Gaul] (40:54 - 41:54)
Yes and no. From a technical perspective, I think blockchain is absolutely a way to help with the attribution and tracking problem. It's a great way to track ownership.

We've seen that in the NFT space. So from a technical perspective, it's a great solution to helping with auditing, tracking and attribution. The problem with blockchain is that generative AI is already a massive drag in a lot of ways on the environment.

We're already seeing energy problems coming up there. Now, if you think about it, every time you write a query to ChatGPT, it now needs to make several calls to blockchains to go pull the IP and see who owns what. You can only imagine how exponentially that's going to change the energy situation involved.

So do I think that it's a solution? Yes. Do I think that we need to figure out how to make that more energy efficient?

Absolutely.

[Bruce Berman] (41:54 - 42:07)
Solution with problems. Okay. We're almost out of time and I need to ask you, I ask most of our guests, what was your first exposure to IP rights and was it positive, negative, neutral?

[Allison Gaul] (42:08 - 43:26)
Oh, I love this question. I was actually working as an administrative assistant for a law firm in Boston, a large law firm, and they handled the patents for MIT. And I was making copies one day and I was copying patents.

This was the first time I'd ever seen patents. And it just so happened that the patent that came up was actually for a pair of glasses. This was like 20 or 30 years ago, way before Meta glasses, and they would analyze an environment and generate an image.

And it was to help visually disabled people, who are not able to see well. And as it turns out, I'm actually visually disabled. So I saw that and I was like, you have to be kidding me!

That's so amazing that MIT is doing this. And I went to the partner and I said, how can I get involved in this? How can I get some of these to try them?

And he was like, oh, they're years from production. This is just a prototype that we're patenting. And I made him explain to me the patenting process, how that worked, and how far in advance these things were developed before they ever went to production.

And I was just fascinated from then on. And so that was my first kind of aha moment of when I thought, well, this is something I want to be a part of if I can actually get a front row seat to looking at what new technologies are potentially going to change people's lives.

[Bruce Berman] (43:26 - 43:47)
That's exciting. Interesting. Okay.

Well, we're out of time. This was wonderful. It went really quickly.

We'll have to return you to the stage here and hear more from you, but you gave a great overview of where we are and where we're headed. Basically optimistic, but with challenges, certainly.

[Allison Gaul] (43:48 - 44:05)
Absolutely optimistic. I'm very optimistic for the future. I think we're going to be agile.

We're going to adapt and there's going to be a lot of changes in the way we work and a lot of changes in the way that we do business and the way we think about IP, but we're going to adapt to it like we do with every other technology. So I'm very optimistic.

[Bruce Berman] (44:06 - 44:53)
That's great. Yeah, I am too. Hello, I'm Bruce Berman, the host of Understanding IP Matters, the critically acclaimed series that provides leading innovators and experts the space to share their IP story, the good, the bad, and the incredible.

Understanding IP Matters is brought to you by the nonprofit Center for Intellectual Property Understanding with the generous support of its partners and sponsors. Subscribe on the platform of your choice or email us at explore at understandingip.org. Content provided is for informational purposes only and does not represent the views of CIPU or its affiliates.

This episode was produced for CIPU by PodSonic. Thank you for listening.