Clarkslegal Law Bites
The Clarkslegal Law Bites offers guidance and insightful discussions on the latest topics for businesses and individuals covering employment, immigration, corporate, construction, property, litigation and more.
AI Podcast: AI and Intellectual Property
In the second of our three-part ‘AI Podcast’ series, Jacob Montague and Lucy Densham Brown, members of the data protection team at Clarkslegal, will be exploring how artificial intelligence (AI) interacts with intellectual property rights (IP rights). This includes:
- What are Intellectual Property (IP) rights?
- Can AI-produced work have IP rights?
- Recent issues for copyright infringement by AI systems
- What is generative AI?
- Why is legal reform needed for effective AI regulation?
If you have any questions or want to discuss data protection law and how it applies to you in more depth, please contact our data protection team, who would be happy to help.
Jacob Montague 0:02
Good Morning and thank you for joining us for this, the second of our three part series discussing artificial intelligence. My name is Jacob Montague, I am a solicitor in Clarkslegal’s corporate and commercial department, and I am joined by Lucy Densham Brown, a solicitor in Clarkslegal’s employment department. We are also both members of the firm’s data protection team.
Lucy Densham Brown 0:25
In this podcast we will be exploring how Artificial Intelligence interacts with intellectual property rights, also known as IP rights. We will be discussing some of the recent issues with AI that have been raised by IP rights owners, and how the legal framework may need to adapt to address these issues in order to balance the rights of IP owners but also not stifle the huge economic, social and political benefits that many predict will come with better use of AI.
Jacob Montague 0:55
So, lots to cover then.
Lucy Densham Brown 0:58
Yes. I think it’s important to state first and foremost what we mean by IP rights. These can be broadly split into two categories: those that need to be registered, being patents, trademarks and design rights; and those that don’t need to be registered, being copyright and moral rights.
The first topic we want to talk a little bit more about is rights that attach to AI produced work and the question: if the author or inventor of an image, a piece of software, an algorithm or literary work, is an AI system – can the work be protected using our existing legal framework?
Jacob Montague 1:37
This was a question that was raised back in 2018 when Dr Stephen Thaler, an AI academic and researcher, filed two patent applications in which he listed an AI system that he himself had created, known as DABUS, as the inventor. The applications were rejected by the UK’s Intellectual Property Office for two key reasons: first, that DABUS did not meet the definition of “inventor” under the UK’s patent legislation, the Patents Act; and second, that Dr Thaler himself was not entitled to apply for the patents merely by virtue of his ownership of the AI system that created the inventions. This decision was confirmed on appeal by the High Court, the Court of Appeal and, finally, the Supreme Court in December of last year. The key finding of the UK courts was that, as DABUS was not a natural person, it could not meet the definition of “inventor”.
Dr Thaler also filed a number of patent applications in other jurisdictions, and these were met with similar responses. The European Patent Office, like the UK’s own, found that what is effectively a machine could not meet the current definition of an inventor. It also rejected Dr Thaler’s argument that he had acquired the right to ownership of the patents, and the ability to file the European applications, by virtue of being DABUS’s “employer”. The European Patent Office rejected this on the grounds that DABUS does not have a legal personality.
Lucy Densham Brown 3:18
So, did anyone grant Dr Thaler a patent?
Jacob Montague 3:22
Interestingly, yes: the South African Companies and Intellectual Property Commission. I understand that under the South African Patents Act of 1978, “inventor” is not a defined term and, as such, there was no reason to distinguish between a natural person and a machine. To be clear though, this goes against the findings of its UK, European and Australian counterparts and was met with some interest by the intellectual property community.
Lucy Densham Brown 3:51
That’s really interesting. So what does this actually mean in the UK?
Jacob Montague 3:57
Well, looking solely at the UK decision, this means that, currently, an inventor can only be a natural person, not a machine, and the owner or creator of that machine does not currently have an automatic right to a patent in respect of something created by that machine. It’s important to state here that the content DABUS was seeking to patent was autonomously made, i.e. without Dr Thaler’s input.
Lucy Densham Brown 4:27
And why is this significant?
Jacob Montague 4:29
Well, in one very crucial way, it shows that UK and European legislation is not necessarily equipped to deal with technological advances, and that innovation may be stifled.
If sufficient protections are not in place, this could disincentivise AI innovators from creating machines that have the ability to produce technological breakthroughs, as these would have fewer protections than those afforded to inventors who are “natural persons”.
Lucy Densham Brown 5:05
Clearly changes are needed. What are the next steps?
Jacob Montague 5:10
Well, first and foremost, consultation. In particular, consultation on the need to amend existing law, or draft new law, in order to accommodate new means of producing technical advances. It will come as no surprise to our listeners that this is not an overnight affair, and harmonising legislation across many jurisdictions (even if they are in agreement at the moment) could take a very long time.
However, they have to start somewhere, which is why recent reports from the Financial Times, that the UK has effectively shelved its highly anticipated AI copyright code, are particularly concerning. There is a huge appetite for policy among both creatives and AI proponents, with both parties desperate for clarity on copyright infringement.
Lucy Densham Brown 5:59
This brings me neatly to the second of our main subjects today: the recent allegations of copyright infringement by AI systems in two key ways. Firstly, alleged infringement through the unauthorised use of works during AI’s data gathering or “learning” stage; and secondly, alleged copyright infringement in works created by so-called “generative AI” systems.
You will likely have seen from recent news stories that alleged copyright infringement is becoming an increasingly prominent topic, and we are seeing more and more high-profile individuals and outlets alleging that new AI services are infringing their IP rights. This new UK code of conduct had been due to clarify what protections currently exist and, hopefully, provide guidance on working with key AI stakeholders and, should it be required, on compensation. That this code would be voluntary raises all sorts of other questions.
Jacob Montague 6:57
I have seen recent stories about this, but how did we get here in the first place?
Lucy Densham Brown 7:02
Well, this is not a new debate for the internet age; the use of AI assistance in the creation of IP, and the use of existing IP by AI, is not a new phenomenon. Importantly, nor is the use of data scraping in order to provide a service: think of Google and everything it needs to do to provide you with the answers to your searches. Many believe that this is now at the forefront of people’s minds due to AI’s accessibility and the increasing sophistication of generative AI services.
Jacob Montague 7:32
Lucy, you’ve mentioned Generative AI a couple of times, can you explain what you mean by Generative AI?
Lucy Densham Brown 7:40
Of course. By this I mean the AI image generators we regularly hear about, creating new drawings, graphics and paintings in half the time it would take a human to do the same or a similar task. I also mean the AI text generators, such as OpenAI’s ChatGPT or Google’s Gemini: those that can produce, or generate, a new poem, play or novel in a matter of moments.
I think it’s first important to understand, in very simple terms, how an application like ChatGPT, to name but one, actually works. ChatGPT effectively works in a semi-unsupervised manner, having been given a distinct set of ground rules with which to review, analyse, and then regurgitate and reformulate information. It is fed a truly vast amount of data, which some believe to be the entire internet, to digest and, in inverted commas, learn from.
Jacob Montague 8:35
So ChatGPT and other systems can just use anything on the internet?
Lucy Densham Brown 8:40
OpenAI, ChatGPT’s creator, has been largely transparent about its sources and states on its help page that ChatGPT and its other services are developed using: firstly, information that is publicly available on the internet; secondly, information licensed from third parties; and thirdly, information provided by its users or human trainers. Where information is not licensed, the theory is that generative AI systems are relying on the American doctrine of “Fair Use”. The UK has a semi-analogous exception to copyright infringement known as “Fair Dealing”. Jacob, can you explain what Fair Dealing is?
Jacob Montague 9:24
Of course. Certain acts, such as research, private study, parody and criticism, are exempt from copyright infringement when the use of the material is seen as “fair dealing”. So what is Fair Dealing? Well, typically, there is no simple definition; in fact, there isn’t even a statutory definition that we can point to. Instead, the question you are supposed to ask is: how would a fair-minded and honest person have dealt with the work? Again, not an easy question to answer. I feel like we need a whole other podcast for Fair Dealing!
Lucy Densham Brown 10:03
So, although these AI systems seek to rely on Fair Dealing, news outlets, creatives, artists, and the inheritors or assignees of certain moral rights in historic IP, are not happy. They allege that the initial scraping of these works, without a proper licence, to train AI software which then generates similar works, cannot fall within the exceptions we have just discussed, and that infringement must therefore have occurred.
This argument is at the root of the New York Times’ allegations against OpenAI. The New York Times alleges that many of its articles have been used without permission to train ChatGPT. It has further alleged that ChatGPT has in some instances repeated verbatim extracts from New York Times articles when a user has asked queries regarding current events. In similar circumstances, a group of authors have also alleged that OpenAI used their books without permission to train ChatGPT. Whilst it has been reported that a US district judge has dismissed some of the authors’ claims that content generated by ChatGPT infringes their copyright, the key question, whether tech companies’ unauthorised use of material scraped from the internet to train AI infringes copyright on a massive scale, has not been ruled on.
Jacob Montague 11:29
So, what is the outlook then?
Lucy Densham Brown 11:31
Well, we can’t really be sure at the moment, beyond the fact that a major battle between AI innovators and IP rights holders is looming.
Jacob Montague 11:40
What do we need to happen to address these issues?
Lucy Densham Brown 11:45
Well, questions of infringement and fair use or fair dealing need to be answered, but these will likely be answered through interpretations of existing law which, as we have discussed already, is unlikely to keep pace with the vastly changing technological landscape. I think everyone is in agreement about the role AI is going to have in our lives, both business and personal; that we don’t want to stifle innovation; and that all jurisdictions are desperate for guidance and new law.
Jacob Montague 12:16
We didn’t think we could end the podcast without briefly touching on another dominating news item that we suspect you will all be familiar with: the recent Writers Guild of America (WGA) strike. Whilst the strike was about a myriad of issues, including compensation, minimum standards and profit shares, alignment on how generative AI would be used by the studios and writers was particularly crucial to negotiations coming to an end and an agreement being reached. Similar to the issues we have already discussed, members of the Writers Guild were concerned about how generative AI was going to be used. The Guild envisioned, and many believe legitimately, writers being phased out of projects as generative AI services were used to write, edit or amend scripts, or writers not being compensated for source material that was being used by AI services to generate new work. So what has been agreed? Well, in what many believe to be a landmark agreement, and one that other creative arts industries might tailor or borrow from, the key agreements were:
- AI generated material cannot be used as source material.
- The writers can choose to use AI to assist in the process but cannot be compelled to do so.
- The writers must be informed if they are provided with AI generated material to work on or with.
Lucy Densham Brown 13:41
Thanks Jacob. We appreciate we have covered a lot of ground here, and there are a lot of moving parts when it comes to the IP and AI conversation. We hope you have enjoyed this podcast; as this is such a constantly evolving area, I expect we will revisit this topic soon. Please follow our updates on LinkedIn for details of our upcoming webinars and the next podcast in our series exploring artificial intelligence. If you have any questions on the subjects we have discussed today, please email our data protection team at dataprotection@clarklegal.com.
Many thanks for listening.