Part3 With Me
This podcast is about helping architecture Part 3 students and practising architects through discussions on key subjects and tips for preparing for their Part 3 qualification, to help jump-start their careers as fully qualified architects, and also providing refresher episodes for practising architects to keep their knowledge up to date. - For any queries or content requests email me at: part3withme@outlook.com. - Or follow me on Instagram: @part3withme
Episode 214 - UK government’s consultation on AI & Copyright
This week we will be talking about the UK government’s consultation on AI and copyright. This episode content meets PC2 - Clients, Users & Delivery of Services and PC3 - Legal Framework & Processes of the Part 3 Criteria.
Resources from today's episode:
Websites:
- UK Government Report on Copyright and Artificial Intelligence (March 2026): https://www.gov.uk/government/publications/report-and-impact-assessment-on-copyright-and-artificial-intelligence
- RIBA — What should architects know about the government's new copyright and AI report? (March 2026): https://www.riba.org/work/insights-and-resources/professional-features/architects-and-the-governments-new-copyright-ai-report/
- RIBA AI Report 2025: https://www.riba.org/work/insights-and-resources/ai-report/riba-ai-report-2025/
Thank you for listening! Please follow me on Instagram @part3withme for weekly content and updates, or contact me via email at part3withme@outlook.com or on LinkedIn.
Website: www.part3withme.com
Join me next week for more Part3 With Me time.
If you liked this episode, please give it a rating to help reach more fellow Part3ers!
Episode 214:
Hello and Welcome to the Part3 with me podcast.
The show designed to help Part 3 students kick-start their careers as qualified architects, while offering valuable refresher episodes for practising professionals.
If you’d like to support the podcast and help us keep creating great content, check out the link in the episode notes to subscribe. We also provide one-to-one mentoring to help you prepare for your submissions, exams, and interviews. Visit our website to learn more, connect with us on LinkedIn via the Part 3 With Me page, find us on Instagram at @part3withme, or email part3withme@outlook.com.
I am your host Maria Skoutari, and this week we are picking up from Episode 194, where we covered the UK government’s consultation on AI and copyright. Since then, however, some changes have been made alongside announcements by the government, which is what we will be running through today. Today’s episode meets PC2 & PC3 of the Part 3 Criteria.
And make sure to stay until the end for a copyright scenario.
So, back in Episode 194, I took you through the government's consultation on Copyright and Artificial Intelligence, which launched in December 2024. At the time, the government had set out four options for how UK law might deal with AI training and copyright. You may remember the four options were:
- do nothing;
- strengthen copyright and require licensing in all cases;
- a broad text and data mining exception with no opt-out;
- and the government's preferred route at the time, which was a broad text and data mining exception combined with a rights-reservation mechanism and transparency obligations — that was Option 3.
The key tension, as we discussed, was between two things the government wanted simultaneously:
- To protect the rights of creators, including architects, to control and be compensated for how their work is used,
- While also ensuring that AI developers have access to large volumes of high-quality content to train their models.
The proposed opt-out model was the government's attempt to thread that needle. Right holders would have been able to flag their content as off-limits using standardised, machine-readable signals, and AI developers would have been legally required to respect those signals.
So what has changed since that consultation? Quite a lot, as it turns out:
On 18th of March 2026, the UK government published a formal report on copyright and artificial intelligence, produced jointly by the Department for Science, Innovation and Technology, the Department for Culture, Media and Sport, and the Intellectual Property Office. This report was published under Sections 135 and 136 of the Data (Use and Access) Act 2025, a piece of legislation that specifically mandated the government to report back to Parliament on this issue.
The headline finding from that report was this: the government's originally preferred proposal, which was the broad exception with opt-out that we discussed in Episode 194, has been abandoned. It is no longer the preferred way forward. And the reason for this is striking. The consultation received 11,520 responses. That is a very large number for a government consultation, and it reflects just how much this issue matters to people. The majority of those responses came from right holders and the creative industries, and the strength of objection to Option 3 was decisive.
Creators argued that a broad exception would allow generative AI to learn from their work without compensation and in direct competition with them. There were also concerns from some in the technology and research sectors who felt that even Option 3 would be more restrictive than approaches adopted by countries like the USA, the EU, and Japan, where AI training on publicly available content is either lawful or actively argued to be lawful. So the government faced pushback from both sides, and it has responded by stepping back.
The report is therefore explicit: the government will not introduce reforms to copyright law until it is confident those reforms will meet its objectives. It needs to protect the UK's position as a creative powerhouse while also unlocking the potential of AI to grow the economy. And crucially, it has concluded that there is not yet enough evidence or consensus to act definitively. Instead, the government proposes to gather further evidence, consider alternative approaches, and continue monitoring what is happening in the UK courts, international markets, and the rapidly developing licensing landscape.
So what happens now as a result of this outcome:
With Option 3 off the table and no new legislation in place, what are the rules now? The answer is that the existing legal framework continues to apply. Under current UK copyright law, permission or a licence is generally required from a rights holder before their work can be used for a restricted act such as copying. That includes copying for the purposes of AI training.
For architects, this is directly relevant. The Copyright, Designs and Patents Act 1988 includes architectural plans, drawings, buildings, and technical models within the category of artistic works. So in principle, your drawings, visualisations, and BIM outputs are protected under that existing framework. The challenge, and this is where it gets complicated, is that the government itself acknowledges the current situation is uncertain, with live litigation in multiple jurisdictions and ongoing debates about how the law actually applies when AI model training takes place overseas or when scraping activity is disputed.
What is clear is that there is no new exception for AI training. The absence of reform cuts both ways: it means practices using AI tools do not have a clear new framework to rely on, but it also means that your own work retains its existing protections. The government is not opening the door to unrestricted AI training on your content, at least not yet.
So what are the key proposals moving forward:
While the government is pausing on the big legislative question of copyright exceptions, the report does set out a number of areas where work will continue. It is worth understanding each of these because they will shape the direction of travel, even if firm legislation is not imminent.
The first area is transparency. There was very strong support in the consultation for requiring AI developers to disclose what content and data they have used to train their models. The government agrees in principle that this kind of transparency can help rights holders assert their rights and would support licensing and enforcement. Rather than introducing mandatory legislation immediately, the proposed approach is to work with industry and technical experts to develop best practice on input transparency. The government will keep an eye on what is happening in other countries, for example the EU already has transparency requirements in this space, and will consider whether legislation is needed once the evidence is clearer.
The second area is labelling. The report acknowledges that there are currently no obligations in the UK for AI-generated content to be labelled as such. Many consultation respondents were broadly in favour of labelling, particularly for content that is wholly AI-generated. However, there was nuance around how to handle AI-assisted content, where a human has been involved in direction, editing or integration. The government proposes to work with industry on best practice and to engage with international partners to develop common standards, since labelling is most effective when it is consistent across borders.
The third area, and one I want to spend a little more time on because it has specific implications for the profession, is computer-generated works. In the UK, the Copyright, Designs and Patents Act 1988 includes a provision that protects computer-generated works, that is, works produced by a computer with no human author, with the author defined as the person who made the arrangements necessary for the work's creation. This provision has been on the books since 1988, but it was never really designed for the world of modern generative AI. The consultation has prompted the government to propose removing this protection entirely for works created solely by AI where there is no human author, while retaining copyright protection for works that have been created with AI assistance where there is meaningful human creative input. This is a meaningful shift in policy direction, and it reflects a wider principle the government is keen to reinforce: that copyright should incentivise and protect human creativity.
The fourth area is digital replicas. This refers to AI-generated content that imitates a person's voice, face, or likeness. The consultation responses confirmed that there is broad support across sectors for enhanced protections, though no clear consensus on what form those protections should take. The government proposes to explore options including a potential new digital replica right or personality right. For the built environment, this connects to broader questions about how AI-generated imagery is used in consultation materials, planning submissions, and marketing. The question of whether digitally-created crowd scenes or AI-generated testimonials need to be labelled, and what duty architects and clients may have to avoid misleading communities, is becoming increasingly live.
And the fifth area relates to regulations. Currently, the report concludes that there is no immediate need to create a dedicated regulator for AI and copyright. Oversight will continue through existing frameworks, including the Intellectual Property Office (IPO) and the civil courts.
Now, one thing the report makes clear is that a licensing market for AI training is already developing. Some AI developers are entering into commercial agreements with content providers to license their works for training purposes. The government's view at this stage is not to intervene in that market, but to monitor how it develops. The concern flagged in the report is that the benefits of these licensing arrangements tend to flow towards large organisations rather than individual creators and small practices, and the report specifically notes that this needs to be kept under review.
For architects, particularly those in smaller practices, this is worth watching. If a licensing framework does eventually emerge, whether through a collective licensing body or direct commercial agreements, the question of whether individual practices can meaningfully participate in, or benefit from, that framework will matter. The government has signalled that it is aware of this concern and wants to ensure that any future approach is accessible to rights holders of all sizes, not just major publishers or large creative firms.
Apart from licensing, another area of concern for architects is style mimicry, where users exploit AI's ability to imitate a specific 'style' through prompts, for example 'in the style of [creator name]'. While copyright protects specific fixed expressions rather than general styles or ideas, this remains a point of significant concern for professionals with a unique creative signature, such as architects.
So where does the profession stand and what does this mean for practices:
It is worth contextualising all of this against what we know about AI adoption in architecture. The RIBA AI Report 2025 showed that around 59% of UK architectural practices now use AI tools, up from 41% the previous year. Larger firms are leading the way with adoption rates exceeding 80%, but smaller studios are also increasingly integrating AI into their workflows. And crucially, the 2025 report found that 69% of RIBA members said they were worried about imitation or copyright. That is a significant majority of the profession actively expressing concern about the very issues we are discussing today.
Despite this, the same report noted that only 15% of practices currently have formal AI policies in place. There is a real gap between the pace of AI adoption and the governance structures that should be supporting it. Over half of respondents indicated plans to develop AI policies within the next two years, which is encouraging, but the profession is still largely navigating this territory without formal internal frameworks.
The government's decision not to reform copyright law immediately does not reduce the urgency of that governance gap; if anything, it reinforces it. Without a clear new legal framework, practices are operating in a zone of ongoing uncertainty, and the best protection available is internal policy and rigorous due diligence on the tools being used and the content being generated.
The key item to highlight for practices and practitioners is that the position is more uncertain now than it was. The government's retreat from Option 3 means that the expected legal framework of opt-out, rights reservation, transparency mandates and so on is not coming in the near term. What replaces it is a combination of continued monitoring, evidence gathering, and incremental industry-led best practice.
For your own practice's Intellectual Property, the status quo actually preserves your rights in a meaningful sense, as there is no new exception that would allow AI developers to train on your architectural drawings or published work without permission. But practically enforcing that in a fast-moving, global market is complex, and the transparency tools that would help you understand whether your work has been used are not yet mandatory.
In terms of the AI tools your practice uses, the important questions remain the same: is the AI tool you are relying on trained on datasets whose copyright status is clear? Can the developer tell you what data was used? Are there any known infringement claims pending against the model?
Governance is where the profession needs to accelerate. If your practice does not yet have an AI policy, now is the right time to develop one. It does not need to be a lengthy document. At a minimum, it should cover which tools are approved for use, how outputs should be checked and reviewed before submission, who is responsible for ensuring that AI-generated content does not infringe third-party rights, and how the practice manages and marks its own online content. The government's push towards transparency, even if voluntary at this stage, is a signal that practices which can demonstrate responsible AI use will be better positioned with clients and regulators as the framework evolves.
Let’s sum up what we ran through today:
- On 18 March 2026, the UK government published its formal report on copyright and AI under the Data (Use and Access) Act 2025, marking a significant reset of its earlier position.
- The originally preferred Option 3, which was for a broad text and data mining exception with an opt-out mechanism, has been abandoned following strong objections from the creative industries. No new copyright exception for AI training is currently planned.
- The status quo continues in that permission is generally required before copyright works, including architectural drawings, can be used for AI training under UK law.
- The government is moving forward on transparency, AI content labelling, and reviewing the licensing market but through industry best practice and monitoring rather than immediate legislation.
- A notable proposal is to remove copyright protection for wholly AI-generated works with no human author, while retaining protection for AI-assisted works where humans have made creative choices.
- Digital replica protections are being actively explored, with implications for how AI-generated imagery is used in architectural consultation and planning contexts.
- Against a backdrop where 69% of RIBA members are worried about copyright, and only 15% of practices have formal AI policies, governance remains the most urgent practical priority for the profession.
The direction of travel is clear even if the destination is still being negotiated: transparency, human-centred creativity, and responsible AI use are the principles the government wants embedded in whatever framework eventually emerges.