Part3 With Me
This podcast helps architecture Part 3 students and practising architects through discussions on key subjects and tips for preparing for their Part 3 qualification, jump-starting their careers as fully qualified architects, and also provides refresher episodes for practising architects to keep their knowledge up to date. - For any queries or content requests email me at: part3withme@outlook.com. - Or follow me on Instagram: @part3withme
Episode 194 - AI & Copyright
This week we will be talking about AI and Copyright. This episode's content meets PC2 - Clients, Users & Delivery of Services and PC3 - Legal Framework & Processes of the Part 3 Criteria.
Resources from today's episode:
Websites:
- https://www.ribaj.com/intelligence/ai-technology-importance-soft-skills-communication-early-career-architects
- https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence#c-our-proposed-approach
- https://www.ribaj.com/intelligence/artificial-intelligence-ai-technology-uptake-digital-construction-report-2025
Thank you for listening! Please follow me on Instagram @part3withme for weekly content and updates or contact me via email me at part3withme@outlook.com or on LinkedIn.
Website: www.part3withme.com
Join me next week for more Part3 With Me time.
If you liked this episode please give it a rating to help it reach more fellow Part3ers!
Episode 194:
Hello and Welcome to the Part3 with me podcast.
The show that helps Part 3 students jump-start into their careers as qualified architects and also provides refresher episodes for practising architects. If you would like to show your support for the podcast and help us continue making amazing content, click on the link in the episode notes to sign up to our subscription. I also offer one-to-one mentoring services to help you with your submissions, exams and interviews; head over to our website to learn more, or reach out to me on LinkedIn through the Part3 With Me page, on Instagram where my handle is @part3withme, or by email at part3withme@outlook.com.
I am your host, Maria Skoutari, and this week we will be talking about AI and Copyright. Today's episode meets PC2 & PC3 of the Part 3 Criteria.
Make sure to stay until the end for an example scenario.
So in this week's episode, our focus is on AI and copyright: namely, what the UK government is proposing, how it could affect the way architects use AI tools, and how this sits alongside the rapid uptake of AI across the profession. The aim is to give you a good level of understanding of the subject, helping you to ask better questions and make more informed decisions in practice.
Why AI and copyright matter:
AI is no longer an abstract future technology; it is now being embedded in everyday design and construction workflows. The RIBA AI Report 2025 and the 2025 Digital Construction Report both show that AI adoption in UK architectural practice has accelerated sharply, with around six in ten practices now using some form of AI tool, up from around four in ten the previous year.
For architects, this growth is driven by several pressures. Practices are looking to AI to help with:
- Improving building performance analysis and supporting net‑zero carbon targets.
- Boosting productivity in design development and documentation, historically a weak spot for the construction sector.
- Managing increasingly complex information across digital design, specification and construction stages.
At the same time, AI depends on training data, and much of that data consists of copyright‑protected works scraped or mined from the internet. That is where the tension arises: creative industries, including architecture, rely on copyright to protect the value of their work, while AI developers need large volumes of content to develop competitive models.
So what does that mean in terms of copyright:
In December 2024, the UK government launched a formal consultation on “Copyright and Artificial Intelligence”. The consultation document is clear that both the creative industries and the AI sector are strategic priorities for economic growth and that the current legal situation around AI training is “not working” for either side.
As such, the government sets three core objectives for the AI sector and creative industries:
- Supporting right holders’ control: Meaning right holders should be able to control and license the use of their content in AI training and seek remuneration.
- Supporting access, whereby AI developers should be able to access and use large volumes of online content to train their models easily, lawfully and without infringing copyright.
- And lastly, promoting greater trust and transparency about works used to train AI models, and their outputs.
The consultation then expands on how copyright law currently applies to AI training: copyright protects creative works such as drawings, text, images, software and databases, and gives owners the right to authorise or prohibit copying and communication to the public. Training an AI system often involves copying vast numbers of works, for example, images associated with keywords, or text from websites, in order to extract patterns.
As such, the consultation expands on a number of key points it wishes to address, including:
- Addressing the copying of works, whereby copying of works to train AI models will require a licence unless a copyright exception applies.
- The use of automated techniques to analyse large amounts of information, for AI training or other purposes, is often referred to as “data mining”. To carry out data mining using copyright works, relevant information needs to be extracted from them. If this process involves a reproduction of the copyright work, under copyright law, permission is needed from a copyright owner, unless a relevant exception applies.
- Some works are licensed to AI developers for the purpose of AI training. Others may be available under open licences. But in many cases, AI models are trained using works made available to the public on the Internet. These are often not expressly licensed for AI model training, and the creators of those works are not compensated for their use. There is limited disclosure about the sources of works used to train AI models and creators will often not know if their works form part of a training dataset.
- Because of this, while AI providers and AI users benefit from the rich variety of content that is made by creators and creative industries, those creators often do not share in the value that is generated. Greater transparency could allow both creators and AI developers to share in that value.
- The use of copyright works to train AI models has given rise to debate, in the UK and around the world, on the extent to which copyright law does or should restrict access to media for the purpose of AI training.
- Some AI developers argue that existing legal exceptions in UK copyright law allow them to use copyright works when conducting training activity in the UK. If their training activities take place in other countries, they may argue they are not subject to UK jurisdiction. But right holders reject these arguments. They maintain that, by making copies of their works to train models, AI developers are infringing their copyright in the UK.
Based on these key points, the government wants to ensure that both the AI and creative industries can share in the benefits of AI, and that both sectors are able to grow together. Copyright law should enable creators and right holders to exercise control over, and seek compensation for, the use of their works for AI training. But it should also ensure AI developers have easy access to a broad range of high-quality creative content. Alongside this, AI developers should be transparent about the inputs used to train generative models, and the outputs produced by them, enabling creators to understand when and how their work has been used.
For architects, this matters in two directions:
- Their own outputs, including drawings, specifications, photographs, BIM models, which may end up in datasets used to train AI systems without explicit licensing or remuneration.
- Their practices may be using AI tools that have been trained on datasets whose copyright status is contested, creating potential reputational and contractual risks if outputs closely resemble existing works.
So the consultation sets out four options for how the UK might deal with AI training and copyright. Each has different implications for designers and their clients:
- Option 0 Do nothing: This would leave the current, ambiguous regime in place, with ongoing disputes and case‑by‑case litigation. The government explicitly signals that this does not meet its objectives of control, access and transparency.
- Option 1 Strengthen copyright and require licensing in all cases: Under this model, AI developers would need explicit licences to use any copyright works for training, and could not sidestep UK rules by training overseas. This would strongly favour right holders, but the government warns it could significantly reduce AI investment in the UK and make leading models less available in the UK market.
- Option 2 A broad text and data mining (TDM) exception: Under this option, AI developers would be able to mine copyright works for any purpose, including commercial, without permission from right holders. While this would maximise access for AI firms, it would give creators very limited control or remuneration and could conflict with international obligations.
- Option 3 A text and data mining exception with “rights reservation” and transparency: This is the government’s preferred route. It would allow AI training on works to which the user has lawful access, but only where the right holder has not clearly reserved their rights. Right holders could opt out through standardised, machine‑readable mechanisms and then license their works for AI training on commercial terms, supported by transparency obligations on AI developers.
The consultation describes Option 3 as the most likely to balance control for right holders with access for AI developers, and notes that it broadly aligns with the EU’s approach to text and data mining, albeit with details still evolving.
The heart of the proposed approach is a new copyright exception for text and data mining, combined with a mechanism for right holders to reserve their rights. The key features as outlined include:
- That the exception would apply to data mining for any purpose, including commercial purposes, but only where the user has lawful access to the works.
- Right holders could reserve their rights through agreed, machine‑readable means, for example, through standardised technical signals associated with online content.
- Where rights are reserved, the exception would not apply, and AI developers would need a licence for training.
- The regime would be underpinned by stronger transparency obligations, requiring AI developers to disclose information about their training data sources and potentially about outputs.
From an architectural practice standpoint, this suggests a future where:
- Practices and individual architects may be able to mark their online content, such as portfolio images or CPD articles, to prevent unlicensed AI training, while still allowing other uses.
- Collective licensing bodies could emerge or expand to negotiate AI training licences for large sets of creative works, providing payment flows back to creators.
- When procuring AI‑enabled services, clients and architects might expect clearer disclosure of training data sources and whether rights reservations have been respected.
The consultation also flags the importance of practical, interoperable technical standards so that even small practices and individual creators can exercise rights in a realistic way, not just large publishers.
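The consultation leaves the exact signalling mechanism open, but two conventions already in use give a flavour of what a machine‑readable rights reservation could look like. The sketch below is illustrative only: a robots.txt directive aimed at a named AI crawler (GPTBot is OpenAI's published crawler user agent), and the draft W3C Community Group TDM Reservation Protocol meta tag. Whether either signal would satisfy a future UK regime is an assumption, not something the consultation confirms.

```text
# robots.txt at the root of a practice's website — asks a named AI
# crawler not to fetch any pages (other vendors publish their own
# user agents, which would need their own entries):
User-agent: GPTBot
Disallow: /

# TDM Reservation Protocol (W3C Community Group draft) — an HTML meta
# tag on a page declaring that text and data mining rights are reserved:
# <meta name="tdm-reservation" content="1">
```

Note that both signals rely on AI developers choosing, or being required, to honour them, which is exactly why the consultation pairs rights reservation with transparency obligations.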
Now, the consultation does not just focus on training data, it also raises questions about the outputs of AI models and who owns or controls them:
Current UK law provides that:
- If an AI output reproduces a substantial part of a copyright work without a licence, that may infringe copyright. For example, if an image generator closely reproduces an existing photograph or artwork.
- Where a human and AI system collaborate, and the human makes creative choices, such as directing prompts, editing outputs or integrating AI‑generated elements into a wider design, the human is generally treated as the author.
- The UK also recognises a category of “computer‑generated works” where there is no human author and the author is the person who made the arrangements necessary for creation, but the consultation notes uncertainty over how well this fits modern AI practice.
The consultation does not yet propose detailed legislative changes on authorship but explicitly asks whether protections for computer‑generated works remain appropriate.
Another area flagged is “digital replicas”, often referred to as deepfakes: AI‑generated content that imitates a person’s voice, image or likeness. The government recognises that the volume and realism of such content is increasing and is asking whether the current legal framework gives individuals sufficient control over how their likeness is used.
For the built environment, this overlaps with concerns around synthetic imagery in consultation materials, place‑marketing, and public engagement:
- How should digitally‑generated crowd scenes, “future resident” images or testimonial‑style videos be labelled where they involve AI‑created people?
- What duties might architects and clients have to avoid misleading communities with heavily AI‑enhanced visuals during consultation or planning?
- Could unauthorised use of an architect’s name or image in AI‑generated promotional material raise professional conduct and IP issues?
The consultation links this to a broader push for labelling of AI outputs and standards to support that labelling, again emphasising transparency as key to public trust.
Now, turning to the RIBA’s findings in their recent AI Report:
The report indicates that AI adoption among UK architecture firms has jumped from around 41% to close to 59% in a single year, meaning that more than half of practices now use AI tools in some capacity.
Some key trends highlighted in these reports include:
- Larger practices are leading adoption, with usage rates reported at over 80%, while smaller studios are also integrating AI, albeit at lower but rising levels.
- Around two‑thirds of architects surveyed believe AI can help meet net‑zero targets and improve building performance, and a similar proportion expect productivity gains.
- Despite usage, relatively few practices currently have formal AI policies or structured governance in place, though many indicate plans to develop them in the near term.
Alongside technical adoption, the report emphasises the importance of soft skills for early‑career architects in the AI era. As AI tools take on more routine analysis and drafting tasks, skills around advocacy, communication, community engagement and ethical judgment become more central to professional value. For AI and copyright specifically, the relevant soft skills include:
- Advocacy within the practice, by being able to articulate why the firm needs clear policies on AI use, data provenance and copyright, rather than just “trying the latest tool”.
- Client communication and explaining how AI is, and is not, used on a project, including limitations, risks, and the steps taken to respect third‑party rights.
- Interdisciplinary collaboration through working with legal advisers, IT specialists and external consultants to align practice workflows with evolving regulation.
To conclude, given the direction of travel in the consultation and the evidence on AI uptake from the RIBA Report, it is worth drawing out some practical implications for practices, including:
- Expecting increased pressure to formalise AI policies covering tool selection, acceptable use, data security and copyright compliance.
- Considering how to manage the practice’s own IP online, including portfolio content and research outputs, in light of possible future rights‑reservation mechanisms for AI training.
- Anticipating clients asking more questions about how AI is being used, and being prepared to show that outputs are responsibly generated and do not infringe third‑party rights.
As AI and copyright are evolving so quickly, the core themes are already clear: balancing creators’ control and remuneration with innovation and access, embedding transparency, and ensuring that human‑centred creativity remains at the heart of practice. For the architecture profession, this is not just a legal technicality, it is part of defining what responsible, future‑facing practice looks like in an AI‑rich world.
Before I move on to an example scenario, let’s sum up what I discussed today:
- First, AI is already embedded in day‑to‑day architectural workflows, and UK adoption is rising fast, so questions of copyright and data provenance are no longer theoretical but immediately practice‑critical.
- Second, the UK government’s consultation recognises that the current copyright position on AI training “is not working” and is steering towards an opt‑out text and data mining model that tries to balance creator control, lawful access for AI, and much greater transparency.
- Third, that preferred “rights reservation” approach could let architects actively mark and license their own content for AI training, while also expecting clearer disclosure from AI vendors about what data has been used and on what terms.
- Fourth, for practices this means two-way risk and responsibility: your work may already sit in training datasets without permission, and you may be using tools whose training data is contested, with potential IP, reputational and client‑relationship consequences.
- Finally, the profession’s real edge here will sit in governance and soft skills: putting robust AI policies in place, managing online IP intentionally, communicating clearly with clients and communities, and keeping human creativity and ethical judgment at the centre of an AI‑rich, “future‑facing” practice.