The Entropy Podcast
The Entropy Podcast is a cybersecurity, technology, and business podcast hosted by Francis Gorman.
Each episode features in-depth conversations with cybersecurity professionals, technology leaders, and business executives who share real world insights on cyber risk, digital transformation, emerging technologies, leadership, and the evolving threat landscape.
Designed for CISOs, IT leaders, founders, and professionals navigating today’s digital economy, The Entropy Podcast explores how organizations can adapt, innovate, and build resilience in an era defined by constant change, disruption, and geopolitical uncertainty.
The name Entropy reflects the growing complexity and unpredictability of cybersecurity and technology ecosystems and the strategic thinking required to thrive within them.
Topics include:
- Cybersecurity strategy, risk, and resilience
- Post-quantum readiness
- Emerging technologies and innovation (AI, etc.)
- Business leadership and digital transformation
- Cyber threats, regulation, and geopolitics
- Lessons learned from real-world experience
New episodes deliver practical insight, expert perspectives, and actionable knowledge so you stay informed, strategic, and ahead of the curve.
Buy Our Swag:
We now have some slick new swag you can purchase through our Etsy store.
https://theentropypodcast.etsy.com
Watch and Subscribe
You can also watch full episodes and exclusive content on our YouTube channel:
www.youtube.com/@TheEntropyPodcast
Achievements
The Entropy Podcast delivered strong chart performance throughout 2025, demonstrating consistent international reach and listener engagement.
- Regularly ranked within the Top 20 Technology podcasts in Ireland.
- Achieved a Top 25 placement in the United States Technology charts, holding the position for one week.
- Charted internationally across multiple markets, including Israel, Belgium, and the United Kingdom.
This performance reflects sustained global interest and growing recognition across key podcast markets.
Audio Quality Notice
Some episodes may feature minor variations in audio quality due to remote recording environments and external factors. We continuously strive to deliver the highest possible audio standards and appreciate your understanding.
Disclaimer
The views and opinions expressed in The Entropy Podcast are solely those of the host and guests and are based on personal experience and professional perspectives. They do not constitute factual claims, legal advice, or endorsements, and are not intended to harm or defame any individual or organization. Listeners are encouraged to form their own informed opinions.
Architecting the AI-Ready Enterprise with Thomas Squeo
In this episode of the Entropy Podcast, host Francis Gorman speaks with Thomas Squeo, CTO of ThoughtWorks, about the intersection of customer demand, technology strategy, and delivery in the context of AI adoption. They discuss the risks associated with rushing into AI without a strategic plan, the importance of governance and observability in AI systems, and the evolving role of knowledge workers in an AI-driven landscape. Thomas emphasizes the need for organizations to balance innovation with responsible deployment and regulatory compliance.
Takeaways
- AI is central to enterprise strategy and delivery.
- Organizations often rush into AI without a clear strategy.
- Effective AI adoption requires controls, evaluations, and budgets.
- The hype around AI often exceeds the reality of its implementation.
- Governance and observability are crucial for AI systems.
- Knowledge workers will evolve rather than be replaced by AI.
- AI can enhance productivity in knowledge work environments.
- Regulation can support responsible AI innovation.
- Organizations need to adapt to regulatory environments in AI deployment.
- Successful AI initiatives often come from a combination of top-down and bottom-up approaches.
Sound Bites
- "Deploy controls, evals and budgets."
- "POC purgatory participants."
- "Regulation is a mechanism for trust."
Francis Gorman (00:01.257)
Hi, everyone. Welcome to the Entropy Podcast. I'm your host, Francis Gorman. If you're enjoying our content, please take a moment to like and follow the show wherever you get your podcast from. Today, I'm joined by Thomas Squeo, the Chief Technology Officer for the Americas at ThoughtWorks. Thomas is a thought leader, practitioner, and innovator with deep expertise in enterprise and public sector solution implementation. He's a recognized expert in enterprise education and communications technologies, and it's great to have him here with me today.
Thomas Squeo (Thoughtworks) (00:28.438)
Awesome, thank you. Great to be here.
Francis Gorman (00:30.731)
Thanks, Thomas. It's lovely to have you here. And Thomas, I was looking at your background and I was thinking to myself, what do I ask Thomas today? And one of the things I want to ask you is this: as CTO, you sit at the intersection of customer demand, technology strategy, and delivery. How do you balance those competing pressures?
Thomas Squeo (Thoughtworks) (00:51.394)
Well, I think that, so in my role, I sit at the intersection between our strategy, our practices, which we call service lines, and our partnerships. And that in turn informs how we actually work with our delivery partners. So in my role, I'm very much focused on how do we take what is happening from market sensing, the market forces out there, like the rise of GenAI, the rise of agentic systems, multi-agent systems, and so on and so forth, and how do we incorporate that into how we build our technologies for our customers. So we build with AI, we build AI itself, we modernize with it, and then ultimately, at the end of the day, we run our systems with it as well. So those are the key dimensions of how we approach it. And then when we work with our customers, we very much look at it as a portfolio: what are the problems they're trying to solve in their portfolio to be able to deliver across what they're trying to accomplish. We typically work on what their priorities are, and we typically organize it around three areas. One would be a run-and-optimize state, one would be a grow-and-modernize state, and the last would be an exploration-and-de-risking state.
Francis Gorman (02:08.747)
That's perfect, Thomas. And I suppose you've touched on AI quite a bit there, so it's obviously central to your organization and to how you operate. What do you think the biggest risks are that you see today with enterprises rushing to adopt AI in the landscape that we are currently all in?
Thomas Squeo (Thoughtworks) (02:27.278)
Well, I think there's been a ready-fire-aim kind of activity that's happened. You know, one of the things that we look at when we work with our customers around AI is: what are they trying to accomplish with it? We've contextualized it around this notion of what we call three loops. The inner loop would be what's happening for the people who are building and developing technologies: what's happening in the product development life cycle, what's happening in the software development life cycle, and where can we gain advantages in either insight, understanding, throughput, speed, or intelligence around what's happening in those platforms. The middle loop would be that notion of the operational state: what are those operational business processes that could benefit from taking advantage of AI, and how would they actually incorporate it. And then the last area would be the external stakeholder or customer: how would that be consumed in that regard? But when we think about how we are ultimately going to ensure success around these things, one is to ensure that there's a strategy, two, to ensure that there are some controls in place, and then ultimately evaluations and budgets. One of the things that we encourage our customers to think about is: don't deploy hope. Deploy controls, evals and budgets. That gives them the ability to understand how they're shaping the investment, actually driving the outcomes, and measuring it over time.
Francis Gorman (04:02.583)
So you touched on one thing there that has piqued my interest and that's strategy. How many organizations do you see just rush into AI without any strategic intent? It's vendor driven, it's distributed and there's no real measurement because they don't know what that measure of success should be upfront. How important is this strategy and what are the key aspects of a strategy that you consider for artificial intelligence?
Thomas Squeo (Thoughtworks) (04:26.574)
Well, to answer your question outright, I think we do see a group of customers that will come to us and say, hey, we really want AI to do something in our enterprise so we can satisfy the management team or the board, or something along those lines. Those are usually what we call the proof-of-concept purgatory participants. They usually don't get to production.
Then there are those companies that have a crisp strategy, understand the outcomes, and really take it from the notion of responsible deployment of AI: what's the blast radius they deem acceptable for what they want that agent or application to do? Understanding that journey is something that's really important for them to go through. We see that teams with strategic intent usually drive greater outcomes.
I think that one of the things we're seeing today is that the hype is outstripping the reality. People think they're further along than they actually are. And there are very few digital, or what I call AI-native, companies that are delivering a huge amount of outsized value where AI is the primary operating model for their organization. We see organizations doing that, but we find that we need a combination of a strategy and courageous top-down leadership that's creating the air cover so teams can actually do that experimentation. But it also requires a bottom-up approach where teams are exploring and looking at the art of the possible. And the reason why I talk about it that way is because if it's just top-down, usually the use cases are pretty brittle.
When it's very bottom-up, it's kind of like a thousand flowers bloom. You'll see a lot more opportunities for things. But what's important is that there needs to be a set of accepted controls for the organization to govern how they're actually deploying technology. I'm very much a proponent of treating it the same way you would treat an API or a product, where there's an owner, there's a budget, there's a set of outcomes, and there's a governance process to determine, on an ongoing basis, whether or not it's achieving,
Thomas Squeo (Thoughtworks) (06:47.918)
exceeding, or failing to meet the outcomes. And then really understanding that once something gets put into production, the life cycle isn't over until it actually gets retired; it goes through that entire life cycle.
Francis Gorman (07:02.613)
Definitely stealing that POC purgatory term. There is a lot of reality to it, I think, when you put it out there. It's probably the blind leading the blind; without strategic intent, it probably does lead to POC purgatory. I will definitely be robbing that one, Thomas. Thanks very much. I was reading the ThoughtWorks report on the state of digital and AI readiness. What surprised you most in the survey findings?
Thomas Squeo (Thoughtworks) (07:06.35)
I'm
Francis Gorman (07:32.331)
Where are organisations still falling short, in your opinion, off the back of that report?
Thomas Squeo (Thoughtworks) (07:38.0)
Well, I think that when we published the report, the biggest finding was that everybody thinks everybody else is doing it, and doing it well. So there's this fear of missing out, or FOMO. And what ends up happening is, the reality is that it takes a certain amount of engineering discipline. The way I typically arrive at it is that teams that have taken on platform engineering are usually set up well to take on AI or agentic systems engineering as well. But it is a journey through levels of capability, as opposed to just jumping the line. And when we did the state of AI readiness report, what we were really looking at was the tooling associated with each one of those loops. So, for example, we hear a lot about coding assistants, code intelligence, and productivity enhancements happening in that inner loop. In the middle loop, we start to hear more about things like an RPA-to-agentic journey, where you're starting to see a controlled increase of autonomy as those systems move through. And then in the customer experience and stakeholder experience, we see a lot more innovation, and that moves back towards what can be enhanced in the model, either using things like retrieval-augmented generation, or RAG (the term that we use is RAG graph, which is where you're taking a vector store over top of it), and then you're starting to move into what it means to fine-tune that model around your organization's context. We're starting to see those conversations become more prevalent. And while we do work on the underpinning models and the high-performance computing around those, people are thinking, I have to go there, when in fact the reality is, oh, well, no, you could do quite a bit without ever crossing that chasm into model tuning. It's a very small subset of very sophisticated organizations that need that kind of singular context around that outcome. So those are the things that we're seeing around this journey. The other thing I think about in this context is that
Thomas Squeo (Thoughtworks) (10:00.697)
Today's product is tomorrow's feature. We see a lot of frothiness in the market. And that frothiness means that we're trying to work with our customers to create good frameworks and policies for being able to understand and decompose their decision rules around how tools are incorporated into their enterprise. You mentioned earlier how many companies are being faced with the fact that not only are they developing their own uses, but their service contracts and providers are incorporating AI into their tools. Understanding where the guardrails are for those tools matters. I think Gartner calls it the AI sandwich or the AI hamburger: there are tools being brought to bear by your employees, there are tools the enterprise or organization is buying on behalf of their employees, and there's AI incorporated in the tools that are driving the enterprise, whether that be IT service management, IT financial management, or human capital management systems. Across all those kinds of technologies, we're seeing a prevalence of AI being incorporated, but understanding those rules, and that explainability, is a key aspect of driving success or understanding where there's risk.
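As an aside for readers, the retrieval-augmented generation (RAG) pattern Thomas refers to can be sketched in miniature: documents go into a vector store, the query is embedded the same way, and the closest chunks are stuffed into the prompt as grounding context. This is a toy illustration, not ThoughtWorks' RAG graph approach; the bag-of-words vectors stand in for a real embedding model, and the final prompt would be sent to an LLM.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in "embedding": a bag-of-words term-frequency vector.
    # A real system would call an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, store: list, k: int = 2) -> list:
    # Rank stored chunks by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[0]), reverse=True)
    return [chunk for _, chunk in ranked[:k]]

def build_prompt(query: str, store: list) -> str:
    # Stuff the retrieved chunks into the prompt as grounding context.
    context = "\n".join(retrieve(query, store))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Compound X showed reduced toxicity in phase one trials.",
    "The cafeteria menu rotates weekly.",
    "Phase two trials for compound X begin next quarter.",
]
store = [(embed(d), d) for d in docs]  # the "vector store"
prompt = build_prompt("When do compound X trials continue?", store)
```

The point of the pattern is that the model answers against the organization's own corpus without any fine-tuning, which is why, as Thomas notes, most teams never need to cross the chasm into model tuning.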
Francis Gorman (11:18.859)
I have my own term for that and I call it the Matryoshka effect. I'm not sure if you know the Russian dolls: you've got the big doll, and you've got the smaller doll, and smaller and smaller. And I call it the sphere of visibility. You know, the more abstracted tooling you hook on, the further from your sphere of visibility your data becomes, the more distributed downstream. It's gone to AWS, it's gone to Azure, and you don't really know where it's gone anymore. So I think that plug-and-play, and everyone is an AI company now, is probably creating a...
Thomas Squeo (Thoughtworks) (11:22.466)
Okay.
yeah.
Francis Gorman (11:48.97)
all sorts of headaches for governance and for policy and for AI tools in general. And I think that's something that ThoughtWorks touched on in the macro trends in the tech industry. Have you got any practical steps you've seen to work well in implementing observability and governance and policy around AI tools in large organizations, Thomas?
Thomas Squeo (Thoughtworks) (12:13.551)
The way I think about it is: if you cannot measure it or replay it, you can't govern it. It's that simple. So if you don't have observability over these systems, or intentional governance around them, they are likely going to steer you in a direction that you have not operated a control to get to. And when I say operating a control, what I'm saying is that the intentionality around the outcomes being driven is in line with the business or organizational goals for what they want to accomplish. When we think about systems and observability, there's been a proliferation, whether you think about OpenTelemetry or the work of organizations like Honeycomb, Chronosphere, or Base14. There's just a prevalence of tooling that's moving into not just what's happening in the application, whether it be a GenAI agent or a traditional application, a mobile app or otherwise. We're seeing AI incorporated into these, and understanding what's happening, that explainability, is a key aspect of how teams can actually understand what's going on. You know, we recently built out a set of dashboards for a team just to show what happens when there's a guardrail strike. A guardrail strike is when the AI moves outside the guardrails your evaluation framework has set to determine what is acceptable behavior. When it hits that guardrail, think of it just like a guardrail on a highway: you want to know that that's happened, because that then gives you the ability to unpack and inspect that event in order to understand how and why it might've happened. Because these LLMs are very complex, billions of parameters, understanding what happened, and being able to replay it with some degree of confidence, is important.
These are typically non-deterministic systems, and they will rarely make the exact same response twice. So what we're trying to do, with tracing and structured logging, is capture inputs, outputs, change data capture associated with databases, and what the decisions were, so you can replay them. In certain regulatory environments it's required, whether that be financial,
Thomas Squeo (Thoughtworks) (14:32.848)
public safety and justice, or medical. You know, these are areas where these systems add value, but they need to be able to commingle a non-deterministic outcome, the kind of bread and butter of LLMs and multimodal models, with a deterministic outcome, where you're actually able to go and say: here's the thing that it recommended, and here's the actions that were taken based on it.
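The guardrail-strike dashboards Thomas describes rest on exactly this kind of plumbing: capture every input and output as a structured, replayable record, and flag events that fall outside acceptable behavior. The sketch below is a minimal illustration under assumed names (`GUARDRAIL_BLOCKLIST`, `governed_call`, and the stand-in model are all hypothetical), not any vendor's evaluation framework.

```python
import json
import time
import uuid

# Toy acceptable-behavior rule; a real evaluation framework would be far richer.
GUARDRAIL_BLOCKLIST = {"delete", "wire transfer"}

def guardrail_strike(output: str) -> bool:
    return any(term in output.lower() for term in GUARDRAIL_BLOCKLIST)

def governed_call(model_fn, prompt: str, log: list) -> str:
    # Wrap every model call so inputs and outputs are captured for replay.
    output = model_fn(prompt)
    record = {
        "trace_id": str(uuid.uuid4()),  # correlates with distributed traces
        "ts": time.time(),
        "input": prompt,                # captured input...
        "output": output,               # ...and output, so the event can be inspected later
        "strike": guardrail_strike(output),
    }
    log.append(json.dumps(record))      # structured (JSON) log line
    return output

log = []
fake_model = lambda p: "I will delete the records table."  # stand-in; real models are non-deterministic
governed_call(fake_model, "Tidy up old rows", log)
strikes = [json.loads(r) for r in log if json.loads(r)["strike"]]
```

In practice the strike list would feed a dashboard or an alert; the point is that a flagged event carries enough captured context to be unpacked and inspected after the fact, even though the model itself won't reproduce the same response twice.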
Francis Gorman (14:57.815)
I can tell you've really lived it, Thomas. When I talk to some people about AI, I get the high-level fluff of what you hear as clickbait, but you're talking at a lower level that's intrinsic to having a couple of scars and a couple of bumps and bruises along the road from working on these things, which is great. A question that I often pose is: where should AI sit in an organization in terms of ownership?
I'd like to get your views on that, because it varies depending on the organization. Do you see certain leaders get better outcomes, or are you more of the view that it should be dispersed across the organization?
Thomas Squeo (Thoughtworks) (15:38.353)
So I think that it depends on the operating model of the organization. For example, within our own: we're a professional services firm. We have a CTO organization, a Chief Technology Organization, and we also have a digital and AI organization. And we don't see them as, all the AI is going to happen over here and all the technology stuff is going to happen over there. What we see is that we want to have this, with as much context as possible, able to deploy out throughout the enterprise.
In a traditional enterprise, I think we've seen roles like the Chief Data Officer emerge, and we've seen the Chief Digital Officer emerge. But what we're seeing in our own organization is that it works when we allow courageous executives and leaders to give tools to their teams and see those experiments rise up, having a solid governance structure but giving teams the freedom to operate.
We've seen this happen in our people processes. We've seen it happen in our financial processes, in AR and AP. We've seen it in our marketing organization, which just won awards for their use of AI. If we had decided that that was going to be done only in our digital and AI office, or in our CTO office, none of those ideas would have really borne fruit, and they wouldn't have actually delivered the value, because we would have been trying to control it from a gating perspective, or however you look at how the portfolio is managed. What we were trying to do is say, hey, let's let experiments run, and then have the teams that own those outcomes define what success is, what good looks like, how they're actually delivering it. I think that is a more effective model, because what's happening is that even if organizations get draconian and start to lock down, like, you can only use this LLM and so on and so forth, what we're seeing is that, just like with the Netflixification or Amazonification of our enterprises, people see in their B2C lives, their regular consumer lives, technologies and tools and techniques that they want to see in their enterprises. And enterprises that embrace that understand that this is not a new shadow-IT problem.
Thomas Squeo (Thoughtworks) (17:58.981)
This is actually an enablement exercise. So I come back to evaluations, guardrails, controls, and governance, because that gives you the framework to give your leaders, as well as your teams, the latitude to operate and to know where things are going to be successful or not. Because if we tried to centralize that, it would actually slow the organization down. ThoughtWorks really believes that small experiments, run very quickly, give you better insight into what the next opportunity is. That iterative loop matters, because I would argue very few organizations can predict their future. And what we find is that if we iterate, we can usually get much, much better dead reckoning on the outcomes we'd want to be able to achieve.
Francis Gorman (18:47.617)
Thomas, just on that, what use cases do you see actually working best? When I look at artificial intelligence, depending on who you talk to, there are multiple levels of adoption, from agentic to generative AI, to call centers, to others. But in real, tangible terms, you've been at the coal face of this. What actually works, and what just doesn't meet expectations?
Thomas Squeo (Thoughtworks) (19:11.429)
Well, I think a lot of knowledge work where you need to have a large corpus of instructions. Think about things like call centers: supporting sales teams, doing the knowledge work of knowing where a document is and how to actually answer the question, creating scripts, and so on and so forth. That's a dead-on area. As a matter of fact,
that's such a primary use case that people don't hire ThoughtWorks for it, because the vendors, the Genesys, the 8x8s, all those organizations, that is their wheelhouse. It is the thing that they do better than anybody else. So when somebody comes to ThoughtWorks, it's usually, hey, we want to build a research assistant in life sciences so our researchers can go much faster against a corpus that has their specific intent. So you heard me talk about that notion of RAG and RAG graph and fine-tuning; that ends up being more the use case where we see exponential outcomes in delivery. We're also starting to see our retail clients doing things that are very interesting. If you think about LLMs, they're typically a look backwards. So what we're seeing is that the incorporation of LLMs with search looking forward, what's happening real-time in the system and so on, the commingling of those two things, is starting to drive really significant outcomes in retail. If you think about it in the context of our business and technology services organizations, they're disrupting themselves usually in that inner loop: what is happening for the builders, how are they going faster. Your earlier question about observability and telemetry: when we think about those, that ends up being this notion of, okay, they're getting greater insights into the running of the product. In our own work, we are using AI in our site reliability engineering practices. That gives us the ability to apply AI to running systems, and that's something that's ubiquitous across all industries. So we've started to see those patterns emerge, and what it usually is, is things like anomaly detection, if you see event storms happening off these systems, things of that nature. Our business is technology, so usually we're contextualizing against a set of
Thomas Squeo (Thoughtworks) (21:30.436)
kinds of systems. And then when we see those in the wild, they're built around the context of that single enterprise.
Francis Gorman (21:39.254)
And when companies come to you, Thomas, with that value proposition, are they looking to augment humans or replace them? Are you seeing changes to organizational structures? And how is that happening in reality? Because when you read the headlines, in Ireland anyway, it's AI will replace all knowledge workers by 2030 or whatever. What are you seeing in the States and across the wider enterprise?
Thomas Squeo (Thoughtworks) (22:04.522)
I think the demise of the knowledge worker is largely overstated. The demise of the software engineer is largely overstated. I think there is a reality that we are now able to go much further downstream on problems that were not in our event horizon to be solved before, because AI enables you to go much, much deeper on certain problems. Now, there are categorizations of this. And I think in the context of software engineering in particular, which is the domain that we operate in, we see good amounts of success on net-new platforms, what we call greenfield platforms. We see less success, and almost a negative improvement in performance, on legacy systems that have more complex or exotic languages, things that are very old. The average LLM, if you just fire it up and start asking questions, is helpful if it's a pretty standard language or something like that. But in the case of something that's a lot more specific to a certain domain or a certain portfolio type, we don't see the same advantages there. But what I think is happening is that, just like when Excel was released, it didn't get rid of accountants. What it did is give the accountants tools to solve different problems in different ways. And that journey we're going through, as kind of an industry 4.0, where you think of this as a foundational technology of where we're headed, I think it's material to that, but I don't necessarily think it's going to fundamentally reshape how the knowledge worker operates. I think the knowledge worker is going to be able to solve more problems. That said, I do think that team shapes will dramatically shift, especially in the case of software engineering teams, because as long-running agents start to become more capable and ubiquitous, teams are no longer going to be pyramid-shaped. They're going to be more diamond-shaped, where there's a set of agents working alongside those teams to drive outcomes. And there's a difference when you're in a build state versus developing infrastructure versus running a system with all those things in place. So each one of those has its own considerations relative to modernization as well.
Francis Gorman (24:27.082)
Fascinating insight. I think the world is changing. I was at a really good presentation by Rachel Botsman two weeks ago. She's a world-leading expert in trust, and she was speaking about how trust has changed direction. We used to trust institutions and brands and experts, which was kind of the pyramid of trust, but now it's gone sideways, to machines and algorithms. And she argued that that change in trust has also had a direct implication on risk, in terms of speed, scale, and opacity. You know, she used the example: if you go to your GP, your doctor, and they give you a bad diagnosis, that's one person. But a digital doctor, that can be many people. Or, for scale, a loan: if you go for a mortgage and get turned down, it's one mortgage. But if it's an algorithm, it could be many mortgages. And opacity: a lot of people don't understand the decision-making aspects of a lot of these systems, so it's very hard for them to identify the other two aspects. I thought it was a fascinating view of the world, taking trust and then leveraging it into risk. And when I look at AI technologies from a security perspective, they have different levels of risk. And I think agentic is probably one of the ones that I watch closest at the moment, given the autonomous ability to go off and do tasks. I know we've had a couple of unusual scenarios, like the Replit example of the company database that got deleted, with the apology of, sorry guys, I deleted your database, et cetera. But yeah, oops. I suppose, Thomas, when you look at AI, do you see different levels of risk between generative and agentic? And how do you manage that as part of a deployment?
Thomas Squeo (Thoughtworks) (25:58.555)
Yeah
Thomas Squeo (Thoughtworks) (26:12.048)
Well, I think that, at the end of the day, I mean, agentic systems are software. And if you treat them with good software practices in the mechanics by which you're delivering them, you're going to set yourself up for more likelihood of success, because then you're not going and saying, hey, I'm fundamentally rethinking this thing, and all the work that we've done over the last 50 years in software engineering and delivery has gone out the window. What it is, is that those same platform-based approaches to solving things, to doing continuous deployment, still apply. I would argue the need for things like software bills of materials, hardened, or essentially ruggedized, DevOps, where you're looking more closely at what packages are being incorporated, matters when you have things like Model Context Protocol, or MCP, letting agents, or essentially systems, talk to or be incorporated into others. That is a risk, or essentially a risk vector, that needs to be factored in. We have good practices for managing those. I think that what we need to think about when we think about agents is that it's a contract with an actor in the system. We've had RBAC for years. There is this notion of agent-based access control, a zero-trust architecture, where you have not only the agent, but whose context is it actually operating in? Is that context long-running or short-lived?
I was at an event with Google earlier this week, and one of the things they were talking about was that it's not just the agent that needs the controls. It's the agent and the context of which actor in the system it's actually operating on behalf of. There's this notion again, and I think you hit on another point, which is, when you think about an agent, one of the considerations is its context window: how much memory does it have? Right now, the memory space is rather short. If you think about an LLM, you have a conversation with it, and your memory space is usually for the context of that conversation. Holding things in memory requires a lot more computational power, a lot more resource utilization, things of that nature. These are all factors in your security story as well. There are ways you could put kill switches in any one of these. And in some cases, I think that, you know, there was just a recent update for macOS, and there's been, you know,
Thomas Squeo (Thoughtworks) (28:36.196)
you know, it's a vibe shift. It doesn't trust you as much as it did before. And I think that's a learned experience from how actors are operating in the system. I was recently in a conversation and I said, you know, for the most part, if you're on the internet, trust no one. Okay. And the thing is, if you start to go into a trusted environment, there's a reason why we enter into service contracts with trusted vendors, with contractual terms and indemnity and so on and so forth. When you indemnify
an organization by contracting with it, what ends up happening is that you've taken on the burden of responsibility for ensuring that the controls and guardrails are implemented and vetted before they come into the product. If you think about the ITSM tools out there, the IT service management tools, they're very clear that they're the system of engagement, not the system of record, in that case.
And then when you get into systems of record, there's a different set of operational and technical disciplines required for those systems to be, you know, changed or anything like that. We work on mainframe modernization regularly, and when we do, we're dealing with systems of record, the most important systems inside an organization. So "grip it and rip it" is not a strategy
for those teams. They're like, hey, we want to understand the discipline necessary to ensure that you're modernizing what matters, you're measuring it, and you're keeping the lights on while you're changing the wiring in that system.
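[Editor's note] The agent-and-context access model described above, where an action is allowed only if both the agent's permissions and the short-lived context of the human actor it acts on behalf of permit it, with a kill switch to revoke the delegation, can be sketched roughly as follows. Every name here (`ActorContext`, `Agent`, `kill_switch`) is a hypothetical illustration, not a real framework or anything Thoughtworks ships:

```python
import time
import uuid

class ActorContext:
    """Short-lived delegation from a human actor to an agent (zero trust posture)."""
    def __init__(self, actor, permissions, ttl_seconds=300):
        self.id = str(uuid.uuid4())
        self.actor = actor
        self.permissions = set(permissions)
        self.expires_at = time.time() + ttl_seconds
        self.revoked = False  # flipped by the kill switch

    def is_live(self):
        return not self.revoked and time.time() < self.expires_at

class Agent:
    def __init__(self, name, permissions):
        self.name = name
        self.permissions = set(permissions)

    def act(self, action, context):
        # Deny unless the delegated context is still live AND the action lies in
        # the intersection of the agent's and the actor's permissions.
        if not context.is_live():
            return f"denied: context for {context.actor} expired or revoked"
        if action not in (self.permissions & context.permissions):
            return f"denied: {action} outside delegated scope"
        return f"ok: {self.name} performed {action} for {context.actor}"

def kill_switch(context):
    """Immediately revoke a delegated context, cutting off the agent mid-task."""
    context.revoked = True

ctx = ActorContext("alice", {"read_tickets", "update_tickets"})
bot = Agent("triage-bot", {"read_tickets"})

print(bot.act("read_tickets", ctx))   # within both scopes, so allowed
print(bot.act("drop_database", ctx))  # never delegated, so denied
kill_switch(ctx)
print(bot.act("read_tickets", ctx))   # context revoked, so denied
```

The point of the sketch is that the agent never acts under its own authority alone: short-lived contexts answer "long-running or short-lived?", and the kill switch gives operators the revocation path mentioned in the conversation.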
Francis Gorman (30:14.141)
That's super insightful again. And I'll just ask, before we wrap up, for your view on regulation: does it hinder innovation or does it help it? I'd be really interested to hear your thoughts on that area, because it's a hot topic in Europe at the moment, with the EU AI Act coming in versus America going a bit more towards the deregulation side and more towards innovation. I'd like to get your perspective: do you think it slows companies down or supports them?
Thomas Squeo (Thoughtworks) (30:44.388)
I believe that all systems require discipline. Constraints usually create creativity, and a completely unconstrained or unregulated environment usually has unintended outcomes. I think there's a balance. I mean, there's the "hey, everything is fair game" view on one hand, but I think that we have a set of
regulations, rules, and requirements that should be followed. I don't think that because a new technology comes in, you'd throw out, you know, a hundred years of copyright law just because, hey, we've got a new technology that makes cool videos. I think there is a responsible aspect to this, and our customers do not, you know, fall for that "hey, let it rip, it's the Wild West, you can do whatever you want" mentality. I think there's a good set of behaviors
such that, if you're operating responsibly, you need to understand that regulation is a mechanism you operate inside, for the outcomes and the trust of the people that use those systems. Our business is professional services, so if somebody brings us in to build a technology, we are going to adapt to their regulatory and compliance environment, full stop. We're not gonna be like, hey, you know,
new technology, all rules are out the window. No. I think that's really my opinion.
Francis Gorman (32:13.611)
Yeah, take it or leave it guys, here's what you're getting. No, Thomas, that was fantastic and I really enjoyed having you on. I think we're just about up on time, but there was a huge amount of insight in that conversation and I hope the listeners get a lot out of it. I think they will.
Thomas Squeo (Thoughtworks) (32:29.708)
Awesome. Well, I'm happy to chat more if you have more time, Francis.
Francis Gorman (32:33.982)
Excellent. Thanks Thomas.