
The Entropy Podcast
Nibble Knowledge is delighted to bring you "The Entropy Podcast"—hosted by Francis Gorman.
The Entropy Podcast centers on cybersecurity, technology, and business, featuring conversations with accomplished professionals who share real-world knowledge and experience. Our goal is simple: to leave you better informed and inspired after every episode.
We chose the name “Entropy” because it symbolizes the constant flux and unpredictability in cybersecurity, technology, and business. By understanding the forces that drive change and “disorder,” we can create better strategies to adapt and thrive in an ever-evolving technology and geopolitical landscape.
You can also check out our YouTube Channel here: https://youtube.com/@nibbleknowledge-v7l?feature=shared
Disclaimer: The views and opinions expressed on all episodes of this podcast are solely those of the host and guests, based on personal experiences. They do not represent facts and are not intended to defame or harm any individual or business. Listeners are encouraged to form their own opinions.
Please note: Some episodes may have varying audio quality due to the challenges of remote recording and occasional environmental factors.
We apologize if this occurs; we strive to keep audio at an optimal quality.
The Future of Trust and Encryption with Dr. David Archer
Dr. David Archer, CTO at Niobium and a veteran in privacy-enhancing technologies, joins Francis Gorman to discuss the future of computer architecture, encryption, and data integrity. David emphasizes moving beyond perimeter defenses toward cryptographically assured systems where data remains protected throughout its lifecycle. He covers the challenges of implementing fully homomorphic encryption (FHE), the role of zero-trust architectures, the interplay between AI and quantum computing, and the need for crypto-agility. Looking ahead, David envisions a world where data authenticity is provable end-to-end—ensuring trust in an era of AI-generated content and disinformation.
Key Takeaways
- From Perimeter to Intrinsic Security
  - Current systems rely too heavily on firewalls and perimeter defenses. The future lies in embedding encryption directly into processing and architecture.
- Homomorphic Encryption as a Game Changer
  - FHE allows computations on encrypted data, but performance and complexity challenges remain. It could require new paradigms for databases and programming models.
- Zero Trust Needs to Be Cryptographically Proven
  - Today’s zero trust is heuristic; the next step is mathematically provable, cryptographically assured trust.
- Major Hurdles Ahead
  - Challenges span mathematics, hardware, software design, and programmer education. Hardware accelerators will likely play a big role early on.
- Quantum Skepticism with Urgency
  - Archer doubts practical quantum computers in the next 5–10 years but warns of “harvest now, decrypt later” risks—making crypto-agility essential.
- AI: Double-Edged Sword
  - AI boosts productivity for experts but risks stunting critical thinking for students. It also raises concerns about privacy, disinformation, and unverifiable outputs.
- Data Lifecycle Integrity
  - Archer’s most exciting vision: cryptographic assurance of data provenance. Imagine video footage with an auditable cryptographic chain proving authenticity from capture to broadcast.
Sound Bytes
- “We can’t just rely on a hard shell with a soft middle—data must stay secure through its entire lifecycle.”
- “Zero trust today is heuristic. The future is provable, cryptographically assured security.”
- “Crypto-agility must be built into every software stack. Otherwise, we’ll always be behind.”
- “Quantum computing may not be as close as it looks in research papers—objects in the mirror are closer than they appear.”
- “AI can empower experts but risks leaving students without the critical thinking skills to spot flaws.”
- “The integrity of data will define the future. We need systems that prove where data came from and how it was changed.”
Francis Gorman (00:02.03)
Hi everyone, welcome to the Entropy Podcast. I'm your host, Francis Gorman. If you're enjoying our content, please take a moment to like and follow the show wherever you get your podcasts from. Today I'm joined by Dr David Archer, the Chief Technology Officer for Niobium. David brings more than 40 years of experience in computer hardware and software development. With over a decade focused on advancing privacy-enhancing technologies, he has played a key role in groundbreaking US government research programs, including DARPA's PROCEED, SafeWare, Brandeis, SIEVE, and DPRIVE, as well as IARPA's HECTOR. He has contributed his expertise to the Departments of Homeland Security, Education, and Energy, and to the Census Bureau.
Beyond his government work, David is a founding member of the United Nations Privacy Preserving Technology Team and co-authored the UN's first Privacy Preserving Technology Handbook. He was also appointed by NIST as one of only seven national judges for the US Privacy Enhancing Technologies Prize Challenge.
David, it's an absolute honor to have you here with me today.
Dave Archer (01:04.472)
Francis, thanks very much for inviting me. I'm happy to be here and looking forward to our discussion.
Francis Gorman (01:09.368)
I'm very much looking forward to it myself, David. You did provide me with a number of articles that you've written on certain topics over the years, and one of the observations I had is that you've written about the importance of building security into architecture itself, and you draw analogies from Alfred the Great's checkerboard defence and World War Two strategy. How do you see modern computer architecture evolving to embed cryptographic guarantees at the hardware level, rather than just relying on perimeter defences?
Dave Archer (01:40.098)
Well, so at present, much of architecture today is still perimeter-fence oriented. We still rely on firewalls. We still rely on the idea that the shell is hard but in the middle the data is all accessible, so we can be efficient about using it, and I understand the performance focus there. We're starting to see the emergence in computer architecture of things like execution enclaves, and of advanced cryptographic techniques that have maybe better security claims in some ways, like homomorphic encryption.
While some challenges still exist there, there's the recognition that it can't just be a hard shell and a soft middle, and that recognition becomes the future of computing and data security.
Francis Gorman (02:23.962)
That's perfect, David. I suppose as security experts we do put a lot of visibility and expense on the perimeter, but really what you're saying is we can invert that and potentially take a different lens on the work.
With DARPA and others investing in fully homomorphic encryption hardware acceleration, what does the future look like when CPUs and GPUs are natively cryptographically aware? Could we reach a point where cryptography isn't bolted on, but is intrinsic to processing?
Dave Archer (03:00.182)
I think there's a chance for that. I think one thing to say is that the techniques we know today to protect data at that level, where data remains encrypted always, are going to take some changes in computer architecture. The mechanisms we know, like homomorphic encryption, require either that data gets much bigger in some sense, to operate on it in that special encrypted form, or that we are only efficient when we're very parallel in the data operations we have.
So either we have to adapt our data processing techniques to make use of that parallelism, or we have to allow ourselves an architecture that can be less efficient in the short term to get there. But there's definitely a place where that design technique, let's say "keep the data encrypted while you process on it," can fit into processors today. Much like with Intel architectures today, for example, you see AES finally integrated into the instruction set, and vector operations integrated into the instruction set. There can be a future where homomorphic computation is integrated into that instruction set.
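To make "computing on encrypted data" concrete, here is a minimal illustrative sketch (our own, not from the episode) of the Paillier cryptosystem in Python. Paillier is only additively homomorphic, a far simpler cousin of the FHE David describes, but it shows the core idea: a server can combine ciphertexts to add the underlying values without ever holding a decryption key. Toy parameters only; not secure and not production code.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Illustrative only -- tiny primes, no hardening. Requires Python 3.9+.
import math
import secrets

class ToyPaillier:
    def __init__(self, p: int, q: int):
        self.n = p * q
        self.n_sq = self.n * self.n
        self.g = self.n + 1                    # standard simplification
        self.lam = math.lcm(p - 1, q - 1)      # Carmichael's lambda
        # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
        self.mu = pow(self._L(pow(self.g, self.lam, self.n_sq)), -1, self.n)

    def _L(self, x: int) -> int:
        return (x - 1) // self.n

    def encrypt(self, m: int) -> int:
        r = secrets.randbelow(self.n - 2) + 2  # random blinding factor
        return (pow(self.g, m, self.n_sq) * pow(r, self.n, self.n_sq)) % self.n_sq

    def decrypt(self, c: int) -> int:
        return (self._L(pow(c, self.lam, self.n_sq)) * self.mu) % self.n

    def add_encrypted(self, c1: int, c2: int) -> int:
        # Multiplying ciphertexts adds the underlying plaintexts.
        return (c1 * c2) % self.n_sq

he = ToyPaillier(p=1000003, q=1000033)
c1, c2 = he.encrypt(20), he.encrypt(22)
c_sum = he.add_encrypted(c1, c2)   # server side: no keys, no plaintext
assert he.decrypt(c_sum) == 42
```

Full FHE generalizes this to arbitrary additions and multiplications, which is where the data blow-up and parallelism pressures David mentions come from.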
Francis Gorman (04:10.578)
And David, when we talk about that, there's obviously a lens on industry development around zero trust and assume breach and all of these strategies that we deploy as security professionals. And when I think about what you're saying here, around shifting the focus back to data and doing that kind of rounded protection rather than just data at rest and data in transit: are we almost kidding ourselves when we run a database, etc.? In order to process that information, it's in the plain. So we spend all of this energy protecting the perimeter, protecting the transit, protecting data in different states, but the state in which it can be read is plain text and visible. And what you're saying is we can solve that problem by re-architecting systems and building that encryption in from the ground up,
which aligns back to zero trust. Do you think zero trust can evolve to be something provable rather than just heuristic, as we see it today?
Dave Archer (05:16.534)
I do. I think that you have a really good insight there, that today zero trust architectures are heuristic. We do our best job of continuously authenticating and things like that, but that means nothing from a real cryptographic protection point of view. It still is a heuristic. It's a "good enough." I think there is the opportunity, if we can overcome the performance drawbacks of things like homomorphic encryption or multi-party computation, then
we have this option to say, we have cryptographically assured security through the entire process. It's no longer a maybe, it's no longer "good enough." It's true security. Now, is that an idealistic vision? To some degree, yes, because we have to overcome a lot of technological hurdles, but that's the way we should be aiming.
Francis Gorman (06:06.83)
And what are those technology hurdles, for the listeners? You're obviously faced with this challenge daily at the moment as you look at how you bring FHE into the round and into the mainstream. What are the challenges you're seeing on a day-to-day basis? Are there limitations in our current hardware stacks? Is it also combined with a software problem? Is it both? Is there mathematics that needs to be worked out? What are those problems, David, as you see them?
Dave Archer (06:35.982)
So I would say several things. Are there further mathematics advances? Yes, I hope so. And we see those things incrementally happening from the research community every day. That's great. But there are significant problems in getting to that adoption point. And it's all the way through the stack. Maybe most significant, or most visible, is how do you program this kind of security? This kind of programming is not writing programs in the standard way.
And it requires, at the moment, a significant amount of expertise, where the programmer has to know what's a secret and what's not, what can be in the clear and what's not. Because at the moment, it's too expensive to keep everything a secret. And even if you keep everything a secret, the programming mechanism is still complicated. So there's a big education gap to fill. There is a software gap to fill,
as we manage the unique idiosyncrasies of each of these technologies, things like data tending to expand when encrypted in a way that you can compute on it. We have to get better at allocating memory and managing it through the lifecycle. And then in the hardware stack, there are definitely things we need to do at the moment. Right now, we would have to have hardware accelerators that are specific to this kind of computation.
Maybe that gets integrated later. But at the moment, there's definitely the need for that, and along with it, the communication between a host processor and those accelerators.
Francis Gorman (08:09.164)
David, as you talk to that, my brain is doing its normal cycles of trying to tie all the dots together. And as I think about this, could fully homomorphic encryption change the fundamentals of how we teach and design databases, for instance, and perhaps even require a new query language paradigm to make it operate, if we go down that lens? Do you think that, if we get this right, it will shift
almost our entire approach to how databases function, or how data functions, in day-to-day environments?
Dave Archer (08:44.974)
I think there's going to be adaptation at all layers. I think that, admittedly, my view is limited by the way homomorphic encryption looks today, because it's impossible to see what it will look like tomorrow. But for example, today, because data tends to be much larger and the computation much heavier in that secure computation space, we have to think very carefully about how you organize the data, how you put it into structures that you can deal with homomorphically.
That is in direct conflict in some ways with the way we typically manage databases. We typically like to just insert new records, and then immediately somebody else queries those records to do something with them. Well, that's not necessarily possible the way we see it today, because in the meantime you would have to reorganize all that data, and that takes a lot of time. So the entire way we think about data processing with this kind of technology is going to have to evolve. It's not something we can just adapt and integrate
with little thought at the moment.
Francis Gorman (09:48.591)
As you're speaking, I'm starting to see why you lean into historical analogy so much. You're grounding yourself in what we know now and what was known in the past. Why do you think that military history provides us with such a strong lens for understanding cybersecurity?
Dave Archer (10:06.958)
It's fascinating, right, the idea that you can learn so much from history. I think we learn, I personally tend to learn, the most from historical periods where there's conflict. Why? Because there are challenges no one has solved before. If you look at the Ardennes, 1940, and the German invasion of France, a painful period to be sure, terrible. But at that same time, you can say, what things were missed?
What assumptions were made that left a vulnerability? And this is exactly the kind of thing we see with data security today. These are challenges we haven't faced before. Let's learn from them. Let's apply them here and say, well, what's similar? In peaceful periods of history, we tend not to learn that much about how to do things differently. It's when conflict arises that we have to adapt, and we can apply those learnings to new conflict, which we have today.
Francis Gorman (11:05.614)
I think we're definitely in a state of geopolitical flux at the moment across the world. Everywhere I look there seems to be a level of conflict. So I suppose in all of that there will be levels of innovation that we can't yet foresee.
David, you've been in the industry for 40-plus years and you've been at the cutting edge of technology. As you look across the next five, 10, 15 years, we obviously have a ferocious wave of artificial intelligence technologies, in all of their different disciplines, ripping across the frontier. But we also have advancements in computation that potentially make the mathematical problems of quantum come to life.
What do you see the next five or 10 years looking like? Are you a quantum skeptic? Do you think AI is going to replace the need for a huge amount of effort in some of these areas? Or are the two going to come together naturally to create some whole new way of computing we can't really envision at the moment?
Dave Archer (12:09.006)
That's a great question. And I'm definitely not smart enough to project how AI and quantum intersect. If you look at the five-to-ten-year period, though, I think I would put myself in the quantum-skeptic range for those five to ten years. You see research today that tends to indicate quantum computing is making such big strides, but a lot of that research, like much academic research, is aimed at
specific corner cases that are advantageous to new techniques that people can think of, instead of the general case. It makes quantum computing appear nearer. It's like "objects in the mirror are closer than they appear." Well, quantum computing is not as close as it might appear in some of the research papers. But we do have to be ready. Why? Because of harvesting. People are today already harvesting data to be decrypted by a quantum computer, even if it's 25 years from now,
on the hope that there are secrets they can learn. So even though, is a quantum computer going to appear in five or six or seven years? Probably not. We have to start thinking about the conversion, importantly, now. AI is another thing where it's hard to say what the impact is. But what I worry about is large language models being able to make associations that humans haven't got the computing power to make.
Are there things where we lose significant privacy and are at risk, either at a human level or at a nation-state level, because AIs can make connections between data they've seen? And how do we prevent that? And the other piece of it is not so much just what AIs might discover, but what they could do of their own accord, right? How do we know
that AI results are accurate? We don't. We have no proof that an AI computation has the right basis, the right information, validated such that the answer is the right answer. Without that, in the future it's hard in some sense to trust what an AI is going to do, except for maybe just basic conversational consulting.
Francis Gorman (14:24.822)
I think that's very true, and I've talked on the show a lot before about my concern about cognitive offloading as well, and how that might impact research. When you look at the research community as it is today, you know, we're starting to see
nuances of maybe plagiarism, or, you know, content that is being put out there at a rapid rate that is not authentic, because generative AI has been used to create it. Do you see that that may stop Western countries from evolving at the rate of innovation that we've seen before? Will it hinder the human creative spark, or do you think it will enable greater creativity? I know there's a lot of debate about this, but from your own research perspective, and you've worked
in the industry for quite a number of years, as you look at the next wave of students or researchers coming through, do you see AI as a tool for enabling, if used correctly, or that it could actually do the opposite? It could be damaging to the ability of humans to creatively look at a problem and be critical of the essence of it, rather than assuming an answer that's been provided.
Dave Archer (15:36.726)
Unfortunately, I think both are true. I've seen, as our team has extensively used AI to enable creation of solutions, that the benefits are undeniable. Right? We see huge productivity gains. And the interesting thing is those are productivity gains in the hands of, let me call it, expert players. People who are expert programmers already can use these tools to make huge advances.
When you ask about students and how they come into the new world, that's a place where I think AI may be a disadvantage. Why? Because of the temptation to use the AI for those advantages. The problem is the students are going to be looking at it that way as well: I don't have to think further, I don't have to investigate myself, I can just leverage that and use it, and they won't see the pitfalls.
We know, for example, and you've seen this, that AIs used in coding assistance can do great things for advancing an algorithm development or a protocol development. But if there's one thing slightly wrong at the beginning, it gets worse, it explodes. And suddenly you have a whole tower of software that is just basically wrong. Without the experience to recognize that, you have nothing,
and students who may not have the experience to recognize that are going to be held back by it. So I think there's both.
Francis Gorman (17:04.992)
To align with you there, David, I see both lenses. I see the huge advantages you can have to analyze large sets of data and extrapolate useful insights, and I see the disadvantages you have from over-reliance on a technology due to that cognitive offload.
It's definitely an interesting space to watch as we evolve as human beings, to see where we end up. History will be far more interesting in 30 years' time when we look back at times of dramatic change and what happened.
And on that problem, I actually think there's a layer below it that I'd like your perspective on, and that's cryptographic resilience and cryptographic agility as a whole in the industry. What I mean by that is, as people start to explore their visibility of where cryptography sits across their environments, it becomes a minefield. You're seeing, you know, thousands of certificates that nobody knew existed, all of this public key infrastructure that is underpinned by self-signed certs or by other things.
Do you think there is value in, if not becoming quantum ready, then becoming cryptographically aware, starting to understand your environments as they are today, and bolstering your resilience by building the technologies and capabilities to...
Francis Gorman (18:55.786)
uplift certain algorithms as new ones come along, be it the NIST-approved post-quantum algorithms or others? Because from my perspective, I think that's going to take 10 years at a minimum for any company of scale, by the time they get the people buy-in and the process buy-in, and get the capacity to test all of these systems and algorithms, etc. I'd just like your view on it from the perspective that you have.
Dave Archer (19:21.806)
So it's a complex question. I would say, I tend to think of it like this. Crypto, we'll call it crypto agility for lack of a better term. The way I look at it is, let's talk about quantum computing for a second. How many algorithms today are there for quantum computers that are known to work and do interesting things? It's a small handful. Already that handful has broken a major piece of cryptography in the world, or will, when a quantum computer is practical. But a handful of algorithms?
How many algorithms do we use on regular computers today? Millions? There's a huge future waiting for us where, much like cybersecurity zero-day attacks, there are going to be new quantum-algorithm attacks that are going to surprise us. Crypto agility must be built into every software stack we design. We have to be able to swap out one crypto protocol for another easily, at will,
when something new comes along. If we don't do that, if we have to deeply re-engineer things, we will continue to be behind the ball and still take 12 or 18 months to repair major flaws, which has been the case over the past several years in many ways. We can't be there in the future. AI, in terms of cyber crime, can just be so much faster that we can't take that long to repair things. So crypto agility has to be fundamental.
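As a hypothetical sketch of what "crypto agility built into the software stack" can look like, here is a small Python example using the widely available cryptography package: interchangeable AEAD ciphers live in a registry, and each ciphertext envelope records which algorithm produced it, so swapping algorithms becomes a policy change rather than a re-engineering effort. The registry and envelope format are invented for illustration, not taken from the episode.

```python
# Sketch of a crypto-agile envelope: the algorithm is data, not hard-coded.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

# Registry of interchangeable AEAD ciphers, keyed by a stable identifier.
CIPHERS = {
    "aes256-gcm": AESGCM,
    "chacha20-poly1305": ChaCha20Poly1305,
}

def encrypt(alg: str, key: bytes, plaintext: bytes) -> dict:
    nonce = os.urandom(12)
    ct = CIPHERS[alg](key).encrypt(nonce, plaintext, None)
    # The envelope records which algorithm was used.
    return {"alg": alg, "nonce": nonce, "ct": ct}

def decrypt(key: bytes, envelope: dict) -> bytes:
    cipher = CIPHERS[envelope["alg"]]      # algorithm chosen at runtime
    return cipher(key).decrypt(envelope["nonce"], envelope["ct"], None)

key = AESGCM.generate_key(bit_length=256)  # both ciphers take 32-byte keys
env = encrypt("aes256-gcm", key, b"rotate me later")
assert decrypt(key, env) == b"rotate me later"
# Swapping to a new algorithm is a one-line policy change, not a rewrite:
env2 = encrypt("chacha20-poly1305", key, b"rotate me later")
assert decrypt(key, env2) == b"rotate me later"
```

The same pattern extends to signatures and key exchange, which is where post-quantum swaps would land.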
Francis Gorman (20:46.934)
It's a great observation. I find it really interesting to talk to people of different backgrounds who have different lenses on the cryptographic landscape, be it in the hardware field or at the software level,
Francis Gorman (21:05.486)
because it's definitely a hot topic of debate at the moment in boardrooms and among CTOs across the world, as they try to figure out what to do with this thing that might happen, that we're not sure if or when it will happen.
David, going back to the work you're doing at the moment, when do you see the first production-grade deployments using your new technology for fully homomorphic encryption? What will that look like? Or what industries will you be targeting to embed those technologies into? What's the market you're going after there? Do you see this being a mainstream adoption of the product, or is it more on the military and intelligence side of things?
Dave Archer (21:48.942)
I tend to think of it the way I think of any new technology: will it be mainstream eventually? Yes. What are the most pervasive technologies today? They're the ones that disappear, where you no longer even recognize you're using them. Will homomorphic encryption get there eventually? I think it can and will, but that "eventually" may be 15 years away. There are definitely market segments where we can be efficient and effective sooner,
within today to a few years out. We already see the use of limited homomorphic encryption solutions to do things like phone screening, where your phone sends an encrypted version of a phone number to a server and the server responds with an encrypted yes or no, it's a spam call, but the server learns nothing about who called you. That kind of thing is functional today and is scalable almost today.
There are machine learning techniques that are probably in that scalable range today or in the very near future and within a few years time. There could be things like cybersecurity analysis where you're looking for anomalies in network traffic or in computer activity. Those things are what I'd call limited depth computations, things that are not complex enough to need the deep power of homomorphic encryption, which continues to be elusive and expensive, but can be done with
simpler homomorphic encryption today. There are similar things like matched filters, things that will look for data signatures or chemical signatures in video. Those things look plausible today. So machine learning in limited form? Definitely. Information retrieval, where you would like to have a database where the information is encrypted.
and you want your query encrypted, and the response encrypted, so that the database server, which may be in the cloud or owned by someone else, doesn't learn what you're doing. Particularly, for example, agentic AI, where agents are going to reach out more and more to third-party databases to ask for information, and don't want those queries revealed, because they reveal something about what their users care about. That's a place where...
Dave Archer (23:59.662)
Homomorphic encryption can be useful today or in the near future. Much longer term, there may be things like encrypting LLMs, full, large LLMs. That's a ways out. It'll require much more computing infrastructure. You have to be careful about claims that LLMs can be encrypted today. Maybe very small ones can, maybe for very small approaches, but encrypting big LLMs with homomorphic encryption is going to take a long while.
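The phone-screening example above can be sketched as a toy private-membership query using the same additively homomorphic Paillier idea from earlier (the class is repeated in condensed form so this example stands alone): the client sends an encrypted one-hot selection vector, the server combines it with its 0/1 spam list without learning which entry was queried, and only the client can decrypt the answer. Real deployments use far more efficient protocols; this is purely our illustration of the information flow, not the scheme David's team uses.

```python
# Toy private spam lookup with additively homomorphic (Paillier) encryption.
# The server never learns which number the client is asking about.
import math, secrets

class ToyPaillier:
    def __init__(self, p, q):
        self.n, self.n_sq = p * q, (p * q) ** 2
        self.g, self.lam = p * q + 1, math.lcm(p - 1, q - 1)
        self._L = lambda x: (x - 1) // self.n
        self.mu = pow(self._L(pow(self.g, self.lam, self.n_sq)), -1, self.n)

    def enc(self, m):
        r = secrets.randbelow(self.n - 2) + 2
        return pow(self.g, m, self.n_sq) * pow(r, self.n, self.n_sq) % self.n_sq

    def dec(self, c):
        return self._L(pow(c, self.lam, self.n_sq)) * self.mu % self.n

# --- Client side: ask "is entry 3 spam?" without revealing the index. ---
he = ToyPaillier(1000003, 1000033)
DOMAIN = 8                                   # toy directory of 8 numbers
query_index = 3
enc_query = [he.enc(1 if i == query_index else 0) for i in range(DOMAIN)]

# --- Server side: holds a 0/1 spam list; operates only on ciphertexts. ---
spam_list = [0, 1, 0, 1, 0, 0, 1, 0]
# Homomorphic inner product: E(a)^b multiplies plaintext a by scalar b.
answer = 1                                   # 1 is a valid encryption of 0
for ct, bit in zip(enc_query, spam_list):
    answer = answer * pow(ct, bit, he.n_sq) % he.n_sq

# --- Client decrypts: 1 means "spam", 0 means "not spam". ---
assert he.dec(answer) == spam_list[query_index] == 1
```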
Francis Gorman (24:32.494)
The compute overhead there would be massive. Can you give me an idea of what it looks like to scale, in terms of fully homomorphic encryption? If I think about our current technology: if you get a laptop and you want something like BitLocker on it, you can do that pretty seamlessly when your laptop's brand new out of the box.
But if you try to do it a year after using the laptop, it can take 15 or 20 hours, because you're dealing with a whole lot more data and garbage on there to encrypt the system. It must be fascinating when you look at the math. I'm assuming you have a dashboard somewhere showing time-to-encrypt going up exponentially depending on the amount of workload and the problem.
How do we bring that down? Do we need to rethink how we do CPUs and graphics cards, to have a standalone hardware unit within a desktop or laptop that just focuses on pure encryption into the future, as privacy-preserving technology comes more to the forefront, more of a needed technology, as we open up our lives to so many disparate types of tech, like artificial intelligence,
which appears to be almost everywhere now?
Dave Archer (25:50.284)
I think it's a good insight that whenever, and this is historically true, whenever we add a major capability to a computer, it tends to start out, at least, as an add-on piece of hardware. I remember in the mid-1980s, working at Digital Equipment Corporation, the idea of a floating-point unit was a new thing, and it was a separate floating-point unit. Now fast forward: I need graphics support to play games, and there's a GPU, a separate unit.
What happens over time? Those things get integrated. We saw encryption do the same thing with the AES instructions on modern CPUs. As we go forward to these deeper encryption technologies like homomorphic encryption, we'll probably get to the same place. But it'll start as separate, dedicated hardware. Then it'll get merged in, and maybe it'll be able to join forces with some other functionality and share it. But it'll be a unit that eventually merges in. Today, it's not.
Francis Gorman (26:51.79)
David, I suppose the last question I would ask you is: what excites you most in the field of privacy-enhancing technologies over the next five or so years? Is there anything that you're watching with intent, as to where it's going to bring us or what's going to happen?
Dave Archer (27:11.086)
I'm not sure I caught that question. Could you repeat it, please?
Francis Gorman (27:14.166)
Sorry, so I suppose the last thing I want to ask is: what excites you most in terms of the technology landscape over the next five to ten years? Are there any developments you're watching with intent, as to where they're going to bring us, and whether they could change the face of computing for the future?
Dave Archer (27:29.624)
So yeah, there are a couple of things. I would say they're not even well-formed enough to be specific yet. But we think about cryptography today as a way to protect data. That's fine. But what we need next is the ability to protect data through its entire lifecycle. So there are technologies under investigation now, in current research. And let me give you an example. Imagine you have a video camera and you're taking footage of an event.
After several steps of processing and combining and merging, that event ends up on a video behind the reporter on the evening news. Wouldn't it be cool, and extremely useful, to have at the bottom of that video footage a QR code, for example, where you can scan that code and it can show you the entire history:
the footage was taken by this camera, it was digitally signed by its certificate, and here's a certificate authority. And then it went to this graphics editing tool, and we can assure that the change that got made did not affect the content, it just affected the shape and the size of the rectangle it was shown in. And there may be several such steps. And it ends with the projector that put it on the evening news. Being able to see that whole lifecycle of data, with cryptographic assurance that it is genuine.
That integrity is the future. We need to get there, because it's too easy right now to fake it. If we can make that disappear into the technology we have, where it's just built in, and whatever AI agent you have that's watching the news for you checks all those things out and assures you, and shows you that that whole progression is accurate, so you can believe it or not.
That's the future I would love to see and that's what's beginning now and we're starting to see progress.
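As a hypothetical illustration of the provenance chain David describes, here is a minimal Python sketch using Ed25519 signatures from the cryptography package: each processing step signs its content hash together with a hash of the previous record, so a verifier can walk the chain from capture to broadcast and detect tampering. The record format, actor names, and helper functions are our own invention; a real system would add certificate authorities and standardized metadata (efforts like C2PA work along these lines).

```python
# Toy provenance chain: each step signs its content hash plus the previous
# record, so the whole history from capture to broadcast can be verified.
# Requires: pip install cryptography
import hashlib
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def canon(record: dict) -> bytes:
    return json.dumps(record, sort_keys=True).encode()

def add_step(chain: list, key: Ed25519PrivateKey, actor: str, content: bytes) -> None:
    record = {
        "actor": actor,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev": hashlib.sha256(canon(chain[-1])).hexdigest() if chain else None,
    }
    record["sig"] = key.sign(canon(record)).hex()
    record["pub"] = key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    ).hex()
    chain.append(record)

def verify(chain: list) -> bool:
    prev = None
    for record in chain:
        body = {k: v for k, v in record.items() if k not in ("sig", "pub")}
        pub = Ed25519PublicKey.from_public_bytes(bytes.fromhex(record["pub"]))
        try:
            pub.verify(bytes.fromhex(record["sig"]), canon(body))
        except Exception:
            return False
        if body["prev"] != prev:
            return False
        prev = hashlib.sha256(canon(record)).hexdigest()
    return True

camera, editor, studio = (Ed25519PrivateKey.generate() for _ in range(3))
chain: list = []
add_step(chain, camera, "camera-7", b"raw footage bytes")
add_step(chain, editor, "edit-suite", b"cropped footage bytes")
add_step(chain, studio, "evening-news", b"broadcast master bytes")
assert verify(chain)   # tampering with any record breaks the chain
```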
Francis Gorman (29:25.464)
David, your next startup is an integrity AI, where we can tag everything we do in our digital life and follow it through with cryptographic assurance. That's a fascinating insight, actually. That's definitely one I'm going to mull over.
Dave Archer (29:37.37)
And you're right. Now part of that, exactly, is going through AI, right? Because that's going to be the abstraction form of the future. The data comes in, and AI can look at it, and AI can reason over it and present summaries of it for you to look at, right? That's how you get your news these days anyway. Same problem. Can you somehow track all of that authentically through an AI's processing? That's going to, in the short term, expand an AI's memory requirements by huge amounts. But if
we can get that done, and it requires advances in several areas of technology, then data has integrity, where today it fundamentally doesn't.
Francis Gorman (30:18.158)
Now that you've said it, I will never be able to get that out of my head, David. You've just planted a bug that's going to bother me for weeks to come. And as you said it, I've just recalled a data point: click-through on Google was down 60% last month. Basically, people were getting the augmented output from the generative AI engine when they typed in a question, where can I
get such-and-such, and it just presents it at the top level of the browser, and nobody went any further. So if we can't prove the authenticity of that information, and as advertisers realize that SEO is almost dead now, to an extent, and that it's whatever the AI decides is important that gets shown, what's going to happen? They're going to monetize it, so whoever's paying the most is going to be what you see.
I think, yeah, as you've said that now, you've kind of created a domino effect for me in terms of what the future looks like, and what technology is actually missing to underpin the authenticity of that future, to ensure that people are not being led down a lane of disinformation or manipulation, to buy or procure a product, or to follow a certain ideology. Without that intrinsic authenticity, with encryption as an enabler in that space
to follow through, we really are walking into the unknown. So that's a fascinating insight. Thanks very much, David.
Francis Gorman (31:51.022)
And look, it was brilliant having you on, and I think there are just lots of great takeaways there for listeners. I know we went through a bit of a whirlwind conversation, but it was really insightful and a great takeaway. Ending on that note was fantastic, I must say.
Dave Archer (32:10.094)
Thanks very much. I enjoyed being here. It was a great conversation, I agree. And there's probably so much more conversation to have. So thanks very much.
Francis Gorman (32:17.743)
Thank you, David.