AI Pathfinder for Private Equity Podcast

Zuzana Manhart: Break Workflows, Build Foundations

Steve Budd Season 4 Episode 23

Steve Budd speaks with Zuzana Manhart, Portfolio Manager for Data and AI at CBPE, about how the AI operator role has grown from solving narrow technical problems to shaping strategy across a portfolio. Zuzana explains where quick wins are landing, why AI features bolted onto familiar tools keep getting ignored, what's really holding agents back, and what to look for when hiring your first AI operator.

Takeaways

  • The AI operator job has grown from technical fixer into a strategic role across the portfolio
  • Quick wins like transcription and chat automation land first, the harder stuff comes later
  • AI bolted onto tools people already use tends to get ignored
  • Agents are stuck less on model capability and more on visibility into what they're actually doing
  • Hire someone technical, then let them own the roadmap from the data up

Links

AI Pathfinder Private Equity AI Strategy Meet-ups: https://www.aipathfinder.co/private-equity-ai-events

If you have a question or would like to suggest improvements, please contact steve@aipathfinder.co

Steve Budd (00:04.846)
Hello and welcome back to the AI Pathfinder for Private Equity podcast. AI Pathfinder helps private equity firms make sense of AI and make it work. It's an expert network built on insight, experience and connection, bringing the right people and ideas together to turn AI from something firms are curious about into something that delivers real results. If you'd like to attend one of my regular AI strategy briefings in London, Manchester and New York, please check out the show notes for details.

This is the next episode in our AI Operator Series: conversations with people leading AI adoption inside private equity firms. I'm pleased to say today's guest is Zuzana Manhart. She's Portfolio Manager for Data and AI at CBPE, a well-established UK mid-market private equity firm that's been investing for over 40 years and is known for backing management teams across business services, healthcare, technology, and financial services. Zuzana has a master's in computer science,

an earlier career as a data scientist across blockchain, fintech and AI consulting, and then time at General Atlantic working on growth acceleration and data science before joining CBPE last year. She sits squarely at the intersection of data expertise and portfolio value creation. Zuzana, welcome to the podcast.

Zuzana Manhart (01:21.299)
Hello and thank you for having me.

Steve Budd (01:23.758)
Well, I'm really excited to be talking to you today. Let's get a bit of an introduction from you. I know I painted a very, very brief view of your career there, but I'd perhaps like you to give us a bit more color on that background. At what point did private equity, and this kind of role specifically, become the path you wanted to pursue?

Zuzana Manhart (01:49.247)
So as you said, my background is in machine learning, and I have a master's degree in that. Back in the day when I finished school, there weren't many careers in machine learning outside of the startup world. So I first joined startups and learned how to build machine learning products from scratch the hard way, basically. And after three or four different startups, I was approached by

private equity, not to solve the traditional operating partner problem, but to ask: if you were building a private equity firm from scratch and wanted to do it data first or tech first, how would you go about that? So I wasn't familiar with private equity, but I was familiar with how to build a data-driven product from scratch. And so from there,

it became natural to go the private equity route. But over time, the role has changed a lot. When I first joined private equity, it was a lot of solving a very specific problem that a PE company suddenly had. In the due diligence process, or even in managing a business, you would suddenly have a lot of data and tech questions. And so the role was very narrowly defined: how do we

solve these data and tech problems? Who do we hire to solve them? And how do we even understand the scope and the nature of the problems, and which ones are critical? So the first hires for PE funds were very technical people who could solve these very specific tech problems. And then with the arrival of AI, it has become a much broader and much more strategic role within a PE fund. So while I started,

and still am, hands on the keyboard, it's now a very different role.

Steve Budd (03:49.87)
Yes, I can imagine. You know, just having a look at the title you have at CBPE, which is Portfolio Manager for Data and AI, that's, I suppose, a slightly different framing, because it gives you a particular focus, I would have thought. The other people I've spoken to on this series lead data science or head up data and analytics, but yours has that portfolio sort of language.

Is that intentional? Does it signal something specific that CBPE wanted to focus on?

Zuzana Manhart (04:24.144)
Yes. So I would say that nowadays around maybe 70% of my job is purely helping portfolio companies with their data and their AI strategy. Originally, I think the CBPE outlook was that there is so much to do with the portfolio, and that is the number one priority. But I think the role was defined

before it became clear how important AI also is internally for a PE fund. And so since I started just over a year ago, the scope of the role has already expanded a little bit, but it's still mostly looking at the portfolio.

Steve Budd (05:12.93)
Yeah, I can imagine. And from other conversations I've had, it's actually been quite helpful to use the PE firm itself as a bit of a sandbox as well.

Zuzana Manhart (05:25.79)
That's true. A lot of the experiments I run are on our internal data first. There's the ability to create a sandbox to experiment with any AI tool and see what's possible and what's not. It's sometimes much easier than going to a portfolio company, where

you have to navigate the relationships you have within the portfolio, and finding the right people to sponsor your projects or be the cheerleaders for the tools we implement, and finding the working groups there, can be harder. But internally, within the PE fund, the people are there in the office and very excited to try anything that I roll out.

Steve Budd (06:16.353)
Yeah, I can imagine. You know, I've certainly seen this sort of emotional journey that people in your role have had, from being quite isolated and being sort of this focal point, to now, where it's a little bit more understood and people can perhaps empathize with the role that you have. But does it still feel...

I'm trying to get a little bit more understanding, I suppose, of what's on offer at CBPE. Do you have other resources around you to support what's happening with AI? Or is it still very much you having to use your experience to prioritise where to put your effort?

Zuzana Manhart (07:06.43)
So, well, first I would say it's not just empathy. Previously, in my past, the model was to send a data person into a portfolio company for six months to solve a certain type of project. So it's not necessarily being completely isolated, but being much more embedded in the portfolio company to solve certain problems, be it

building out a data warehouse or rolling out some first use cases of machine learning models, or now AI tools. Whereas now, because the role has become much more strategic, there are way more conversations happening with many different people within the portfolio companies and also internally at CBPE. So the role has become less hands on keyboard and way more about people and change management

than it was before. When it comes to how much I do myself versus how much help there is, or how much other people can help, there has been a massive shift from me as a data expert doing the work myself and then rolling it out within the portfolio company,

to now, where the majority of our portfolio companies have data talent in house, so I can go there and help the person upskill, or help define a clear roadmap, and then we can build it together. And that has been a massive change in what's possible: how quickly we can roll out solutions, how many portfolio companies we can work with. And also, there has been a big shift

in people understanding the importance of good data, good data governance, and how long certain things will take. In the past, there was this gap: why can't we just have AI today? Why can't we just roll it out? And now there is this understanding that bad data going into an AI system will ultimately create really bad results. And so this understanding that we actually do need to start from the foundation and go up has really helped people

Zuzana Manhart (09:27.954)
understand what I do and what the impact on the business can be.

Steve Budd (09:35.648)
You talked there about the role you have in the portfolio companies. Is that a change from, say, 12 or 18 months ago? Is it something that's not been mandated, but has been sort of recommended as the way to really get the change that's required?

Zuzana Manhart (09:58.009)
I would say that a year or two ago, I was seen as sort of a consultant: I come in, solve a problem, and I leave. Whereas now the role is slightly more ongoing AI strategy help. How do we think about the roadmap? What is the data strategy? What needs to happen

in the business that's not just within the data team? For example, which other teams should be adopting more AI? What is our customer service team doing? What is our sales team doing? It has expanded far beyond just this IT-adjacent role and place in the company.

Steve Budd (10:48.568)
Yeah, I can see that it's become far more strategic. Do you become the voice of the portfolio in terms of how they're progressing and how mature they are, and also in terms of measurement of ROI?

Zuzana Manhart (11:08.718)
Yes, usually. It slightly depends on how mature the company is. If there is a large team and a lot of capability already in house, then it's much more just sense checking where we are compared with where we want to be. But if we're starting from scratch, somewhere where maybe not even a data warehouse exists, then it really is me trying to

help the portfolio company implement the right things on the right timeline, and also work with the investment team at CBPE to understand that this is the right roadmap, one that needs to be supported with the right budget and the right hiring. So it is being the voice between the two, trying to bring it together and say: if we implement this two, three, five year plan, we will be

exiting a business that's top of the class within its peer group.

Steve Budd (12:14.642)
So I assume, well, the reason I'm running this series is because the AI operator is becoming more prevalent, more important. How do you see that evolving from where it is today? Because it sounds like that's something you've grabbed with both hands, the fact it's now become more strategic. Do you see that continuing, or do you see this as relatively short term?

Zuzana Manhart (12:45.182)
I would like to believe it's here to stay. I think right now things are still changing so much and so fast that having someone who can spend their whole day trying to figure out what the right path forward is, is incredibly valuable. And right now we're at this stage where we understand what the state of AI is, as in what the top of the class models can do.

But what we're missing is the right implementation, the right tools, and the change in ways of working that we expect in the next one, two, three years. And so I don't think it will become easier; I think it will still become a little bit harder before it becomes easier. And potentially there's a world three years from now where it's going to be so easy and so obvious what to do that private equity funds

don't have to worry about it and my role will not be needed. But I would say, especially with the type of companies I work with, they're relatively small and not that mature, and I can't imagine right now a world in which they all have a dedicated AI person just to worry about AI adoption.

Steve Budd (14:05.934)
Yeah, I suppose just the way other specialist operators have been required for many years.

Zuzana Manhart (14:13.106)
Yeah, it might become part of the operating bench, just as the pricing and operational efficiency partners are.

Steve Budd (14:22.306)
Yeah, absolutely. I wonder if we could try and get into some specifics, and I'm not asking you for anything commercially sensitive, but where have you seen AI genuinely change how something gets done, either at CBPE or at one of your portfolio companies?

Zuzana Manhart (14:39.592)
So I like to divide it into applications that have a very low buy-in and applications that have a slightly higher buy-in. The low buy-in is where you can roll out a tool that makes a difference without necessarily needing to set up any other infrastructure. That's, for example, rolling out tools like ChatGPT or Claude. Just rolling the tool out has been a really big change for some people. Even internally within CBPE, the...

the things people started doing, the skills in Claude that they implemented, some of them have been very, very important. That's the low buy-in. And then we have some other applications that we've seen be very successful, for example customer service. I like to define it as a pyramid of automation, where step one is just that you roll out, let's say, voice recording or transcripts, and you can

evaluate whether your customer service agents are handling the conversations successfully, whether they're doing a good job, basically. That can already help you train the customer service team, but also develop a framework for what a good interaction with the customer is. And that has relatively low buy-in. Then there is step two, when you can start automating, let's say, chat conversations.

If we know that, for example, 80% of chat conversations can be automated with just the evaluation framework and some sort of knowledge base of our business, then that's also great. That means our conversations are of equal or better quality than they were before, but now probably at a fraction of the cost. And then the third step is the voice agent, which is where we would like our companies to be. If we say that

we believe 60% of our conversations over the phone could be automated, that's a great goal to target, but it has a slightly higher buy-in, because then the knowledge base has to actually be really, really good to handle voice conversations. So I would say that as long as there is low buy-in, and it requires just some sort of formatted, clean data and not much other infrastructure, we have seen real change.
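
To make that "step two" tier of the pyramid concrete, here is a minimal sketch of chat automation over a knowledge base with a human-handoff fallback. Everything in it, the toy knowledge base, the crude lexical scoring, and the confidence threshold, is an illustrative stand-in, not any specific tool discussed in the episode:

```python
# A minimal sketch of the "step two" chat tier: answer what the knowledge base
# covers, escalate the rest to a human. All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    automated: bool

# Toy knowledge base; in practice this would be semantic retrieval over curated content.
KNOWLEDGE_BASE = {
    "refund": "Refunds are processed within 5 working days of approval.",
    "delivery": "Standard delivery takes 3-5 working days.",
    "cancel": "You can cancel an order any time before it is dispatched.",
}

CONFIDENCE_THRESHOLD = 0.5  # below this, hand off to a human agent

def score(query: str, topic: str) -> float:
    """Crude lexical overlap score; a real system would use embeddings."""
    return 1.0 if topic in set(query.lower().split()) else 0.0

def handle_chat(query: str) -> Answer:
    best_topic, best_score = None, 0.0
    for topic in KNOWLEDGE_BASE:
        s = score(query, topic)
        if s > best_score:
            best_topic, best_score = topic, s
    if best_topic and best_score >= CONFIDENCE_THRESHOLD:
        return Answer(KNOWLEDGE_BASE[best_topic], automated=True)
    # Escalation path: whatever the knowledge base cannot cover goes to a person.
    return Answer("Connecting you to a human agent.", automated=False)

if __name__ == "__main__":
    print(handle_chat("When will my refund arrive?"))   # automated
    print(handle_chat("My invoice shows the wrong VAT rate"))  # handed off
```

The interesting number in production is the automated fraction, the 80% figure mentioned above; everything below the threshold still reaches a person.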

Zuzana Manhart (16:55.836)
Where it's much more difficult is applications with really complex business context. A lot of applications would require creating new infrastructure and understanding what we think the model context should be. So, a lot of applications require data from different parts of the business. For example, combining CRM and finance together into one agent is still

really hard, because that's infrastructure that doesn't exist in a lot of businesses, but also understanding how to create that context for the agent is a really new field that requires a lot of experimentation. How do we feed that into the agent in a way that the agent can actually do things that are useful for people? And then there is a second part to this, and that's the existing tooling.

So if we can roll out a tool that has AI capabilities that will straight up solve problems for people, we are very likely to test it out, trial it, see how it works, and then, if it's successful, roll it out. The issue is that some of the tools just don't exist yet, or they're not very good at solving the particular problem, even though the underlying models are probably very capable of solving it.

For example, if companies have CRM tooling and the CRM tooling rolls out an AI feature that's in some tab or button or an extra chat, people are not very likely to adopt that feature, even though we think it would be really cool, it would improve the way we work, and it would improve the outputs.

Sometimes I do trial tools that are AI first, so an AI-first CRM, and I can see how the whole workflow could change. But because we already have a CRM tool that we know how to work with, we're much less likely to change the way we work. And that, right now, I think is one of the huge roadblocks in adopting certain AI applications.
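
One way to picture the cross-system context problem described here is a small sketch that pulls a CRM view and a finance view of the same customer into a single, size-bounded context block for an agent. The data sources, field names, and character budget are hypothetical stand-ins, not CBPE or portfolio systems:

```python
# Hypothetical sketch: merge CRM and finance views of one customer into a single
# bounded context block an agent could reason over. Both "fetch" functions are
# stand-ins for real system calls; the field names are invented for illustration.

import json

def fetch_crm_record(customer_id: str) -> dict:
    """Stand-in for a CRM API call."""
    return {"customer_id": customer_id, "owner": "J. Smith",
            "stage": "renewal", "last_contact": "2025-01-14"}

def fetch_finance_record(customer_id: str) -> dict:
    """Stand-in for a finance-system query."""
    return {"customer_id": customer_id, "arr_gbp": 120_000, "overdue_invoices": 1}

def build_agent_context(customer_id: str, max_chars: int = 4000) -> str:
    """Join the two views under a hard size budget, since model context is finite."""
    context = {
        "crm": fetch_crm_record(customer_id),
        "finance": fetch_finance_record(customer_id),
    }
    blob = json.dumps(context, indent=2)
    # Crude truncation; a real system would summarise or rank what to keep.
    return blob[:max_chars]

if __name__ == "__main__":
    print(build_agent_context("cust-042"))
```

Even this toy version shows why the hard part is upstream: the two fetches only line up if both systems share a customer identifier, which is exactly the missing infrastructure being described.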

Steve Budd (19:10.99)
Just going back to that, talk us through in a little more detail what that roadblock is. Is it just too much of a reach for the general worker, for example?

Zuzana Manhart (19:22.664)
So.

Zuzana Manhart (19:26.834)
I think once you roll out a tool, a lot of people get comfortable and learn how to use the tool. And in the past, a lot of my job has been: here is better data for your tool, or here is a better model that will predict your likelihood to convert, or predict customer satisfaction, or something like that. So it's been people still working in the same way, but with better

underlying data. Whereas now what we're asking of people is: we're going to do things very differently. For example, we will introduce a transcript tool for your calls, so you never have to write your notes, and then the emails will get automated from your transcript tool. You're breaking people's workflow, and people have to adapt to a new workflow. And occasionally

there can be friction just from changing an established workflow. So I think there is a future in which some workflows will have to change. It's just, I think, about proving the outcomes on the other side: our customer service is much better now that we don't need to write any emails or notes and it's all automated, and therefore we're committing to this workflow. But there is the friction of

changing tools in a business context.

Steve Budd (20:55.79)
Is there any example you have to hand where that's worked well, in terms of breaking a workflow down and rethinking it from an AI perspective? Is there anything in the work you've done where it's just worked really, really well? And that can be at a small scale or a larger scale.

Zuzana Manhart (21:28.474)
I'm not sure about specific tooling, but, for example, in the past the role of the data scientist was many, many times creating dashboards and then shipping, releasing those dashboards to management. Whereas in a couple of instances we now have people building their own dashboards via Claude, which means that they now own

the views that they can see, and exactly the breakdowns that they'd like to see. It means that the role of the data scientist is even more crucial, because you cannot directly see the data that people are looking at. So you truly need really well organized, clean data, and high confidence in what the data actually says. But it has given some people such freedom to create their own reporting, own the reporting, and

present and get buy-in on the actual KPI metrics that they want, rather than someone in the organization dictating what the KPI metrics are. I'm not sure it can work in all organizations; this is highly dependent on the culture of the organization, especially the data culture and how important data is seen to be in your org. But this is an example where really breaking an established

workflow has worked really, really well.

Steve Budd (22:59.192)
Yeah, yeah, I can see that. Well, I just want to flip it for a minute and ask where the reality hasn't matched the expectation, you know, where something looked promising but has taken a lot longer or just hasn't worked.

Zuzana Manhart (23:19.314)
Some of the applications that people are really excited about are, for example: let's have a chat function over our data and ask any question about any company data. I think it can work, but it's actually much harder than people imagine. And this is the whole idea of your model's context.

Your data in a business is incredibly complex: probably not clean, probably not well structured; structured well enough for a dashboard, but not structured to let an agent go wherever it likes in your database and answer any question. So I think the concept of building a context out of all business data is still incredibly difficult. It requires

understanding the links, understanding agent behavior in databases very well, understanding how much an agent can remember and understand just scrolling through a database. And any application that requires many different data sets from many teams in a business has proven very, very difficult. But I think that's a matter of time, and I think we're seeing exciting tools coming out at this data infrastructure layer that will be able to cross

that gap between how people in the business would love to interact with data and the state of the data as it is today.
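
A minimal sketch of one way such "chat over your data" experiments get constrained in practice: rather than letting an agent roam the raw schema, expose only a short list of vetted, named queries. The table, the approved queries, and the routing step below are hypothetical, not a description of any system mentioned in the episode:

```python
# Constrained "chat over your data": the agent may only pick from pre-approved,
# human-vetted queries, never write free-form SQL. All names are illustrative.

import sqlite3

ALLOWED_QUERIES = {
    "revenue_by_month": "SELECT month, SUM(amount) AS revenue FROM invoices GROUP BY month",
    "open_invoices": "SELECT customer, amount FROM invoices WHERE paid = 0",
}

def answer_question(conn: sqlite3.Connection, query_name: str) -> list:
    """Run a named query from the approved list, or refuse."""
    if query_name not in ALLOWED_QUERIES:
        raise ValueError(f"Query '{query_name}' is not on the approved list")
    return conn.execute(ALLOWED_QUERIES[query_name]).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (month TEXT, customer TEXT, amount REAL, paid INTEGER)")
    conn.executemany(
        "INSERT INTO invoices VALUES (?, ?, ?, ?)",
        [("2025-01", "Acme", 1000.0, 1), ("2025-02", "Acme", 1200.0, 0)],
    )
    print(answer_question(conn, "revenue_by_month"))
```

The trade-off is deliberate: the agent loses flexibility, but every answer traces back to a query a human has vetted, which is why early experiments end up this constrained.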

Steve Budd (24:57.23)
You've mentioned the agent word there. In a recent survey with my group, there was quite a lot of expectation about what this could do in the next 12 to 18 months; however, the sentiment was that this is still more hype than real. It would be good to get your view on that, and on where you feel the agentic potential is right now.

Zuzana Manhart (25:29.384)
So there are two extremes. There is the hype that you can have an OpenClaw equivalent across all your apps and all your data, and it can just go and do things. And then there is the reality that we can have one agent actually do one task, and that's about it. I really hope in the near future we'll be somewhere in the middle. I think it is feasible, but I think it just requires so much

more understanding of agent behavior. And I think it requires so much more understanding of the infrastructure and the interactions between the agent and many different apps and many databases. So any experiments that we're running with agentic workflows right now are so constrained, because ultimately this is not my personal data and my

personal life; this is a business context where we have highly sensitive data and basically nothing can go wrong, right? So with all the experiments right now, it's feasible, or it will be feasible very soon, but the constraints are so restrictive that it has not proven very useful as of now.

Steve Budd (26:53.58)
Yeah, that came out in the survey as well: the blockers, the barriers, were quite evenly distributed across areas like governance, controls, integration into systems and workflows, talent capability, and data quality. That, yeah, rings true, I think.

Zuzana Manhart (27:15.25)
Yeah, and for some of them we can see the path to solving them. Data quality: we can work on that. Governance: I can see a path. Talent: yes. But ultimately, I think there's still a missing piece of infrastructure and tooling, because this technology is so new. I cannot see a world where either CBPE internally or our portfolio companies will be building

large infrastructure systems for this in-house. I'm waiting for tools to be released that would, for example, allow me to watch each step, just the observability of what the agent does. What does it think? How much data does it have in its context? Which exact data points has it looked at to solve a certain problem, so that I can then point the agent better and restrict the use cases? But right now a lot of the

agentic workflows are so invisible. I do not truly know how an agent got from my question to the answer, exactly what steps it took, where it looked, and how it processed the data. So I think once the observability tools are better and I can better manage and restrict the agents, that will unlock a whole new set of applications.
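
A rough sketch of the kind of observability being asked for here: wrap every tool an agent can call so that each step, its input, and its output land in an inspectable trace. The tools and the two-step "run" below are toy stand-ins, not a real observability product:

```python
# Toy agent-tracing sketch: every tool call is recorded, so the path from
# question to answer can be inspected afterwards. Tools are hypothetical.

import json
import time

TRACE: list[dict] = []

def traced(tool):
    """Wrap a tool so every call is appended to the trace log."""
    def wrapper(*args, **kwargs):
        result = tool(*args, **kwargs)
        TRACE.append({
            "time": time.time(),
            "tool": tool.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
        })
        return result
    return wrapper

@traced
def lookup_customer(name: str) -> dict:
    return {"name": name, "segment": "mid-market"}

@traced
def lookup_arr(name: str) -> int:
    return 120_000

if __name__ == "__main__":
    # A hypothetical two-step "agent run": find the customer, then its ARR.
    customer = lookup_customer("Acme")
    arr = lookup_arr(customer["name"])
    print(json.dumps(TRACE, indent=2, default=str))  # the inspectable trail
```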

Steve Budd (28:36.526)
Okay, so it sounds like right now it's not quite wait and see, because you're still in, I suppose, more of a discovery mode around the agentic side.

Zuzana Manhart (28:45.693)
Yeah, it's very much discovering the capabilities and testing out a lot of the tooling in the ecosystem of agentic workflows. Because I can come up with a wishlist; it's easy to come up with a wishlist. I would love my agent to do this, this, this and this. I don't have a problem with that. But it's very much: how do I deliver the solutions, and what is out there to help me

Steve Budd (28:59.224)
That's just.

Steve Budd (29:04.206)
Yeah.

Zuzana Manhart (29:15.229)
deliver that to a certain standard that's actually acceptably useful.

Steve Budd (29:20.568)
Just on the tooling, it's very hard to keep on top of. And I suppose there's one aspect where there's a particular problem to be solved and you're going out to see what tooling's available, but then there's also just keeping in touch with what's happening in the progress of AI. How do you fit that into your day to day?

Zuzana Manhart (29:46.816)
It's really hard right now, and the job has changed so much over the last three or four years. But I've been given this amazing mandate: go and solve problems, but also, here is the time to just figure out what's happening, here is the time to truly experiment with what's out there. And it's a necessary part of the job right now.

AI is moving, and if you don't try new tools, if you don't try new ways of doing things, then you will be behind. And also, portfolio companies are very often looking to me to answer those questions: what tools should we be using for sales, customer service, and all that. So there is a never-ending search for the tools, trials and demos, and just talking to people about what they're using. And really, even if I think a problem is solved,

testing out another way might be beneficial for another portfolio company, or it might be useful six months from now, when things change once again. So I think it requires space within the job, so that, let's say, 10% of it is truly experimentation and trying to stay informed of where AI is right now. But also...

Steve Budd (31:10.784)
And I sh- Go on.

Zuzana Manhart (31:12.223)
Finding people who do similar things, who are on the same journey of trying to understand how to take AI and apply it to the specific use case of our portfolio companies or a PE fund has been incredibly useful.

Steve Budd (31:30.402)
Yeah, I mean, that comes down to why I'm doing what I'm doing, because I see the real value in exchanging ideas as well as case studies with each other. I think that's so important in a time of real change. And, you know, from the way you just described it, I still feel that's an enjoyable part of your job, because everything's changing. And I imagine that...

well, it has changed a lot even in the relatively short career you've had, so I imagine it's still quite enjoyable.

Zuzana Manhart (32:12.049)
Yeah, I guess you have to be the right type of person for it, but I find it so exciting that there are new tools, new ways of doing things, new things to try, and better and better models. Seeing ChatGPT three years ago and being excited about it, but knowing that it still needed a little bit of time to get there, to now seeing people use it for so many different things, has been a lot of fun.

It can occasionally be stressful, because ultimately in this field there are not many people to look up to; there are not many mentors to find in AI. You have to find your own path and try to navigate a field that basically hasn't existed for much longer than I've been around in it. But that makes it really exciting, at least to me.

Steve Budd (33:05.326)
Yeah, absolutely. It takes a particular type of person, but if you're that person, yeah, it must be really quite enjoyable. A couple more questions. Just looking ahead now, we can't look too far ahead, can we? But maybe over the next 12 months, what are you most focused on at CBPE, as a firm or from a portfolio perspective? Or maybe it's about

people in similar roles, or people moving into similar roles: where should they be paying attention?

Zuzana Manhart (33:43.136)
So one of the biggest goals is to actually deliver AI, and to be able to show use cases within portfolio companies where implementing AI has actually led to some big changes. So, you know: our sales team is X percent faster, more efficient, better; we have won X many new customers thanks to our AI-powered sales process; the same for customer service. Right now,

we have done a lot of experiments, and I feel like now it has really come to: can we show the outcomes of all these efforts, of all these months and months of experimentation? On the other hand, it's still the same message: internal data is so important, and for every portfolio company the state of their internal data should be a key priority. So knowing what's in your data, whether it's clean, but also, importantly,

whether there is someone in-house who's the owner and who's responsible for the data being AI-ready. So it's still an ongoing effort to create this thesis and really hire for it in every portfolio company. Because I truly believe that an external consultant can solve a lot of the issues in the data, but ultimately

the job of having AI-ready data is never over. It's an ongoing process, and there has to be someone responsible in-house. So I think it's balancing the AI outputs that we can achieve now with the message that the foundation still needs the attention and the work, and that we cannot implement great AI without a great foundation in the data warehouse.

Steve Budd (35:33.27)
And do you think then, am I reading into that, that there's not yet enough ownership over the data?

Zuzana Manhart (35:40.96)
Correct. I would say that right now the majority, the complete majority, of companies have someone responsible for data quality; it's very rare now that that role is missing. But even if there is someone responsible for data, it's truly about creating that company-wide culture that data is important. We very often see

that data is important in one part of the company. For example, their CRM, sales, all these customer-facing, customer-data systems are really clean, with clear ownership and clear governance. But then you, for example, try to combine that with finance data and realize that that's still not possible.

So there are still missing links in quite a few businesses, where you cannot really have a company-wide 360 view of the data in one place.

Steve Budd (36:46.402)
Yeah, yeah. Look, we're coming to the end of the podcast. One final question, because we know there are numerous PE firms out there that are yet to hire someone like yourself, but perhaps they're thinking about it, and maybe they're actively wanting to hire an AI operator. Could you give them one piece of advice

to avoid setting that person up to fail?

Zuzana Manhart (37:21.596)
That's hard, I think. But firstly, PE funds need to hire someone with actual technical skills, someone who can actually be a hands-on-keyboard resource. There's too much happening now that genuinely requires technical skills, be it proper data science and building databases, or just tool implementation. And so I think

technical skills are really key for this role right now. And then it's mostly giving the person the freedom to define the role and the roadmap for it. A lot of people, for example internally at CBPE, do have a wish list of what they think an AI person should be building and what the priorities should be. But ultimately, there are the stepping stones of the data foundation, and there are

things that have to happen before you can get good AI. I think trusting an AI person to create a roadmap that goes from the foundation all the way to successful AI is really, really key. And so, yeah: hire someone technical, and then give them the freedom to go experiment, figure out what works best, and implement it.

Steve Budd (38:41.144)
A great piece of advice to end the podcast on today. Zuzana, it's been a really valuable conversation. I know it's going to land really well with listeners. So thank you so much for your time.

Zuzana Manhart (38:55.059)
Thank you so much for having me. It's been fun.

Steve Budd (38:57.78)
Well, look, I will put your LinkedIn details and obviously the links to CBPE in the show notes. But that's it for the latest episode in our AI Operator series on AI Pathfinder for Private Equity. If you're in a similar role and would like to be part of a future conversation or join our growing AI Operator Network across London and New York, please do get in touch. Thanks for listening and I'll see you next time.