Beginner's Mind
Blueprints for Builders and Investors
Hosted by Christian Soschner
From pre-seed to post-IPO, every company—especially in deep tech, biotech, AI, and climate tech—lives or dies by the frameworks it follows.
On Beginner’s Mind, Christian Soschner uncovers the leadership principles behind the world’s most impactful companies—through deep-dive interviews, strategic book reviews, and patterns drawn from history’s greatest business, military, and political minds.
With over 200 interviews, panels, and livestreams, the show ranks in the Top 10% globally—and is recognized as the #1 deep tech podcast.
With 35+ years across M&A, company building, board roles, business schools, ultrarunning, and martial arts, Christian brings a rare lens:
What it really takes to turn breakthrough science into business—how to grow it, lead it, and shape the world around it.
🎙 Expect each episode to deliver:
- Founder & Investor Blueprints: How breakthrough technologies scale from lab to IPO
- Historical & Biographical Frameworks: Timeless playbooks from the world's great builders
- Leadership & Communication Mastery: Tools to inspire, persuade, and lead at scale
Whether you're building the next biotech success, investing in AI, or leading a climate tech company through hypergrowth—this podcast gives you the edge.
Listen in. Apply what matters. Build companies that last.
📬 Join the newsletter & community: https://lsg2g.substack.com/
EP 164 - Kat Kozyrytska: AI in Pharma Is a Yesterday Problem – Why Ethical Frameworks Can’t Wait
Imagine waking up to find your company’s most valuable IP leaked—not by hackers, but by the very AI tools you trusted.
This isn’t a distant scenario; it’s happening inside pharma and biotech right now.
And the cost isn’t just financial—it’s patient lives, broken trust, and an industry on the edge of losing credibility.
In this episode, Kat Kozyrytska shares how leaders can act before invisible risks become catastrophic. From her personal journey in post-Soviet Ukraine to building frameworks in global biotech, Kat reveals why “yesterday problems” with AI demand urgent attention today.
You’ll learn how data privacy failures propagate quietly, why embedding organizational values into AI is essential, and how collaboration across companies can safeguard innovation and accelerate therapies.
The future of biotech won’t be secured by hype or speed—but by trust, ethics, and the courage to act before it’s too late.
🎧 What You’ll Learn in This Episode
1️⃣ Why “yesterday problems” with AI in pharma are already costing billions
2️⃣ How data privacy failures silently erode trust, IP, and patient safety
3️⃣ The difference between collaboration and competition in biotech innovation
4️⃣ Why embedding organizational values into AI is no longer optional
5️⃣ The future of drug discovery, clinical trials, and manufacturing in an AI-first world
👤 About Kat Kozyrytska
Kat Kozyrytska is the Founder of the Cell Therapy Manufacturability Program and a global thought leader at the intersection of biotech and AI. With roots in Ukraine and a career spanning MIT, Stanford, Thermo Fisher, Sartorius, and global biotech startups, she bridges technical depth with ethical foresight.
💬 Quotes That Might Change How You Think
(00:14:50) "AI can amplify good or bad behaviors — the choice is ours."
(00:38:14) "Discovering dark personalities is like learning Santa isn’t real — traumatic, but suddenly everything makes sense."
(01:03:26) "AI speaks with absolute confidence, but confidence is not the same as truth."
(01:20:22) "AI gives us a rare chance to embed ethics and values into innovation."
(01:57:01) "If personalized therapies work better, we have an ethical duty to deliver them."
🧭 Timestamps to Explore
(00:05:16) From Math to Medicine – Kat’s unexpected path from equations to biotech leadership
(00:09:38) Inside a Nobel Lab – How neuroscience breakthroughs shaped her ethical lens
(00:12:27) Shadow AI – Why biotech leaders can’t wait to govern hidden systems
(00:23:03) Data Sharing Paradox – Collaboration vs confidentiality in pharma innovation
(00:31:31) Neurobiology of Manipulation – How dark personalities exploit human trust
(00:41:18) Hidden Privacy Risks – What your everyday data footprint really reveals
(00:52:17) Soviet Control vs American Individualism – Lessons for AI governance today
(00:57:30) The Danger of One Answer – Why converging on a single AI truth is risky
(01:02:51) Confidence ≠ Expertise – Rethinking how we trust AI in science
(01:18:16) Embedding Core Values – How leaders can align AI with human ethics
(01:51:28) Breaking Down Silos
Join Christian Soschner for expert coaching.
50% Off - With 35+ years in deep tech, startups/scaleups, and public companies, Christian offers power video sessions. Elevate strategy, execution, and leadership. Book Now.
Join the Podcast Newsletter: https://lsg2g.substack.com/
00:00:00:00 - 00:00:26:12
Christian Soschner
Artificial intelligence is already inside your company, whether you know it or not. Employees are feeding confidential data into tools like ChatGPT, chasing efficiency without realizing the risks. And here is the hard truth: protecting your company from AI misuse isn't a tomorrow problem. It's already a yesterday problem.
00:00:26:12 - 00:00:34:21
Christian Soschner
Because what happens when those hidden queries leak your intellectual property to competitors?
00:00:34:23 - 00:00:42:16
Christian Soschner
Or when algorithms confidently give you the wrong answer, and you make decisions and, even worse, act on them?
00:00:42:16 - 00:00:49:05
Christian Soschner
The efficiency may feel irresistible, but the cost of blind trust could be devastating.
00:00:49:05 - 00:00:53:20
Kat Kozyrytska
people working in the space are already using AI, right?
00:00:53:22 - 00:01:01:07
Kat Kozyrytska
The temptation of getting an answer fast out of your spreadsheet is so...
00:01:01:13 - 00:01:32:11
Christian Soschner
Kat Kozyrytska has lived this tension from the inside. From winning the Merck Prize at MIT, to neuroscience research in a Nobel Laureate's lab at Stanford, to leading global biotech strategy at Thermo Fisher and Sartorius, she has seen how fragile trust can be when technology collides with human lives. And today, as founder of the Cell Therapy Manufacturability Program,
00:01:32:13 - 00:01:43:02
Christian Soschner
she's building frameworks for confidential collaboration, where artificial intelligence accelerates breakthroughs without sacrificing privacy or ethics.
00:01:43:04 - 00:01:48:06
Kat Kozyrytska
but if you are working on some obscure target, this is your super highly confidential IP.
00:01:48:06 - 00:01:51:07
Kat Kozyrytska
You want to get some intelligence about the research, for example, that's...
00:01:51:07 - 00:02:07:00
Christian Soschner
In this conversation, Kat unpacks how CEOs, investors, and innovators can turn artificial intelligence from a reckless risk into a tool that embeds values, safeguards, and even amplifies humanity's best instincts.
00:02:07:00 - 00:02:13:10
Kat Kozyrytska
If we know that a personalized therapy will work better for the patient, it is almost our obligation
00:02:13:10 - 00:02:20:08
Christian Soschner
Believe me, this episode is not just about artificial intelligence. It's about responsibility and decision making.
00:02:20:08 - 00:02:28:17
Christian Soschner
It's about how leaders can harness technology to build a future where therapies are safer, more personal and more ethical.
00:02:28:17 - 00:02:40:09
Christian Soschner
But before we dive in, let me ask you one thing. I've looked at the numbers today and found out that only about 20% of you who listen regularly actually follow the show.
00:02:40:09 - 00:02:43:24
Christian Soschner
If this episode sparks your curiosity, please do me a favor.
00:02:44:01 - 00:02:57:00
Christian Soschner
Hit that follow button. It helps me grow the show and bring you more conversations like this with rare deep tech leaders, entrepreneurs, and venture capitalists, free to you.
00:02:57:00 - 00:02:59:08
Christian Soschner
And now let's get started.
00:02:59:08 - 00:03:07:14
Christian Soschner
Then let's go on. Let's talk a little bit about dark personalities, which is a great term. How did you come up with it?
00:03:07:16 - 00:03:09:17
Kat Kozyrytska
Why, this is an industry term.
00:03:09:19 - 00:03:12:18
Christian Soschner
Oh, really? It is? Yeah, an industry term.
00:03:12:20 - 00:03:20:21
Kat Kozyrytska
I'm telling you, we have so much research on this. We already know.
00:03:20:23 - 00:03:37:22
Christian Soschner
You have an impressive background. Thank you very much for your detailed preparation material. Just for the audience: you studied biology at MIT in Boston. How was that?
00:03:38:08 - 00:04:01:22
Kat Kozyrytska
Well, I thought I was good at math, so I started in math. And then I met people who were really good at math, and the same happened with physics for me. So I kept moving through engineering, chemical engineering. And then eventually I found biology, and it just seemed like such an opportunity to model and learn in this kind of physics-based, rational approach.
00:04:01:22 - 00:04:19:13
Kat Kozyrytska
And it really felt like at that point we weren't really there. I mean, I would argue we're still not really there, but I think we have a vision now. We have such an explosion of these physics-based models, so I think we're finally getting to this place of understanding. So I think it was a good choice in the end.
00:04:19:13 - 00:04:27:19
Christian Soschner
Yeah, I believe that. And I read in your material that you earned the Merck Prize for outstanding research.
00:04:27:21 - 00:04:28:18
Kat Kozyrytska
Yes. I was in
00:04:28:18 - 00:04:51:24
Kat Kozyrytska
this great lab that, for me, I think was a real preview of this fusion of biology with core mechanistic understanding. So I was in Cathy Drennan's lab, working on crystallography, protein engineering, looking at modifying the active site. You could look at it at the very granular level of electron
00:04:51:24 - 00:04:52:05
Kat Kozyrytska
density.
00:04:52:05 - 00:04:59:05
Kat Kozyrytska
You could see where the electrons were moving and where the binding occurred. And it was just so beautiful. And,
00:04:59:05 - 00:05:10:05
Kat Kozyrytska
you know, you had the 80% training set, the 20% test set. It was kind of already a foundation, for me, for understanding how we train models.
00:05:10:22 - 00:05:31:18
Christian Soschner
Reminds me of the series The Big Bang Theory, because there was this one guy from MIT, the engineer, and you pretty much replayed it in your life, because you started at MIT and then decided to move to the West Coast. Yes, to Stanford. But how was the culture shock for you, from East Coast to West Coast?
00:05:31:20 - 00:05:32:06
Kat Kozyrytska
Yes.
00:05:32:06 - 00:05:55:09
Kat Kozyrytska
So these are very different schools. And I was teaching while I was at MIT, and then I was teaching again at Stanford, and just the different approaches with the undergraduate students I found to be fascinating. So, you know, at MIT we have some good weather, but most of the year it's bad weather, so you're in the library. Or at least back in the day, you were in the library.
00:05:55:11 - 00:06:21:18
Kat Kozyrytska
And at Stanford, it's sunny the vast majority of the year. So you walk out of the classroom and you kind of lie down on the grass with your device or your textbook. And it was shocking to me to see the students outside enjoying themselves; enjoying the weather was unthinkable for us. A completely different approach. I think the sun definitely helps, in general, to lift the spirits of our species.
00:06:21:18 - 00:06:25:13
Kat Kozyrytska
So, yeah, I think from that perspective, it's a good place, California.
00:06:26:09 - 00:06:48:10
Christian Soschner
Yeah, absolutely. I mean, it's so fascinating what you said. MIT, Boston: cloudy weather, rainy weather, not much sunshine. It's perfect for studying. Not so in California. I mean, you have the ocean, you have the beach, surfing, music. Reminds me of Baywatch in the 80s and 90s, the series, which is now on Amazon Prime.
00:06:48:12 - 00:06:59:01
Christian Soschner
Yeah, how do people do that in California? Do they just say: no, I go to the lab and get my work done, rather than enjoying the sun?
00:06:59:03 - 00:07:00:00
Kat Kozyrytska
So at Stanford,
00:07:00:00 - 00:07:20:04
Kat Kozyrytska
there's kind of an expression, or hypothesis, that it's all this duck approach: you paddle very hard beneath the surface, where it's invisible, but you're working really, really hard. And from the surface, it's all very still and peaceful. And I think it's that internal drive to get things done, especially in science.
00:07:20:04 - 00:07:45:18
Kat Kozyrytska
Right? I mean, we don't get paid a whole lot as scientists, so it's clearly not for some external motivation that we're doing things. It's really the wish to explore, find the answers, make this world a better place, save lives. Some of those things, I think, drive the tremendous amount of progress that happens in a sunny location at an academic institution.
00:07:46:16 - 00:08:00:10
Christian Soschner
That's fantastic. So you have studied mathematics and biology. And then, I read in your material, you worked in neuroscience in a Nobel Laureate's lab. How was that?
00:08:00:12 - 00:08:00:16
Kat Kozyrytska
It
00:08:00:16 - 00:08:26:20
Kat Kozyrytska
was fantastic. I mean, so: Thomas Südhof. He got his Nobel Prize in 2013, and his brain is definitely the fastest that I have encountered. And, I mean, I think of myself as quite sharp, but talking to him, he's always so many steps and sentences ahead. I remember, just in these conversations, trying to catch up: I sort of see how we got from here to here.
00:08:26:20 - 00:08:51:00
Kat Kozyrytska
Okay. Yeah. But, yeah, he's just so bright. He sees the big picture. He's a very molecular person, but also thinks about, let's say, abstract ideas. And I loved the lab, because he built it to be everything from crystallography all the way through to animal behavior, with some profound implications for human disease.
00:08:51:02 - 00:09:11:24
Kat Kozyrytska
And so it's the spectrum, the kind of horizontal and vertical nature of his enterprise. It was a huge lab, 70 people, right? So you can get a lot of things done with 70 people. But it's just such an explosion of ideas. And it's an anniversary of his lab this year.
00:09:11:24 - 00:09:23:01
Kat Kozyrytska
So I'm headed to this symposium to reconnect with some of these bright minds and bring some of the thoughts that we're going to cover today to that community, to get their feedback.
00:09:23:04 - 00:09:29:16
Christian Soschner
That's fantastic. So from Boston to sunny California, and then back to Boston, I guess, enjoying the...
00:09:29:18 - 00:09:31:08
Kat Kozyrytska
The travel or even the
00:09:31:08 - 00:09:36:02
Kat Kozyrytska
whole move: packing up my things into a few boxes. Who could possibly resist that?
00:09:36:14 - 00:09:49:06
Christian Soschner
So you moved from mathematics to biology, from biology to neuroscience and now landed in artificial intelligence and ethics. What sparked your interest in that area?
00:09:49:08 - 00:09:49:24
Kat Kozyrytska
I mean, it's
00:09:49:24 - 00:10:09:14
Kat Kozyrytska
sort of a part of my natural journey. I think, for me, when I think about understanding something, it's really understanding it to that molecular level. And so that's why neuroscience has always been so fascinating: you observe the high level of human behavior. And through my, in many ways unfortunate, experience, I have observed a range of human behavior.
00:10:09:14 - 00:10:33:23
Kat Kozyrytska
So I think, for me, it's trying to understand how that works molecularly. And so neuroscience is tempting, because we're on the cusp of molecular understanding, but not really quite there yet. And I don't know if we will be; I hope that we will be. But it's getting these answers, at that protein-level or atomic-level resolution, to the very big-picture
00:10:33:23 - 00:10:35:17
Kat Kozyrytska
things.
00:10:35:20 - 00:10:48:20
Christian Soschner
That's fantastic. In your material, there was another interesting point. You said that there are a lot of yesterday's problems in artificial intelligence. What do you mean by that?
00:10:48:22 - 00:10:49:11
Kat Kozyrytska
I think we
00:10:49:11 - 00:11:04:10
Kat Kozyrytska
in life sciences especially tend to think of this as: you know, in a few years we'll get to artificial intelligence; eventually we'll collect a large enough data set, or we have to go find this collection, etc., etc. But the truth is that
00:11:04:10 - 00:11:09:00
Kat Kozyrytska
people working in the space are already using AI, right?
00:11:09:02 - 00:11:16:12
Kat Kozyrytska
The temptation of getting an answer fast out of your spreadsheet is so strong
00:11:16:12 - 00:11:43:04
Kat Kozyrytska
that nobody can resist it, really. I mean, a few people do, but mostly it's: I want this answer, and I'm not going to spend five hours on it. I plug it into ChatGPT, and here it is, you know, the response. So I think, from that perspective, we have to face the reality that employee education and employee adherence to your policies have to become a priority. Because people will.
00:11:43:09 - 00:12:03:10
Kat Kozyrytska
And this has already happened in the industry, right? People will put confidential documents into ChatGPT. It's, I would argue, not with malicious intent; it's due to the lack of understanding of the profound impact of sharing that kind of information with an AI tool. So yes, it cuts some corners and makes your days smoother, shorter, and so on.
00:12:03:10 - 00:12:27:11
Kat Kozyrytska
But the big picture of the impact of that... yeah, we really have to be thoughtful about how we guide employees in using AI-based tools. And I would argue that if you're not offering an alternative that is in accordance with your compliance policy, governance, and so on, they're still going to use it.
00:12:27:11 - 00:12:30:07
Kat Kozyrytska
It's just not going to be according to your rules.
00:12:31:07 - 00:12:49:03
Christian Soschner
Yeah, regulations, at the end of the day. This is a really interesting point. If we took this conversation into a café and I asked you, over the next two hours: what's the big idea you want the world to understand right now? What would you say?
00:12:49:05 - 00:12:50:19
Kat Kozyrytska
Yeah. I think the big picture
00:12:50:19 - 00:13:11:19
Kat Kozyrytska
view of this is that artificial intelligence is a great way to scale throughput, to amplify, right? To bring efficiency and so on. But I would love for us to pause and think about: what is it that we want to amplify? What is it that we're outputting at high throughput? Because we have studied ourselves as a species.
00:13:11:19 - 00:13:41:10
Kat Kozyrytska
So we understand there's a range of behaviors: the good behaviors versus the bad behaviors. I would argue that we should really focus on amplifying the good behaviors and try our very best to slow down or stop the amplification of the bad behaviors. Right? Because if you think about this from the extreme view: you used to have those people on Craigslist, right, who tried to get your check and so on. Now they can do that at high throughput, because you no longer have to have a human behind it.
00:13:41:12 - 00:13:57:06
Kat Kozyrytska
So how can we put some guardrails in place, some safeguards, to make sure that that quiets down, and that all of the goodness in this world, which takes more effort and so on, is what we bring efficiency to, so that we do better, more ethical things?
00:13:58:13 - 00:14:18:02
Christian Soschner
Isn't that unusual thinking for a scientist? I mean, when I look on LinkedIn, the AI discussion is mostly driven by science, and I read a lot of: we have a new target, AI makes finding molecules so much easier, we can simulate everything. So it's a lot of tech and features. And you come from a different angle.
00:14:18:03 - 00:14:30:20
Christian Soschner
You say: no, it's all about ethics and morals. How did you come to the conclusion that we need a different discussion in this space, not just tech-driven, but above it?
00:14:30:22 - 00:14:31:10
Kat Kozyrytska
I will
00:14:31:10 - 00:14:59:07
Kat Kozyrytska
say that I'm also a technologist and a scientist, so I want answers. I can't wait for us to have answers. For example, part of my work is in cell therapy manufacturability; if we could get those answers faster, that'd be amazing. But so much of the conversation, like you said, is driven by people who are really focused on the tech, or people who really have that question about the science.
00:14:59:09 - 00:15:22:17
Kat Kozyrytska
If we just take a step back and think about the impact of some of these answers that we're going to get, we can think about that from the perspective of reliability, right? So yeah, we implement the tech, but how reliable is what we're getting out of it? I think there's a gap on the implementation side, in that we're often not asking the question: is this good or not?
00:15:22:19 - 00:15:45:06
Kat Kozyrytska
And I think, for more conventional technologies, right, if we think about capital equipment: you have a box, and you ship it, and the bolts fall out; it's very obvious that it's not good. Or if you have, like, a microscope that's not aligned: very clear, there's no question. But with software, and AI especially, it's just harder to evaluate.
00:15:45:08 - 00:16:10:10
Kat Kozyrytska
So you have to think about that upfront. And then the other side of it is on the patients' rights side, so we can get into more depth on this. But there are so many studies that show that de-identification, anonymization of data, is really a myth. So when you ask a question of a data set... and I'm going to bring up 23andMe,
00:16:10:10 - 00:16:38:00
Kat Kozyrytska
right, because I've been trying to get my data set out of that for the longest time, and it just hasn't worked yet. But some of the answers that you're going to get will impact real humans on the other end. So we might be studying clinical impact, or even manufacturability, like in cell therapies, right? I believe that we would get tremendous insights if we pulled in clinical outcomes data, but that's going to reveal something about those humans.
00:16:38:02 - 00:17:02:14
Kat Kozyrytska
And if a certain entity... so in the US, a big player here is health insurers, or maybe your auto insurance, right? If they get that insight somehow, because, you know, things get uploaded into AI all the time, that can impact their strategy for insurance pricing. It's just one example, right?
00:17:02:19 - 00:17:16:23
Kat Kozyrytska
So now there's real financial impact for the individuals who are within that data set. And I think we have to think about that. As much as we want to get to a scientific answer, we have to think about the humans involved.
00:17:17:24 - 00:17:33:18
Christian Soschner
Yeah, that's a good point. Let me challenge you on this. I'm now proud to say that I'm 51, in my 52nd year on this planet. And I can now say to you: young woman,
00:17:33:20 - 00:17:34:22
Kat Kozyrytska
Let's go with that.
00:17:34:24 - 00:17:56:13
Christian Soschner
I'm living in Europe, and this regulation is driving me crazy. I experienced the internet era, and we are just fine at the end of the day, so we can play around, we should experiment; at the end of the day, nothing serious happens. And Europe is thriving, the US is thriving, and we really have a great time with technology. And we needed
00:17:56:13 - 00:18:23:17
Christian Soschner
all these mistakes that we made to come to this point where we are now. So the West evolved in a good way, global poverty went down, also in other countries, and technology is a blessing for humans. When we look now at the European Union, we want to regulate everything, which leads to the situation that AI in Europe is hindered while development accelerates in the US and also in China.
00:18:23:17 - 00:18:48:15
Christian Soschner
Interesting: the communist country doing better than Europe. But what you propose, as I understood it, is: no, we need to be more careful with artificial intelligence. So how would you convince someone like me, who says, completely arrogantly: young lady, I have experienced the last 30 years, and we will be fine in the future? Convince me: why should we be more careful with this new world?
00:18:48:17 - 00:18:49:04
Kat Kozyrytska
Yeah.
00:18:49:04 - 00:19:13:14
Kat Kozyrytska
So maybe you wouldn't be convinced. To become more worried about this, the topic of human rights, patient rights and so on, equality being another big one, you want to see some massive fallout. You want to see a big impact on someone. That's how we are as humans, right? With our neurobiology, we react to these extremes.
00:19:13:16 - 00:19:47:03
Kat Kozyrytska
But some of the impact, with AI especially, is less obvious, less visible. And you may in fact never see it, especially because some of the models are less transparent and so on, right? So I think an example to bring up here is mortgage eligibility assessments: you will never know why somebody truly got rejected by an algorithm, but it might be because the algorithm was built on some historical data where we have historically discriminated against a particular category.
00:19:47:05 - 00:20:06:17
Kat Kozyrytska
That category is probably not, you know, the ruling class, so they will have even less of a voice in expressing that. But I think it's that, at the very core of this, if we want to build a future that's better than the past, we have to think about this.
00:20:06:17 - 00:20:30:14
Kat Kozyrytska
I know it's less, you know, dollar-driven, but in some ways I think the implementation of AI and the development of this technology is an opportunity for us to reset and, again, amplify the better things that we do. The way I think about it is that we're really here, every one of us, for such a split second.
00:20:30:16 - 00:20:53:19
Kat Kozyrytska
So, yes, I understand we need cash to survive. I get that; I live in Boston, so I understand this in a profound way. But it's also that you can't really take cash with you after you die; I think that's very clear. So why are you here for this split second? Do you want to leave this place a better place than when you came here?
00:20:53:19 - 00:20:58:20
Kat Kozyrytska
So I think fundamentally, it's that. And I think it's just such a chance for us to do
00:20:58:20 - 00:21:00:08
Kat Kozyrytska
better.
00:21:00:10 - 00:21:25:12
Christian Soschner
Yeah. Let's hope that when you knock on heaven's door, someone doesn't ask you for a credit card. We never know, we never know. But let's go back to more serious things. I mean, this is a huge problem at the end of the day, because we need data to use artificial intelligence to improve human societies across the globe.
00:21:25:14 - 00:21:47:24
Christian Soschner
On the other hand, it also puts us at risk. How can we handle this dilemma? Let's just think about creating new therapeutics. Scientists always complain that they can't access data. There is so much data around; even failed clinical trials would be beneficial for scientists. But these are data sets connected to human beings, and you have the names, and sometimes the genetic profile, in there.
00:21:48:02 - 00:21:58:11
Christian Soschner
And our health history. But there is also the question of ownership. How can we deal with that, in your opinion? Do you have a solution for that problem?
00:21:58:13 - 00:21:59:05
Kat Kozyrytska
So I
00:21:59:05 - 00:22:27:20
Kat Kozyrytska
think, to your point, rare disease is a really interesting example of that, right? If you have 17 patients around the world, you're probably going to know which row in the table is who. So it's fairly straightforward to find out. But in fact, because the options are very limited in rare disease, there's a massive drive on the patient side, meaning they are happy to donate whatever data you want.
00:22:27:20 - 00:22:48:24
Kat Kozyrytska
Their mother's maiden name? Yes, have that, if that's going to help with the therapy. And also in pediatric cancer, for example: if you talk to the parents, they will say, we will give anything, our own and the child's data and so on, because we want the answer. So I think it's that balance of the benefit versus the cost and the risks involved.
00:22:48:24 - 00:23:11:21
Kat Kozyrytska
Right. And I mean, if the people within the data set want to donate their data in order to get something, it's their data, certainly; they can decide. But I think they need to also understand the impact of sharing that data. I think we do not spend enough time explaining that there's not a real way to de-identify and anonymize.
00:23:11:21 - 00:23:39:11
Kat Kozyrytska
And so much research has been done on the topic of re-identification, even before the LLMs became, you know, so widely accessible, with such potential to uncover insights and identities. And it's inconvenient to share that information with the patients or donors of data, so I understand, kind of practically, why we don't want to do that.
00:23:39:13 - 00:23:58:02
Kat Kozyrytska
But, you know, I think we can think about better ways of conveying that information than, like, a five-page legal contract. And we can think about how we consent these patients. So, for example, my situation: I was in the hospital, convinced there was a good chance I was not going to make it to the end of the day.
00:23:58:04 - 00:24:13:19
Kat Kozyrytska
And somebody gives me this: do you want to participate in a research study? You know, we're going to cut this and that out of you, and then we want to analyze this tissue. I haven't slept in three days; I don't even know what the words are that you are saying. So that's not a good way to consent people.
00:24:13:19 - 00:24:36:00
Kat Kozyrytska
Right. So I think we have to think about how we enable really informed decisions on the part of patients, and especially of donors who just want to do this for the good of the planet, of humanity. With longevity research, for example, the UK Biobank has such an amazing data set.
00:24:36:02 - 00:24:50:11
Kat Kozyrytska
There are lots of healthy people there, and they're donating their data to help the world move forward in this research. So I think there are a lot of people who want to make that happen.
00:24:51:05 - 00:25:09:10
Christian Soschner
It's an interesting time. Let's stay a little bit at the problem level first, before we go deeper into solutions and also into your history; you have a very interesting one to talk about in the next part. But let's stay with defining the problem a little bit better. When I think back to the 80s, basically the online world didn't matter.
00:25:09:12 - 00:25:35:12
Christian Soschner
When I traveled to the United States, it basically meant nobody could find anything about me anywhere online, because the online world didn't exist. So I really had the chance and the opportunity to create the first impression about myself as a new person. Then came the internet, and we started feeding in a little bit of data.
00:25:35:12 - 00:26:01:24
Christian Soschner
In the 90s, it was not much; it was mostly science-driven. In 2004, Facebook started, and this was the first time people were motivated: upload your private pictures, upload your drinking at the party, and share it with friends. Usually people should not do that. It's not very smart, because you have these pictures online decades into the future. In the 90s, that didn't exist.
00:26:01:24 - 00:26:30:01
Christian Soschner
Luckily for people like me. And when we look further down, there was a complete shift in messaging online. It started around 2021, with the pandemic, where social media platforms, I would say, convinced and motivated people to write articles and post every day. And, in my opinion, maybe some AI companies funded some influencers to spread the message: come on, post every day.
00:26:30:03 - 00:26:51:15
Christian Soschner
And they described the benefits of it: you find new contacts, you find better business partners, VCs, everything. Well, this is the reason why I started the podcast, on one hand. But on the other hand, it means that when we post and when we comment, we leave a huge footprint about how we think, how we feel, what we do.
00:26:51:15 - 00:27:12:21
Christian Soschner
You can analyze a lot of data about everybody and also nail down mental illnesses and health conditions. And when you analyze speech, I think there are also some studies, and scientists, trying to identify: is this person progressing toward an illness? As long as it is a local company, it's probably fine.
00:27:12:21 - 00:27:19:11
Christian Soschner
But companies are bought, sold, acquired, shut down. And this data stays there.
00:27:19:13 - 00:27:20:08
Kat Kozyrytska
Yes, yes.
00:27:20:14 - 00:27:37:02
Christian Soschner
Available to other people. So this is something I think nobody sees today: the data is there forever, and whoever owns the data can access it. Or do you see artificial intelligence more as a black box, where it's not possible to get access to the data?
00:27:37:04 - 00:27:37:18
Kat Kozyrytska
Oh, I
00:27:37:18 - 00:28:03:04
Kat Kozyrytska
think, even from some of the deep digging that you've done, right, it's very obvious that there are some documents that the AI can find that maybe a human would have found with enough time. But it's that throughput piece, right? How much effort, how much diligence would a human be willing to put in to really understand somebody?
00:28:03:06 - 00:28:33:08
Kat Kozyrytska
And then, if it's a malevolent player, to use that information to manipulate them into a certain decision, right? So you're talking about mental illness, but that's clinical. I think if we take non-clinical features, there are levers you can pull with a certain person to get them to say yes. And as a commercial person, I certainly think, and hope, that every salesperson out there trying to close a deal is doing all the digging to find everything about their customer and find a way to convince them.
00:28:33:13 - 00:29:04:02
Kat Kozyrytska
But on the other side of it, from the customer perspective, they might not want to be manipulated, perhaps. And it's a balance. So I think sharing your beliefs is tricky, because you want to build that connection. I would say that on LinkedIn I've been putting out a lot of information about my own beliefs: about how the world should work in the perfect scenario, what I believe about AI, and so on.
00:29:04:08 - 00:29:24:16
Kat Kozyrytska
The algorithm is beautiful. It brings me profiles and posts of people who believe the same. It's such a wonderful bonding experience; I love it, I am so grateful for it. Do I also understand that somebody can look at all of this stuff and then convince me of things much more easily? Yes, absolutely. So I think it's a two-way street, right?
00:29:24:18 - 00:29:47:01
Kat Kozyrytska
And also, as an individual, understanding this about yourself, that your beliefs are out there in the open: I think you can then be more thoughtful, take a proactive approach, and protect yourself from people who might be trying to maneuver you one way or the other. So I think it's from
00:29:47:01 - 00:29:49:05
Kat Kozyrytska
both sides.
00:29:49:07 - 00:29:53:14
Christian Soschner
Do you think it's possible to manipulate people via social media?
00:29:53:16 - 00:29:54:04
Kat Kozyrytska
Via
00:29:54:04 - 00:30:23:07
Kat Kozyrytska
social media? Well, I think it's neurobiology, right? So I think of that in the frame of reference of the person themselves. There are some previous experiences that make people more vulnerable, right? Some foundational neuronal-signaling pieces that perhaps make them more prone to risky behaviors, risky decision making. All of that is an open science question.
00:30:23:09 - 00:30:53:13
Kat Kozyrytska
What comes to mind for me is, when I was at MIT, it was a huge deal, because the first casino had just been approved in Massachusetts. And the backlash to that, the academic discussion about that topic, was so prominent, because it was all about addiction and how gambling triggers these fundamental neural circuits. It's just difficult to overcome some of that, because it's the lizard brain.
00:30:53:13 - 00:31:14:19
Kat Kozyrytska
It's who we were before we had the massive thinking, analytical cortex. So when somebody taps into that inside, it's hard, just practically very hard, to overcome. And so some of the conversations that were alive at the time were: how can we protect people? How can we identify people who are more vulnerable to these influences?
00:31:14:21 - 00:31:33:20
Kat Kozyrytska
And I think some of it is the people themselves understanding: okay, I'm vulnerable from this perspective, so I'm going to be more protective of myself. I'm going to put these features in place in my life to make sure I'm not influenced. I don't gamble, because I think that's probably a weakness for me, so I don't even want to try it.
00:31:33:20 - 00:31:53:11
Kat Kozyrytska
I don't want to touch it. But that's something I'm doing for me. And I think taking some of that control into your own sphere and building your life to protect yourself is key. We can regulate everything to death, but at the end of the day, you have to be protective of your own self.
00:31:53:20 - 00:32:19:00
Christian Soschner
Yeah. It's a balancing act between getting the benefits out of new technology and avoiding the risks. And to take it to the extreme: you mentioned before the report I created about you with these tools. It was really tempting, because you wrote a lot about dark personalities and ethics, and it was funny at one point. I hadn't done that for a long time, because ChatGPT, a couple of months ago, didn't really respond well to questions about people.
00:32:19:00 - 00:32:43:10
Christian Soschner
It always fed back: no, it's not possible, this is a private person, not a public person, not a superstar. So I just gave up on that. But when I read your materials, after we had talked about dark personalities, I thought: now I play the dark personality and try to find out everything about you with ChatGPT deep research. And I didn't really have high expectations that I'd get something meaningful back.
00:32:43:15 - 00:33:04:20
Christian Soschner
And suddenly I got back, I think it was ten pages or something like that, with references and links, and it put everything together: where you grew up, that you were at MIT and at Stanford, which lab, everything, videos and conferences where you spoke. So all this data came back in one document. It was really impressive and great.
00:33:04:23 - 00:33:16:20
Kat Kozyrytska
You, in fact, uncovered some documents I did not know were online. And what I did after that is I emailed those organizations to take those documents down. We will see how well that works. But I think that's another
00:33:16:20 - 00:33:22:08
Kat Kozyrytska
piece. Right? Taking control of your own data. I think what we often don't think about is that, in fact, we
00:33:22:08 - 00:33:26:00
Kat Kozyrytska
still have ownership of the Facebook, Instagram and so on data.
00:33:26:00 - 00:33:34:08
Kat Kozyrytska
So you can take it down. It's completely within our rights.
00:33:34:10 - 00:33:53:24
Christian Soschner
But the thing is, I mean, it works really well with, I would say, every version of ChatGPT. And you can also ask other questions. You can ask: what weaknesses does this person have? How does this person's personality function? What are the spots where I could get to them? We talked about dark personalities, after all.
00:33:53:24 - 00:34:07:20
Christian Soschner
I mean, we have to admit it: there are not always just good humans out there. And humans evolve in their lifetime; sometimes they're good, sometimes they are bad. And some of them don't even know that they are bad or doing something bad to society.
00:34:07:20 - 00:34:16:20
Kat Kozyrytska
This I disagree with, this I disagree with. I think they know. We have evidence to suggest that they know; they just choose not to do the right thing.
00:34:16:22 - 00:34:19:02
Christian Soschner
Do we have the evidence?
00:34:19:04 - 00:34:19:10
Kat Kozyrytska
In
00:34:19:10 - 00:34:43:14
Kat Kozyrytska
the research. If you ask people who do bad things, they know. I think for us it's much more comforting to think that they didn't know. If you listen to the interviews with victims of Epstein, they want to tell him; some of them are sad that he's dead, because they won't be able to tell him now what damage he delivered to their lives.
00:34:43:16 - 00:35:07:11
Kat Kozyrytska
I think research suggests he knew. He just sort of didn't really care, and was perhaps, in fact, happy to have delivered the damage. It's a very different framework of thinking. I didn't know Epstein, but again, looking at the academic research, I think there's just every bit of evidence to suggest that it's a different operational mindset.
00:35:08:18 - 00:35:22:09
Christian Soschner
So basically, what we have to accept is the fact that there will always be humans who are bad actors and do things that harm other people on purpose.
00:35:22:11 - 00:35:22:20
Kat Kozyrytska
Yeah.
00:35:22:20 - 00:36:08:12
Kat Kozyrytska
So, the current estimates are that in the general population that's about 5%. In prisons, it's between 16 and 25 percent, and the reason there's such a large range is that it's hard to measure. And interestingly, in leadership the percentage is enriched, to about 10%, according to the academics who study this. So I think the way our society is built sort of rewards some of these behaviors that are very self-focused, self-oriented. But it goes even beyond that; it's a very different mindset.
00:36:08:12 - 00:36:36:03
Kat Kozyrytska
Other people are like 2D to them; they're not the same kind of person or being, they're very flat. And it's so hard to embrace this. For me, I think it was kind of like learning that Santa Claus doesn't exist, right? It's this very traumatic experience of understanding that there are people who look the same but operate so differently.
00:36:36:05 - 00:36:57:02
Kat Kozyrytska
Once I got on board with that, everything made so much more sense, right? So, I grew up in Ukraine. For us, World War II was as if it had happened yesterday. It was always in the daily operations; we always talked about it. Hitler was top of mind, right? It was not something from some decades ago.
00:36:57:02 - 00:37:21:02
Kat Kozyrytska
It was a daily presence. And to us, he was just this one extreme, really bad human. But again, studies suggest that you can have a range of behaviors and a range of executional ability. Sometimes you can have the intent but, you know, you're just not a good operations person, so you can't execute on your bad wishes.
00:37:21:04 - 00:37:29:04
Kat Kozyrytska
But it's a much higher fraction of the population, and it has recurred throughout history. We can see examples in the day-to-day.
00:37:30:14 - 00:37:54:11
Christian Soschner
I think what we had in Germany, and also in Austria, is a good example of a really bad system. You can't really sugarcoat that; it was just a bad system, harmful to societies across the world. And when I think back to that time, when I was in school in the 80s and at university in the 90s, there was a constant reminder that we should never do that again, never bring that upon humanity again.
00:37:54:11 - 00:38:16:05
Christian Soschner
And Austria played a huge role in that; actually, he was Austrian, unfortunately. But think back to the 80s: when we wanted to have a private conversation, we just met at somebody's house, and we could be almost certain that nobody was listening. And this was also the situation in Germany in the 30s and in the 40s.
00:38:16:05 - 00:38:40:18
Christian Soschner
There was no internet, there was no modern communication. And you state that bad actors like those in the Third Reich still exist today. But look at today's technology: we feed everything into the internet. I have a mobile phone with a camera and a microphone; I have Alexa devices in my house. Anybody could easily tune in.
00:38:40:18 - 00:38:44:06
Kat Kozyrytska
You're very brave to have Alexa devices, really.
00:38:44:07 - 00:38:46:14
Christian Soschner
If you wanted.
00:38:46:16 - 00:38:47:00
Kat Kozyrytska
Well,
00:38:47:00 - 00:38:58:09
Kat Kozyrytska
I mean, maybe it's my background, but I am the kind of person who stays very manual, you know, with the door lock and so on. So I do not have the smart home
00:38:58:09 - 00:38:59:08
Kat Kozyrytska
yet.
00:38:59:10 - 00:38:59:23
Christian Soschner
I have everything.
00:39:00:00 - 00:39:00:11
Kat Kozyrytska
I think is.
00:39:00:11 - 00:39:02:06
Christian Soschner
Captured.
00:39:02:08 - 00:39:03:01
Kat Kozyrytska
I think it's
00:39:03:01 - 00:39:39:17
Kat Kozyrytska
it's for that risk of monitoring, right? An entity can then get ahold of your data. One of the very interesting examples is this study, I'm not going to remember the year at the moment, but they looked at electricity consumption via the power meter. And essentially you can build a model that predicts how many people are in the household and what TV show they're watching, just from that measurement
00:39:39:23 - 00:39:40:23
Kat Kozyrytska
of the power
00:39:40:23 - 00:39:42:08
Kat Kozyrytska
meter, right?
00:39:42:08 - 00:39:47:18
Christian Soschner
You can predict the TV show they are watching from the measurement of pixels?
00:39:47:20 - 00:39:49:09
Kat Kozyrytska
The pixel brightness.
00:39:49:11 - 00:39:51:08
Christian Soschner
Really?
00:39:51:10 - 00:39:53:21
Kat Kozyrytska
So now you think about the ownership
00:39:53:21 - 00:40:12:20
Kat Kozyrytska
of that data. Now you have an electricity company that knows how many people are in whichever apartment, at what time of day, and what they're watching. I mean, if I were a tech company just trying to make money, I'd probably sell that data to some TV station, right? To help them understand, get some insights, and so on.
00:40:12:22 - 00:40:37:07
Kat Kozyrytska
So you just have to be very mindful of that footprint, right? Because somebody might own your data in ways that you do not expect. And this, for example, was very surprising at the time when they carried out the study: a single readout is predictive of so many things, arguably in violation of the privacy of your home.
00:40:37:07 - 00:40:58:13
Kat Kozyrytska
And so I think the point that you're making about the connectivity of that data, and the pull into some omniscient being or system, is a whole new level. And some of what I've been thinking about is how public videography or photography, right, has not really been regulated in a profound way.
00:40:58:13 - 00:41:20:12
Kat Kozyrytska
So you can have a security camera somewhere and so on, which is not an issue if it just stays with the security guy who's watching, you know, whatever, 15 screens. But if that's now pulled into some more systemic oversight, it's a very different picture, because now you can track people going here and there, and again the impact comes through health insurance or auto insurance and so on.
00:41:20:12 - 00:41:48:24
Kat Kozyrytska
You can already see how that information can feed in and manifest for you, very obviously, in financial impact. But it's not like you're going to really be able to say: oh, they charged me more because they saw me engaging in some bad behavior somewhere else. It's the lack of transparency of the output. So, to your earlier point: how can we convince ourselves that data is important and that we have to care about our privacy?
00:41:49:01 - 00:42:06:07
Kat Kozyrytska
It's a tough argument, because it's never in your face, right? The insurance company will never say: we charged you $100 more this month because of these things that we saw through these data sources. So I think you just kind of have to think for yourself about the impact of sharing some of these pieces.
00:42:06:17 - 00:42:28:17
Christian Soschner
And that's an interesting point. So it could influence pricing strategy. When people see that you frequently travel, for example, just a dumb example, to Monaco, they'd say: okay, this guy must be rich, and this could be a way to adjust prices. Since he travels every week to Monaco and just on weekends to Italy, to a cheaper house, they could just assume this is a millionaire or billionaire.
00:42:28:19 - 00:42:46:10
Christian Soschner
Different pricing. That's amazing. Let's go back to your roots. You grew up in Ukraine, you mentioned earlier. How did you build your perception of trust in that environment? Could you share some stories from your youth?
00:42:46:12 - 00:42:46:16
Kat Kozyrytska
I
00:42:46:16 - 00:43:13:16
Kat Kozyrytska
think it was just a really confusing and difficult time. I'm not sure that trust really was a part of my worldview. I mean, starting with some fundamentals: the government taking away savings one day, right? With the collapse of the union, they said, well, you won't be able to withdraw from your savings account, and that was that.
00:43:13:16 - 00:43:43:04
Kat Kozyrytska
I remember just being outside and standing in this line with my grandma, lots of other grandmas standing in that line, crying, because their whole lives had just gone to pieces. And it was such a betrayal by the government; it was very difficult to live through. And so, for me, the way that I processed that at the time was: well, they probably needed the money for something more important than the lives of these people, because it was just easier to think of it that way.
00:43:43:06 - 00:44:02:08
Kat Kozyrytska
And then the crime was so gruesome and so rampant. In retrospect, I think it's because of that loss of oversight; law enforcement became very weak at the time, so people didn't face consequences for their bad actions. And the stuff that was happening, I mean, it's just too much. Any crime
00:44:02:09 - 00:44:29:14
Kat Kozyrytska
show you watch, that's nothing compared to what was happening there in the 90s. And so again, for me it was, I think, easiest to process it as: well, these people were stealing money, items, people's whatever it is, because they needed it more. In retrospect, having myself since then lived through situations where I didn't have anything to eat,
00:44:29:16 - 00:44:50:09
Kat Kozyrytska
it's a really high threshold that I just cannot imagine crossing, where you take somebody's something. I think it's, again, going back to the dark personalities: you have to be of that mindset that somebody else is not a real person, only you are a real person, so you're authorized to just take from them.
00:44:50:11 - 00:45:00:14
Kat Kozyrytska
So since then, I've adopted a different point of view: that, I think, it takes a different approach, a different way of seeing the world.
00:45:00:21 - 00:45:12:21
Christian Soschner
That's an interesting point. I always thought it was a blessing to change from the communist system to the open era, and that it was more of a smooth process, sunshine into capitalism. So no, it was really hard.
00:45:12:22 - 00:45:17:05
Kat Kozyrytska
There was sunshine; there was just no running water. But sunshine we had.
00:45:17:07 - 00:45:19:01
Christian Soschner
Yeah, yeah, yeah.
00:45:19:03 - 00:45:20:19
Kat Kozyrytska
Yeah. But I think it was a very rough
00:45:20:19 - 00:45:31:19
Kat Kozyrytska
transition for the country. And I know we think of the horrible things that are happening in Ukraine now as this, you know, unique experience. But in my time there, it was never really
00:45:31:19 - 00:45:33:12
Kat Kozyrytska
good.
00:45:33:14 - 00:45:42:07
Christian Soschner
So you also experienced, basically, the last taste of the old Soviet Union. How was that?
00:45:42:09 - 00:45:42:22
Kat Kozyrytska
Well, I will
00:45:42:22 - 00:46:06:00
Kat Kozyrytska
also preface this with: the Soviet Union was a very large country, and in the Tsarist time it was also very large, with poor communication systems. So when you think of the collapse of the union, it actually took a few years for that to propagate to remote regions like where I'm from. It wasn't an instant change in mindset for us.
00:46:06:02 - 00:46:29:21
Kat Kozyrytska
It was still present there. So, you know, we were talking about how, for example, commercial activity was so shamed, so judged: if you sold something somewhere, if you bought something somewhere and then sold it somewhere else for more, there was public shaming for that kind of activity. And so for me, moving to the US was moving into a very commercial mindset.
00:46:29:21 - 00:46:51:09
Kat Kozyrytska
And, you know, I've been in a lot of commercial functions; I am through and through an ROI person. It was just a massive shift in thinking: that, in fact, yes, you're adding value even by, let's say, moving goods from over here to over there. That's a value add for someone, because maybe they are not able to access this location.
00:46:51:12 - 00:46:53:18
Kat Kozyrytska
Just as simple as that.
00:46:54:11 - 00:47:02:13
Christian Soschner
But how did the Soviet Union then function, at the end of the day, when there was no commercial activity?
00:47:02:15 - 00:47:03:13
Kat Kozyrytska
Excellent. Excellent
00:47:03:13 - 00:47:23:21
Kat Kozyrytska
question. I think there was a lot of show-off, right? I mean, just like during the hunger times: demonstrating that things are going really well inside the country, selling wheat and other goods to other countries while you have your own people dying of hunger. A lot of showing off, demonstrating this external success,
00:47:23:23 - 00:47:50:07
Kat Kozyrytska
while internally the operations were less than good. Lots of deficits; again, people standing in line. This was before my time, in my parents' time: lots of waiting in lines for, you know, the limited groceries at the store. And then I think it was a bit better after that. But then, in my time, again, you couldn't even buy anything.
00:47:50:07 - 00:48:10:15
Kat Kozyrytska
There was no food to buy. And for a country that's married to cheese and sausage, to not have cheese and sausage, when one day you could not buy these two key ingredients for any amount of money, it was very difficult. It was a complete reset. People rethought how they operated, what they ate.
00:48:10:17 - 00:48:13:18
Kat Kozyrytska
Their whole lives changed in major ways.
00:48:14:10 - 00:48:28:18
Christian Soschner
What were the main differences in how the system was set up? When you compare, you know, both sides, the Soviet Union and the US, what were the main differences on the economic side, from the setup perspective?
00:48:28:20 - 00:48:29:12
Kat Kozyrytska
I mean, I think in some
00:48:29:12 - 00:48:51:13
Kat Kozyrytska
ways the Soviet Union operated in a non-sustainable way, which is sort of why we ended up where we ended up. So I'm not sure that, operationally, the system could succeed. But, yeah, going back to trust: you trusted the government to take care of you
00:48:51:15 - 00:49:15:07
Kat Kozyrytska
if you needed support; I think that social piece was more in place. But also, if you put all homeless people in prison, that really reduces your homeless population. That's an easy fix right there. So I think some solutions were, again, on the external, let's-show-off-how-good-things-are side.
00:49:15:09 - 00:49:34:22
Kat Kozyrytska
And so, you know, in all the movies you watch, right, you have the tourists going down the main street, and then, just as they go off that street, it's a completely different scene: buildings in ruins. You might think buildings in ruins are a thing of the past, you know, maybe the 60s.
00:49:34:24 - 00:50:03:23
Kat Kozyrytska
No; I went to Saint Petersburg in 2013, I think, and it was the same. You go down the main street: painted buildings. You turn off the main street: nobody has painted anything there in a very long time. So I think it's still very much, or at least at the time it was, very show-based, a very external perspective. And, you know, I think it's just not a sustainable way of living, because essentially you're not building the inside.
00:50:04:02 - 00:50:26:01
Kat Kozyrytska
If you take that to the personal framework, forget the government for a moment: for yourself, if all you're projecting all day long is success, but at the core you're, you know, dying on the inside, it's not a sustainable way of living. You have to have some positivity inside, and then you deliver it to the outside.
00:50:26:22 - 00:50:38:21
Christian Soschner
And when you contrast what you experienced in the old Soviet Union with your experience in the United States, how would you describe that?
00:50:38:23 - 00:50:39:16
Kat Kozyrytska
I think in some ways
00:50:39:16 - 00:51:04:23
Kat Kozyrytska
it's a completely different way of living, right? I mean, the commercial is so front and center in the US. But I think there's still a lot of this surface level: everything's very good. "How are you doing?" The answer to that was shocking for me, somebody from Eastern Europe coming in. I really struggled with this the first few years, and I apologize to the people I interacted with during that time.
00:51:04:23 - 00:51:23:02
Kat Kozyrytska
When they asked me how I was doing, I thought the answer was to actually describe how I was doing, which was not very good those first few years, let's put it that way. But the answer to that is: great, I'm doing great, I'm doing well. So some of the features are actually really similar.
00:51:23:02 - 00:51:46:06
Kat Kozyrytska
And as an aside, there are people who've written about the similarities between the US and the Soviet Union, infrastructure-wise. Maybe it's a big-country thing. But there are academic studies, theses, about how similar the countries are. And I think those are really fascinating pieces to read, because we think of them as so polar opposite, right?
00:51:46:08 - 00:51:51:21
Kat Kozyrytska
Cold War and so on. But in some ways these are really, really similar countries.
00:51:52:07 - 00:52:14:09
Christian Soschner
That's interesting. It's interesting. I mean, at the end of the day, it all comes back to artificial intelligence. When you talk about the Soviet Union, if I understood you right, it was more of a centralized system where basically you let the government decide what's best for you, and you don't have to think; they make the decisions.
00:52:14:09 - 00:52:18:12
Christian Soschner
You just have to agree. That's everything you have to do at the end of the day. You...
00:52:18:12 - 00:52:23:19
Kat Kozyrytska
Just have to live. I'm not even sure agreement is so much a part of it. You just, you know, go on.
00:52:23:21 - 00:52:44:04
Christian Soschner
Yeah. And there was no room for personal development at the end of the day. Every commercial activity was shamed. Selling something to another person to make more money, investing in yourself, in your personal development, was not an option, compared to the US, which is completely the opposite at the end of the day.
00:52:44:04 - 00:53:07:11
Christian Soschner
So it's more selfish: people look more to themselves, their benefit, their profit; they make deals. They don't have to make sure that the other side gets its way; each side just has to take care of its own well-being. And the government's role, compared to the Soviet Union, is just to put a little bit of a framework in place so that people can transact and the economy thrives, and that's all.
00:53:07:11 - 00:53:23:04
Christian Soschner
So it's more self-centered than the Soviet Union. What was good in the Soviet Union, and what is good in the US? If you could combine the best of the two worlds into one system, what would you choose?
00:53:23:06 - 00:53:23:15
Kat Kozyrytska
Yeah, I
00:53:23:15 - 00:53:46:00
Kat Kozyrytska
mean, I think that's a super interesting question. And maybe just to comment on what you said: I think part of the reason that, in a very capitalist society, you have to be taking care of yourself is that there is no social system that will take care of you if you have a dip in income or, you know, when you retire and so on.
00:53:46:01 - 00:54:18:21
Kat Kozyrytska
So it's a two-way street, right? I'm sure a lot of people would be willing to spend less time building wealth if they knew there was that support network. So, in terms of combining the two approaches, maybe I'll lean on education here. One of the really different things for me as a student, when I first came here, was that in a lot of classes it was about analytical thinking, right?
00:54:18:23 - 00:54:47:13
Kat Kozyrytska
You take math, you take English, you think about what it is that you're doing. The Soviet Union was very much about memorization: there is a right answer, it's the right answer, and all you need to do is memorize what it is and then deliver it. Exams for us, for example: you memorized, word for word, the responses to the 30 questions.
00:54:47:13 - 00:55:02:24
Kat Kozyrytska
You come in, you pull out the ticket, and you know the number of the ticket, so you already know what the questions are, because you've memorized them. And then, when I came here, every week we were writing an essay on what the author thought, and what you thought about what the author thought. It made me nervous.
00:55:03:01 - 00:55:30:24
Kat Kozyrytska
Overwhelming, just in how much space there is to think about things. And I've been here a very long time, but for me, this thinking part, it's really only fairly recently, I would say, that I've really come to embrace it. Right? That one is authorized, and perhaps even rewarded, for having one's own point of view.
00:55:31:01 - 00:55:52:13
Kat Kozyrytska
And other people may disagree with that point of view, but that's okay. So I think there's some value to the diversity of thinking. And there are a lot of other countries built on that memorization framework; people coming from those countries to the US have a very similar experience to mine, where the diversity of thought is very exciting.
00:55:52:13 - 00:56:12:21
Kat Kozyrytska
So to me that's something really valuable, bringing this back to AI. Even one or two years ago, if you wanted to get an answer to something, you asked, you know, five, ten, thirty different experts, and you got different answers. They aligned on some things, they diverged on others.
00:56:12:23 - 00:56:48:03
Kat Kozyrytska
Now, if you want an answer, you're going to get one answer. I have yet to see a post on LinkedIn about somebody running the same query, let's say for science, ten times in ChatGPT, or across ChatGPT and the other tools, to get that same flavor of diversity of thought. So I think this convergence on a single point of view is in some ways quite terrifying, and a big shift for a country like the US.
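A minimal sketch of the repeated-query probe Kat describes here, in Python, assuming a placeholder `ask` function standing in for whatever LLM client you actually use; the helper, model names, and normalization are illustrative, not any specific vendor's API:

```python
from collections import Counter

def ask(model: str, prompt: str) -> str:
    """Placeholder: wire this to your own LLM client/SDK."""
    raise NotImplementedError

def diversity_probe(prompt: str, models: list[str], runs: int = 10) -> Counter:
    """Re-ask the same question many times, like polling many experts,
    and count how often each (normalized) answer comes back."""
    tally: Counter = Counter()
    for model in models:
        for _ in range(runs):
            tally[ask(model, prompt).strip().lower()] += 1
    return tally

# A flat tally suggests genuine diversity of thought; one dominant answer
# suggests the convergence on a single point of view discussed above.
```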
00:56:48:05 - 00:57:15:07
Kat Kozyrytska
So we'll see how we handle that. And to your earlier point about maintaining a source of truth: if the one answer you rely on is coming from your AI tool, how can we make sure that the information it spits out is reliable and true? There are some, let's say, global efforts to maintain a source of truth somewhere, so that we can reference that source of truth,
00:57:15:07 - 00:57:22:04
Kat Kozyrytska
so that history does not get rewritten by AI tools through generative means.
00:57:22:24 - 00:57:27:10
Christian Soschner
That's... there are two interesting points, I think, in this discussion about...
00:57:27:12 - 00:57:31:16
Kat Kozyrytska
A couple of questions. Just two? Yeah, I've been going for so long.
00:57:31:18 - 00:57:54:17
Christian Soschner
Not more than two, sorry; it's more of a time restriction, otherwise we'd discuss for eight hours. I'll pick out two, not to devalue the others, but two sparked my curiosity. One: I think what you mentioned about understanding social systems is important for understanding the risks of AI.
00:57:54:19 - 00:58:16:04
Christian Soschner
You are one of the rare people who experienced both systems, the East and the West. And I don't think humans have changed much in the last hundred years. Technology evolved, but on a psychological level we're still the same as humans, and humans gravitate together in groups. And when they are in groups, they create societies.
00:58:16:04 - 00:58:40:20
Christian Soschner
And we have so many examples of different societies, and now we amplify that with AI. This is why I think this discussion about what systems are out there, which specific niche they occupy, how they compare, what's the good side in the US, what's the good side in the Soviet Union, what's the bad side, is really important for understanding what we then amplify, at the end of the day, with AI.
00:58:40:22 - 00:58:41:23
Kat Kozyrytska
Yes. Super
00:58:41:23 - 00:59:13:20
Kat Kozyrytska
interesting point. And just to corroborate: a lot of the training data went in English-based. So I think it'll be interesting to see what different regions develop once they localize, right? There are some efforts now in the Nordics to do that. Once you localize and train with your specific data, and let's just say we focus on the language side, are you going to get different answers based on that?
00:59:13:20 - 00:59:37:21
Kat Kozyrytska
And is that going to be more reflective of the kind of social system that is in place? Right? I mean, I did not come up with this idea at all, but some of the folks who study the globe say that this community-versus-self focus is a gradient as you go from east to west. You can agree or disagree, but I think maybe AI is where we're going to be able to test some of that.
00:59:37:23 - 00:59:40:12
Kat Kozyrytska
Those hypotheses.
00:59:40:12 - 00:59:42:15
Kat Kozyrytska
It will be interesting.
00:59:42:17 - 01:00:12:09
Christian Soschner
Yeah. No, totally agree. And the other point that I found compelling, which I really hadn't thought about so far, is trusting AI too fast. In the old days, when I had a problem, I had to ask my way through a lot of people. Take me back to the 80s in school: when I couldn't solve homework, I had to call friends and identify the one who had solved it out of 30-plus possible hits.
01:00:12:11 - 01:00:31:00
Christian Soschner
So I got a lot of information, and I had to evaluate the information. It's not that, when I asked someone and they said this is the answer, I could assume it was the right answer. And our teachers in the 80s and 90s said: Christian, start thinking for yourself; always come up with your own opinion, and I'll be happy. And the AI risk,
01:00:31:00 - 01:00:42:12
Christian Soschner
if I understood you right, is that people just take an answer from an AI as already proven and don't think twice.
01:00:42:14 - 01:00:43:01
Kat Kozyrytska
Yeah, and I
01:00:43:01 - 01:01:13:11
Kat Kozyrytska
think, again, some of this is the fundamental neurobiology, that lizard brain within us. Trust in another is well studied; you can take specific behavioral steps to build trust with another person. Hopefully you're doing it for the right reasons, but we can talk about the malevolent players as well. And so I think, when we look at AI, we think this is an expert, because it's analyzed a lot of data.
01:01:13:13 - 01:01:48:05
Kat Kozyrytska
And that's one piece, right? This perceived expertise, perceived knowledge then, for us, through the heuristics that we use, translates into trust. The other side of it is confidence: if we see somebody confident, we start trusting them more because of that tone of delivery. And ChatGPT is always really confident in the answer that it delivers, as if it were the absolute truth. I did not yet write an article about this, but I just thought it was fascinating.
01:01:48:05 - 01:02:08:17
Kat Kozyrytska
You know, I like to stay on top of the tech trends; I think of myself as very data-driven. So I wanted to optimize the performance of my posts on LinkedIn, and I asked ChatGPT for some suggestions. With confidence, it delivered a whole list of suggestions. Meanwhile, some of them involved buttons that do not exist.
01:02:09:23 - 01:02:48:18
Kat Kozyrytska
I mean, it's not possible to execute on the suggestions, because the button is not even on the interface. So anyway, it's these heuristics by which we translate confidence into knowledge in our minds. And this is ancient knowledge; the Greeks figured this out: if you are on stage and you list some credentials, say, that you have analyzed a lot of data, and then you project an opinion with confidence, that just drives trust in the people who are listening to you, because of the fundamental neurobiology.
01:02:48:18 - 01:03:16:10
Kat Kozyrytska
So we have to protect ourselves, like in the earlier conversation, right? We can focus on fixing ChatGPT to express confidence levels. For example, it could say: this is my answer, and I'm about 60% certain. That's a technological solution. A much faster solution for us to implement is just to say: ChatGPT has analyzed a lot of data, but may not be an expert.
01:03:16:16 - 01:03:44:12
Kat Kozyrytska
We acknowledge the confidence with which this tool delivers information; however, this may just be a tone it's built to use, one that is not reflective of its own confidence in the answer. So we can take a proactive approach here and protect ourselves from these influences, once we become aware of the little things that, for us, translate into trust.
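A hedged sketch of the two mitigations just described: prompting the model to self-report a confidence level, and attaching a standing disclaimer so a confident tone is not mistaken for expertise. The `ask` helper is a placeholder, and a self-reported percentage is itself only a heuristic, not calibrated truth:

```python
DISCLAIMER = (
    "Note: this tool has analyzed a lot of data but may not be an expert. "
    "Its confident tone is a delivery style, not a measure of accuracy."
)

def ask(prompt: str) -> str:
    """Placeholder: wire this to your own LLM client/SDK."""
    raise NotImplementedError

def answer_with_confidence(question: str) -> str:
    """Ask for an answer plus an explicit 'about 60% certain'-style estimate."""
    prompt = (
        f"{question}\n\n"
        "After your answer, state how certain you are as a rough percentage "
        "and name the main source of uncertainty."
    )
    # The stated percentage is not guaranteed to be accurate; it only makes
    # the uncertainty visible instead of hiding it behind a firm tone.
    return f"{ask(prompt)}\n\n{DISCLAIMER}"
```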
01:03:45:12 - 01:04:06:18
Christian Soschner
That's an interesting point, a really interesting point. The obvious take is that ChatGPT and the LLMs, or Gemini or Grok, are just retrieval systems that work the same way as search engines: go into the library, pick the book I need to answer my question, and just reproduce the content. But it's not that. Actually, it's not.
01:04:06:20 - 01:04:07:12
Kat Kozyrytska
Yeah. I mean, it's
01:04:07:12 - 01:04:28:08
Kat Kozyrytska
You typically do not say: I want this specific book. You say: I want an answer. And then the system sort of has to figure out which books to pull from the shelf to answer that question. So you have some uncertainty already there, and it might be different time to time, or run to run within a given system, or between systems.
01:04:28:10 - 01:04:44:13
Kat Kozyrytska
And then the stuff that it pulls out and shows you, that's a whole other side of it; there's some probabilistic nature to that, so you're going to get variability there as well. And maybe, you know, like with... sorry, go ahead.
01:04:44:15 - 01:04:47:11
Christian Soschner
No, no, go ahead. Just keep going.
01:04:47:13 - 01:05:14:07
Kat Kozyrytska
I think maybe, just like with science, you know, we run the experiment three times; maybe that's something that we could do in the future. You take different ways of measuring a concentration; maybe that's something we can apply to this tool as well. We have to think about reliability as core, especially as we apply these tools in our professional domain.
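A small illustration of the two stages of variability described here, retrieval and then generation, using stand-in functions rather than any real pipeline; running it several times, like repeating a lab experiment, exposes both sources of run-to-run variation:

```python
import random

def retrieve(query: str, library: list[str], k: int = 3) -> list[str]:
    """Stage 1: choose which 'books' to pull off the shelf. Real systems rank
    by relevance scores; near-ties mean the chosen set can vary run to run.
    Random sampling here just makes that variability explicit."""
    return random.sample(library, min(k, len(library)))

def generate(query: str, context: list[str]) -> str:
    """Stage 2: probabilistic generation over the retrieved context.
    Placeholder: wire this to your own LLM client/SDK."""
    raise NotImplementedError

def answer(query: str, library: list[str]) -> str:
    return generate(query, retrieve(query, library))

# Repeating answer() three times, as with a lab measurement, shows how much
# of the output is stable and how much is retrieval or sampling noise.
```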
01:05:14:13 - 01:05:38:01
Kat Kozyrytska
Right. Within pharma, biotech, life sciences, we have such a strict code of conduct that is all about reliability and verifiability of information. So yes to implementation of AI, but taking that same approach, the same hesitancy that we have had for all the other tools we've ever brought in, right?
01:05:38:03 - 01:05:49:12
Kat Kozyrytska
And really assessing it from that reliability, compliance, privacy angle, and a whole suite of other things, before we move forward with the tech.
01:05:49:12 - 01:06:12:15
Kat Kozyrytska
Yet again, as a technologist, I am all about implementing technology, I just want to be very clear; I'm all about reaping the benefits of it. But also about really putting the frameworks in place that make sure you protect yourself, protect your company IP, protect your know-how, and make sure that whatever you deliver to the regulators is true and correct.
01:06:12:17 - 01:06:15:10
Kat Kozyrytska
These fundamental things we have to put in place.
01:06:16:01 - 01:06:37:18
Christian Soschner
And what you say contrasts with the marketing of OpenAI, for example, or Grok: they want to convince everybody that theirs is the best tool, that it will replace humans, that it's the best expert in the room. And I use it, for example, for analyzing stocks and companies, and analyzing companies needs a lot of data.
01:06:37:20 - 01:07:00:17
Christian Soschner
And it's really convenient to just upload the data and say: do deep research on this company. But the funny, interesting thing is when you ask for recommendations on publicly listed companies, should I buy this company or not: ChatGPT or Grok or Gemini came back and said, yeah, it's such a good buy, it's currently at a low price and has such a prosperous future ahead,
01:07:00:17 - 01:07:18:02
Christian Soschner
and you will make a lot of money there. It tried to convince me about a company that had just had a 400% uptick in market capitalization in the last six months. So I went back to the tool and said: but what about the price? Oh yeah, the data for the last three months is missing, so we didn't look at that.
01:07:18:02 - 01:07:28:05
Christian Soschner
And when I then went to ChatGPT and said, please look at this data, it completely changed its opinion. So it's not a really good decision-making tool at the end of the day, currently, at this stage.
01:07:28:07 - 01:07:28:18
Kat Kozyrytska
Yeah. And I
01:07:28:18 - 01:07:50:24
Kat Kozyrytska
think that's why, right now, we are very focused on this human-in-the-loop approach: the human, like yourself, is actually thinking about some of the other things that maybe the algorithm didn't take into account when delivering information or suggesting a particular path. I do think that in the long term, human in the loop is going to be very painful,
01:07:51:01 - 01:08:22:14
Kat Kozyrytska
and maybe not even practical to maintain as an implementation. Because, first of all, if you have to look into every single thing that your model pulls out and suggests, it's a lot of extra work. And that's not very interesting; you're doing QC. So I think in some ways it's more interesting to actually just do it yourself, because then you're on the creation side.
01:08:22:19 - 01:08:42:21
Kat Kozyrytska
You're less on the QC side. I think in pharma and biotech, you know, QC is a very particular skill set, and some people are exceptional at it, but most people are not. For me, this is reminiscent of my experience when I went from individual contributor to manager, and my work as a manager became a lot of this QC.
01:08:42:21 - 01:09:04:18
Kat Kozyrytska
Right. I'd look through the decks that other people built and think: I would have done this differently, but okay, let's go with your approach. And then all I'm doing is looking for errors, because I'm responsible for the output. I'm the human in the loop. I have to make sure that whatever comes out of our team is of the highest quality, that we're operating with the newest information that we have.
01:09:04:20 - 01:09:26:15
Kat Kozyrytska
But it's just a very different flavor of a job when you're going through every slide, every number, asking: is that stat correct? Versus if I had to come up with the number myself, it's a very different nature of work. So I think it will be difficult to retain a workforce whose entire job is verifying whether the insights provided by the algorithm are correct or not.
01:09:27:04 - 01:09:29:18
Kat Kozyrytska
Yeah. Personal opinion.
01:09:29:20 - 01:09:52:07
Christian Soschner
Human in the loop, that's important; I think we need that. The interesting thing with these LLMs is, when they are wrong and you tell them so, sometimes they try to convince me that I'm wrong; there's that pushback. And they don't feel responsible: they can come to a wrong conclusion, insist that the wrong conclusion is right, and the company goes bust.
01:09:52:12 - 01:10:12:04
Christian Soschner
And the interesting thing in your work is that you not only were at MIT and Stanford and have experience of the old Soviet Union, you also worked in the industry, so this is not an academic discussion we're having here. You worked with Thermo Fisher and Sartorius, just to name a few; correct me if I'm wrong or forgot something.
01:10:12:06 - 01:10:32:16
Christian Soschner
And people trust ChatGPT too much with the analysis. The problem is, now we have these AIs out there, people use them, and people love them. I mean, when I think of ChatGPT, it went to 200 million users in a couple of months. So there was a huge uptick, there is a need on the market, and someone needs to lead these companies.
01:10:32:21 - 01:10:43:19
Christian Soschner
What's your advice to leadership teams? What should they do now, in this situation? There are risks, these are not perfect tools, but people want to use them. How is that for leaders?
01:10:43:21 - 01:10:44:11
Kat Kozyrytska
Yeah, I
01:10:44:11 - 01:11:04:16
Kat Kozyrytska
mean, I'll talk about, let's say, the less esoteric things. I think we see a lot of news about how McKinsey has to let go of lots of consultants, because now you can do this, that, and the other with a model. Great, I love it. As a consultant, part of my work is on the commercialization side.
01:11:04:16 - 01:11:26:16
Kat Kozyrytska
So I sometimes run these experiments, right? What would you suggest for commercialization of a particular tool? And once again, it comes out with a suggestion that it's very confident about. And so I can see how, for leadership, it's often tempting to say: consultants are expensive; I'm going to go with this much cheaper way of doing this.
01:11:26:16 - 01:12:07:11
Kat Kozyrytska
Right, I'm just going to ask ChatGPT. The problem with that is that you could then pursue a path that steers you far away from an optimized way of doing things, or is a completely wrong direction, and you won't know until many months down the road. So now you've invested lots, but down the wrong path. And so my suggestion is still: talk to somebody who's commercialized things in the past. Or maybe a new way that we work is that the company comes up with its own strategy with the assistance of an agent, and then brings that to an expert, and the expert suggests:
01:12:07:11 - 01:12:34:11
Kat Kozyrytska
this might work, this might not work. Maybe that's a new way; I don't know what the path forward is. But I think going with just what the agent suggested to you is very risky. And then I think we have this other kind of idea that we're going to implement AI and now we can compensate for, you know, the 40-50% reduction in the workforce that we've had to go through because of the winter in biotech that we're experiencing.
01:12:34:13 - 01:13:02:07
Kat Kozyrytska
And I think it's not quite as easy as that, unfortunately, because of the expertise that's lost in layoffs; I think that's going to be very traumatic for the companies. This is going to touch on knowledge management agents. But in the past, you know, we've gone through some phases where we had to lay people off, and for two or three months you could just go with some basic thing.
01:13:02:07 - 01:13:31:02
Kat Kozyrytska
We had a piece of equipment that nobody remaining at the company knew how to fix, but you can probably get by for three months. Now we're in a much longer stretch of this, so we're having to find ways to operate without that expertise, and I think that's very, very tough. The experience that I'm leaning on: I was leading a team, and there was somebody on my team who had been at the company for 30 years.
01:13:31:04 - 01:13:49:18
Kat Kozyrytska
He was retiring. So my job was, in these two months before his retirement, to download all of his knowledge, so that when he left, we wouldn't have to bother him with how to run this machine and so on, how to fix this protocol. Right? But it's such an immense task to do that in a short timeframe.
01:13:49:18 - 01:14:17:23
Kat Kozyrytska
So I think there's a massive opportunity for AI-based companies to come in and do this download of knowledge from the workers. I mean, my dream is that we have a little camera as people execute on the SOP, capturing some visual basis, some tacit knowledge, that we can then add into the written documents, to help us run processes with a leaner workforce.
01:14:18:05 - 01:14:35:15
Kat Kozyrytska
So I think it's not going to be as simple as: we can now compensate for all that's lost because of the economic situation we're in. We can probably gain some efficiency. But again, going back to the simplification: we have to know what the best way of doing something is, and then you can automate, amplify, and so on.
01:14:35:17 - 01:14:45:02
Kat Kozyrytska
If you don't know, then you're just doing more efficiently something that might not be on the right track.
01:14:45:02 - 01:14:49:02
Kat Kozyrytska
So, yeah, investigating what the right track is, I think, is key.
01:14:49:04 - 01:15:09:01
Christian Soschner
Yeah, I totally agree. I think we're still early with agents, and there's this attempt to solve every problem with LLMs currently, which is usually the first peak of the Dunning-Kruger cycle, before we crash; maybe I'm exaggerating. But I tried to use these tools in a sales context.
01:15:09:01 - 01:15:32:16
Christian Soschner
And what was really interesting for me was that over the last year, ChatGPT seems to have been tuned towards pleasing people. What that means in a sales context is that it tries to agree to everything a customer wants, and very often the customer doesn't want to pay. So ChatGPT repeatedly comes back with: yeah, you get this, you get this for free, you get this on top of that.
01:15:32:16 - 01:15:53:06
Christian Soschner
Yeah, yeah, we agree on that. And the thing is, when you just work with chatbots in the sales context and you don't understand what the risks are, and you remove all the humans who intervene and say, no, no, no, you can't give our cars away for free to a customer just because the customer wants it, you really risk ruining your company.
01:15:53:06 - 01:15:56:01
Christian Soschner
At the end of the day.
01:15:56:03 - 01:15:56:12
Kat Kozyrytska
Yeah, and
01:15:56:12 - 01:16:13:03
Kat Kozyrytska
I mean, I think you can put some constraints in place, right? To say: this is the space within which you should be operating. And I think in the sales context, it could be: these are the things we will never give away. We kind of have this for the sales teams now, right?
01:16:13:03 - 01:16:38:08
Kat Kozyrytska
We tell them: you can operate within 5 or 10% of the price, but you cannot drop below a 10% discount, something like that. So I think we can give these rules to the agents. But maybe I want to abstract even further away from that, to organizational values. I think implementation of these constraints will be key in the broader context.
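A minimal sketch of such hard constraints for a sales agent, in Python; the fields, the 10% cap, and the never-give-away list are invented for illustration. The point is that an agent-drafted offer is checked against non-negotiable rules before anything reaches a customer:

```python
from dataclasses import dataclass

MAX_DISCOUNT = 0.10                      # never drop below a 10% discount cap
NEVER_GIVE_AWAY = {"platform license", "support contract"}  # illustrative

@dataclass
class Offer:
    list_price: float
    offered_price: float
    freebies: list[str]

def guardrail_violations(offer: Offer) -> list[str]:
    """Return every hard rule an agent-drafted offer breaks; empty means OK."""
    problems = []
    discount = 1 - offer.offered_price / offer.list_price
    if discount > MAX_DISCOUNT:
        problems.append(f"discount {discount:.0%} exceeds the {MAX_DISCOUNT:.0%} cap")
    for item in offer.freebies:
        if item.lower() in NEVER_GIVE_AWAY:
            problems.append(f"'{item}' may never be given away")
    return problems
```

With a check like this in front of the send step, a people-pleasing model can agree to whatever it likes in conversation; the offer still fails validation and gets escalated to a human.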
01:16:38:10 - 01:17:05:12
Kat Kozyrytska
So I'll start with an example. Let's say we're 20 years from now and we have another biotech winter. We're now fully agentic, and I am a CEO. I go to the agent and I say: give me a path through this winter; I want to survive. And it says: yeah, fire 90% of your workforce. Well, that's definitely going to give me more of a runway, because I've just laid off all these people.
01:17:05:18 - 01:17:36:16
Kat Kozyrytska
But maybe that's not really consistent with my values of retaining talent, with my values as an organization of respecting humans. There are other ways that we can approach this: furloughs, reductions, all these different things other than just laying everybody off. And I think that's a kind of code of ethics internal to our organization, my organization's values, that we can bake into the algorithm as a constraint.
01:17:36:18 - 01:17:59:12
Kat Kozyrytska
Another one of these to think about: sustainability. Maybe, 20 years from now, I ask my agent: what's the best way to synthesize this? And maybe it's going to give me the cheapest way, but one that uses all kinds of toxic chemicals. Now, if a value of my organization is sustainability and building green processes, I have to tell that to the agent, right?
01:17:59:12 - 01:18:20:12
Kat Kozyrytska
I want it to know what my values are, so that it gives me a balanced, optimized suggestion: one that, yes, gives me good ROI, so that I'm commercially empowered and saving people's lives, that's key; but where I also say the environment is important to me, I want to always be, say, 30% lower on energy, right?
01:18:20:15 - 01:18:43:08
Kat Kozyrytska
Something like this. I know we sometimes think of values as fluffy, but I think this time is an opportunity for us to really lean in and understand who we are as an organization: what do we truly believe about ethics, morality, humans, the planet? And then feed all of that into the AI that we're implementing.
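One way to read this as code: declared values become constraints on the agent's optimization, so the best-ROI option is chosen only from the set that respects them. A hypothetical sketch; the route fields, the energy budget, and the toxicity flag are invented placeholders, not any real process model:

```python
from dataclasses import dataclass

@dataclass
class SynthesisRoute:
    name: str
    roi: float              # expected return on investment
    energy_kwh: float       # energy used per batch
    uses_toxic_chems: bool

# e.g., the organization's stated value: run 30% below a 1000 kWh baseline
ENERGY_BUDGET_KWH = 0.7 * 1000

def choose_route(routes: list[SynthesisRoute]) -> SynthesisRoute:
    """Best ROI among routes that satisfy the declared values, not overall."""
    acceptable = [
        r for r in routes
        if not r.uses_toxic_chems and r.energy_kwh <= ENERGY_BUDGET_KWH
    ]
    if not acceptable:
        raise ValueError("no route satisfies the declared values")
    return max(acceptable, key=lambda r: r.roi)
```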
01:18:43:22 - 01:19:18:15
Christian Soschner
It's a very good point. When you talk about agents, you mean artificial intelligence agents? The interesting thing with agents: when you think back to the 80s, when I started programming in BASIC, the great thing with BASIC was that you structured your program, and every time I typed "run" and pressed enter and the program executed, I could be 100% certain that the computer executed the program in exactly the order I wanted it to.
01:19:18:17 - 01:19:41:15
Christian Soschner
And to challenge your assumption a little bit with these agents: I had also hoped that when I understand a process well enough, and let's stay with the same example, I mean, at the end of the day a company needs to sell; when you understand your sales process, when you understand the customer personas, their problems, what price they are willing to pay and not pay, you can codify that in a textbook.
01:19:41:17 - 01:20:09:06
Christian Soschner
You can write the process description, you can outline every single email. And what I was hoping for with the LLMs is that you can then take these folders, these binders, and just throw them into an LLM, and you have your perfect agent that executes exactly what you want. But it doesn't. This is so frustrating: nine out of ten times, yes, it might work, but then you have to spend time on the cases where it doesn't work and it comes back with something completely ridiculous.
01:20:09:08 - 01:20:31:20
Christian Soschner
And this can derail your company at the end of the day, because when it just agrees to everything: yes, take all my equity, I'm fine with it... Do you think agents are really capable of executing instructions according to the plan they got?
01:20:31:22 - 01:20:32:02
Kat Kozyrytska
I
01:20:32:02 - 01:20:55:05
Kat Kozyrytska
mean, I will challenge you on this somewhat, because I think the value of agents, and let's just focus on the commercial space, is that they're going to be able to tailor that framework that you describe in the book; they're going to tailor it to the specific customer or potential customer that they're working with, and so on.
01:20:55:07 - 01:21:18:18
Kat Kozyrytska
And it's going to be nice for putting in place this tailored, customized approach in ways that you just could not possibly pay a commercial team to do, right? So I think that's sort of the beauty of it. But there's the balance: how do you make sure that the tailoring doesn't then disrupt the process that, you know, works well?
01:21:18:20 - 01:21:49:23
Kat Kozyrytska
But it's at such a nascent state at this point that I don't think it's possible to answer your question as to whether the agents will be breaking things or not. I mean, maybe, right? Sometimes, sticking with sales, you hire a salesperson who has a very different approach from the way your organization typically runs, and there's huge value to that person, because maybe they will find another way. Maybe, thinking about this from the math perspective, the path that you have in the organization is a local minimum.
01:21:50:00 - 01:22:32:11
Kat Kozyrytska
Maybe there is a global minimum that your new, different way of doing things can get you to; minimum meaning that you put in even less effort than before to get the same ROI or better. So with the agents, we can maybe map it out: again from the math perspective, you have variability, you have the leeway to vary that textbook process by 10%, 20%, whatever it is; you can explore the space around it, or maybe go down a completely different path. And there could be huge value to that in and of itself, again, through testing,
01:22:32:16 - 01:22:42:20
Kat Kozyrytska
helping you, you know, explore the design space, so to say, to borrow bioprocessing language and apply it to the commercial framework.
01:22:43:12 - 01:23:02:19
Christian Soschner
I agree with the point that you made about testing. I'd say, okay, it's a great testing ground: you can test different reactions to different emails, to different conversations. You can fine-tune your tone. You get ten suggestions: how does it sound when I say it harshly? How does it sound when I say it politely?
01:23:02:21 - 01:23:19:13
Christian Soschner
And then you have the full scale, and this works fine. But I'm not really convinced; what I'm trying to say is that in the sales context people say AI can maybe accelerate the whole process. Right? I mean, the hard thing in selling is that you need to talk to people. You need to listen to people closely.
01:23:19:17 - 01:23:51:00
Christian Soschner
You can't just say: okay, talk for 30 minutes, and then send them something generic and distant. You really need to relate to them, understand the problem, and then provide the solution, especially in the biotech B2B context and in pharma B2B. And the tools really don't work well when I just try to feed an old email into the LLM and say: okay, these are the minutes from the last call, now please create an offer tailored to this person; the agent is ready to just, sometimes, come up with completely ridiculous things in the mail.
01:23:51:02 - 01:24:09:03
Christian Soschner
And if I trusted the agents and said, now just send it to the customer without double-checking, yeah, you might have given a year's worth of services away for free, and that's that.
01:24:09:05 - 01:24:09:13
Kat Kozyrytska
Couple
01:24:09:13 - 01:24:34:02
Kat Kozyrytska
points to that. I think, again, this is a very young tool, so, as the technocrat at heart, I do firmly believe it will get better in terms of that consistency. But on the other side, maybe counter to what you were saying: the agents can exhibit empathy that is better than what you get from another human.
01:24:34:04 - 01:25:11:04
Kat Kozyrytska
You have lots of the news in the normal human domain, but thinking about regulated industries and healthcare: Google presented on this, on their tool, I think it's called AMIE. They compared empathy ratings by patients of their AI tool versus their physician, and the AI tool scores better. So I think there is a massive opportunity in terms of tuning into the customer and being able to give them a good experience.
01:25:11:09 - 01:25:34:14
Kat Kozyrytska
Right, because the AI agent can be a great listener. So I think it can be massively helpful for the commercial function. But again, there's the risk of it going badly with one of the customers. You were talking about giving away a year's worth of services; I would even argue that's not nearly as bad as having a key account
01:25:34:16 - 01:25:59:24
Kat Kozyrytska
where your agent just said something completely unrelated, probably using your name and your email. Now you've lost the key account, because they think that you're insane or didn't read the previous messages. I think at that relationship level it could be much more detrimental. So for now, I think we're still, you know, in this proofreading mode and so on.
01:25:59:24 - 01:26:13:14
Kat Kozyrytska
But I have every hope that the tech will come to match a higher standard, and we'll get to a less-supervision, more-reliability situation.
01:26:14:16 - 01:26:32:20
Christian Soschner
Now, that's a really interesting discussion, and a challenge for companies at the end of the day. I mean, you feed data into these systems, confidential data. I'm not so sure about the point you mentioned about empathy, though. I really like to converse with these tools; I use them a lot to understand them better, what they can do and cannot do.
01:26:32:20 - 01:26:57:12
Christian Soschner
And I really love these discussions that I have with ChatGPT or Grok, where I say: please check this post for me. With Grok, for example, I started with: bring me more information, is this true, and where is the source coming from? And then it comes back with a solution, and you look at the solution and say: no, this is not right, this is definitely not right.
01:26:57:14 - 01:27:20:17
Christian Soschner
So I tell the system that it is wrong. And the interesting thing is, it comes back with "empathy": I understand your frustration. And I say: I'm not frustrated, so stop insinuating that I'm frustrated. And then the second part is very interesting: it says, but the information is right.
01:27:20:19 - 01:27:41:13
Christian Soschner
And I'd say there's very, very little empathy there. I mean, it's just wrong information combined with telling the user that they are frustrated, and many people just take it at face value and say: okay, now I'm frustrated. This is a tool that you can use to manipulate people at the end of the day: they read the feed and then they act on it.
01:27:41:15 - 01:27:42:01
Kat Kozyrytska
There are
01:27:42:01 - 01:28:05:00
Kat Kozyrytska
studies on how convincing the agents can be in advertising; there are academic groups studying this at scale. And it's the same two-way street, flip side of the coin, right? At the end of the day, it's just technology: you can use it for good,
01:28:05:00 - 01:28:39:03
Kat Kozyrytska
and you can use it for not so good. So I think this is such a key time to focus on the values, on the correct approaches that work, and to amplify and scale those. I know this is probably the third time I'm saying this on this call, but if you think about it, this tech maybe shifts your framework and hopefully helps you pivot towards this understanding of who you are and what you actually want to achieve.
01:28:39:05 - 01:28:56:08
Kat Kozyrytska
I think it's a different kind of technology from some of the other ones, because usually, when you bring in something new, like automation, and we've been trying to do automation in manufacturing for the longest time, there's resistance from your workforce, and you have to overcome that hurdle. Here, you're in a place where people want to use it.
01:28:56:12 - 01:29:24:02
Kat Kozyrytska
It's very different tech from that perspective; the adoption is already rolling. So you have to take a somewhat different approach and address a different set of risks. Adoption is no longer your main concern, right? You need to be driving the framework, the governance, making sure that you protect yourself, protect your company, protect your IP and know-how as the tech gets adopted.
01:29:24:11 - 01:29:37:14
Christian Soschner
Yeah. Let's take the conversation back to the corporate level. How should we deal with AI? What's the framework that you recommend for CEOs in the biotech and pharma industries?
01:29:37:16 - 01:29:38:02
Kat Kozyrytska
Yeah, I think
01:29:38:02 - 01:29:59:09
Kat Kozyrytska
there are many pieces to this, right? But to me, the very first and the most important one is privacy: making sure that your data does not leak anywhere. And, you know, most companies have been working with data security companies and consultants, so they're in a good place from that perspective. But AI brings in new risks.
01:29:59:13 - 01:30:26:12
Kat Kozyrytska
So you have to think about all of the ways that data can flow out of your organization, and then work with your technology developers, your AI tech providers, to make sure that they put guardrails in place to accomplish your goals of privacy, confidentiality, compliance, all the other things that you want to put in place. I'll just give one example, because I think it's an easy one,
01:30:26:17 - 01:30:51:18
Kat Kozyrytska
an approachable one. Let's say there's a database that you are implementing, and this database has AI-driven suggestions for search queries. So maybe you put in your protein name, and then it suggests some additional search terms to give you some insights. Great. We want this. We want the intelligence within organizations to be built.
01:30:51:20 - 01:31:19:03
Kat Kozyrytska
Awesome. The problem is that the AI suggestions for search terms are trained on user data. So now you have information flowing out of your organization to train this algorithm to suggest search terms. If you're searching for something so hot that lots of companies are searching for it, that's probably not as big a deal,
01:31:19:03 - 01:31:24:05
Kat Kozyrytska
but if you are working on some obscure target, this is your super highly confidential IP.
01:31:24:05 - 01:31:29:00
Kat Kozyrytska
You want to get some intelligence about the research, for example, that's been carried out on this protein.
01:31:29:00 - 01:31:46:17
Kat Kozyrytska
Yeah. So of course you do. But if you query that, you have to keep in mind that you've now essentially disclosed to the system, and to everybody who's working with the system, that you're working on this. If you have one other competitor who's working on this target, they can essentially see what you have searched for.
01:31:46:19 - 01:32:10:04
Kat Kozyrytska
So of course you can then guide the technology provider to help you protect your search queries. But it's not a default, at least in the world that we live in today. As a customer, as a therapy developer, you have to ask for it. You have to ask the questions: how is your provider handling the privacy and confidentiality side?
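A sketch of what asking for that protection might translate to on the customer side: a client-side check that keeps queries containing confidential target names out of any channel that trains suggestions. The blocklist entries and routing functions are hypothetical, not a feature of any particular database product:

```python
CONFIDENTIAL_TERMS = {"obscure-target-x", "project-helios"}  # your internal IP

def safe_to_share(query: str) -> bool:
    """True only if the query mentions no confidential terms."""
    q = query.lower()
    return not any(term in q for term in CONFIDENTIAL_TERMS)

def run_query(query: str, vendor_search, private_search):
    """Route confidential queries away from the suggestion-training path.
    vendor_search: provider endpoint where queries may train suggestions.
    private_search: endpoint contractually excluded from training/telemetry."""
    if safe_to_share(query):
        return vendor_search(query)
    return private_search(query)
```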
01:32:10:06 - 01:32:31:14
Kat Kozyrytska
And, you know, I work with both sides, with the AI tech companies and with therapy developers, and also with regulators. So I think it's an ecosystem question today: how can we get these tools to be usable for us in a regulated industry with strict IP requirements? I think it takes action from all the parties.
01:32:31:19 - 01:32:56:14
Kat Kozyrytska
We need the customers, the therapy developers, to mandate: I need this, or else I won't be able to work with you. We need the tech developers to be willing and open to make amendments and implement stricter systems. And then, of course, we need the regulators to step in and say: these are the standards that you have to have in place. Something that I'd love for us to have is a kind of certification,
01:32:56:16 - 01:33:18:23
Kat Kozyrytska
some test that we run on every AI tool to say: okay, it meets the bar. Maybe it's a ten-point scoring system or something like this, right? It needs seven, eight points out of ten. Then, as a customer, as a therapy developer, you can assess the risk level of using a particular tool versus the benefit that you're going to get out of this tool.
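A toy version of that ten-point idea, with invented rubric items, since no such standard exists yet; each criterion met earns a point, and a threshold stands in for the "seven, eight points out of ten" bar:

```python
RUBRIC = (
    "data stays in customer tenancy",
    "no training on customer queries",
    "audit log of data flows",
    "access controls and SSO",
    "encryption at rest and in transit",
    "documented model-update process",
    "published reliability benchmarks",
    "incident-response commitments",
    "GxP-style documentation available",
    "independent security audit",
)

def certification_score(criteria_met: set[str]) -> int:
    """One point per rubric item the tool satisfies, out of ten."""
    return sum(1 for item in RUBRIC if item in criteria_met)

def acceptable(score: int, threshold: int = 7) -> bool:
    """A risk bar like 'needs seven or eight points out of ten'."""
    return score >= threshold
```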
01:33:18:24 - 01:33:42:07
Kat Kozyrytska
Because it's always an equation, right? You have to balance these things. But if there are any technology people out there thinking along the same lines, of setting standards reflective of the code of conduct that we have in this regulated industry: I would love to connect. Please, please reach out.
01:33:42:09 - 01:34:01:13
Kat Kozyrytska
And then I am just really grateful to the regulators and the industry organizations who've been having this conversation with me. I think it will be tremendous for our space to have some level of standard. We can think of it as, for example, you know, ratings for parts, let's say for Boeing, right?
01:34:01:13 - 01:34:15:06
Kat Kozyrytska
There has to be a certain level of quality, and we have some of those systems implemented for hardware; we just don't yet have them for AI. But I'm confident that we can get to a place where we have some standardized system of assessing tools.
01:34:15:24 - 01:34:41:03
Christian Soschner
Yeah, there's a lot of risk and uncertainty involved at this stage, as I understand from your explanation. But what does that mean for a biotech company actually researching a new therapy? I mean, there are no standards on the market, and using these tools is tempting. When you do a clinical trial, just throwing the patient data into an LLM is tempting.
01:34:41:05 - 01:34:42:00
Kat Kozyrytska
To get
01:34:42:00 - 01:35:06:19
Kat Kozyrytska
this done, throw the patient data into ChatGPT. Yeah. I mean, I think, again, as scientists we're so driven to get to an answer. We want the insight. And I think about the potential that AI opens up for these insights, especially if we're able to look across massive data sets that are comprehensive.
01:35:06:23 - 01:35:24:15
Kat Kozyrytska
Again, going back to cell therapy: if we think about that vein-to-vein workflow, if we are able to analyze a large enough data set, we can finally get the answers for what matters to be able to deliver an efficacious therapy that you can in fact manufacture with reliability. I think this would be tremendous. Hey, I'm all for it.
01:35:24:15 - 01:35:46:06
Kat Kozyrytska
When I'm a patient, I want this; I want to have this as an option available to me. But I think, as a therapy developer, you have to ask questions of your technology provider. And I'm happy to say that some of the AI tech providers I work with are so focused on quality, reliability, privacy.
01:35:46:06 - 01:36:05:24
Kat Kozyrytska
I love this. I want them to also be more focused on the commercial side, so that they can propagate this great technology that they have. I'm sure I'm not going to say anything groundbreaking, but "if you build it, they will come" is not true. You have to bring them to see the great thing that you've built. Commercial is not an afterthought.
01:36:05:24 - 01:36:29:04
Kat Kozyrytska
It's a core part of your approach, so that you can then make the money to feed better development. So I want the tech companies to think about this: if you believe in this highly reliable tool that you're building, please, please make some commercial efforts, so that your potential customers can see the great value.
01:36:29:06 - 01:36:52:07
Kat Kozyrytska
But I think where we are today, the potential customers don't have the list of questions to ask of the tech provider. So we have to get to a point where there's some understanding of the risks, and therapy developers are in the routine of asking these, you know, 20, 30, however many questions. Maybe the burden will be alleviated once we have some regulatory framework,
01:36:52:07 - 01:37:18:17
Kat Kozyrytska
once we have a certification program, where we can say: out of these ten points, it scores nine and a half, so you probably don't have to worry about the 30 questions; maybe you ask two or three, and the rest have already been answered by default through the certification. But as we're not there yet, and as we're so innovative and we want to be on the edge of technology, I think you have to take control and protect yourself and your organization in a very thoughtful way.
01:37:20:17 - 01:37:38:11
Christian Soschner
Yeah, yeah, that's true. And people are seduced by OpenAI, by Alphabet, the big, big companies out there, who also want to sell their solutions and promise everything. And it's just not that easy in pharma and biotech.
01:37:38:13 - 01:37:41:12
Kat Kozyrytska
Yes, yes. I think we're very different from
01:37:41:12 - 01:38:04:07
Kat Kozyrytska
the day-to-day consumer, because the impact of wrong answers, of misinformation, is absolutely devastating. The impact of leakage of data is devastating. And like we talked about before, right? Sometimes you won't know that your data has left your organization. You might not know for years. That's really scary.
01:38:04:07 - 01:38:29:09
Kat Kozyrytska
There's not an immediate readout. So you have to ask the questions upfront. And maybe, if I can just say here: I'm such a believer in collaboration. I think by working together we can get to answers so much faster and at a lower cost. A lot of the hesitancy that I hear from therapy developers, obviously, is that they don't want to share their data.
01:38:29:11 - 01:38:53:07
Kat Kozyrytska
But I want us to think about this in the bigger context. If you're collaborating, the tech is here to do this more privately, confidentially, and securely, so that you're actually retaining your data and not sharing it with the ten, twenty companies that you're collaborating with. Meanwhile, if you implement AI without asking questions about privacy, you might be sharing your data with the whole wide world.
01:38:53:12 - 01:39:21:14
Kat Kozyrytska
So the discrepancy of those two things existing in the same time and space, I find to be a very interesting place. But it is my firm hope and belief that as we develop a better understanding of data privacy, security, and confidentiality in this new world of AI internally, we will then become more comfortable collaborating with others in that private and confidential way.
01:39:22:09 - 01:40:06:08
Christian Soschner
I think there are key points in there. One key point is that LLMs are just a new technology, and artificial intelligence agents are a new technology in their infancy; these are not mature systems yet. The funny part is, you talked about therapy companies being sometimes very hesitant to collaborate with other companies, but I've experienced that they are a little bit less hesitant to upload contracts into LLMs to analyze them, especially contracts from Asian countries, for example, that aren't written in English; people just upload them and have ChatGPT read and edit them. And anything you put into ChatGPT is basically public domain.
01:40:06:08 - 01:40:24:00
Christian Soschner
So the data has left the company, and I'm not really sure that people are aware of this risk. You have a contract with some details; I mean, you can read a lot out of a contract and know what stage the company is at, what they are working on. Sometimes there is confidential data in there.
01:40:24:00 - 01:40:48:09
Christian Soschner
So you have the confidential data and also, you know, names. It's really complicated. What we need at the end of the day is not just the algorithm from OpenAI; that's just a base layer. Like in the old times, in the 90s with the internet, you need a layer in between that makes sure this base layer fulfills all the regulatory needs that we have in pharma.
01:40:48:11 - 01:41:10:03
Christian Soschner
And this is a huge space for new companies. And unfortunately, as you mentioned before, they are not experts in selling. So they produce the solutions and rarely invest, together with their investors, in commercialization, in getting out to the customers, in understanding the customers. How can you fix that?
01:41:10:05 - 01:41:10:12
Kat Kozyrytska
Well, I
01:41:10:12 - 01:41:34:00
Kat Kozyrytska
think, as we talked about before, for new founders it's this recognition that revenue is key. I know you're really excited about the tech that you're developing; you want to get more data and build the better model. Maybe you're in stealth mode and you don't want to reveal all these things. But you sort of have to sell what you have today to feed the development of what you want to build.
01:41:34:02 - 01:41:55:02
Kat Kozyrytska
I think there's a lot of excitement on the investor side now about the usage of AI; it's like if you put the word AI in your pitch, then everything's more seamless. But we have to imagine that this is not an eternal fountain. Inevitably, we'll see some slowness of adoption, underwhelming outcomes and so on,
01:41:55:04 - 01:41:59:24
Kat Kozyrytska
and we'll get to AI being less of a trigger for cash. This is...
01:42:00:09 - 01:42:00:22
Christian Soschner
How.
01:42:01:01 - 01:42:12:12
Kat Kozyrytska
This is maybe a while out. And I think the founders who think about how they will feed themselves more sustainably, how they will feed their companies,
01:42:12:12 - 01:42:30:00
Kat Kozyrytska
they will invariably do better. This we have seen time and again, right? This is not anything new; it's just a new kind of technology. So, focusing on that commercialization: I'm sure everybody's tempted to just go to ChatGPT and ask it, what's the best strategy to commercialize?
01:42:30:02 - 01:42:52:03
Kat Kozyrytska
I would argue the more niche your application is, the worse the answers you're going to get. So it would be wise to talk to somebody who has commercialized tech before. Obviously, you know, I'm in the space; I love working with companies on this. And I think there are so many ways that we have seen that things do not work.
01:42:52:09 - 01:43:08:18
Kat Kozyrytska
So moving around those obstacles and making sure that you get to successful commercialization is key. But, you know, focus on bringing the money in. It has to be a sustainable operation. That's my suggestion.
01:43:09:12 - 01:43:29:14
Christian Soschner
Yeah, that's true. Are there any best practices in drug development that you see in your work, where companies successfully use artificial intelligence to improve certain areas of the workflow, approaches that already work and fulfill the regulatory and compliance requirements we have?
01:43:29:16 - 01:43:30:15
Kat Kozyrytska
So
01:43:30:15 - 01:43:55:06
Kat Kozyrytska
I'll maybe separate those two things out. I think we have some really exciting tech, an incredible amount, but we have yet to see a truly novel candidate in the clinic, at least as of today. When that happens, it will be huge. On the compliance side, that's more in your GMP space.
01:43:55:08 - 01:44:17:19
Kat Kozyrytska
So, away from drug discovery, there's emerging and evolving guidance. From my perspective, the EMA is leading that effort, and I love the way they're approaching it: it's a separate guidance, so it can be updated more rapidly. They understand this will be a rapidly moving space, so they're continuously working on the guidance.
01:44:17:21 - 01:44:42:24
Kat Kozyrytska
I see an opportunity for the Middle East to also lead some of those efforts and to develop a framework collaboratively with the EMA and other regulators. On the regulatory side, harmonization would be tremendous, even just in terms of cost reduction for therapy developers. If we could have some alignment across the different regulatory agencies, that would be great.
01:44:43:01 - 01:45:06:02
Kat Kozyrytska
So think of compliance as key in the regulated space; that's the priority there, while privacy obviously is a priority as well. On the drug development side, there are some new and exciting things, but again, before bringing tech inside, I would think about privacy and about the reliability of those insights.
01:45:06:02 - 01:45:17:06
Kat Kozyrytska
These are questions we don't see people asking very often, but they really should. And then, once again, build your values into the algorithms that you're bringing in.
01:45:17:24 - 01:45:50:16
Christian Soschner
Let's look at the entire drug development process, from drug discovery up to bringing a product to the market, and let's look at the benefits and risks that AI has in these areas and what that means for collaboration. When I look at drug discovery, at the early stages, there are currently companies like Insilico Medicine, for example, using AI tools to find new targets and also new solutions, new molecules modeled for those targets.
01:45:50:18 - 01:46:15:05
Christian Soschner
I always wondered how that works with patents. Typically, a company feeds its data into LLMs. What I thought at the beginning was that these are unique models for one company, but then, I don't know, the providers want to use them for more companies. So you feed something into their database that's really new and innovative,
01:46:15:07 - 01:46:23:06
Christian Soschner
but on the other hand, you have disclosed the innovation already. What's the risk there? What do I miss in this area?
01:46:23:08 - 01:46:25:00
Kat Kozyrytska
I think on this patenting
01:46:25:00 - 01:46:51:18
Kat Kozyrytska
side: Goodman, a really spectacular law firm, had a really spectacular AI day at BIO earlier this year, and they presented a really thoughtful approach to keeping the human in the loop. Because, at least as of June, it was just South Africa where you didn't have to assign a patent to a human; everywhere else,
01:46:51:20 - 01:47:26:12
Kat Kozyrytska
the human has to be on the patent. So you can strategically place people along that workflow to ensure that whatever you discover is patentable. I think that's such an interesting area, from the regulatory perspective and from a legal perspective. For drug target discovery, we have seen such immense impact from these collaborative approaches where you take a massive dataset.
01:47:26:14 - 01:47:56:17
Kat Kozyrytska
The project that I have the great fortune of interfacing with is the UK Biobank Pharma Proteomics Project, where at the time it was the ten largest pharma companies pulling together; I think now it's 14 participants in round two. It's about pooling cash together so you can fund these massive studies. And already in that first round of proteomic analysis of the UK Biobank samples, they discovered, I think, over 10,500 new potential drug targets.
01:47:56:20 - 01:48:26:17
Kat Kozyrytska
So maybe now we fold some AI in there, which I think was happening for round two, and when you feed so much more data into the algorithm, you are going to get better answers. To me, drug discovery is such a big topic: so expensive, so difficult to do, but so important that we don't want to continue with the gold rush after the one target that we see works in the clinic,
01:48:26:17 - 01:48:59:00
Kat Kozyrytska
where everybody then floods that space. We want to get away from that, because there are all these patients who are maybe not served because we're all going after this one thing. Even if you have a great model, it's still going to be better if you pool the data together from across companies. So to me, the fact that these large players have been able to come up with a legal framework to share those insights, in this competitive collaboration space, is a wonderful alignment of the goodness of humanity with individual company values and goals.
01:48:59:00 - 01:49:28:16
Kat Kozyrytska
I mean, it's so beautiful. And there are many collaborations that already exist; I've mapped out maybe 18 between various combinations of companies. So we know how to do this: we have legal teams that have figured out how to share data and keep the IP. The upgrade we're going to get with AI and some of the tech developments in decentralized data analysis is that now we can keep the data private.
01:49:28:16 - 01:49:49:02
Kat Kozyrytska
We don't actually have to put the data we hold across companies into one spreadsheet, so we can be more confidential in the way we collaborate. I think it's just such an open and wonderful space from here on out, and I think AI will play a massive role in these collaborative
01:49:49:02 - 01:49:50:04
Kat Kozyrytska
approaches.
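A minimal sketch of that decentralized pattern, with made-up numbers: each party computes a local summary on-site, and only aggregates leave the building, never row-level data. Real deployments would layer on secure aggregation, differential privacy, and the legal framework Kat describes.

```python
def local_summary(measurements: list[float]) -> tuple[float, int]:
    """Each company computes (sum, count) on-site; raw values never leave."""
    return sum(measurements), len(measurements)

def pooled_mean(summaries: list[tuple[float, int]]) -> float:
    """A coordinator combines per-site summaries into a cross-company estimate."""
    total = sum(s for s, _ in summaries)
    count = sum(n for _, n in summaries)
    return total / count

# Hypothetical assay readouts held privately by two companies.
site_a = [0.81, 0.79, 0.85]
site_b = [0.72, 0.77]
print(pooled_mean([local_summary(site_a), local_summary(site_b)]))  # 0.788
```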
01:49:50:06 - 01:50:12:23
Christian Soschner
So it basically helps people collaborate with each other and breaks up the silos, and there are benefits when you combine things. With smart contracts, for example, you could also make sure that the investments companies made when they disclosed data are returned to them when something hits the market. So there's a lot of potential, and it's moving in the right direction.
01:50:13:00 - 01:50:17:10
Christian Soschner
If I understood you right, from what you see in the industry.
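As an aside, the royalty mechanic Christian sketches can be modeled in a few lines. This is a toy illustration in Python rather than an on-chain contract language, and the 5% royalty and the share numbers are invented for the example:

```python
def distribute_royalties(revenue: float, data_shares: dict[str, float]) -> dict[str, float]:
    """Split a fixed royalty pool pro rata to each party's recorded data contribution."""
    royalty_pool = revenue * 0.05  # assumed 5% royalty on product revenue
    total = sum(data_shares.values())
    return {party: royalty_pool * share / total for party, share in data_shares.items()}

# Company A contributed three times as much data as Company B.
print(distribute_royalties(1_000_000, {"CompanyA": 3.0, "CompanyB": 1.0}))
# -> {'CompanyA': 37500.0, 'CompanyB': 12500.0}
```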
01:50:17:12 - 01:50:18:09
Kat Kozyrytska
I mean, just
01:50:18:09 - 01:50:36:11
Kat Kozyrytska
the number of collaborations that we've already executed on as an industry, and the number that are in progress, are all really good indicators. What I find is that the challenge to propagating that collaborative approach is that in large organizations, sometimes folks working over here don't know that folks over there have figured out a collaborative approach.
01:50:36:11 - 01:50:58:06
Kat Kozyrytska
So it's almost brand awareness for collaborations, some of it internal, some of it external, that we have to build. That's what I talk about in my talks: we have a framework for collaborating and working together; we just need to apply it to these new domains, like cell therapy.
01:50:58:06 - 01:51:25:03
Kat Kozyrytska
Manufacturability is one. Figuring out which organoid fits with which clinical outcome, what's the readout most predictive of a particular clinical outcome in a given indication, I think that's another great area to collaborate on. And with all of the momentum that is now in organoids because of the FDA regulation from, I think, April, that space is prime. I'm excited for us to really look closely at it and
01:51:25:03 - 01:51:26:13
Kat Kozyrytska
figure it out.
01:51:26:13 - 01:51:44:11
Christian Soschner
Absolutely. I mean, with AI, with social media, with all these new tools like Zoom that we are using now, you really can bring people together and help them collaborate better. That covers the drug discovery part: hopefully, at the end of the day, we get better targets, we get more solutions, and we move faster through this phase.
01:51:44:12 - 01:52:06:01
Christian Soschner
It really takes a long time to get to preclinical candidates, and then to candidates that are approved at the end of the day by the FDA so they can go into the clinic. What benefits do you see at the clinical stage? We talked about risks earlier, so now let's stay on the benefits side. What can AI do on the clinical side of drug development?
01:52:06:03 - 01:52:07:06
Kat Kozyrytska
Yeah. There's been a lot
01:52:07:06 - 01:52:42:17
Kat Kozyrytska
of investment on the biomarker side of things, in personalizing therapies. We know from the NCI-MATCH study that biomarker-associated therapy selection benefits six times more people. So the evidence is clear: we'll do better if we personalize therapies. Leveraging AI to map out the predictive biomarkers, and then finding people to deliver these therapies to; again, it's a balance with patient privacy.
01:52:42:17 - 01:53:08:19
Kat Kozyrytska
So we'll have to navigate that landscape in a thoughtful way. But for people who want a therapy and don't know that it exists, it could be lifesaving even to learn of its existence. With cell therapies, we've gone through this for a few years, with massive efforts to figure out how we can make doctors aware of the existence of these novel therapies,
01:53:08:19 - 01:53:41:17
Kat Kozyrytska
and how we can make patients aware of them. A good friend of mine is the mother of a sick child, and she helped develop a therapy to help her child survive. They've been battling this for many, many years. She talked about how, in the initial stages when her daughter was just diagnosed, she spent weeks in front of the laptop just trying to find a cure.
01:53:41:19 - 01:54:10:18
Kat Kozyrytska
It's great that she's so resilient that after this traumatic diagnosis she could keep going, sit in front of the laptop, and search. But that's not everybody; that's not every person. So I think it's our obligation to make this information easier to find. Actually, a conversation I had just earlier today was about how perhaps ChatGPT could be a tool to raise awareness of some of these therapies; it's certainly not focused on that today.
01:54:10:19 - 01:54:21:00
Kat Kozyrytska
But maybe it's another channel through which we can bring that information closer to the patients, because we sort of know that everybody is tempted by, and using, ChatGPT to find information.
01:54:21:19 - 01:54:41:19
Christian Soschner
That is just one of the reasons why I started this podcast. People use it to get information, so we need to make sure the right information reaches people when they search for it. Podcasts are a very good way to distribute information, feeding LLMs with data so that it can hopefully be retrieved when someone is searching.
01:54:41:21 - 01:55:09:20
Christian Soschner
But what I thought while you were talking is that I experience the pharma industry as an old-fashioned industry. It started somewhere in the 1800s with industrialization and was designed, over decades, basically around one concept: develop one solution and then deliver it to many patients. And now, from what you said, my understanding is that we have a real paradigm shift in the industry, that we are moving away from this model of
01:55:10:00 - 01:55:23:04
Christian Soschner
one molecule, put into one pill, and shipped out to the market, toward a more personalized approach. Is pharma really ready for that, structurally?
01:55:23:06 - 01:55:24:18
Kat Kozyrytska
That's an excellent question, and
01:55:24:18 - 01:55:29:19
Kat Kozyrytska
I would love to bring in some folks from pharma, entrepreneurs, to speak to this.
01:55:29:19 - 01:55:33:15
Kat Kozyrytska
I think there's such an ethical side to this question. Right.
01:55:33:15 - 01:55:46:22
Kat Kozyrytska
If we know that a personalized therapy will work better for the patient, it is almost our obligation to make it available to them, to help them find it and to deliver this personalization.
01:55:47:21 - 01:56:17:19
Kat Kozyrytska
And that maybe propagates to the investor side too, where everyone maybe wants to see a blockbuster. As a commercial person, I understand this. But if you know that the tailored drug will work better than a blockbuster, can we shift the way we think about investment in this space, to have math that takes into account not just pure dollar value and ROI, but also this humanitarian, ethical aspect?
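One way to read "math that takes into account" both terms is a weighted score. The sketch below is purely illustrative; the normalization caps and the 50/50 weight are invented, not a proposed industry standard:

```python
def blended_score(roi_multiple: float, patient_impact: float,
                  roi_cap: float = 10.0, impact_cap: float = 10_000.0,
                  impact_weight: float = 0.5) -> float:
    """Score a deal on normalized financial return plus normalized patient impact.

    roi_multiple:   expected cash-on-cash return (e.g., 3.0 = 3x)
    patient_impact: e.g., quality-adjusted life years gained across the treated population
    """
    roi_term = min(roi_multiple, roi_cap) / roi_cap
    impact_term = min(patient_impact, impact_cap) / impact_cap
    return (1 - impact_weight) * roi_term + impact_weight * impact_term

# A modest-return, high-impact personalized therapy vs. a blockbuster profile.
print(blended_score(roi_multiple=3.0, patient_impact=8_000.0))  # 0.55
print(blended_score(roi_multiple=8.0, patient_impact=1_000.0))  # 0.45
```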
01:56:17:21 - 01:56:41:02
Kat Kozyrytska
And I love the way Robert Traxler talks about it, where he says that we are in health care, not money care. If you are purely focused on the cash, money care is the finance industry; that's the place for it. But in health care the equation is more complex: it's about improving people's lives.
01:56:41:04 - 01:56:58:17
Kat Kozyrytska
I think that when therapy developers fundraise, of course they're struggling with the hockey stick: how do we make the hockey stick work? So I'm working on an article with Ryan Murray, hopefully coming out in the next few weeks, on the ethics side of it:
01:56:58:17 - 01:57:04:13
Kat Kozyrytska
how can we think about this perhaps somewhat differently than other types of investment?
01:57:06:09 - 01:57:30:09
Christian Soschner
What's your concept for rethinking investing? At the end of the day, investment is always about making money. That's a precondition, not for an ethical reason and not because people are greedy, but because at the end of the day you have to reinvest in the next generation. If you don't make a profit from the old generation, at some point you simply lack the capital to reinvest into the next generation.
01:57:30:11 - 01:57:43:22
Christian Soschner
So these profit-driven industries are profit-driven for a reason: to be able to improve the quality of the entire system. Do you have an idea of what a new concept could look like?
01:57:43:24 - 01:57:44:15
Kat Kozyrytska
That
01:57:44:15 - 01:57:49:00
Kat Kozyrytska
is a super question. I think once I figure that out, I will sell the answer and I will be off.
01:57:49:00 - 01:57:51:17
Christian Soschner
I was hoping.
01:57:51:19 - 01:57:53:13
Kat Kozyrytska
But I think some of it
01:57:53:13 - 01:58:17:12
Kat Kozyrytska
is just the complexity of the math, and the fact that if you sell more of the therapies that are less efficacious, the actual impact on the patient population might not be as good as if you sell smaller batches of very personalized therapies. And we're moving toward that.
01:58:17:12 - 01:58:43:02
Kat Kozyrytska
We don't really have to make an argument for this: with all of the novel types of therapies, with cell therapies, it's very clear that the field is much more precision-medicine oriented. So I think we have that understanding. It's just: how can we get away from that hockey stick, which is such a difficult slide for companies to build as they fundraise?
01:58:43:04 - 01:58:45:24
Kat Kozyrytska
So I think it'll be interesting to see how this comes about.
01:58:46:07 - 01:59:06:08
Christian Soschner
Well, the hockey stick is driven mostly by the high failure rate. At the end of the day, when you need to invest money against a high rate of failures... I mean, Christoph Lengauer brings the example in his speeches that out of 10,000 molecules, only one hits the market at the end of the day.
01:59:06:08 - 01:59:11:15
Christian Soschner
But you need this 10,000 number to find the one.
01:59:11:17 - 01:59:13:18
Kat Kozyrytska
I think this will change
01:59:13:18 - 01:59:34:24
Kat Kozyrytska
because of AI, right? You can de-risk and make that pool smaller at the beginning with all the predictive tools. There's so much tech being developed on that front to essentially reduce failure rates, so that even before we get to humans we have a much better understanding of the toxicity profile and the binding profile,
01:59:34:24 - 01:59:46:10
Kat Kozyrytska
all of the different pieces, the routine testing, that we can now approach from that predictive perspective. So I think that equation of the 10,000
01:59:46:10 - 01:59:47:22
Kat Kozyrytska
will change.
01:59:47:22 - 02:00:06:10
Christian Soschner
This is something I'm really curious to see in the future. When you look at it, there are two approaches. You have the Six Sigma approach, where you take an existing product and make it better; what you do there is reduce the failure rate, which works really well because you understand the process and you understand the product.
02:00:06:10 - 02:00:23:16
Christian Soschner
Elon Musk is an expert in that: improving and streamlining every single process and taking waste out. But then you have this creative process, where you really try to find a new solution or identify a new problem, and there the failure rate is naturally high. And then I hear the claims
02:00:23:16 - 02:00:28:24
Christian Soschner
from artificial intelligence companies saying, okay, let's reduce the failure rate in the creative part.
02:00:28:24 - 02:00:53:24
Christian Soschner
Yes, you can do that. But you also risk reducing the hit rate. What you basically want is to find the one molecule that works, to invest a lot of money in that one molecule, and to drive down the investment into molecules that don't work. But in the old system, without AI, those 9,999 failures were necessary to get to the one.
02:00:54:05 - 02:01:07:02
Christian Soschner
When you reduce the failure rate, you may also reduce the chance of finding that one molecule that works at the end of the day. And what you end up with is a hundred potential candidates, but none of them works.
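Christian's worry can be phrased as a sensitivity/specificity tradeoff. Here is a toy funnel model; the pool size echoes the 10,000-molecule example above, but the filter performance numbers are hypothetical:

```python
def funnel(pool: int = 10_000, true_hits: int = 1,
           sensitivity: float = 0.9, specificity: float = 0.95) -> tuple[float, float]:
    """Expected survivors of an in-silico filter, and the chance the real hit survives.

    sensitivity = P(filter keeps a molecule | it truly works)
    specificity = P(filter discards a molecule | it truly fails)
    """
    false_positives = (pool - true_hits) * (1 - specificity)
    survivors = true_hits * sensitivity + false_positives
    return survivors, sensitivity

for sens, spec in [(0.99, 0.90), (0.90, 0.99), (0.60, 0.999)]:
    survivors, p_keep = funnel(sensitivity=sens, specificity=spec)
    print(f"sensitivity={sens:.2f}, specificity={spec:.3f}: "
          f"~{survivors:.0f} candidates advance, "
          f"{p_keep:.0%} chance the true winner is still among them")
```

The tension is visible in the numbers: the harder the filter prunes, the cheaper the downstream work, but any shortfall in sensitivity is exactly the risk of throwing out the one molecule that would have worked.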
02:01:07:04 - 02:01:08:16
Kat Kozyrytska
So, a couple of
02:01:08:16 - 02:01:35:08
Kat Kozyrytska
pieces to that. I think it helps to separate this into multiple phases. On the discovery side, it's in many ways the same question we discussed earlier: this is your 10 to 20 percent exploration space. We have something we think will work, but can you work around it and explore that design space some more, to find new candidates, identify new targets, and so on?
02:01:35:08 - 02:02:03:17
Kat Kozyrytska
So you give more leeway to the algorithm there. The development side, I think, is about predicting developability. We've done this for small molecules to some degree, and we have standard and proprietary algorithms for predicting the developability of mAbs. I would love for us to get to this point with cell therapies. In the end, we could have a checklist: if you discover some new compound, new modality, and so on,
02:02:03:21 - 02:02:23:13
Kat Kozyrytska
you have a checklist of, say, ten criteria you have to fulfill in order to be able to manufacture it. Your development can be much more streamlined if you know what those criteria are, and you won't end up in a situation like we've seen recently, where you're in the clinic but finding that you can't reliably manufacture the drug;
02:02:23:13 - 02:02:43:10
Kat Kozyrytska
it's only some of the time that you're able to do it. This part we can figure out, and to me it's a very collaborative space, because we're all going to face very similar challenges in manufacturing, just given the biophysics and biochemistry of the molecule that
02:02:43:10 - 02:02:44:24
Kat Kozyrytska
we're working on.
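The checklist idea lends itself to being encoded directly. Below is a minimal sketch; the three criteria and all thresholds are invented placeholders, not an established industry standard:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    passed: bool
    note: str

def assess_developability(candidate: dict) -> list[Criterion]:
    """Evaluate a candidate against illustrative manufacturability criteria."""
    return [
        Criterion("aggregation_risk", candidate["aggregation_score"] < 0.3,
                  "predicted aggregation propensity below threshold"),
        Criterion("expression_yield", candidate["titer_g_per_l"] >= 1.0,
                  "expected titer supports manufacturing scale"),
        Criterion("thermal_stability", candidate["melting_temp_c"] >= 65.0,
                  "stability adequate for processing and storage"),
    ]

candidate = {"aggregation_score": 0.2, "titer_g_per_l": 1.4, "melting_temp_c": 70.0}
report = assess_developability(candidate)
for c in report:
    print(f"{c.name}: {'PASS' if c.passed else 'FAIL'} ({c.note})")
print("manufacturable:", all(c.passed for c in report))
```

The value of writing the criteria down this way is exactly what Kat describes: the checklist becomes shareable across companies without sharing the underlying candidates.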
02:02:45:01 - 02:03:01:03
Christian Soschner
And it's not just moving the field toward automation; just doing the small things, going for the quick wins, can also help. When we look at the markets, there are a lot of therapies on the market, but doctors are often not aware of them. They have to treat patients and have no time for education.
02:03:01:03 - 02:03:21:04
Christian Soschner
So if you can improve that, you already benefit the patient. It doesn't always have to be personalized medicine; if you can just find the right existing medicine for a person with AI, that's perfect. And when I look at drug development, I'm not so sure we really get the failure rate down at the end of the day, but we can run much faster through the entire process.
02:03:21:04 - 02:03:40:03
Christian Soschner
I'm not convinced that we really should aim at reducing the failure rate. Failure is always a marker of innovation in an innovative area, so failure is good in that space, and I would really not like us to streamline away something that is necessary for innovation.
02:03:40:04 - 02:03:40:12
Kat Kozyrytska
The way
02:03:40:12 - 02:04:01:13
Kat Kozyrytska
that you can think about the reduction of cost and of timeline there is that you're doing this in silico. You're still trying a bunch of different things, but it's much cheaper to do because you don't actually have to pay for the media, the reactor, all these things. And again, there are lots of companies working in that space,
02:04:01:19 - 02:04:27:12
Kat Kozyrytska
fewer than I would like to see, but there's some really interesting, innovative in silico early process development. I'd love for us to get a deeper understanding on the actual manufacturing side too, and we will get there as well. I think in this new age we'll be able to condense a lot of the lab work by expanding our predictive capabilities.
02:04:27:14 - 02:04:47:18
Kat Kozyrytska
Again, reliability will be key in that part, because if you're predicting a process and then discover two years down the road that it doesn't actually work, that it worked great on the computer but not in the lab, then you've lost a lot of time and money. So reliability, once again, is just key in our space.
02:04:47:18 - 02:04:52:00
Kat Kozyrytska
So as you're bringing in an in silico modeling provider, ask them
02:04:52:00 - 02:04:53:04
Kat Kozyrytska
questions.
02:04:53:06 - 02:05:09:11
Christian Soschner
Yeah. And we need more resources in that space, investors willing to take risks at the end of the day and bet on these companies. In silico is, I think, a huge opportunity, but we don't have enough investment capital in that space, in my opinion. What is your opinion on that?
02:05:09:13 - 02:05:11:04
Kat Kozyrytska
When I looked at investment, it seems
02:05:11:04 - 02:05:37:08
Kat Kozyrytska
that a lot of money has gone into AI on the drug discovery side, and into patient stratification and therapy personalization. The development side maybe isn't perceived as as big of a cost, which may be why there's less investment going in there. But to me, when I look at it, it's much more, let's say, streamlined, standardized, operationalizable.
02:05:37:10 - 02:06:00:15
Kat Kozyrytska
So I think it's actually a prime area for developing technologies, and we see some of the big players making investments there. I would love to see more VC money going in. My assessment of why this has been an underfunded area is that we all want to solve the biggest problem, the biggest value, the biggest cost driver.
02:06:00:17 - 02:06:16:07
Kat Kozyrytska
I base this off the book How to Be a Millionaire, which is all about how to save in tiny little pieces. It's not about how to make the massive paycheck; you save a little bit here and a little bit there, and in the end you have a lot of money.
02:06:16:07 - 02:06:26:09
Kat Kozyrytska
So the way I think of the development side is that we're going to save a little bit here and there, but in the end, the acceleration you're going to get and the cost savings are
02:06:26:09 - 02:06:28:04
Kat Kozyrytska
going to be significant.
02:06:28:06 - 02:06:50:09
Christian Soschner
Sorry to interrupt you, but I think it's a sales problem on the company side, not so much on the VC side. I mean, the VC side in Europe is underfunded, and that's partly a regulatory issue. But what I see from the company side is that they don't sell what VCs are looking for at the end of the day. VCs need to deliver returns and are looking for those 10x opportunities, because it's pretty simple:
02:06:50:11 - 02:07:13:09
Christian Soschner
it's the mechanics of the industry. They take high risks, with a high risk of failure, so they need high returns. What I rarely see when I talk with founders of AI companies: you brought up the example of animal models run in silico, so you don't need a rat model or a dog model, or animal models in general.
02:07:13:11 - 02:07:38:10
Christian Soschner
You just run it on a computer in the cloud, get the results, and move on, and the FDA seems to be moving in that direction. The company that nails that, that can really say, we can simulate the entire drug discovery and preclinical development part so that in no time you get a candidate that goes directly to the clinic, creates an entirely new market and captures all of it.
02:07:38:16 - 02:08:06:01
Christian Soschner
But rarely does a company say that to investors: look, what we do is remodel the entire market, and when we win, when you make us win, we throw everybody else who operates in that space out of the market. I think it's a sales problem. At the end of the day, when I look at the sales pitches, they are mostly "we do a little bit here and a little bit there," like project sales. I really recommend reading The Power Law, for example, written by Sebastian Mallaby.
02:08:06:02 - 02:08:29:21
Christian Soschner
He brings a lot of excellent examples of how Genentech started and why a VC invested in Genentech, and that is basically the blueprint for how to sell to VCs. When founders can't make this convincing case, they will likely not get funding; then we have the problem that they are not funded, the solution doesn't hit the market, and they have to lean on the customer side.
02:08:29:21 - 02:08:37:23
Christian Soschner
But then you also have the problem that they don't want to sell to customers. So it's a bit of a sales problem on both ends.
02:08:38:00 - 02:08:38:05
Kat Kozyrytska
And
02:08:38:05 - 02:09:00:21
Kat Kozyrytska
also, you know, when they go to customers, I find that the language is still very mathy, very algorithm-focused, which is understandable: the founders' background in AI is often quite technical. But then you have to translate that language for the customer. The customers are biologists, most of them; some are data scientists, but they're still not AI engineers.
02:09:00:23 - 02:09:25:02
Kat Kozyrytska
Again, this is not a novel problem in any way. I have worked with lots of mechanical engineers who, when they make a flyer about their new CapEx equipment, focus very much on that engineering language, and it takes some iterations to get away from the build and more toward the impact. I think everybody knows they have to do that, but it's hard to do for yourself when it's your own product.
02:09:25:02 - 02:09:33:24
Kat Kozyrytska
So the value of bringing in somebody who's a third party, who sits between you and the customers as a translator, I think, is immense.
02:09:35:15 - 02:09:55:17
Christian Soschner
But they need to do it; they need to learn it. Selling to investors is also a sales problem: VCs need to deliver returns to their investors, it's pretty simple. So when a founder doesn't bring this 10x story, the deal doesn't close with the investor. The same with the customer, as you said: a customer needs to solve a problem,
02:09:55:19 - 02:10:09:03
Christian Soschner
and if the pitch doesn't connect with the problem of the customer, the customer won't buy the solution. So founders pretty much need both mindsets, and they need to get people on their teams who love doing that.
02:10:09:05 - 02:10:12:09
Kat Kozyrytska
Yes, yes, I completely agree.
02:10:12:11 - 02:10:18:03
Christian Soschner
It's really... time flies when you have fun. We are ten minutes over our two hours.
02:10:18:05 - 02:10:19:20
Kat Kozyrytska
Thank you. Thank you for your time.
02:10:19:22 - 02:10:30:03
Christian Soschner
No, I really loved that. Before I come to the final question: is there any topic you want to emphasize at the end of our live stream?
02:10:30:05 - 02:10:30:16
Kat Kozyrytska
I don't know, maybe just the
02:10:35:24 - 02:11:04:11
Kat Kozyrytska
big themes. One: I cannot say enough, as an adopter of AI tech, about asking questions and really thinking deeply about the privacy and confidentiality of your data. If you're in the regulated space, think about compliance, and think about what the regulators are inevitably going to ask for next, in six months, twelve months, and so on.
02:11:04:13 - 02:11:41:06
Kat Kozyrytska
Think about implementing your values, to make sure the suggestions your employees get from the algorithm are reflective of your core beliefs. Going to the personal side: protect yourself from the malevolent players out there. They are out there, whether you believe in their existence or not. In the same way, think about the privacy of your data and make yourself less vulnerable. There's so much research into the very consistent steps these people take in order to get you to do whatever serves them.
02:11:41:08 - 02:12:04:14
Kat Kozyrytska
So the information is out there, and I'm happy to share it with anyone who wants it. We have to take advantage of the academic research that's been done. And then, bigger picture, I'd love for us to be more collaborative in this industry. The progress we see in other industries that are able to standardize and collaborate is just immense.
02:12:04:16 - 02:12:31:10
Kat Kozyrytska
Look at IT, for example: how fast they were able to move because of open-source code. I think we're lagging in that, and I really hope this difficult economic climate can help us think outside the box, because we still want to develop therapies that will save people's lives. Collaborating will offer a cheaper and likely faster way of getting those therapies to
02:12:31:10 - 02:12:32:16
Kat Kozyrytska
patients.
02:12:32:18 - 02:13:00:18
Christian Soschner
Absolutely. I mean, if somebody thinks bad actors don't exist, I would recommend going on social media, on X for example, for an hour; you'll see a lot of back and forth between people, so they exist. But let's leave that at the beginning of the conversation and now look into the future, and this is my final question to you: the next ten years. We are now in 2025; in ten years, we are in 2035.
02:13:00:20 - 02:13:28:16
Christian Soschner
There is a lot of potential out there with AI, with robotics, with all these innovations currently in development. What's your most optimistic scenario when you think it through? What will biotech and pharma look like in 2035, if everything happens according to your plan, you have unlimited money, and you can make this one best-case scenario really happen in ten years?
02:13:28:16 - 02:13:31:24
Christian Soschner
What does the future look like for you?
02:13:32:01 - 02:13:32:23
Kat Kozyrytska
Yeah, so I
02:13:32:23 - 02:14:07:01
Kat Kozyrytska
think at the core it is that we have gone through these ten years without massive losses of information by specific companies. We have gone through this without a collapse, because we were thoughtful and really forward-planning in our privacy, confidentiality, and compliance strategies. In this ten-year view, I would love for AI to become a tool, a technology, to implement good things, to amplify the good behaviors and the good part of our population.
02:14:07:01 - 02:14:26:15
Kat Kozyrytska
So I almost think of AI and humans as being one population, maybe, in some ways, in the future. I don't know if it's ten years or longer, but I would love for us to bring that average rate of malevolent players, around five percent, down by amplifying the good algorithmic side.
02:14:26:15 - 02:14:52:17
Kat Kozyrytska
And then just this collaborative approach, I think, will move the industry along much more reliably. Even think about measurements that different companies are taking in different ways: if we can analyze across those different ways, the outcome, the insight we're going to get, is going to be much more reliable than the one person with magic hands in your lab who can execute this experiment and no one else.
02:14:52:19 - 02:15:09:24
Kat Kozyrytska
We will have seen this type of experiment many times over, so I think reliability will go up, cost will go down, and the timeline will go down, if we can align ourselves better with that overarching initiative of bringing therapies to patients. We have done this before, so I know we can do it. We can
02:15:09:24 - 02:15:12:09
Kat Kozyrytska
do this more often.
02:15:12:11 - 02:15:38:03
Christian Soschner
I enjoyed this conversation a lot. I like your positivity and I love your balanced approach. On the one hand, you are positive, looking to build a better future and bring people to a more collaborative mindset. But you're also aware of the risks, that not everybody is a good player, that there are also bad players in the market, and you want to help companies understand both sides: the positive side, what AI can do for them, while also making them aware of the risks.
02:15:38:03 - 02:15:40:24
Christian Soschner
And I love this balanced approach that you bring to the table.
02:15:41:01 - 02:15:52:07
Kat Kozyrytska
Thank you. Life has made me balanced in that way; it wasn't my choice, but I think it's a more realistic perspective that enables more actionable steps.
02:15:52:09 - 02:15:59:13
Christian Soschner
I absolutely agree with that. Thank you very much for your time and for this amazing conversation. Let's catch up soon.
02:15:59:15 - 02:16:11:07
Kat Kozyrytska
Thank you so much, Christian. Thank you, everybody, for joining; I really appreciate it. Please reach out on any of these topics, or any other topics. It was such a pleasure. Thank you for the great questions.
02:16:11:09 - 02:16:14:13
Christian Soschner
Thank you and have a great day and talk to you soon.
02:16:14:15 - 02:16:16:22
Kat Kozyrytska
Bye bye. Lovely. All right.
02:16:17:00 - 02:16:18:17
Christian Soschner
Thank you.
02:16:18:17 - 02:16:58:10
Christian Soschner
We have heard today that artificial intelligence isn't just technology: it's trust, privacy, and values encoded into code. Kat showed us how the wrong use of data can derail entire companies, but also how the right frameworks can transform medicine, speed up discovery, and bring therapies to patients faster than ever before. And here's the real insight: AI will not replace humans, but leaders who know how to work with AI ethically, strategically, and courageously will replace those who don't.
02:16:58:10 - 02:17:18:11
Christian Soschner
If this conversation sparked something in you, here's one simple action: please hit the follow button. I've seen the stats today, and most of you listen without following. By following, you help this show grow, and you make sure more real deep tech voices like Kat's reach the world for free.
02:17:18:11 - 02:17:21:20
Christian Soschner
And if you found value here, don't just follow. Come back.
02:17:21:20 - 02:17:27:19
Christian Soschner
Next week we have another remarkable guest who will challenge how you see the future, business, and yourself.
02:17:27:19 - 02:17:40:13
Christian Soschner
Because this show is not just about listening. It's about learning how to lead and build the next generation of great deep tech companies in a world that's changing faster than ever.
02:17:40:15 - 02:17:41:21
Christian Soschner
Have a great day.