
Lean By Design
Lean by Design delves into the dynamic world of biotech, pharma, and life sciences, focusing on operational excellence through effective workflow and process optimization. Join hosts Oscar Gonzalez and Lawrence Wong as they engage with industry leaders who share their insights, innovative strategies, and real-world experiences in transforming complex challenges into streamlined, efficient solutions. Through thought-provoking conversations, you’ll gain practical tips and inspiration to drive continuous improvement and success in your organization. Produced by Sigma Lab Consulting, Lean by Design empowers you to enhance productivity and innovation, one process at a time. Listen on Spotify, iTunes, Apple Podcasts, or wherever you get your podcasts.
0206. From Chaos to Clarity: Bridging the Gap Between R&D Decisions and Scientific Data with Bogdan Knezevic
In this episode, we sit down with Bogdan Knezevic, co-founder and CEO of Kaleidoscope, to talk about a growing challenge in life sciences R&D: making smart decisions when data is scattered across teams, tools, and partners.
Bogdan explains why disconnected systems lead to costly delays, duplicate experiments, and missed opportunities. He shares how the shift from academic to industry research, where projects are shared, not siloed, requires better workflows, clearer handoffs, and more thoughtful tools.
We discuss:
- Why real-time access to decision-ready data matters more than connecting every system
- How delays between experiments quietly waste months of progress
- The hidden cost of repeating work because past data is hard to find
- Why user-friendly tools are just as important as powerful ones
- How better data management can strengthen trust with partners and investors
If your organization is working to scale R&D, improve collaboration, or simply make better use of the data you already have, this conversation is for you. Even small changes today can lead to huge gains tomorrow.
Learn more about Kaleidoscope Bio at https://www.kaleidoscope.bio/
Connect with Bogdan at https://www.linkedin.com/in/bogdanknezevic/
Ready to assess your organization’s efficiency? Connect with us at leanbydesign@sigmalabconsulting.com to uncover high-impact improvement opportunities. 🚀
Learn more about us by visiting: https://sigmalabconsulting.com/
Want our thoughts on a specific topic? Looking to sponsor this podcast to continue to generate content? Or maybe you have an idea and want to be on our show. Fill out our Interest Form and share your thoughts.
Thanks for joining us on another episode of Lean by Design, and today we have a really special guest. He's somebody that I connected with years ago, before we started Sigma Lab Consulting, and we've continued to maintain that relationship. I'm excited to see the things that he's doing, and so today we are here to talk to Bogdan Knezevic, co-founder and CEO of Kaleidoscope. I met him a long time ago when he was initially piloting the platform and wanted to get some feedback. Honestly, Lawrence, I probably gave him much harsher feedback than I needed to, because at the time I was getting a lot of emails asking me, can you look at our platform? Can you do this? Can you do that? I had a little bit of a harsh response, but I got to hand it to you, Bogdan, you carried that so well. I was like, okay, this is a real guy, this is a real company that he's trying to build; let me give him a little bit more of my time.
Speaker 1:I think a part of what you and I do, Lawrence, is we're really interested in what other people are doing and, however we can, imparting some knowledge to somebody to help their progress at their company. That's something that we like to do; we like to really be a part of that. Bogdan comes with over 15 years of academic and industry experience, including immunology, neuroscience, regenerative medicine, and genomics, with an emphasis on preclinical drug discovery. So very similar to our space: we're coming from the labs, seeing things that are right in front of us, and going, why isn't anybody doing anything about this?
Speaker 1:So today we're really excited to have a conversation with Bogdan, because we're going to talk about those R&D decisions and how well they're being made. When we're looking at the scientific data, do we have the data we need to make the right decisions at the right time? Are we missing things, or are we going on a hunch because of some data we saw previously, without really looking at the trends? This is a big topic and, I think, a big challenge for a lot of younger companies that may be hunting for that one piece of data they can string through a set of hypotheses to really drive their operations. So I'm excited. Bogdan, welcome, and thank you for joining us today.
Speaker 2:Thanks for having me, Oscar. I remember those conversations, and I don't have any bad memory of them, so I'll take your word that it was harsh feedback. It was really helpful speaking to you and to people like you who are generous with their time, because all that I care about, and all that we care about at Kaleidoscope, is solving things that actually matter to people. Understanding what that is, and what is perceived as important but not actually important, and the reverse, is really useful. So I appreciate the time you spent then and since. Thanks for having me.
Speaker 1:Absolutely, and I'm glad that you don't have any bad feelings. Sometimes I think about these things and go, how did I meet them? And then I go, wow, I was very crude. But I'm glad that we're here and have this opportunity to talk to each other. When we start thinking about R&D and the pace of research, you're constantly looking for updated data. It has to be scrubbed, massaged, presented a certain way. When we talk about our ability to make high-stakes decisions, why is this such an issue in R&D? Why is it that we're making decisions with incomplete context, or misreading what the data is telling us, or dealing with delays? Why is this such a common issue?
Speaker 2:Yeah, that's a really meaty question. I wish I had a simple answer. It comes down to an understanding of what you're trying to do when you're running R&D at an org or cross-team level, which means that it's reliant on data that might be distributed heavily across different people and teams and tools. And so when you bring those two things together, the fact that the data is distributed and that every data point has its full milieu of context behind it, it becomes a very, very challenging thing to manage, especially in a world where you're relying on humans and human time and human memory. If you don't have the right combination of behaviors, processes, and tools, you're inevitably doomed to let things slip through the cracks, and those kinds of things compound over time and over the course of R&D.
Speaker 1:They really do. As you mentioned just now, it's this series of permutations: hypothesis after hypothesis after hypothesis, depending on where the data is going. And understanding that the data is there, but the context may not be, because we're dealing with much more complexity in the projects we're doing. We're doing projects that span data and strategies from other functions that help us determine which direction we want to take the research. And it's in that framework, one we've all grown to love or hate, that those often-missed opportunities lie, the ones that create gaps in knowledge.
Speaker 1:And well, why did we go in this direction? Why did we run that? I thought we had the conversation with this team and realized that was not the best move forward. But things are not necessarily documented the right way. Perhaps, as the work changed from hand to hand, the communication of a new strategy was not really discussed. So are these issues that you see only in early-stage companies and growth companies, or even at the enterprise level?
Speaker 2:I think we see them across the board.
Speaker 2:I think there's only more at stake and more opportunity as you scale, because the problem goes from being, simply, "how do we manage handoffs or make faster decisions?" to "how do we best leverage the corpus of knowledge we've built?"
Speaker 2:And anyone who has worked in biotech knows that after a certain point you have accrued so much data and knowledge that you very easily lose the ability to remember what was done, what was deprioritized, or whether someone left in the middle of a certain screening campaign being run. And that matters now, not just for the work you know you have to do for a given compound, but also because you might be sitting on a treasure trove of knowledge and data that could lead to entirely new programs and more pipelines being launched, and you just wouldn't know, because six months ago someone didn't get the result back and forgot, and now that compound is sitting there shelved. There are entire businesses now built around this premise of let's find things that are collecting dust, and I think there's a tremendous opportunity for companies internally to improve how they stay on top of their decision-relevant data and how they get the maximum value out of any data package they have.
Speaker 1:I think it's pretty remarkable, as you're explaining it. When you really start to dig within your role and your function, you start to see things that were created that you never knew about.
Speaker 1:You start to see initiatives that started and stopped, because there's this inherent lack of continuity and lack of a knowledge base within organizations. There was a point in my life where I thought that going into biopharma was going to be like my dad at NASA, working 40 years. But in some cases these are revolving doors. People take roles for two or three years and then move on to another organization to take on a different responsibility. Especially in the life sciences, if you go from academia through industry, you have technicians, graduate students, postdocs, and researchers on grants, and they're there for blips of time. But you could very well be sitting on a treasure trove, or at the very least a flow of work that has been done that gives you more information on the direction you should be going, preventing any duplication of experimentation that's really just going to slow down your bottom line.
Speaker 2:Yeah, and all these things are linked together. I like that you brought up the revolving-door analogy, because the time horizon on R&D is massive, several years on the low end. So when the pieces are all interlinked and you have that kind of movement, you can't as easily say, okay, someone will be here, even if just for a year, and they're really great, the work is self-contained, and it's one and done. We use this analogy sometimes when people ask us what's different about project management in life science R&D versus elsewhere. A big thing is that things are just not as ephemeral. In biotech you're not just doing a checklist, completing it, and moving on. You're going to have these artifacts that are all interrelated, that might come up repeatedly in the future, and so you're not just doing something for the sake of forgetting about it. You're going to need to access it in some way, shape, or form later.
Speaker 1:I love that perspective. It fits well with the things that you and I, Lawrence, have been doing in the projects we've been running with organizations, where you really try to build a system that you can reference. Think about how often project teams in R&D and other spaces are collecting things but never take a moment to reflect, never spend an hour or two to say, okay, let's look back at what we're doing. All you're doing is looking at the very end, at what happened at the finish line, when really it's all about the journey. What did we learn? What did we improve? How did we get faster? How did our decisions become more clear, or not? And what can we do in the next one?
Speaker 1:There's so much power in reflection. There's a saying that experience is not wisdom; your ability to reflect on that experience is where you get the wisdom. And I think that when people come into an organization, having some semblance of what has happened before is really going to prime the way you make decisions, the way you set up your R&D space and drive toward those goals. They seem like a checklist, but to your point, it's not a checklist. It's being able to look at the past and, in some ways, try to predict the future.
Speaker 2:The iteration piece is exciting to me because it manifests in slightly different ways depending on the stage of company. A lot of early-stage companies really need to embrace the fact that your goal isn't to get 10 half-baked programs to clinic. Your goal is to get your main lead candidate and program to clinic. So what you actually want to be doing is testing and iterating very rapidly in the early days, so that you know what you can kill, and kill it fast, and then focus more and more resource on the things that are looking good, so that when you get to clinic you're best positioned to take on the kind of challenges that come with clinical trials. That's on the early-company side: having that awareness of what the decision is and whether we're iterating well. The flip side is if you're a mature company: maybe you now have clinical-stage assets or a growing portfolio of programs, and you're 200, 300, 400 people or more. If you're a global top-10 pharma, that goes without saying. But if you're one of these now-mature mid-sized biotechs, this becomes about how you build an engine that keeps innovating and keeps generating novel IP.
Speaker 2:Improvements in iteration are now at a macro, meta level. It's no longer "can we get one thing to clinic?" It's "this is the engine; how do we get it to be as efficient as possible?" Because it's still immensely expensive to run these iterations. So, especially if you're public, you're going to need to show your board: hey, this is how we're getting either better or cheaper over time. Otherwise, you're not really driving value across the org.
Speaker 3:To your point, the engine itself, at some point in time, was designed for specific outcomes. So as your portfolio grows, your team changes, and decisions are being made, that machine needs to get modified over time, right? And I think you guys have touched on this idea: obviously, all of us are generating data all the time, and that data is being used to make decisions. But oftentimes that data is not very accessible, so even if it exists, somebody can't see it or use it for what they need it for. So there's the accessibility piece, and then there's the organization of the data itself. When you are generating the data, are you leaving breadcrumbs for people to follow if you are not there or if you move on to something else? Those are really important pieces for making high-quality decisions.
Speaker 3:And, Bogdan, from your experience working with clients and from what your company does, what are some of the flags when you come across a growth company versus an enterprise company that may be dealing with some of these issues? It sounds like there are different sources of data. It's one thing to have it all connected within a company, but we all know we have partners outside the company too, and you have to make decisions off of people who are outside your organization. So how do you help them navigate that? And what's one of those red flags where you talk to a client and say, okay, hold on, let's just align on what the actual problem is here?
Speaker 2:Yeah, I like that question. A couple of things came to mind as you were talking. One is a pretty simple test: if you can't answer "what's your best-performing compound, and why?" without having to email your six reports and have them dig into their data, that's a problem. And it's a very common thing in this space that that's a hard question to answer. Yes, there are going to be follow-ups, like how do you define best-performing? Sure, you can define it how you want, but if you don't have an easy way to answer without a bunch of manual back and forth, then that's an issue.
Speaker 2:Another one that I see is downtime. This concept of downtime is really expensive in biotech: when you actually know what the next thing is that you could be doing, or you have the information there, but nothing is happening. I've seen this manifest where teams will get together, say two distinct teams, biology and chemistry or some combination, maybe once a month to go over things relevant to both. I've seen it happen a couple of times where one team will ask, "so when can we have those designs ready so that we can go and screen?" "What do you mean? They've been ready for six weeks now." And you realize that someone had saved a file somewhere, but there wasn't even a simple system.
Speaker 2:I'm not talking about all the powerful things we can do with Kaleidoscope. That aside, even a process by which, when this completes, notify this person, or post it here, or add it here, would help. When you think about all the ways that kind of downtime can happen, and then you factor in external collaborators and people you're requesting work from who are not in your organization, the mistakes or delays that can happen there, you're now stacking or compounding delays over time. Over the course of a year, this could easily be a quarter of lost time, recoverable just from being better at visibility into who's done what and whether you can action the data as soon as you have it.
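[Editor's note] The simple handoff rule Bogdan describes, "when this completes, notify this person or post it here," can be sketched in a few lines. This is a hypothetical illustration only, not Kaleidoscope's product or API; all names here (`HandoffBoard`, `on_complete`, `mark_complete`) are invented for the sketch.

```python
# Minimal sketch of a "notify on completion" handoff rule: when an
# artifact (e.g., a set of compound designs) is marked complete, every
# subscribed downstream team is told immediately, instead of finding
# out at next month's cross-team sync.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Artifact:
    name: str
    status: str = "in_progress"

@dataclass
class HandoffBoard:
    # artifact name -> callbacks to fire when that artifact completes
    subscribers: Dict[str, List[Callable[[Artifact], None]]] = field(default_factory=dict)

    def on_complete(self, artifact_name: str, callback: Callable[[Artifact], None]) -> None:
        self.subscribers.setdefault(artifact_name, []).append(callback)

    def mark_complete(self, artifact: Artifact) -> None:
        artifact.status = "complete"
        for cb in self.subscribers.get(artifact.name, []):
            cb(artifact)  # e.g., email or message the screening team

board = HandoffBoard()
notifications = []
board.on_complete("compound designs", lambda a: notifications.append(f"{a.name} ready"))

designs = Artifact("compound designs")
board.mark_complete(designs)
print(notifications)  # ['compound designs ready']
```

The point isn't the code; it's that the screening team learns about the designs the moment they exist, rather than six weeks later.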
Speaker 1:You've touched on so many things that bring up a visceral response for me, because I've been in those situations too, where you're sitting around going, didn't anybody actually ask where that data was? For the last six weeks, what have we been doing? And usually you're talking about R&D teams that are working on multiple projects, not just one, and that's a transition that happens. When you talk about root cause, why do these things happen? Well, I think in some cases we were trained to be like that. Think about where training starts. Typically it's in academia, where you have a self-contained project and you determine the experiments you're going to do. Or, if you're a technician, you're working with a postdoc or a graduate student, really just helping them with whatever they need, whenever they need it.
Speaker 1:Now that you're moving into an industry role, you need to talk about these things. You need to alert people when things have been completed, or when you're near completion, like a project manager would, because you need to make sure the other side is ready to receive what you're doing. If you tell them last minute, they may say, oh, I didn't set up any experiments, so it's going to take four days. How often is that, Bogdan? "Oh, I didn't grow my cells for that experiment because I didn't know you were ready, so I just split them yesterday; I'll split them again in three days to set up the experiment that I can run two days later." This happens all the time. And then you start to bring in partners, CROs, other organizations that probably have a lead time, and no one really has a picture of the flow of the timing. As you said before, it's this mindset of, I'm an individual contributor; now that I'm done, I'm going to move on. And the work is just left on the desk, left in the cloud.
Speaker 1:In these earlier companies it's rare, in my experience, to find a research operations specialist or a project manager in R&D spaces, because apparently they don't need one. But these are the things that happen. We are not trained as scientists to pass the baton, because the majority of what's in academia is self-contained. It's the experiments you're running, whether you're supporting somebody or you're a grad student with your own things to do. You are creating the pace. In these industry scenarios, everybody is a part of that pace, and you lose motivation. You lose those crisp ideas, that readiness to knock out the next one; you get sidetracked, you get distracted. And when we talk about why these things happen, aside from the tools we're using, there's a little bit of ownership that gets lost, whether it's expectations or behavior that has not changed or has not been communicated across the organization.

Speaker 2:Yeah, spot on. I think the academic culture piece is a big one, and there are a lot of layers to it. Everything you just said, 100%. There's also the aspect of time frames and time horizons: you have funding guaranteed for two or three or four years, and you're chasing the thing that's most intellectually stimulating for you. The emphasis is on novelty, but not enough on reproducibility. I don't need to get into the reproducibility crisis, but all the incentives reward novelty. Obviously, novelty is at the heart of why biotechs sprout, but I think the realization that hits a lot of people like a wall is when you're working in biotech.
Speaker 2:Your biotech is the engine to commercialize IP. It's not the place to go and do blue-skies roaming and follow curiosities. We've seen this pattern over the last couple of decades: we've moved away from a world where pharma does everything to one where it's really the academic labs doing the fully innovative, explorative testing. Then there's an inkling or a seedling that you can take and commercialize, so you create a biotech around it. Then you have these specialist CROs and CDMOs and other orgs that you collaborate with to support the proof points you need, the ones that are not core to your IP but are important in building that package. And then pharma gets involved when you're in phase one, phase two, wherever it is.
Speaker 2:So this kind of distributed nature, this whole commercialization engine, is something that's very foreign to academics. It definitely was for me as well. It's a whole other pace of work when you come out of that world. I think that's a cultural aspect that's really important to educate people on and make them aware of, unless you're joining a super early discovery team at a massive organization and you have free rein. Of course that's different, but I'm talking about the average biotech.
Speaker 1:It takes me back to when I made that transition. When I came in from academia, I had this "is this what industry is like?" kind of perspective, because things were not really connected. My understanding was that the reason it's so hard to get into industry, why they want experience, is because it's super complex and fast-paced. And I get there and I see a bunch of people at their computers, no one communicating, and I'm like, okay, this is different from the science I'm used to.
Speaker 1:There was more communication through emails, but it was almost like everyone was doing their own thing, and then, at the last minute, there was a mad-dash effort to develop insights.
Speaker 1:Let's try to pull these things together so that we can have a strategy for the next stage. It was underwhelming, and something I've grown to understand now, but it expanded my horizons of awareness: who should be aware of the work that you're doing, regardless of whether your office and your lab are on opposite sides of the building? I was in a lab down the hallway, and I could easily have just disappeared every day because there was a back exit to the parking lot. But my work wasn't just my work; it belonged to everybody else around me who was working with it as well. That understanding and development comes with time, but I don't think there's enough attention to it. The habits that we bring in, or lack thereof, contribute to some of that slowdown, to some of that disjointed nature, where you might be coupling new scientists with folks who have been doing it for 15 years and have expectations that have never been communicated.
Speaker 3:There's a balance, though, right? Because we went from even 15, 20 years ago, when a lot of this stuff wasn't connected, to now opening the floodgates, where you have this massive flow of information. And if you're familiar with the paradox of choice: when you have too many options, people have difficulty making decisions. Not everything that people are communicating to you is actually useful. I think there needs to be some sort of boundary: okay, if we're going to make this decision, we should actually be considering A, B, and C. And what is the balance between what you include and what you don't?
Speaker 3:And I'm sure, Bogdan, you come across this, because your company is focused on connecting these systems. So how do you help your clients, and how should the industry think about what should be connected? It's a difficult choice, right, because there are trade-offs, and I would imagine things get very expensive when you try to connect every single thing you're using, then look at it and go, we don't even use most of this stuff for what we need it for. How do you think about that through your company's lens and, broadly speaking, for the industry as a whole?
Speaker 2:That's a really great question. We took a very clear stance on this early. I'll caveat by saying there's lots of tooling out there that will help with various things, so this is just how we chose to approach it: focus on the data you need for the decisions that you're making as a team. We have this concept we call data slices. It's a slice because it might cut across a bunch of contexts or teams or places, but it's the data slices that are important for whatever phase of work you're in.
Speaker 2:All of the other data is really important; all the other data compounds and drives that, and all the other data will be audited at some point, of course. But in our worldview, it's: what data do you need to reach a decision? A big thing about the best teams we work with, and also the ones that start in an okay place and report huge gains, is that they really leverage, yes, our tool, but also the mindset we bring, which is: define the thing that you need. What's the purpose, and what's the decision you're trying to make? Start with that and work backwards: okay, this is the goal, for these reasons. To achieve that goal, we need, let's say, this data package. So what experiments or studies or assays do we need to run to produce this data package? Let's plan those out, and then you go and execute on the science.
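[Editor's note] The "work backwards" planning Bogdan describes, decision first, then the data package needed to make it, then the experiments that produce that package, can be sketched as a small structure. This is a hypothetical illustration, not Kaleidoscope's actual data model; every name here (`Decision`, `required_slices`, the example assays) is invented for the sketch.

```python
# Sketch of decision-first planning: start from the decision, list the
# data slices needed to make it, then derive the experiments that
# produce those slices. A decision is "ready" only when every backing
# experiment is done.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Experiment:
    name: str
    produces: str        # which data slice this experiment yields
    done: bool = False

@dataclass
class Decision:
    question: str
    required_slices: List[str]                 # data needed to decide
    experiments: List[Experiment] = field(default_factory=list)

    def plan(self, catalog: Dict[str, Experiment]) -> None:
        # work backwards: one experiment per required data slice
        self.experiments = [catalog[s] for s in self.required_slices]

    def ready(self) -> bool:
        return all(e.done for e in self.experiments)

catalog = {
    "potency": Experiment("dose-response assay", "potency"),
    "tox": Experiment("hepatotoxicity screen", "tox"),
}
decision = Decision("advance lead candidate?", ["potency", "tox"])
decision.plan(catalog)
catalog["potency"].done = True
print(decision.ready())  # False: the tox slice is still missing
```

The design point is the direction of dependency: experiments exist because a decision needs them, not the other way around.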
Speaker 2:It's been interesting to see how people interact with our platform. You have the teams that are really well established, and for them it's figuring out, okay, great, how do we expose the views you need and make you work faster? But you also have the teams for whom Kaleidoscope becomes the framework for defining what we're doing and why. After implementing, we've noticed these really great, unexpected, indirect results: the team is no longer planning tomorrow's experiment today. They have clarity on what we're trying to achieve this month or this quarter and why it matters. So I, as an individual, understand that the work I'm doing contributes to this white space, or contributes to novel data here, or contributes to fixing this thing that doesn't look good.
Speaker 2:You have more of this alignment. Everyone from the research associate up to the CSO feels like they're on the same page when working this way. Lawrence, this is kind of answering what you asked: it's not about bringing everything together all the time, all in one place. It's being methodical and principled. What are you trying to achieve, and what do you need in order to achieve it?
Speaker 3:Oscar and I have been talking a lot about the decisions these companies have to make. What is the level of risk associated with them? If you're trying to make a really risky decision, you'd say, okay, let's just pick and choose what we want, because we still don't know what that eventual target is going to be. But it depends on the risk level of the decision-making that you're in. Every company is different, depending on the life cycle of the asset it's developing. There are some things where we'll just have a little bit of data, versus other things that are much further developed, where it's, no, we need to have this; this is a non-negotiable item. Oscar, do you have an opinion on this as well?
Speaker 1:I think what we're talking about here is a difference in how R&D organizations are set up. Either they're looking for a specific target, something that addresses this disease and this patient population because of X, Y, Z, or you may be looking at folks who are developing platforms, trying to build a new technology. We don't exactly know where it's going to go; it's going to have a lot of different features, but we want to create a platform that can do X. So you sort of get there, and then you have to start making decisions: all right, where does it make sense to partner with people? Where should we be doing our own internal experimentation? Your goals are going to shift and change, and I think it's really important for folks to understand that even though you're setting a goal, trying to reach a target that might be a year out, the way you get there might be different than what you think. You really need the ability to pull these things together, again, running the experiments that answer the right questions. And that's the other part we don't really talk about: we go with a hypothesis without really asking, what are the different questions we want to answer with the $2 million we're about to spend? Looking from high-level to very specific, obviously, once you get further into development, these are the questions you start to see as being central to R&D.
Speaker 1:Your target product profile and your target compound profile. What is the compound, what should the compound do, and, as a product, what should it do for people? Those are two very different questions that you eventually need to answer when you're doing drug development, because they're going to take you down certain paths. You need to make sure that your compound is going to nail it, and then you also need to figure out how this turns into the right product for those patients. Are they elderly? Do they have trouble with oral medication, versus making this an IV, versus being inhaled? Those are all vastly different directions, and they're going to cost a little bit of money to get that answer.
Speaker 2:They're also going to be very intertwined with what your teams are executing on in the lab. That's something we also try to really emphasize: you need to keep a dynamic view of what that target profile is. We have these leaderboard dashboards in Kaleidoscope, and when we get them in front of people who've worked in pharma, one of the first things they say is, oh, this is basically a way to benchmark against a target profile, because you can spin up whatever compounds you're trying to compare, pick the parameters you care to track, and see them side by side, tracked across all iterations of work that you've done. And so it's interesting, again something that I learned through working in the field and through the customers that we serve: there are going to be things that change, either in your understanding or in the competitive landscape, and it might be a very strategic or commercial change.
Speaker 2:You have to then propagate that strategic direction through to what your R&D team is doing, because if you change the delivery mechanism, or if you change something about that for commercial or strategic reasons, well, now your team has to go and build or optimize for different things. So you need a way to connect that, not just a way to propagate that information backwards, but also a way to know: do we actually have things that fit this new profile? Because we've now spent four years doing R&D, we might actually already have candidates that are excellent fits, and you wouldn't have known that a priori, because that wasn't the profile you were originally chasing. So a lot of this stuff becomes very clear, at least to me and to the people I talk with: why you need to have good systems in place when it comes to data.
Speaker 1:And so what you're suggesting is that when you have these strategic changes that cascade from leadership, from your manager, et cetera, you might be in a position to go back into your system and say, you know what, this is where the data has to bifurcate, depending on which direction we go. But with the core data we have, taking in this new target product profile, now we can see: we have 50 constructs that actually match up with what needs to happen over here. Let's resurrect those and see how far we can go. I think that's a great example of how understanding not just your data, but pulling in those things and maintaining that historical knowledge for the company, can save you a lot of money and a lot of time compared to going back and restarting the screening.
Speaker 1:The lucky ones are the ones that can nail that strategy from the beginning and say: we might go in three directions, all three need this set of data, let's go there. As we continue to get more, we might do a side experiment here and there to see if we get a blip that we can then carry out. How long has Kaleidoscope been active?
Speaker 2:We started the company at the end of 2021.
Speaker 1:In the time since then, can you share a story of an organization you went to that was similarly lost or disconnected, where you were able to help them take a more thoughtful approach to establishing better systems and better process and, hopefully, closing those gaps?
Speaker 2:The first thing that popped into my head, where I experienced it most, was my own project in grad school. That's where it became painfully obvious to me; the anecdote here is that I experienced it myself. And as soon as I started talking to people immediately around me, the number of times everyone was nodding along, saying yep, and sharing their own stories was pretty eye-opening. One example: a colleague was interested in sequencing a cohort of patients, because there was really exciting work happening on the RNA sequencing and genomics side, and spent whatever it was, three, four, five months prepping that data set, pairing with the hospitals on patient recruitment, getting blood, getting consent, all of that. Then they found out at the once-a-quarter team meeting that a hundred or so patients' worth of samples from that exact target cohort had already been generated and were sitting in a freezer, logged in a spreadsheet in some folder somewhere saying the samples are in this freezer, but good luck finding that when you're at an institute as big as the Wellcome Centre at Oxford. The work that went into the redundant effort, the time and the energy, the mobilization of other people, and patients being involved, all of that is obviously extremely painful to grapple with. So that was maybe my most acute example.
Speaker 2:And then, where I've seen us be able to shift teams, I think one is this example that I gave earlier of managing the handoffs, and this happens quite often that individual teams will have great internal tooling for what data they have. Maybe they have an electronic lab notebook where the molecular biologist can look, or the computational bio team has their own database they can query pretty easily. They have these great tools there. But what about when you're relying on work from someone else or when you're requesting work? I've come across teams that have tried to then Lawrence to your point earlier build more like integrate more like try and do that. But then it becomes very unruly because now you are trying to effectively maintain and build software systems in-house yourself, you're now also thinking about okay, well, wait, how do I notify people? So I have to have like a notification system and then wait. But how do I manage permissions and authorization and how do I make sure that the wrong person doesn't get access to the wrong data, and so teams that shared that they did a lot of that realized oh my God, we're incurring so much tech debt now and putting our IP at risk and it's just not something that you want to touch.
Speaker 2:The alternative is, well, I guess, keep pumping money into that or go back to the really broken way where you might go six weeks without knowing you have data that you can action, and so there's examples like that where we've come in and given teams again, it sounds like very straightforward, but that's the whole point, which is great Use us to request the work that you need, like find the data that you need for the decision on go, no go, and then, as you get it, have a way to action and say, great, this is a go, I'm going to request it from this vendor. And then they get a notification. It goes to their email. Again, we're big believers in like don't force too much behavior change if you don't have to People where they are. So it goes to their email.
Speaker 2:These are like major companies in the world, so we want to stick with what they're used to. They open a simple page. They have a way to change the status, drop a file in, add a comment the same thing that they would do via these sprawling email threads. They can just do on one page, hit submit and now that notifies biotech like, hey, you have data, it was marked as complete. Here are annotations for it. Here's the file, you can review it and see. Or you can have us do more automatic QCing. But you can have a human come in and review it and say great, this has all the things that I expected. I'm going to mark this as complete or no. This was a mistake. We have to understand this better. But the point is you're tightening that feedback loop immensely and going from a world where you'd have to do that yourself or risk losing three, four months a year to it's managed for you and you can focus on executing the stuff that you need to execute in-house.
Speaker 3:Thinking about just how the users input the data is very important, because if you make it very hard, people are less likely to put in the right data. They get frustrated, they get mad, they shut their laptops down, and we know the end of that story. So it's beautifully said that you guys have thought a lot about what that flow of information is, not just from the standpoint of management or decision makers, but also at the other end. If you're requesting information from these people, you've got to make it easy. You can't make it some difficult thing where you have to jump through six different hoops and five different doors, or you're just not going to get the same result. That's a really important piece that we try to drive home as well, and one of the things we do is try to make things simpler, because usually that's just better. You try not to make things complicated, because people are naturally resistant to change.
Speaker 2:A hundred percent. I can't take credit for this. This is all my design co-founder, David, and the brilliance that he brings, which is: how do you hide away all the complexity you can, because humans hate complexity? He's a designer by training who then moved into product, and then into health and life science. It's been really cool and a big privilege to see how he thinks through those things, things that seem very simple but can make big changes. I remember when we were first productizing how you annotate why you made a decision.
Speaker 2:We had like a flow that was still pretty quick. I think it was like two or three screens that you would like click through. We'd ask you different prompts and then David and his kind of obsessive mind of make it simpler, make it simpler, make it simpler. We got to a point where now it's add data point and it on the screen just highlights anything that you could click. That's a data point and you can just scroll and click the things that you want and it pins them. And so we've removed the idea of these click throughs because we know humans are going to start it and be like oh, I don't have time for this and like get it or leave it or do it wrong, and so just making it dead simple so that it's just like the way you would intuitively do it If someone on the table presented you with things and said what do you like? And you're like that thing, that's the magic that we tried to bottle and love that we're doing as a product first team.
Speaker 1:I mean, it's such a great approach, because you're really taking a human-centered design approach. There are people at the front of this that have to do this work. The science is already complicated. The strategy is complicated because, let's be honest, there's a lot of data we wish was available that isn't. So you have to come up with the right experiment, you have to come up with the right hypothesis to answer, and I think what you're expressing here is a newfound vision: things don't have to be so complicated. And we're used to complicated. Why? Because humans are creatures of survival.
Speaker 1:So if we get a new position and we go, wow, this place has really garbage process, what do we end up doing? Making our own. We decide to go okay, now, when I get this document, I'm going to do this, I'm going to transform it this way, and then I'm going to create something else, blah, blah, blah. And now you start getting all of these like process ninjas that are all coming up with their own process, expanding the variability in the output that you're getting. Oh yeah, this is cool, like, can you like do what David did? He did something really cool. Can you go?
Speaker 1:You know, you start to see those questions all the time and it slows down the progress. It slows down the continuity that we talked about in the beginning, the knowledge base. Like, if you can't tell me how you're going to do what you're going to do, how can you expect anybody else to follow suit? How can you expect the consistency to be there? How can you expect the project to progress when we stop trying to figure out the most perfect thing and we start looking at simplicity and creating the right vision? What are you seeing in your clients, in your customers? What are you seeing change in the way that they work?
Speaker 2:I think a lot of it is understanding the purpose of the work you're doing and why, so you get that kind of autonomy: we're all gunning for the same goal, we can get together as a team to make major decisions, and otherwise I have clarity on why the work I'm doing is important, how it contributes, and how I can contribute better. So that's a human-level change that we see happening. There's all the productivity stuff that I mentioned, which is obviously immensely important, especially when markets are what they are today and you're under the gun to deliver as fast as possible. Saving a day, a week, or three months a year is massive, and it can be life or death. Another interesting byproduct, and this emerged as a natural use case, is that among the teams that are really leaned in and really embrace this mindset, some have shared that their C-suite will ask, why is this not a Kaleidoscope dashboard? Which I love to hear, that it's an internal household name that way.
Speaker 2:But some, some of these companies have shared huge value here is just like the distilled. Where are we Like, what's the state of our affairs when it comes to R&D and what data do we have to support that. They've then started sharing that also with their pharma partners, their investors, their board, like different stakeholders that are external to the org, and the result of that has been, in like the pharma partner case, whoa, your data is like really easy to understand and navigate, which is the industry is so rapport and trust oriented that, if that's the effect that you could have, well, now you're that much more likely to work with that pharma partner, for them to pay you major dollars to work with you on assets or co-development or a platform or whatever it is, and so those kind of downstream effects have been really cool to see, because now it's company putting its best foot forward. And if you fail because the science doesn't work out, like in my view, that's fine because that's the risk that you're taking.
Speaker 2:But if you fail because of preventable reasons or things that you could have done better, that to me is the real shame, because that means that there was something that could have been in the hands of a patient and could have completely changed the trajectory of someone's life that won't or will take a year or two or three or four longer for completely preventable reasons. So that's the world that we want to eliminate. When I get asked like what's the goal of Kaleidoscope? Well, one is every biotech on the planet using us because we believe that it's valuable. But two, the goal is that every piece of science translates and gets to the people and populations that need them the most in the fastest possible time. And if that takes 10 years, the science is fine. It should take the minimum amount of time possible because people are waiting at the end of the line.
Speaker 1:Making the right decisions. You touched on a few things there. We talked about how this really develops a clearer foundational knowledge for the people conducting the work, so that they know what the end is. I've had my own experience in a number of organizations, academic and industry, with the cynicism of the people around me, this sense of "my work doesn't matter." It angered me a little, because I really spent a lot of time getting myself into this role. This is where I want my career to be. No, no, no. I don't care how small everyone thinks my work is. It means something. I am here because they need me here.
Speaker 1:How do I show people more of what I'm doing and you're not on communicating with the partnership, the leadership, the CROs? When you have this level of clarity, oh, the confidence that people feel when you're able to talk about it where you're not sort of like this came from stats you are on your game and that changes the relationship, not only the relationship between partners, but also the way that leadership looks at your group and says you know what? I'm going to back up a little bit, because they have it together, they figured out a flow, and these are sort of those byproducts that we don't think about because it's not something that shows up on your balance statement, it's not something that you know crosses off a corporate goal, but it really does so much to drive the mindset that is so critical when we're trying to improve the way that we do work, when we're trying to see the possible and understand the value of the things that you have around you. I mean, these are multi-million dollar projects. Get out of that Word document or that Excel spreadsheet and get into something that makes sense to manage the right way, to give you the abilities to expedite those decisions and to have alignment across people that may or may not be at your organization Super, super critical.
Speaker 1:So, as we start to close, we're talking about this gap between R&D, decision-making data, how we connect across organizations, how we connect across partners and CROs, and really what are the signs that we're not doing the things as well as we could be doing? What is one piece of advice? It could be scientific, it could be operational, it could be personal development. What is something that you would give to leaders on bringing clarity to their decision making, on, you know, navigating through the complexity where they're trying to find the right way, the right place to start.
Speaker 2:One thing that came to mind is it's never too early or too late to start making gains, because gains will compound. Even small gains will compound a lot. And so if there's a shift you could make today, a shift towards a better process, a shift towards a better tool, a shift towards anything like that, it's worth making. I think it's time to plant a tree is like 20 years ago, and the second best time to plant a tree is today. Especially first time leaders, they feel either oh, it's too soon, like why, like who cares, and they don't realize that's the easiest time to make a change, because it just becomes second nature. And then your future, you is going to be singing your praises.
Speaker 2:And on the flip side, people who are like, oh, a lot of some cost fallacy, like we've tried and sometimes it's understandable, I get it. We, we work, we're software. So I sometimes understand that people are like they feel burned because they've used really bad tools. They're jaded, I get that. Sometimes it's yeah, but we're like so complex, like how could we do this?
Speaker 2:And again, it's about defining well, what's the minimal thing that you could do today that would move the needle on it, and how do you do that today and how do you do that today? And so it could be as simple as hey use the same naming convention. That's a very simple change. It doesn't require you bringing on a new vendor, it doesn't require anything like that, but it's going to move you in the right direction. And maybe it is like find a vendor that can integrate these 10 data sources and give you a demo. Sure, but I think that's the biggest thing, that when's the right time and when's the right time, and the right time is now. It's literally now. There's very few exceptions to this, obviously, but it is sooner than you think.
Speaker 1:I can empathize with that so much. Really, what we're talking about, guys, here, is don't wait. Don't wait to fix things that can be addressed. Don't wait to have the conversation. You may not have the bandwidth to do something right away, but these are things that create ripples through your organization.
Speaker 1:We've seen it as well that you know we're with a, with a client, for you know one year, two year, and then you start to see people that you have not even directly engaged with, that are starting to come into your direction to say like hey, like I want to jump on board with what's happening in that group, I want to jump on board with happening with this group, and you start to see this appreciation and how things have been made easier and sort of the clarity that people get.
Speaker 1:And how do you know about it? Because people stop talking about it, because people stop talking about how bad it was, because no one's going to come and give you a trophy, unfortunately. So you know no one's going to throw a party for changing the process and doing a new thing, but people stop talking about it and they start talking about the value, they start talking about what really matters, and I think that's what we talked about here today. So, bogdan, thank you very much. For anybody that's interested in learning more about Bogdan's work about Kaleidoscope, where can we send them?
Speaker 2:Yeah, best for me would be Connect on LinkedIn. I'm always there. Our website is kaleidoscopebio, so you can check that out, and then we do also maintain a resource blog where we post mostly opinion pieces and kind of perspectives. That's on blogkaleidoscopebio, so for anyone who wants to subscribe there, we've gotten great feedback from the community that they appreciate that content.
Speaker 1:Awesome, bogdan. It was a fantastic conversation. I feel like there's about six more different conversations we could go into, so thank you for joining us. We'll thank Lawrence. He jumped out, had another call to go to, but it was an absolute pleasure and I look forward to doing this again.
Speaker 2:Thanks so much for having me, oscar, likewise.