Once a Scientist
94. Ira Hoffman, CEO of HighRes on robots, digital twins and how we will speed up science
Ira Hoffman is the CEO of HighRes. He holds an M.S. in Computer Science and a B.A. in Physics/Mathematics from the University of Vermont.
OUTLINE:
[00:00] - Introduction to collaborative robots and safe human-robot coexistence
[01:52] - CEO of HighRes (formerly HighRes Biosolutions), 20-year journey in lab automation
[03:36] - Deep diving into AI strategy and product development
[05:23] - From large-scale automation to accessibility for scientists
[06:45] - How robots safely coexist with humans in labs
[09:19] - Too many unvalidated hypotheses from AI tools
[09:39] - Moving from high-throughput screening to prescriptive hypothesis testing
[11:21] - Physical and digital integration challenges
[15:46] - How chaos in early research differs from standardized processes
[19:17] - Why fully autonomous labs are still far away
[21:09] - Identifying automation opportunities vs. necessary human involvement
[22:13] - Why large molecule biologics are easier to automate
[23:36] - A leading example of closed-loop design-make-test-analyze
[27:22] - AI-powered products enhancing automation workflows
[28:47] - How to upskill teams in using AI tools effectively
[31:44] - 3D visualization, AR overlays, and workflow simulation
[36:36] - Why capturing contextual data is critical for consistent results
[39:43] - Leadership approach under pressure and creativity
[41:57] - Find your passion in science and technology
-
One of the big step changes, I think, was the introduction of collaborative robots: physical AI coexisting with people. Collaborative robots incorporate sensing of some form, whether it's perception or force feedback on joints, passive sensing that lets them coexist with people. For those newer to the space, it may not seem like much, but being close up to the science, seeing what's happening and how it works, being able to work with your instruments is pretty critical from my perspective.
Speaker 2:This is the Once A Scientist podcast. I'm Nick Edwards. We're back with new episodes, so keep an eye out and subscribe to the podcast if you haven't already. I'm excited and grateful to have Ira Hoffman on the podcast. It's been a little while since we've recorded an episode, and I wanted to make sure I came out with a banger on the next one. So, Ira is the CEO at HighRes Biosolutions. I'm going to start out with a quick true or false. True or false: you gave me the bird in front of a group of pharma executives on stage at SLAS.
Speaker 1:That's definitively false, Nick. I merely had an itchy eye and was relieving myself of that itch, which appeared to you to be some type of gesture, but nothing more than a scratch.
Speaker 2:Nothing more than a scratch. Okay.
Speaker 1:Definitively false.
Speaker 2:Definitively false. Okay. All right, well, this interview's over then. That was really long. That was fantastic.
Speaker 1:That was my favorite topic.
Speaker 2:So, Ira, do you want to give a quick introduction to yourself? You know, you guys, you're leaders in the lab automation space, but, yeah, tell me a little bit about yourself and what you guys do.
Speaker 1:Yeah, absolutely. First of all, thanks for inviting me on and having me. I always enjoy talking to you, Nick, so I'm excited for the conversation. My name is Ira Hoffman. I'm the CEO at HighRes. We just changed our name to just HighRes after being HighRes Biosolutions for almost 20 years. What we do is lab automation; we've been in the space for about 20 years. Fun fact: I was actually the second customer of the business. I started my career at Merck way back in 1999 doing lab automation as well, and started a project with HighRes. I didn't finish the project as a customer. I started a company with a friend where we converted wheelchairs to be robots, and when that didn't pan out, I called the founder and said, hey, I'm interested in exploring opportunities. He said, come over here now. My first job at HighRes, coming in as a project manager, was to actually finish the project I had bought, because nobody could figure out how to do what I wanted. So I figured out what I wanted and actually implemented it, which we did. I've been with the company for 20 years, and I've been CEO for the past six or seven years. And I absolutely love it. I love working with our customers, I love the space that we're in, just the idea of applying science, technology, math, physics, you name it, to a problem space that can massively change the world and have a real impact.
Speaker 2:You're kind of a tinkerer. You like building things?
Speaker 1:I love building things. I love exploring, learning, creating. I think you have to be in this space to some extent.
Speaker 2:My understanding is that you, you still like to prototype and build products, like software products. Is that true?
Speaker 1:That's 100% true.
Speaker 2:Tell me about it. Why? You're the CEO of a company of, what, 200-something people?
Speaker 1:We're about 300 people now.
Speaker 2:Okay.
Speaker 1:That's a good question. Why? There are selfish reasons and there are professional reasons. Part of the selfish reason is it scratches an itch for me, one that does not include a bird, Nick, just in case you're wondering. I've got a tremendous amount of technical curiosity. I want to know how things work and what's possible, and there's no better way to do that than actually trying it out. And it comes in waves and phases. We were at a point at the start of last year where we had some sense of direction with our AI strategy, but we really needed a very crisp, clear vision for it. And that's not something you can really hire in, especially in a space like ours, where the domain knowledge is so deep and touches so many areas; it's a hard space in which to have domain knowledge that spans everything you need to really understand it. So I took it on myself to deep dive into the world of AI capabilities and models, to explore what type of product offering we could make. I went through a period of about 40 days of exploring a bunch of different tools, technologies, and capabilities, and emerged from that with a pretty clear product strategy for what our next-generation platform is going to be. We're now going to market with that. Super fun journey, and I'm still very much involved in the direction of our product and how we do things.
Speaker 2:That's cool. I think if you're not using the tools these days and learning what the state of the art looks like, in any role, you're going to miss out on some things. So you've spent the last couple of decades building robots and getting them connected together. From your perspective, what's been the biggest change in how experiments, how science, is done?
Speaker 1:It's a good question. There's a lot of things that are similar. I mean, sometimes I look at our space and I'm like, wow, it moves really fast.
Speaker 2:Yeah.
Speaker 1:And sometimes I look at our space in certain dimensions and I think it moves incredibly slow. So it's a weird dichotomy, how it can be both at the same time. Certainly back in '99 I was using large-scale automation to perform experimentation, so a lot of the same concepts were in place; you had to automate these experiments. But I think some of the biggest changes we've seen recently are really around making it more accessible to the scientists directly. There's an area where we have automation engineers, specialists who know these machines in detail. One of the big step changes, I think, was the introduction of collaborative robots into the space. For those newer to the space, it may not seem like much, but being close up to the science, seeing what's happening and how it works, being able to work with your instruments is pretty critical from my perspective. You need to make sure machines are working correctly. So collaborative robots were a big step change, and that happened probably eight years ago or so.
Speaker 2:Can you define, like, what does that look like? How do you collaborate with the robot really?
Speaker 1:It's called collaborative, but what it really means is it won't hurt you. Okay. It's not like, hey, I'll hold this and you now use your hand to do that; it's not collaboration in that sense of the word. It really means you can coexist without any risk of danger.
Speaker 2:Okay.
Speaker 1:And so these robots have things like force feedback mechanisms on axes and can sense with very high speed and response time where there's any resistance. Am I touching something? Am I hitting something?
Speaker 1:There's a whole standard and spec for what collaborative robot means. So for instance, there's a max speed you can move at, so that regardless of the mass of the thing, there's a maximum force that can be exerted on an individual.
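The speed cap Ira describes can be illustrated with a toy calculation. The energy limit, the masses, and the simple kinetic-energy model below are illustrative assumptions for the sketch, not the actual collaborative-robot standard's formulas:

```python
# Toy model: cap robot speed so worst-case transfer energy stays below
# a limit, regardless of payload mass (all numbers are illustrative).
import math

def max_safe_speed(moving_mass_kg: float, energy_limit_j: float) -> float:
    """Speed cap (m/s) such that kinetic energy 0.5*m*v^2 <= energy_limit_j."""
    return math.sqrt(2 * energy_limit_j / moving_mass_kg)

# A heavier arm-plus-payload must move slower to stay under the same limit.
light = max_safe_speed(moving_mass_kg=5.0, energy_limit_j=2.0)
heavy = max_safe_speed(moving_mass_kg=20.0, energy_limit_j=2.0)
```

The point of the cap shows up in the numbers: quadrupling the moving mass halves the permitted speed, so the worst-case impact energy on a person stays bounded.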
Speaker 2:Okay.
Speaker 1:So this is really going from the old-school Stäubli, Denso, KUKA large industrial six-axis arms to what you see now, which is more like the Brooks Precise SCARA arm. And I think most robots being put in the marketplace today, especially those that are physical AI coexisting with people, are collaborative robots that incorporate sensing of some form, whether it's perception or force feedback on joints, passive sensing that lets them coexist with people. And so as we put automation into labs, we want that to be the case. We want to coexist with science, because that's where the science happens. I think they're still emerging. This is an incredibly unique and interesting time where there's a very palpable pull for increased amounts of automation as AI and the quantity of hypotheses increase. Right? How do you get that validation and test data? Speed, cost, and efficiency of execution of science has always been a challenge in the pharma space.
Speaker 2:Yeah.
Speaker 1:But I think that's now, you know, 100x.
Speaker 2:I mean, we now have tools that can do endless hypothesis generation. It's amazing, it's incredible: recursive literature search, automated data analysis, and you can combine those with computational tools for protein modeling and protein design. That's pretty exciting. But in the end, science is done in the real world. Right? There's always been this gap between hypothesis and lab-validated execution. If we don't commensurately build the tools for execution to stay in line with the hypotheses, then we're just creating. The term I've been using for this.
Speaker 1:Is hypotheslop: a proliferation of hypotheses that are unvalidated.
Speaker 2:Yeah, yeah, exactly.
Speaker 1:And I think we're in a time, too, where the hypotheses can be more advanced than they were historically, where we can get very prescriptive with a set of tests to get to a surgical-style question. Whereas in the past there was a lot more of, the analogy I use from the early days of screening: high throughput screening is effectively the shotgun strategy to drug discovery. Load up your screening tool with all the molecules, shoot them at the wall, and see which ones fall out. I think we're moving toward more of a sniper mode than a shotgun mode, where you can be very prescriptive about the specific task. And this is really changing how we'll think about automation, because instead of scaling up to run 1,000 of the same thing, you're going to scale up to run 1,000 very prescriptive, unique conditions to build a very rich data set. And those are inherently more complicated experiments to set up in automation. So how do we bring the tools that let that happen very easily and instantaneously?
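Running "1,000 very prescriptive, unique conditions" amounts to enumerating a design space rather than replicating one protocol. A minimal sketch of what that enumeration looks like; the parameter names and values are made-up placeholders, not anything from HighRes:

```python
# Enumerate a full-factorial set of unique experimental conditions
# (parameter names and values are hypothetical placeholders).
from itertools import product

params = {
    "compound": ["cmpd-A", "cmpd-B", "cmpd-C"],
    "concentration_um": [0.1, 1.0, 10.0],
    "incubation_h": [4, 24],
}

# Each combination becomes one distinct run the scheduler must plan for.
conditions = [dict(zip(params, combo)) for combo in product(*params.values())]
# 3 compounds * 3 concentrations * 2 incubation times = 18 unique conditions.
```

The scheduling consequence is the interesting part: 18 replicates of one protocol are one setup, while these 18 distinct conditions each need their own reagent volumes, labware, and timing.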
Speaker 2:And so in a way you go from high throughput screening, which is, like you said, a shotgun approach, to using that same infrastructure for very specific hypotheses, for molecules that you maybe have designed in silico, and then testing lots of good hypotheses rather than a bunch of random designs.
Speaker 1:Exactly, exactly.
Speaker 2:Yeah, yeah, I think that makes a lot of sense. So lab automation has gotten fairly ubiquitous. I mean, it's still a developing field, but what's holding it back? What is the bottleneck for us being able to do massively more experiments right now?
Speaker 1:You know, when I think about it, I think right now, and this is actually. I'm going to pose a counter question first to you, Nick.
Speaker 2:Yeah, yeah.
Speaker 1:This would be fun. Define lab automation for me.
Speaker 2:Oh man. Well, historically I think about lab automation, the classical term, as what people refer to as just lab robotics. It could be single-instrument liquid handling devices; it could be multi-instrument orchestration systems. And then there's all the infrastructure to support that, from LIMS and ELNs to the ability to integrate with inventory management, those types of things. That's how I'd have talked about it historically.
Speaker 1:I'm going to give you a solid A on that test, Nick. Well done.
Speaker 2:That's sweet, man.
Speaker 1:Yeah, well done. I think a lot of folks, especially being HighRes, we're in the automated system space. We also make devices; we're now controlling benchtop workflows. But there are usually two camps. People hear lab automation and think of a complex liquid handler; that's kind of step one into lab automation. Step two is probably a robotic work cell that can do things, and obviously that can scale to many work cells, interconnected, hyperconnected, with connected transport mechanisms and autonomous mobile robots. But an important concept, as we think about your question, is that science doesn't stop at the reach of the robot. It really is both the physical and the digital that can be automated: not just the physical pieces you see, the equipment, machinery, or robots, but the planning, the resourcing, the supply chain management. Running these quantities of tests isn't only a technical problem; there are just as many problems in the world of operations. How do you plan and manage resources, purchase the materials needed, plan appropriately, and schedule people to prepare the cells in time for experiment X? There are just as many logistics challenges as technical challenges in executing the science. I think that's a capability set that's frequently overlooked and tends to be put on lab operations teams, so you need those tools as well to ever get to really autonomous labs. I think the other bit you really need is translation of scientific intent into confident execution. Today, someone has an idea for an assay, and it's going to take weeks to months to translate it into something that can run in an automated fashion. So what can we do to take that from weeks or months down to minutes?
And that's an area where we're looking at applying technology: how we leverage what we've done for 20 years, our experience in translating intent into execution that runs confidently and gets the result, and how we package that experience into our tool set so that our end users can do the same thing with very high velocity and very high confidence. I think that's going to be one of the biggest changes and improvements we can see in the space, and it's going to be required to support the increase in hypotheses we're going to see.
Speaker 2:Yeah, 100%. I think the goal is to reduce the cycle time of individual experiments and the number of iterations. We're working on those types of things as well, and on integrations into these systems. It's an exciting time, because you've now got a lot of the infrastructure that had to be built up over the last couple of decades, which obviously includes things you guys are doing with integration, work cells, and being able to map out and run multi-instrument workflows, plus the growth of the digitization of science, which is a huge precursor. I think there are still some areas where that needs to be improved, but it's laid the groundwork and the framework, and now we're at a really interesting opportunity for science, in terms of ways we can massively speed things up. But one of the challenges is that early-stage research is so chaotic that it's hard to systematize, and I'm curious what your thoughts are on that. Would you say that in the past a lot of lab automation was geared toward manufacturing types of applications, and how do we get toward something that's a little more chaotic, like early-stage R&D?
Speaker 1:I think earlier lab automation had elements of manufacturing, but was actually pretty early stage in the R and D pipeline.
Speaker 2:Okay.
Speaker 1:If you think about back in the day where high throughput screening fit into drug discovery, for instance: before that you had target ID and target validation, which are things that still exist. You have a hypothesis on a disease or a disease model, and now you say, how can you create a drug to change that mechanism? So you've got to find some biological interaction point which lets you modify or modulate the behavior of that system, whether it's a protein receptor on the surface of a cell that, if you can bind to it, changes downstream cascading effects, or whatever the target strategy is. That was a whole space. And then ultimately you got to: okay, I know the disease model, now we can make an assay that evaluates molecules against that disease model. That's still very early-stage R&D; one of the first steps in a drug program is finding leads that you can then use to explore and interrogate that disease model. But that's where you applied manufacturing techniques, scaling up testing that had been done in very low volume. Those were really a refined assay that you could now run over a very large number of molecules. I think the need is changing now: how do you bring tools into that earlier space of target ID and target validation effectively? How can you automate all the steps involved in the process, from concept to evaluation to refinement of molecules? And certainly there's a lot that goes on in those cycles, from molecular optimization to ADME/Tox and DMPK. Okay, it works in this isolated model;
how does it work in actual creatures and animals and people? We've learned a lot from moving from HTS to those other areas, which tend to be lower-throughput, more bespoke, craft, challenging assays, and from applying automation to other areas such as genomics, genetics, bio, and agriculture. Through seeing a tremendously diverse set of applications, we've learned how to make this as maximally adaptable to any scenario as possible.
Speaker 2:Okay, makes sense.
Speaker 1:And I think one of the other key pieces that will unlock the earlier stages is: how do you have a co-scientist that can enable someone to ask questions and actually run those tests very easily? The support, the exploration piece. Providing devices, tools, processes, and software that enable that process is going to be pretty massive. So: permutations on what automation looks like, whether it's a device with software driving it or fully automated platforms.
Speaker 2:Yeah.
Speaker 1:And so I think automation kind of in that new definition can cover both of those use cases.
Speaker 2:Where do you fall like on the autonomous science spectrum? I mean like, do you think we're driving really toward autonomous systems where it's, you know, end to end, closed loop discovery. So what's the current state and like what does it take to get there?
Speaker 1:I think you got to think about it and if you say autonomous, I think you should really think about really autonomous.
Speaker 2:Yeah.
Speaker 1:And I think the max definition of autonomous would be zero humans. Right? But there is so much required to operate a lab that we're very far from fully autonomous labs. All of these systems require consumables. You need to order tips from a provider, receive the box, unpackage them, take them out of plastic, put them in a transport, and load them into the system. You need to prepare reagents and cells just in time. And if you want a very broad-purpose autonomous lab, you need to be able to handle anything. Right? So you can't have a vending machine of all your reagents, because there's an infinite number of reagents, and they could be in different conditions and require different preparation techniques, and they need to be in the dark, or shaken, or whatever. I think we're pretty far from a fully autonomous general-purpose lab.
Speaker 2:Yeah.
Speaker 1:If we were to set that as the true high bar of what an autonomous lab is. So I think we'll go from what we have now, work chunks, you know, run this block of assays, to more connected work cells that can do more independently, but I think it will be within a constrained universe of scientific space.
Speaker 2:Yeah, yeah.
Speaker 1:There is just the practical planning and execution. And so I think we'll learn to automate the things like experiment planning, resource planning. How do we calculate what we need from supply chain, prepare that in time at the start of this. How do we automate the kind of receipt of material goods and prepare them to be loaded into automation platforms? And so the autonomous lab problem isn't just about, like, how you orchestrate a series of instruments to do any experiment. It takes a lot more to run a lab than that. And so I think there's going to be a whole slew of challenges and problems in that side of the world as well.
Speaker 2:Some of the most interesting problems to me right now are: where do you need a human in the loop, and what are the necessary components of that? Where can you drive automation? For now, we're kind of the taste generators; we decide which directions are right and which are well aligned with strategy. Maybe that stuff will get more and more automated over time. I mean, it will for sure; it's just a question of what that timeline looks like. Do you see the way science is being done right now changing in and of itself, or is it mostly just the speed and the scale?
Speaker 1:I think science is definitely changing. In certain areas you've got to look to evolve science in a way that makes it more efficient and automatable. A good example would be to compare and contrast small molecule development versus large molecule. If you look at what can be done in small molecule, the space of molecular synthesis is far more challenging for small molecules than it is for large molecules.
Speaker 2:Okay.
Speaker 1:Just the types of reactions and environments require a lot of equipment that isn't, at this point today, fully automatable, especially if you want to cover the whole molecular universe. Right? As soon as you start putting constraints on, you start saying, oh, you'll never be able to synthesize this or that, just from machine limitations. Whereas on the large molecule side, we have a fantastic factory in the human cell that we can program to produce effectively whatever we want. So on the large molecule side, the cells become those synthesizers.
Speaker 2:Right.
Speaker 1:We can load the instruction set and they synthesize whatever molecule we want, which is far more automatable. So looking at the opportunity space in large molecule, and knowing that it's much easier to make whatever you want, for the most part it becomes a more attractive space. I think science in some ways will go toward where you have the ability to execute.
Speaker 2:Right.
Speaker 1:If you can be a lot more efficient there, then focus energy there, because you can be faster and more cost effective. So that's an example of how the direction of science moves somewhat toward the ways you can move in and what can be done with efficiency.
Speaker 2:Yeah, that makes sense. Is there anybody that you can point to and you're like wow, this organization is really cooking and they're doing things differently that, like that you're excited about right now.
Speaker 1:I mean, for me, we get the privilege of going on the journey with incredibly bright, forward-looking companies who work with us to help craft that future, many of which I probably can't talk about.
Speaker 2:Yeah.
Speaker 1:But there certainly are some that I can, and a great example is probably the most advanced and capable techbio company, Recursion, and what they've built. If you want to talk about a lab that's built to embrace, adopt, and leverage automation to drive speed and efficiency, and the ability to generate massive amounts of data to build models that inform how cells behave in the context of certain drugs, they're one of the most prolific labs I've seen from a deployment footprint, but really more from a productivity perspective. I see tiers of automation users, and then there are those that think about it like a factory, with a full operations mindset: optimization, continuous improvement. They've always been ahead of the curve, pushing the envelope of what can actually be automated, and in many ways they're probably one of the closest to pushing the needle on the autonomous lab concept. Their systems are really built around characterizing molecular interactions with cells and imaging the cells to understand multi-parameter behavior and responses. And I think we're starting to see more folks going into that closed-loop design, make, test, analyze, where you can actually synthesize things, whether it's a small molecule in a complex synthesis mechanism. You see places doing more intelligent chemistry that fits into automation better. I don't know if you've ever heard of click chemistry, for instance? It's a lot more automation accessible.
Speaker 2:Right.
Speaker 1:It's just a series of reactions that can be run on any machine, so it becomes more programmable. And I certainly think we see a lot of folks in the large molecule space who can design and produce, synthesize if you will: transfect cells, express proteins, purify, and then characterize. And then be able to run that like, hey, give me a small amount to produce first so I can do a quick characterization, and then, if it's interesting, scale up the production volume of this interesting protein so I can deeply interrogate it and run a panel of assays, versus just having a small amount of protein. So: smarter system designs, where I can do a faster, cheaper first-pass iteration with a small amount and then expand out if something becomes interesting. We see a lot of companies in the biopharma space certainly moving in that direction.
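The closed-loop design-make-test-analyze pattern described here can be sketched as a simple control loop. Every step below is a stub: in a real lab, the scoring function would be an assay run, and the "variants" would come from an in-silico design tool rather than a placeholder string mutation.

```python
# Conceptual design-make-test-analyze (DMTA) loop; all steps are stubs.
def dmta_loop(initial_candidates, score_fn, rounds=3, keep=2):
    """Each round: test and analyze all candidates, keep the leaders,
    then 'design and make' new variants derived from those leaders."""
    candidates = list(initial_candidates)
    for _ in range(rounds):
        # Test & analyze: score every candidate (stands in for assays).
        scored = sorted(candidates, key=score_fn, reverse=True)
        best = scored[:keep]
        # Design & make: derive new variants from the leaders
        # (placeholder mutation; a real loop would call a design model).
        variants = [c + "*" for c in best]
        candidates = best + variants
    return max(candidates, key=score_fn)

# Toy scoring: longer names score higher (placeholder for an assay readout).
winner = dmta_loop(["a", "bb"], score_fn=len)
```

The design choice that matters is the loop itself: each round's test results feed the next round's designs, instead of screening one fixed library up front, which is exactly the "small amount first, scale up if interesting" strategy Ira describes.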
Speaker 2:Yeah, your job's pretty cool. You get to see all the cutting edge stuff, huh?
Speaker 1:I love it, man. It's a very hard, demanding job. In integration and lab automation, no matter what, as the integrator you own everything, because you tie everything together. So if someone brings their device and it doesn't work, the system doesn't work, and then it's our job to make sure the whole thing works together. So we bear a lot of responsibility, and we have to be good at planning for and managing that. But I absolutely love my job. I love learning about all the ways science is evolving to improve therapeutics, treatments, and cures for diseases. Just the opportunity to help move that forward, I get jazzed every morning to see what kind of impact we can have.
Speaker 2:Yeah, yeah. That's awesome. So, I mean, we were both at NVIDIA's GTC. You had a talk there, a really cool talk. I want to focus on AI. One, how are you guys using it inside the organization at HighRes, and what does your strategy look like? Because I know it's pretty expansive; it covers many of the topics you've talked about. Can you give a high-level summary of the 15-minute talk you gave at GTC?
Speaker 1:Well, first of all, Nick, you should know that I'm actually a hologram, and this is an AI representing Ira.
Speaker 2:Yeah, I actually made it.
Speaker 1:Yeah, exactly: AI Ira hanging out on the podcast. I'm a firm believer that you've got to use the tools to know them and to know how you can apply them. So at HighRes, super early on, it was: we've got to adopt and embrace this technology for everything we can, but we have to do it in a way that brings value, and we have to know how to do it. There's a lot you have to learn to become productive with these kinds of tools. You can give Claude Code to two different people and get wildly different outcomes. It's the tool, in conjunction with a very skilled operator, that leads to the most impressive results. So how do we build skilled operators? That means hands-on lessons in what works and what doesn't, creating the right tool chains for our teams to make it easy to explore, and then, as we find things that increase productivity, sharing those lessons with others. How do we codify that, and how do we convert those into tools? We look at every area of the organization. For instance, we built out and launched some new products at SLAS, like Lab Assistant, which is a natural-language conversational partner that can interface with your systems and has full comprehension of all of our knowledge; we've codified our knowledge base into a knowledge system. That knowledge isn't just useful for our customers, it's incredibly useful for us. If we think about the kinds of tasks we do at HighRes, whether it's helping a customer analyze their science and translate it to a design, that tool is massively powerful. If we need to analyze an RFP, we can look at prior art, prior designs, and previous protocols. And now, with our Lab Designer tool, we can actually make 3D renders and graphics of these configurations.
Speaker 2:It's so cool. Yeah, yeah.
Speaker 1:You know, with being able to understand the science and to design and execute those protocols in simulation, this whole tool chain is very much like drug discovery. We evaluate a design against some scientific performance target: design, make, test, analyze a solution, an automation process, a configuration against the problem statement, and then quantitatively improve it over time. So we're taking what we deliver to our customers for that purpose and asking how we can do the exact same thing for the operations we run and how we think about things. And there's no better way to know your customer than to live in their shoes and understand their challenges. Obviously we have a twist on it, but it's the same user-experience elements: the things that make you productive will make our users productive. So I encourage folks at HighRes: use it wherever you can, and those who figure out great tricks and productivity tools, share them. And then, how do we make tools that are accessible to everyone?
Speaker 2:We did some proof of concept of taking a protocol, you know, that was designed in potato, an optimized protocol. And one of the things that struck me when we looked at some integration opportunities was the digital twin concept in there. Can you tell me more? How are you thinking about digital twins for the lab, both from an execution perspective and, like, can we go more toward simulating these experiments, or are the digital twins really just...
Speaker 1:I don't know.
Speaker 2:Tell, tell me how you think about this.
Speaker 1:Yes. I mean, digital twin is similar to the term lab automation. If you say digital twin, I'm sure a thousand people will have a thousand different pictures in their head as to what that means.
Speaker 2:Yeah, like a world model. Right. But sorry, go on.
Speaker 1:And then, you know, I think there's the version of it which is a visual representation of the world that lets you see the state of the world in silico.
Speaker 2:Yeah.
Speaker 1:You know, and there's certainly value in that dimension. Others will think of it as the data set that corresponds to the state of the world: the temperature of your incubator over a period of time, for example. Every signal you'd have in the physical space, you can have available in the digital world as well. And there are different use cases and values for each of those modes. I'm sure you've been in a bunch of labs, Nick. Pharma companies are massive; there are hundreds of labs with thousands of instruments. And as we think about the cognitive tasks we put on our scientists to operate in a larger scientific space: how do you understand the space? How do you know where things are? I'm a firm believer that if you can make the tools look like the world, then it's an intuitive software application. You don't have to translate "Device X is this"; you're looking at it, and this is actually this. And as we get better at things like augmented reality, we start to overlay the visuals on the real world. So having this digital twin, I think, is a fundamental, necessary step to actually achieve what I think those next phases are going to look like.
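The "data set" view of a digital twin described here can be sketched minimally as a store of timestamped signal series you can query in silico. This is an illustrative sketch only; the class and signal names are hypothetical, not HighRes APIs.

```python
from collections import defaultdict

class DigitalTwin:
    """Hypothetical sketch: mirror every physical signal as a timestamped series."""

    def __init__(self):
        self._series = defaultdict(list)  # signal name -> [(timestamp, value)]

    def record(self, signal, timestamp, value):
        # Ingest one reading from the physical world.
        self._series[signal].append((timestamp, value))

    def latest(self, signal):
        # Current state of the world for one signal.
        return self._series[signal][-1][1]

    def history(self, signal, start, end):
        # Everything that happened over a window, for later analysis.
        return [(t, v) for t, v in self._series[signal] if start <= t <= end]

twin = DigitalTwin()
twin.record("incubator_1.temp_c", 0, 36.9)   # invented signal name and values
twin.record("incubator_1.temp_c", 60, 37.1)
print(twin.latest("incubator_1.temp_c"))          # -> 37.1
print(twin.history("incubator_1.temp_c", 0, 30))  # -> [(0, 36.9)]
```

The same store can back either "mode" of twin: a 3D visual reads `latest` for rendering, while analytics read `history`.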
Speaker 2:Yeah, I think just going from one step of a workflow to the next, from one instrument to another, you have to be aware of where things are: what constraints you have in terms of the equipment, what constraints you have in terms of reagents. There are a lot of components to this. It's not just coming up with a protocol; it's being able to make agents aware of the possibilities and the constraints they can work within, so that they can navigate these kinds of world models that map to your physical reality. Right.
Speaker 1:And especially if you want to enable more scientists. Because instead of having to train for six months to know where something is or how this machine works, agents can learn this information and make it easier for a person using this for the first time.
Speaker 2:Yeah.
Speaker 1:Oh, here's where you load this. You need columns in your LC-MS for it to run, and here's the one you use for this. And so the more intuitive we can make the technology and the tools for the scientists, the faster and more autonomous they can be. And when I say scientists here, that's whether it's agentic or a person.
Speaker 2:Yeah.
Speaker 1:There is, in my mind, a tremendous amount of value in having an intuitive toolset, and I think a 3D visual really does a good job of that. In terms of the in silico side, can you actually model things? We absolutely can, from a throughput and performance perspective, but not necessarily from a scientific perspective. Right. You still have to put the liquid in a well and measure something. But for us, we do this every day: we have to design a system that will produce 10,000 antibodies in a week. We can do all of that in silico, because we know what the experiment is and we know the durations of the operations. So we can model the process, model the workflow, and create a system. Does it fit in the space? Now let's iterate on that to optimize performance or test permutations of it. That's useful for us. It's also useful for our customers. In the world of scientists, if they want to explore scale-up of a concept: what if I wanted to do this, how would I do it, and what would I need? These companies should be asking these questions every day; the way they work and what they think about changes all the time. Any business needs to evaluate new opportunities with a comprehensive, thoughtful plan to determine whether it's feasible and what it would look like. So I see these as tools that let us model, but also let those customers, those scientists, ask: how do I scale up sequencing to this level? What if I have to grow this number of cells? How do I think about this, and how could I model it so I can have confidence going in with a plan?
Speaker 2:Yeah.
Speaker 1:So I think it becomes an incredibly valuable tool chain for anyone, you know, looking to evolve how they do science.
Speaker 2:That's really interesting. I'm excited. I think the future is looking bright, man. The reality is, biology is always going to be, I think, personally, an empirical science. And it's just a question of how good you can make the things going in, and then how fast you can test those things.
Speaker 1:I think a key thing there too, Nick, and this has been a challenge science has struggled with for a while, is not just doing them and doing them fast, but doing them reproducibly.
Speaker 2:Yes. Oh, gosh, please.
Speaker 1:Yeah.
Speaker 2:Talk more about it.
Speaker 1:I've been in the lab where, like, we get results in Pennsylvania, but we're getting a completely different result in Seattle.
Speaker 2:Yeah.
Speaker 1:And why? There's just an infinite number of factors that impact the way some of these experiments run. Like, we had this one problem in our screening lab where we couldn't replicate a result, and it turned out it happened every spring. That's when mulch was applied, and a small amount of spores actually leaked into the lab space and contaminated cells.
Speaker 2:Yeah.
Speaker 1:And like, you know, there's just so many ways at which you can have variation in the execution of science.
Speaker 2:This reminds me of a story from my PhD. I was working on brain slices, doing electrophysiology recordings from them under a microscope. And I had this, like, aberration where it just wasn't working, and I spent three weeks trying to figure out what was going on. It was the exact same experiment. I was putting duct tape and trash bags over the vents at the NIH because I couldn't figure it out. There are so many variables. I guess science is hard when you're doing it as one person, and even when you're doing it with robotics. Right.
Speaker 1:I don't have the silver bullet for that one, Nick.
Speaker 2:You know, I think I respect my.
Speaker 1:PhD, but there are a lot of things you can do to combat it. And certainly for us, like, the way I think about it is, like, with anything, the more context you have about how something was done, the more you can understand the situation in which that science was executed. The timing of operations, the temperature, the time of day, sunlight. Like, what, what contextual data can you use to help reduce the dimensions by which you can have, you know, variability in execution?
Speaker 2:Yeah.
Speaker 1:And so the more we can make that easy to define, capture, and include with the actual results, the more smart analytics tools can evaluate: are these apples-to-apples results, or are these fundamentally different things?
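The idea of capturing execution context alongside results so analytics can judge comparability can be sketched very simply: store context with each result, then diff the contexts of two runs. The schema and field names below are hypothetical illustrations, not an actual HighRes data model.

```python
def run_record(result, **context):
    """Hypothetical sketch: bundle a measured result with its execution context."""
    return {"result": result, "context": context}

def context_diff(a, b):
    """Which contextual factors differed between two runs?"""
    keys = set(a["context"]) | set(b["context"])
    return {k for k in keys if a["context"].get(k) != b["context"].get(k)}

# Invented example: same assay, two sites, divergent results.
run_pa = run_record(0.82, site="Pennsylvania", incubator_temp_c=37.0, season="spring")
run_se = run_record(0.41, site="Seattle", incubator_temp_c=36.5, season="spring")

print(context_diff(run_pa, run_se))  # factors to investigate before comparing results
```

An analytics layer could treat an empty diff as "apples to apples" and a non-empty one as a list of variables to rule out first.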
Speaker 2:When I started this podcast back in 2020, we talked a lot about scientific careers, and it's always been about what the future of science looks like from a training perspective, from lots of different angles. And I think it's been really helpful to get people's advice on kind of navigating a career in science. And you've been doing that for a number of decades. You have a really good reputation in this industry, I would say, and I kind of witnessed that firsthand. You look kind of scary, Ira. I mean, you look like you could get in a fight, but you're the nicest person on the face of the earth. You were very deferential to the people on your team in just, like, booking a room, and I was surprised by that. So can you tell me a little bit about what your management philosophy looks like? What are the things that are important to you as a person navigating a career in science?
Speaker 1:There was a lot in there to unpack, Nick. And I'm gonna address the. You know that you look intimidating, like you can get in fights. I did play football. My natural resting face is like, I've, like a fur brow. Like, I'm all, like, always thinking about something, and that leaves me in this mode. And it's funny, a lot of people, like, will see me like that and, like, I'm not going to talk to them. And once people get to know me, like, that's nothing. You know, like you said, it's nothing like who I am. I think for me, a fundamental first tenant is you have to have fun. You know, we work really hard and what we do is really hard. And, you know, people spend a lot of time in their careers. Like, probably the majority of. If you think about the hours you spend or how you allocate your time in your life, most of it's probably in your career and what you do well. And, you know, I'm a firm believer. And you've got to enjoy the ride. And if we're trying to build an organization to get people to do hard things, we've got to make it as enjoyable, fun, and exciting as possible.
Speaker 2:Yeah.
Speaker 1:And in my mind, so much of that comes down to creating that environment, really comes down to personal connections, relationships. You know, you've got to be able to connect with people because that's. That's what builds rapport, that's what builds teamwork, that builds camaraderie, and that's such a fundamental first for me. And so, like, if you can establish that you can get through anything together as a group or as a team, you know, you just have that foundational experience. You know, I always talk about with my teams like, there are no passengers on this ride. Right. And, you know, I think that's a key part to who we are as an organization, is that everyone's, you knows their role. And, like, if we have anyone can have an idea on how to make us better, that can come from anywhere. There is no, you know, check your egos at the door. Let's make sure that we're coming at this with open minds and that really, anyone at the organization can come up with the next great idea. And so I'd have a context environment where that is true. And those are those kind of fun, creative, open spaces. Yeah.
Speaker 2:Yeah. That's awesome. Do you have any advice for people that are early in their career right now in. In science?
Speaker 1:I. I do. I. I would say find. Find your passion. You can think you like it, but you've gotta. You, You. You should do what you love, and you've gotta find what you love. And if you do, you'll never work a day in your life because you're. You're, you know, you're doing what you enjoy. And for me, it's, you know, working with people applying technology, science, math, physics to making the world a better place. I couldn't imagine a better thing. And so that's why I felt like, yeah, I worked incredibly hard, but I absolutely love what I do. It's a blast. And that would be the most important thing. That's what I tell my kids. You've got to find what you love. Find your passion. That's really, I think, the secret to success.
Speaker 2:I love it. Ira, really appreciate this, man. It's been fun. I appreciate you sharing your thoughts.
Speaker 1:Yeah, thanks for having me on, Nick. I really appreciate it. It is always a blast to chat with you.
Speaker 2:This is the once a Scientist Podcast. I'm Nick Edwards. We're back with new episodes, so keep an eye out and subscribe to the podcast if you haven't already.