Trends from the Trenches
Episode 41: Sonia Timberlake on Agentic AI for Biotech Coders
AI is now good enough to change how computational biology teams actually work, but most companies are still adopting it like it's 2023. Sonia Timberlake, R&D strategy consultant at Timberlake & MacIsaac Biopharma Consulting, speaks with host Eleanor Howe about what's real: agentic coding, high-throughput data workflows, and the practical limits that still slow teams down. Timberlake digs into benchmarks for capabilities and end-to-end tasks, as well as multimodal chart understanding, source verification, and where human review remains non-negotiable. The conversation also looks beyond AI to what biopharma risks missing, why novel targets still matter, and where investment interest is clustering right now. Plus, tune in for a preview of Timberlake's workshop at the Bio-IT World Conference & Expo in Boston next month!
If this helped you think more clearly about AI in drug development and computational biology, subscribe, share the show with a colleague, and leave a review so more builders can find the conversation.
Links from this episode:
Workshop: AI Upskilling for Computational Biology Teams
Bio-IT World
BioTeam
Diamond Age Data Science
Bio-IT World’s Trends from the Trenches podcast delivers your insider’s look at the science, technology, and executive trends driving the life sciences through conversations with industry leaders.
Meet Sonia And The Big Shift
Eleanor Howe: Hello there, everyone. Welcome to Trends from the Trenches. Joining us today is Sonia Timberlake, a biotech executive who advises companies on research and analytics strategies for developing novel therapeutics. Sonia has spent 13 years building biotechs. She was formerly head of research at Finch Therapeutics, where she built the discovery platform and led pipeline strategy. Before that, she was head of data science at Juno Therapeutics. Sonia, welcome to the trenches. Do you want to tell us a little bit about what you do now?
Sonia Timberlake: Yeah. About three and a half years ago, I started a consulting company focused on how to leverage high-throughput data for R&D, which is what I did in biotech. During that time, I've continued to work as an operator at a cell therapy company, but I've also advised in SaaS, worked in venture creation, and worked across multiple biotechs and VCs. That's been a really fun, fantastic learning experience: seeing how different teams and different industries approach problems, and the diversity of those approaches. I've spent a lot of my time in the last year thinking about the frontier of generative AI models and specifically how we can bring them into drug development. And, full disclosure, I'm totally AI-pilled, so everybody's going to get all the propaganda.
Why AI Suddenly Matters Now
Eleanor Howe: That sounds great. It's all your experience across these spaces that made me say, oh, I really want to interview Sonia. So thank you for coming. My first question for you, and I think you've already told us the answer: since this is Trends, what forces are you seeing impacting the companies you're working with now, compared to a year ago?
Sonia Timberlake: Of course, we have to mention the backdrop here: we still have a lot of headwinds in biopharma, from a lot of different angles. But that's cyclical, and we're going to come out of it. Compared to a year ago, the really hopeful and exciting thing I see is how impactful AI is in our sphere. A year ago you could say with a straight face, hey, I don't really think AI is impacting drug development or computational biology; I just don't think the capabilities are there yet. I really don't think you can say that with a straight face today, or in the last four to six months.
Agentic Coding In Bioinformatics
Eleanor Howe: So let's dig into that a bit; I'd like to hear more. The first thing that leaps to mind is agentic coding models. How are you seeing them impact computational biology?
Sonia Timberlake: The capability is enormous, but the mean impact is still quite low in my experience. In mainstream software development, not scientific software, it's very common today for a company to say that 95% of its code is written by AI agents. I think it's the opposite for scientific codebases in biotech and pharma: it's probably more like 5%. And I've talked to a lot of my friends in pharma about this, not just my clients. So there's this huge discrepancy between the capabilities today and the adoption. That's common with any new technology; it has to diffuse through society. Specifically, for pipelines, if you're doing RNA-seq or something that's well documented, sort of in the canon: definitely no human should be writing that. There are tons of training examples and really good verification. The domain expertise doesn't come from writing the pipeline, right? It comes from curating the inputs and interpreting the outputs. There's still a big role for humans there, and there's a big gap between the AI capabilities and the adoption. But for coding in general, even for exploratory data analysis, for producing figures, for understanding figures, I think the capabilities are there. And I think that's reflected in the benchmarks, if not in everybody's usage today.
Eleanor Howe: I want to ask you about benchmarks, but before I do: we've seen very similar things as far as the utility of coding agents. My team has found their work is sped up tremendously because they don't have to write these pipelines by hand; they get a big boost from their AI assistant, and productivity goes way up. It's really impressive, the changes we're seeing in the day-to-day work of a bioinformatician. Is that what you're seeing as well?
Sonia Timberlake: Yeah. I think about it both in terms of the efficiency of things you were already doing and your ability to do things you wouldn't have done at all. If I'm a single-person computational team at a biotech, two years ago I was probably a specialist in single-cell, or maybe bulk, or maybe spatial. But could I do protein modeling? No. Today, the ability to do that, at least at a basic level, is just there. So there are those two axes. And I have to say, I was impressed that Diamond Age was an early adopter. I still remember John Hutchinson's talk, probably over a year ago, where he was using agentic coding for really impressive zero-shot or one-shot applications. In one case it actually coded up the application in a language he didn't know, because that was the best language for the task, right? So there's this axis of efficiency and this axis of new capabilities. I'm excited to see Diamond Age picking the right problems and applying AI in the right way.
Benchmarks That Build Real Trust
Eleanor Howe: Yeah, I'm excited to see it too, as you might imagine. Can you tell me a bit about the benchmarks you were talking about? How do you use those to get meaningful information?
Sonia Timberlake: Yeah, I'm a big fan of benchmarks for AI. I want to distinguish between benchmarks for capabilities, for tasks, and for processes; those exist at different levels. Fundamentally, we always ask, how do I trust the output, or how do I trust this process? You have to have objective measures. They're not cheap to make, and they're a public good, so they suffer from all the issues that public goods tend to suffer from. But they're so critical, both for us all to keep hill-climbing on capabilities and to build trust in what we have today and encourage adoption. So what are the specific types of benchmarks? For bioinformatics, there are three I know of that are really relevant. There's BixBench, published by FutureHouse about a year ago; that was one of the first. Those are all human-curated: here is a paper, and we've verified that you can go get the data; and here is not just the tabular output but a figure from the paper, and here are the human-curated, peer-reviewed correct responses that you should be able to reproduce. Then there's CompBioBench, out of Genentech, published just three days ago: they put up a preprint and a whole GitHub repo for a new computational biology benchmark.
Sonia Timberlake: They then benchmark a bunch of the frontier models: with tools, without tools, allowed to think indefinitely or not, right? All sorts of different constraints, and they measure the performance. And it's all open source. So if you've built an agent with your own domain expertise at Diamond Age that obviously has a different set of capabilities than vanilla Claude, you can go benchmark it and show, in a third-party-verified, objective way, this is where we sit. There are also benchmarks for specific capabilities, for example ChartX or ChartQA: the LLM's ability to understand visually presented information, like what kind of trend did this chart show and what can I conclude from it. Those are nowhere near as good as the text understanding, for obvious reasons; that hasn't been the focus of training. But the multimodal models are getting much better. And if you think about the language scientists use to communicate, so often it's a chart or a figure, not even a p-value or an effect size. So many of the decisions we make as humans come from looking at charts and figures and interpreting them. So I think that's a key capability. But as I said, there are capabilities, tasks, and then processes. Tasks are much harder. CompBioBench is actually a great example of tasks: end-to-end, a pretty hard question that you would give to a PhD scientist, and they would go away and work for a month or so to code it up.
Sonia Timberlake: Okay, here's a spatial transcriptomics dataset. Can you analyze all the single-cell profiles and look at gene differences between treatment and control, for example? That's not trivial; it's not totally protocolized. There are many steps and gotchas. So end to end, how does it do on that test? Now, you can imagine that curating a set of ground-truth, positive-control responses to that is very expensive. And then there's the third category, processes, which I think we just don't have yet today, but that's where the field is going. A process would be: I generated this pre-IND document. It sources data from ten different teams, they've all curated it subjectively through their different processes, everybody's got a different thing, but somehow I trust that the team has put together something good and I'm going to submit it to the FDA. That's a whole different level, and AI for knowledge workers in general just isn't on that level. So that was a long answer, but I think benchmarks, and our understanding of them, are so important. The field is moving so fast that they change every few months, but it's so critical for us to understand the capabilities of the tools we're using.
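To make the capabilities-vs-tasks distinction concrete, here is a minimal sketch of what a task-style benchmark harness looks like in spirit: human-curated question/ground-truth pairs scored against a model's answers. The item format, the exact-match grader, and the toy "model" below are illustrative stand-ins, not the actual BixBench or CompBioBench formats.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchItem:
    question: str      # e.g. "Which gene is most upregulated in treatment vs control?"
    ground_truth: str  # human-curated, peer-reviewed answer

def run_benchmark(items: list[BenchItem], model_answer: Callable[[str], str]) -> float:
    """Return the fraction of items the model answers correctly (exact match)."""
    correct = sum(
        model_answer(item.question).strip().lower() == item.ground_truth.strip().lower()
        for item in items
    )
    return correct / len(items)

# Toy usage with a hard-coded stand-in "model":
items = [
    BenchItem("Which gene is most upregulated?", "GENE1"),
    BenchItem("How many clusters are in the data?", "7"),
]
score = run_benchmark(items, lambda q: "GENE1" if "gene" in q.lower() else "8")
print(score)  # 0.5
```

Real task benchmarks grade far richer outputs (figures, tables, multi-step analyses), which is exactly why curating their ground truth is so expensive.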
Eleanor Howe: And it's great to know that there are people out there professionally tracking those things, because it is overwhelming to try to keep up with the movement in the field right now. I agree with you completely. It's tremendous.
Sonia Timberlake: Yeah. And I really applaud Genentech in particular. Benchmarks are public goods, and that's usually the realm of academia, which can sometimes make them hard to apply in industry, because we weigh different things than academic research does. So to have an industry group put thought into it and put it out there for the community, I think, is really laudable.
Eleanor Howe: Yeah, agreed. Then what about other knowledge work? You mentioned regulatory a bit. What about the non-bioinformatics side of things? What are you seeing there?
Sonia Timberlake: There was this wave of adoption of, I'll get it to write my emails and search my Outlook, and stuff like that. Downstream of that, I think there are really two camps now. There's a set of people whose company said, you can't use LLMs until we spend nine months coming up with a company policy. You can't use ChatGPT, as if everybody isn't using it in their private browser anyway to do their work. So they're handcuffed, right? You're copying and pasting and trying to avoid putting in company information. And then there are the super-adopters, and it's remarkable to see them. The CSO I work with is not a coder, but I showed him some stuff about deterministic literature review and how to sandbox his work in Claude Code. Now he has these interactive HTML reports with drop-down menus; it's this huge clinical trial review, everything is source-verified, and it's this beautiful work product. And I know he doesn't have a lot of time to put into this. He's just gotten really good at leveraging AI systems for truly dependable knowledge work products.
Eleanor Howe: And this is all custom work that he did. It's not something that he bought from someone.
How To Vet AI Products
Sonia Timberlake: No, it's just Claude Code. I showed him, here's a bunch of skills, so Claude can use these to call tools: go to the ClinicalTrials.gov API or the PubMed API rather than just using the internet training corpus. And here's how, instead of using the chat box, to use one of the coding environments, which unlocks it to write an interactive HTML document for you.
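For a sense of what such a tool call looks like under the hood, here is a minimal sketch that builds search URLs for the two public APIs mentioned. It assumes the ClinicalTrials.gov v2 `studies` endpoint and the NCBI E-utilities `esearch` endpoint; the parameter choices are illustrative, and no network request is made here.

```python
from urllib.parse import urlencode

# Public endpoints a literature-review "skill" might wrap (assumed for illustration):
CTGOV_V2 = "https://clinicaltrials.gov/api/v2/studies"
PUBMED_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def ctgov_query_url(condition: str, page_size: int = 20) -> str:
    """Build a ClinicalTrials.gov v2 search URL for a condition, returning JSON."""
    params = {"query.cond": condition, "pageSize": page_size, "format": "json"}
    return f"{CTGOV_V2}?{urlencode(params)}"

def pubmed_search_url(term: str, retmax: int = 20) -> str:
    """Build a PubMed E-utilities esearch URL returning JSON."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{PUBMED_ESEARCH}?{urlencode(params)}"

print(ctgov_query_url("ulcerative colitis"))
print(pubmed_search_url("fecal microbiota transplant"))
```

The point of source verification is that every claim in the generated review can link back to a concrete record fetched from URLs like these, rather than to the model's training corpus.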
Eleanor Howe: So we can benchmark the tools that we're using, but what about tools that someone is trying to sell to us? How do you cut through the hype and evaluate these marketed products and services, and determine which of them have any value?
Sonia Timberlake: Yeah, there's so much out there, it can be exhausting, right? Everybody's got a new app, and it all looks good too, because AI is really good at making things look good. I do some of this for the VCs I work with; I diligence new biotechs, or sometimes a SaaS platform or an AI discovery platform. There's a set of questions we go to, and doing it in a non-confidential format can be difficult. In a non-confidential setting, I'll ask people: can you critique some other past models? Just looking at the founding team and their ability to think through and explain similar problems, that's sort of a baseline level, right? Can they explain why something was not technically sound, or why it was technically sound but didn't have a good moat against competitors? That's really common diligence: half the time you're diligencing the product, and half the time you're diligencing the founding team, because they're going to have to pivot. Between your seed and the time you get to the clinic, you're going to pivot six times.
Sonia Timberlake: So you really are investing in the team more than anything. Once we're under NDA, I'll ask them to walk me through: okay, if it's a foundation model, show me that logistic regression didn't work. Did you try that? It sounds simple, but it's a place to start. Or: what are other people's tools you could have built on? Or: if you went back six months with what you know now, how would you do things differently? A lot of this is evaluating the way they think about things and the way they attack problems. But at the end of the day, there's no free lunch. Cutting-edge AI and foundation models do rely on a lot of specialized training, and you sometimes have to dig into the technical bits with the founder. Not all VCs want to do that today, and I think it makes it really hard for them to distinguish between the veneer of AI shazam and what's going to be transformational.
Eleanor Howe: And for people who are thinking not about a startup but about a product that's already on the market, and who maybe don't have access to the founding team, do you have advice for those folks?
Sonia Timberlake: Yeah, what kind of products are you thinking of?
Eleanor Howe: Many products claiming all kinds of things, like, we will discover all of your new drug targets for you. There are companies that say things like that, and maybe they are doing it. But let's say I work for a pharma company: how do you evaluate this?
Sonia Timberlake: Yeah. My first question is, show me your benchmarks. Not to be repetitive, but a lot of people don't have an answer to that, right? They'll say, oh, this is a great new process we've developed. And I'm like, okay, if you just use Claude Code without your proprietary harness, can you show me what that would look like? And do you have some metrics? I think it's very reasonable to say you should have internal metrics for whether the product you're building right now is getting better or not. So can you show me those internal metrics? Maybe there isn't an unbiased third-party benchmark out there that's a perfect fit; maybe you can do something adjacent. But at least show me internally that you've thought about measuring this in an objective way.
Announcement: Are you enjoying the conversation? We'd love to hear from you. Please subscribe to the podcast and give us a rating. It helps other people find and join the conversation. If you've got speaker or topic ideas, we'd love to hear those too. You can send them in a podcast review.
Eleanor Howe: What about stuff out there that's not AI? I know we're in the AI-pilled world, but what are we missing because we're real busy with all of this AI stuff?
Sonia Timberlake: Well, we aren't investing enough, I think, in new targets. Biopharma is really in a me-too kind of phase; we got a little risk-averse when the bubble popped. I think there's a lot of biology left on the table. I don't think it's because we're doing AI stuff, though I do think techbio and tech investing are sucking up an enormous amount of capital. For any idea you have to sell in drug dev, with some investor pools the opportunity cost is investing in the next AI SaaS thing, and the growth trajectory of that field right now is enormous. So there is some competition. But yes, I think we're leaving biology on the table. I think AI can help there too: we can de-risk new targets and show that, yes, you're taking more target risk, but maybe less commercial risk, because you'll have a first-in-class, right? I'm hoping we're just on the verge of that. If I think of computational biology and generative AI, the protein modeling is very, very mature, and it was immediately obvious how we're going to use it. No, it's not going to automatically produce a drug for you, but it is still transformational. A lot of the target discovery and target evaluation stuff, like the DNA language models, predicting transcription, predicting single-cell perturbation or even bulk RNA-seq perturbation: that's super exciting, but not as mature as the protein language modeling. So I'm very hopeful those can really de-risk some new targets for us, and we don't have to keep developing the same old ones.
Eleanor Howe: Yeah, that would be amazing. The thing I'm skeptical about with transcriptional profiling, modeling, and prediction is that the data from the PDB used to build AlphaFold and the structural models was massive. For transcription, I'm not convinced the data we have now is good enough to build those models the way we need them. Do you have a ballpark? I'm totally asking you to speculate: how much data do we actually need to build the AlphaFold for transcription?
Sonia Timberlake: Yes. So I saw this quote and I'm trying to find it in real time. Found it; thank you, Google AI Overview. That is such a hard question, right? But there was an Aviv Regev paper that took a stab at it, and I forget whether she was publishing with Genentech or as part of the Chan Zuckerberg virtual cell effort, but it was a back-of-the-envelope: what was the amount of data for ChatGPT versus what is on the SRA? The SRA has 14 petabytes. That was at the time of writing, so, whatever, it's the right order of magnitude. And that's a thousand times bigger than the dataset used to train ChatGPT. A bunch of asterisks on that, right? How do you measure real information content? It's not just ACGT and stuff like that. But order of magnitude, we're a thousand times bigger, so maybe there's some there there. And that was for a virtual cell, which is only looking at the SRA and measuring against this target of a virtual cell, which I think is a harder target. So from people who've thought about this more, and who are much smarter than I am: that's the order-of-magnitude ballpark.
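Spelled out, the back-of-the-envelope comparison is just a ratio of raw byte counts. The 14 PB figure is from the conversation; the LLM corpus size below is an assumed illustrative value chosen to reproduce the quoted ~1,000x ratio, and neither figure measures actual information content.

```python
# Raw bytes only, not information content; both figures are illustrative.
SRA_BYTES = 14e15         # ~14 PB reportedly on the SRA at time of writing
LLM_CORPUS_BYTES = 14e12  # assume an LLM text corpus on the order of ~14 TB

ratio = SRA_BYTES / LLM_CORPUS_BYTES
print(f"SRA is roughly {ratio:,.0f}x larger by raw bytes")  # roughly 1,000x
```

The asterisks in the conversation are exactly why this is only a ballpark: sequence data is highly redundant, so bytes on disk overstate the usable training signal.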
Eleanor Howe: Yeah, I'm really curious about the information-content difference, given that the SRA is full of human data and humans are very similar to each other, right? How much differential data is there among all of those sequences? I'm not asking you to answer, because I don't think either of us knows, but it's a really great question: what will it really take to build these?
Sonia Timberlake: Yeah. On the other side, if you wanted to critique the internet training data, there's a whole bunch of Reddit garbage and Quora garbage in there. I don't know how that was all filtered, but it cuts both ways.
Where Biotech Investment Is Flowing
Eleanor Howe: It does indeed. Okay, then to move again away from AI, although I'm sure we'll be right back there. What biology areas are you finding popular or interesting for investment right now? I'm thinking along the lines of therapeutic modalities, diseases, technology platforms. What are people investing in? What do they want to invest in that people should start building?
Sonia Timberlake: Obviously cardiometabolic is huge, right? Anything you can combine with a GLP-1 or use to modify a GLP-1, or looking at patient subpopulations. These are miracle drugs, but they're first generation, and the idea of painting that whole patient population with a single brushstroke, measured by percent body weight loss, is such a broad brushstroke and such a gross measure. So obviously there's a lot of funding there, and it's very impactful for a huge patient population, with a huge TAM too. Neuro is still hot, and I can't really point a finger at the source, other than the Alzheimer's approval, which made it exciting again, and there's another company that just had the schizophrenia readout; I can't remember their name, but there's been a little bit of momentum in neuro. In terms of tech platforms, I think spatial is getting pretty mature, but high-throughput perturbations, and having proprietary data on them, are seen as really valuable. We have a pretty good measure of steady-state single-cell data, and that's been transformative for drug development. Actually, the effect size on the probability of your drug getting approved, if there's a clear single-cell data signal behind it, is about 2x, which is the same figure people quote for a drug having a human genetic signal. So it's a powerful technology, but we're trying to build on it by looking at the perturbations, and they're hard to predict, right? It's out of distribution. So those are some of the themes I see.
Sonia Timberlake: And of course, I'm more interested in the ones that intersect with AI capabilities: technology platforms and high-throughput data. I've seen this firsthand. I was working closely with a founding team built around a spatial tech; I'm one of the advisors there. Through the back half of last year, I made an estimate: what would it take for us to pull in all the spatial data in our top three indications, reprocess it in a consistent way, combine it all, and put some of the founders' proprietary models on top? My estimate was two FTEs for six months. Then I revisited that same exercise in November, when we were doing some due diligence and I was refreshing it, and I thought, wow, I could do this myself in about two weeks. That was very concrete for me. It's anecdotal, but I did the same exercise at two points in time, just six months apart, and it was amazing to see. All of which is to say: my personal investment thesis is that drug development is hard, and if there's a meaningful step that AI can accelerate 100x, that's part of your whole investment thesis.
Eleanor Howe: That makes perfect sense. And yeah, working with those datasets, that's another thing we see: it's tremendously faster to collect data and harmonize it together than it used to be. It's fantastic. Well then, you are going to be teaching a workshop at Bio-IT World this year. Do you want to talk a little bit about what you're teaching? What are you covering in that workshop?
Sonia Timberlake: Yeah. The workshop is focused on agentic coding for computational biologists. You're already a great scientist, a great computational biologist, but you have a day job, and you haven't had time to keep up with the frontier of AI tools available to you. I think a lot about that 95/5 divide I mentioned, where maybe 5% of our code is written by AI versus 95% in software dev. So: what can we learn from classic software dev? What practices do we need to adopt, and what do we need to adapt, because the models don't have the same capabilities in the scientific domain? What are the most efficient ways for us to inject domain expertise and domain-specific tools, to compensate for the fact that the models weren't trained as much on our problems as on mainstream coding? Those are the two pieces: adopting their software dev practices and adapting the models. I'm doing it with a friend and former colleague of mine, Ryan Belmore, and it's a hands-on workshop. You will do this, and you will leave with an awesome AI-coded product.
Eleanor Howe: You and Ryan together, I think that's going to be a fantastic workshop. I didn't realize Ryan was involved; that's amazing. That's going to be great. So that's at Bio-IT World. Which day is it on?
Sonia Timberlake: I think it's in the afternoon on the 19th. I'd love to see people there, and if you have questions about the class in the lead-up, I'm happy to talk more. And Eleanor, I know you're doing a bunch of stuff at Bio-IT, and Diamond Age is too, right? Do you want to tell me about that?
Eleanor Howe: Oh, well, now you're interviewing me. Oh my gosh. Yeah, I'm giving the Trends from the Trenches talk. I'm not quite as AI-pilled as you are; I see a lot of benefit from it, but there are also some things I am absolutely going to try to pop the bubble about, because I think some of it is not very helpful. But there's still a lot that is helpful. So that's what my talk is going to be about: what are the trends, what am I seeing from my customer base, my colleagues, my network, what's changing in our work? That's what I'll be talking about. I'm very excited about it, and I was really, really happy to be asked.
Sonia Timberlake: Yeah, I'm looking forward to that. I love a feisty debate, and I love when people are bubble-popping. I think it can be simultaneously 100% true that AI is amazing and transformative for computational biology, and that AI is, like, overhyped and over...
Eleanor Howe: Yes, okay, yes, and that is absolutely the case, because some parts of it really are that transformative. The agentic coding agents and the structural biology prediction: the day-to-day life of the people who use those things is completely different than it was a year or two ago. Every day is different now because of the impact of those tools, and you can't really overstate that. And then there are some other things that are just nonsense. So yeah, absolutely, and we need to benchmark them. Then, to wrap up: you're giving a workshop and you're going to be teaching, but just as a quick preview, what recommendations do you have for people getting started?
Sonia Timberlake: Yeah. First you have to see a reason to believe, because we're all busy, and it's hard to devote time to learning a new technology unless you have confidence it's going to pay off. So how do you see a reason to believe? Go to this Genentech paper. You can see all of the prompts and all of the results; they have it all up on GitHub, and you can see what a vanilla, off-the-shelf agentic coding model can do with no help. So imagine what it could do with my help and my domain expertise. See that reason to believe. And then, I've taught many workshops with my clients, and one-on-ones, and in the span of 30 to 60 minutes have transformed someone. They'll come back two days later and say, oh my gosh, it wrote all my code for me, it wrote my tests, it wrote my docs; I can just write the validations, I can review the code. So it really pays off if you can invest 60 minutes with somebody who's proficient in these tools but also understands your work. It can't just be your neighbor who happens to use them; they have to understand your work, speak your language, and have the domain expertise. If you can commit those couple of hours: this isn't going away. It's going to change the way we work.
Eleanor Howe: Yeah, it already has, but not for everybody. And that's what you're doing: making it more available.
Sonia Timberlake: And I guess I should plug myself: I do these workshops. People could also call up Diamond Age; Diamond Age has a bunch of people with expertise in agentic coding.
Eleanor Howe: Okay, thank you for taking the time. It's been a really fun conversation. Let's do it again in a few months, and everything will be different. Yes, let's do that.