Business AI Explained

Why Most AI Agent Projects Die at the Last Mile — Idan Raman

Season 1, Episode 11



Most AI agent projects stall at the last mile: the legacy browser workflows no API can reach.

Idan Raman, founder of Anchor Browser, joins Vlad to explain why browser agents are the missing piece in enterprise AI, how the Cloudflare partnership is changing the economics of automation, and why the OpenClaw security fallout was a wake-up call for anyone running computer-use agents in production. If you're evaluating AI agents for your business, this is the layer of the stack nobody's explaining clearly.

In this episode:
- Why 90% automated still isn't automated — the KYC last-mile problem
- Computer use vs browser use, and when each one makes sense
- The real security story behind the OpenClaw virality (leaked credit cards, stolen passwords)
- How enterprises are pricing AI: the voice AI framework ($3/call → $0.30) applied to back-office work
- Why open source LLMs are exploding inside large enterprises
- The Cloudflare "web bot protocol" and why it's a win for everyone
- Building a moat in AI infrastructure: "if it's hard, it's good"

Chapters:
00:00 Intro
00:52 Computer use vs browser use
03:34 KYC as the poster child
06:08 The Cloudflare partnership
09:01 When to use browser agents vs Playwright
10:56 Security and the OpenClaw fallout
14:05 Open source LLMs in the enterprise
17:08 Pricing AI tools for enterprise
21:19 Building a real moat in AI
25:24 Dogfooding Anchor to grow Anchor
29:45 What's next

Guest: Idan Raman, Founder @ Anchor Browser. Idan built Anchor Browser to solve the last-mile automation problem for enterprise AI agents.
Connect with Idan: https://www.linkedin.com/in/idan-raman/
Anchor Browser: https://anchorbrowser.io/

Connect with Vlad:
- LinkedIn: https://www.linkedin.com/in/vladeziegler/
- YouTube: https://www.youtube.com/@aiwithvlad
- Work with Vlad (Elements Agents): https://www.elementsagents.com/
- Come on the show: https://cal.com/vladimirelements/podcast-intro-call


Business AI Explained is a podcast for founders and GTM teams who want to understand how AI creates real business impact. Hosted by Vlad de Ziegler.

SPEAKER_01

I think it's sort of an interesting pattern where organizations will start with OpenAI or Anthropic because it's very cheap to run. And then as they scale up, they get to these seven-figure, eight-figure bills, and they say, this is just not viable. In five years, my entire budget is just going to be LLMs. So they try to figure out how to solve it. And often open source, or cheaper models in general, are 80%, 90% more efficient. Mac minis aren't really scalable, not just for people in their homes, and definitely not for an enterprise that wants to run like a million OpenClaws. So we sort of play into that by giving the Mac mini-like security and guardrails and experience, but also the Mac mini accessibility, so it can still access anywhere on the web. It's like a Mac mini in a box, which is virtual, and you can scale it for an enterprise. And Anchor sort of tries, and often succeeds, in taking the raw LLM and converting it to something that is very economically viable. The way we do that is we try to cache as many of the actions that are being done and not just use an LLM every single time. We're actually using LLMs the least amount possible. Every task, we learn.

SPEAKER_00

So hi everyone. Today I am here with Idan Raman. Idan is the founder of Anchor Browser, which is a platform that allows you to build browser agents, in other words, agents that can go on platforms that don't have any APIs. You can think of registries, all these old-fashioned platforms where you can fill out information automatically. So, like RPA in the age of AI. I've been in touch with Idan for the past year now, playing around with and actually using his tool. So I'm very excited to have him on the pod today to talk about the latest releases, but also get his take on what it takes today to actually build browser agents, and how you can implement these agents in your business today. So yeah, thanks a lot, Idan, for being here today. Very excited.

SPEAKER_01

Yeah, thank you, Vlad. Great to be here, great to be meeting you here.

SPEAKER_00

Amazing. So Idan, if we just dive right in: we've all been overwhelmed with a lot of buzzwords. We've heard about OpenClaw, we've heard about Claude Code, computer use, browser use. So can you just set the stage a little bit for us? What's the difference between computer use and browser use, and why is everyone so excited about introducing this capability into their agents today?

SPEAKER_01

Yeah. So basically, I think the premise for computer use and browser use is that AI agents are only as viable as their ability to act in the real world. And for most organizations and businesses right now, most of their work is still being done by a person working on a computer, very often through a browser. What we're seeing is that most agent initiatives just get stuck at the planning phase or the integration phase, because the agent isn't really able to do the work that a person wants it to do for them. That's exactly where browser use and computer use come in. Every single time we've seen an agent platform get super hyped up and go viral, it always had this computer-use notion to it. And I think OpenClaw is obviously the latest iteration of that, where computer use was a very important part of why OpenClaw was so powerful, able to do anything that a human does, as compared to just having API integrations. So I think that's the core value for a person: to know that no matter what I want this to do, it will be able to be there for me and complete the task for me.

SPEAKER_00

Yeah. I'm literally going through your LinkedIn profile right now as we speak, because you mentioned a very concrete workflow that we can maybe go over now. I'm just going to share my screen. You talked about this fantastic use case, which is a KYC agent, a KYC analyst. I think we're all familiar with this concept of onboarding new clients, where we need to verify whether they are solvent, whether they are legit, whether they're incorporated where they say they're incorporated. And usually collecting this information takes up a lot of time. You can think about banks or insurance companies. This is a very tedious process where you're hiring qualified people to do a lot of low-value-added tasks, which are repetitive and tedious. So it's kind of a poster-child example for browser-use agents. So maybe if we go over where Anchor fits into the workflow and how it's built. Can you maybe walk us through how it works and where Anchor fits into the picture?

SPEAKER_01

Yeah, absolutely. So when we're working with finance companies, companies doing KYC, we see that they try to automate as much as possible, and they've gotten to a good maturity in automation. But then they have this last mile of automation where they can't do it, and they still have a person doing it for them. KYC is a really good example of that, because 90% is going to be automated, but then you're gonna have these weird government portals that you're gonna search in, or local state portals, sanctions portals, that were built in the 90s or the 2000s for human analysts to go into. And that's the last blocker, the last mile, for these banks to get this process fully automated. And that's exactly where Anchor comes in. We come in not to be the end-all-be-all automation platform that does every single thing. We're very focused on displacing what you really need a human to do nowadays. And the way Anchor works is we're able to assume or utilize the human identity, human authentication, and act through the browser exactly like the human, and essentially complete the task. It might be a very scripted task; in KYC, it might just be: look for this name in the registry, click on it, and collect all the data to make some decision. In some other cases, it might be more nuanced, more complex, more of a research task where it's a free-roaming agent. You want to be able to collect data from many different places, and it's all still being done through a browser. So we come in, and often we're able to show how we switch this last mile from human-based to automated.

SPEAKER_00

Yeah. I think on this picture, what's quite interesting is that we can see Okta, Cloudflare. I know that you recently partnered, if I'm not mistaken, with Cloudflare. And to me, the first intuition is that it sounds a bit ironic, because Cloudflare is basically the army that you need to fight and win against whenever you're doing scraping or interacting with these websites. So why are you partnering with Cloudflare today? And what's the incentive for them to partner with a company like yours, which is essentially automating, with agents, the interaction with these websites?

SPEAKER_01

Yeah, so Cloudflare has been a great partner to us, and we're very happy to be partnered with them. I think, kind of uniquely in this market, they're very forward-looking, and they understand that two years, five years from now, the amount of agents using the web is gonna be so much larger than the amount of humans using the web. And instead of fighting that and putting up barriers, they give in; they understand that this is what's gonna happen, and they want to be part of this transition rather than be the legacy company that wasn't able to shift and adapt. One of the best things, technology-wise, that we've done with them is to build this Web Bot Auth protocol that essentially allows agents to be not unknown, hostile agents within an environment, but actually good bots that are verified and known, whose source is known, and that can operate on the website in a way which is sort of a win-win-win for everybody. The website owner knows that it's good, verified bots, not adversarial bots, so they are much, much safer. For Anchor, we don't need to use proxy networks or fingerprint mocking to change the way you look on the web, so it allows us to give a more secure solution to our customers. And finally, the entire solution is able to be faster and cheaper and more reliable for the end users. So everyone in this matrix sort of wins. So I think it's definitely amazing to go down that path with Cloudflare.
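For readers curious how "verified bot" identity works mechanically: the idea is that the agent cryptographically signs each request so the site (or CDN) can verify who sent it, instead of guessing from IPs and browser fingerprints. The actual Web Bot Auth proposal uses asymmetric HTTP Message Signatures; the sketch below is a deliberately simplified, hypothetical HMAC version (all names made up) just to show the shape of sign-and-verify:

```python
import hmac
import hashlib

# Hypothetical identity registered with the site/CDN out of band.
# (The real proposal uses asymmetric signatures, so the site only ever
# holds a public key; a shared secret here just keeps the sketch short.)
AGENT_ID = "example-agent"
SHARED_SECRET = b"registered-out-of-band"

def sign_request(method: str, path: str) -> dict:
    """Build identity headers a verified bot could attach to a request."""
    message = f"{method} {path}".encode()
    signature = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return {"Agent-Id": AGENT_ID, "Agent-Signature": signature}

def verify_request(method: str, path: str, headers: dict) -> bool:
    """Site-side check: recompute the signature and compare in constant time."""
    expected = hmac.new(SHARED_SECRET, f"{method} {path}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers.get("Agent-Signature", ""))

headers = sign_request("GET", "/registry/search")
print(verify_request("GET", "/registry/search", headers))  # True
print(verify_request("GET", "/other/path", headers))       # False
```

The win-win-win follows from this: the site can whitelist known-good agents instead of rate-limiting everything, and the agent no longer needs residential proxies or fingerprint spoofing to get through.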

SPEAKER_00

Interesting. Yeah, it's good to hear the narrative behind it from you. So I have a client which is a fintech, and basically they asked me to scrape all the companies across different regulatory bodies in the Gulf region. And I had to go through this manually, interacting with JavaScript and so on. At times there's infinite variance: if you're lucky, you can extract the full table; what very often happens is that you have to enter something in the search bar, click submit, wait some time, have a different fingerprint, and so on. So the real value with Anchor Browser is to basically automate all of this. But I know that it can be very expensive to use browser agents. What do you suggest? At what point do you determine whether it should be something programmatic, where you use Playwright and automate it, versus when you should actually go ahead and use something like Anchor Browser, with browser use, a more intelligent agent basically?

SPEAKER_01

Yeah, I think it's a great question. If you look today at having a person do some task on a computer, and then you give the same task to a computer-using agent with vision capabilities that actually parses what it sees on the screen and makes a decision based on that, in most cases computer use is going to be more expensive and slower to run. So there is still some way to go from raw LLM power to something that is economically viable and beneficial. And Anchor tries, and often succeeds, in taking the raw LLM and converting it to something that is very economically viable. The way we do that is we try to cache as many of the actions that are being done and not just use an LLM every single time. We're actually using LLMs the least amount possible. Every task, we learn. So, for example, going into this website, checking for specific data: after we've had an agent do it once, we actually cache that workflow, and then we replicate it from that point on, up until the moment where something changes in the UI or there is an edge case. Only then do we fall back to AI. An extension of that, which we've also been doing, is to actually convert browser-based workflows to API calls wherever that's possible. That way, we give our customers the ability to not think about how to optimize this. We give them the confidence to know: Anchor is gonna optimize it for me behind the scenes, and it's gonna be the fastest, the cheapest, the most reliable, because they're gonna investigate and find the best orchestration method for my use case.

SPEAKER_00

Yeah. Okay, so cost is tackled. If we talk about security, I know that this is kind of your background and expertise. Unleashing this power of browser-use and computer-use agents is extremely powerful, but there are some obvious risks. And I remember that you posted this thing; I'm literally going through your LinkedIn, which acts as a great guide for the discussion. You mentioned something about OpenClaw and how you can secure it. So how should people actually build these things in a secure way today and deploy them into production? Do you have any specific recommendations?

SPEAKER_01

Yeah, I think security becomes more and more top of mind for everybody, especially with Mythos, the Anthropic model, and OpenClaw having this huge viral moment, but then also all of these security issues and vulnerabilities. I actually know several people that had to cancel credit cards and change passwords because they were actually leaked by OpenClaw. So it's not just a scare thing where they tell you it's non-secure; it's very much an actual security vulnerability. What we've been doing is helping to create this universe where computer-using agents have a secure infrastructure to run in, whether it's OpenClaw today or any other open source or closed source project in the future. We give the runtime where you can actually run it sandboxed, securely, but still have almost the firepower of a full local desktop experience. I think the Mac mini narrative, the side story of the OpenClaw virality, was super interesting, because it's not necessarily intuitive why you'd choose a Mac mini for OpenClaw. But then it became this least-bad solution for how to run OpenClaw: you want it to be isolated, but you want it to have a good IP address so it can access everything. You want it to have a real browser so it's not going to get blocked. You want to be able to interact with the computer. But obviously, Mac minis aren't really scalable, not just for people in their homes, and definitely not for an enterprise that wants to run like a million OpenClaws. So we play into that by giving the Mac mini-like security and guardrails and experience, but also the Mac mini accessibility, so it can still access anywhere on the web. It's like a Mac mini in a box, which is virtual, and you can scale it for an enterprise. So yeah, hopefully that makes sense.

SPEAKER_00

Yeah. And so would you be able to run local LLMs or open source LLMs with the browser? How would that look?

SPEAKER_01

Yeah, so Anchor is built in a way which is very modular, so you can bring in any LLM that you'd like. We're seeing the open source models take a bigger and bigger role in the market. I think it's sort of an interesting pattern where organizations will start with OpenAI or Anthropic because it's very cheap to run. And then as they scale up, they get to these seven-figure, eight-figure bills, and they say, this is just not viable. In five years, my entire budget is just going to be LLMs. So they try to figure out how to solve it. And often open source, or cheaper models in general, are 80%, 90% more efficient. This created a very odd pattern where huge customers just come into open source and immediately become million-dollar open source customers. So the open source market had been lagging, but it's been exploding over the past two, three quarters. And I think it's only gonna get more extreme from here. Definitely also when it comes to security, most enterprise organizations are saying: I'm winning twice by choosing open source, because I know my data is gonna stay in-house and I'm also paying so much less. So I think almost every larger enterprise has shifted into this mindset of: I'm gonna use open source wherever possible, and I'm only gonna use closed source when it's a necessity, because I really need the latest and greatest performance there is.

SPEAKER_00

Interesting. So I guess you're working with Groq, with a Q, which is basically a cloud platform that allows you to run open source LLMs. Is this what you see today on that platform, a pickup in open source models?

SPEAKER_01

Yeah, so we used to work quite a lot with Groq. They were acquired by NVIDIA; now they're working on the same things within NVIDIA, and we definitely still have some workloads with Groq. But we're also seeing these really amazing companies, like Fireworks, like Baseten, that keep pushing very, very hard on open source, and we've managed to collaborate with them and bring in modern open source models, with really great results.

SPEAKER_00

Amazing. Going through my notes: if we switch gears a little bit and focus more on your business at Anchor Browser, you mentioned that you recently transitioned from helping, I would say, tech enthusiasts to focusing on enterprise customers. You used to have tokens, like a credit system. How do you think about pricing an AI tool like Anchor Browser today? Is it a simple markup on top of LLMs? Or how do you go about it?

SPEAKER_01

I think pricing is one of the most complex questions right now in AI. Different models work very well for different customers. Right now we're seeing so much demand and pull, both from enterprises and from tech companies, but it's very difficult to give a model that aligns well with both. Obviously, tech companies want to pay per gigabyte, to pay per LLM token, to pay per VM; very much on the technical side. If we take this same model to an enterprise organization and tell them this is how you're gonna pay Anchor, they have zero understanding of how much things are gonna cost them on an annual basis. So we made a shift, and we created a new pricing model, which is enterprise-focused, where we learned from what worked very, very well in voice AI. In voice AI, the premise was: you're paying around $3 per call with a human operator; with a voice AI agent, you're gonna pay like 20 to 30 cents for the same call. So there is very clear ROI, a very high margin of ROI. We took this concept and just transposed it onto the browser, back-office work, and operations work, with the exact same premise. So we're looking into the work that is already being done by an enterprise. Almost always it's offshore work, or work even here in the States, being done en masse by many, many people. We're able to quantify how much you're paying per task completed, per second of a human doing this specific task, and then show: with Anchor you're gonna get this fixed price for these agentic workers, almost. You can think of them through the premise of: I'm paying a human $10 to do this task, and I'm gonna be able to pay Anchor that much less to do the same task. And this narrative is much, much easier and safer, in terms of risk and financial planning, for an enterprise to adopt, because it's a no-brainer: I'm gonna show huge ROI savings here.
And I think we're seeing more and more of these enterprise organizations where the CEO told the CIO, or the head of AI: by the end of 2026, we must show a million dollars in savings through AI, no matter what it takes. These huge, ambitious goals, just to push everyone to do the most that they can with AI. So they have the incentive to choose solutions that are very clear ROI gainers. If we're able to align with what they need to show to their upper management, that's again a great win-win-win for everybody, because there is very low friction and very high alignment between us and our customers.
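The voice-AI framing reduces to simple per-task arithmetic. A quick worked example (the roughly $3 vs $0.30 per call numbers are from the episode; the $10-per-task back-office figures are made up for illustration):

```python
# Per-task ROI arithmetic behind the voice-AI pricing framework.
# Episode figure: ~$3 per human-operated call vs ~$0.30 with a voice agent.
human_cost_per_call = 3.00
agent_cost_per_call = 0.30

savings_per_call = human_cost_per_call - agent_cost_per_call
savings_pct = savings_per_call / human_cost_per_call
print(f"${savings_per_call:.2f} saved per call ({savings_pct:.0%})")

# Same framing transposed to back-office work (illustrative numbers only):
# a human paid $10 per completed task vs a hypothetical $1 agent price,
# at an assumed 100,000 tasks per year.
tasks_per_year = 100_000
annual_savings = (10.00 - 1.00) * tasks_per_year
print(f"${annual_savings:,.0f}/year")
```

This is why the model lands with enterprise buyers: the number they report upward is a fixed, predictable annual saving, not a token-metered bill they can't forecast.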

SPEAKER_00

Yeah, I think that makes it interesting. I wonder who came up with this pricing at the first voice AI company, because I guess now people will mimic it, which is, I think, the ultimate goal with productizing services. Just in terms of the business itself, Anchor Browser: we often talk about how we can pretty much vibe-code anything today and the costs of development are going to zero. But in your case, you're building something that is fairly sophisticated. Do you think about new entrants that can come in? How do you think about maintaining your competitive positioning over the years? I don't want to say anyone can build this, but what do you think is your special sauce at Anchor Browser, which will help you compound over the years?

SPEAKER_01

Yeah. So my take on this is that nothing has really changed from the way it used to be in SaaS: how you build a moat, what's defensible and what isn't. But everything that isn't a moat is diminishing much, much faster than it used to. So everything is on steroids, so to speak. But the core foundational principles, to me, feel the same. So what we've been doing here at Anchor is we're very focused on solving very hard problems. That's the narrative we go by. If we feel something is pretty easy, something is eventually going to be solved, eventually get commoditized, we try to veer away from that. One example: there is the actual agent layer, the agentic controller, we call it, that parses the DOM of a website and then decides what action to take. A year ago, there was no solution for that layer at all. And we had to make a decision: either build it in-house or wait for the market to build it. Understanding that it's such a core need, that it's gonna get so many iterations, and that it's also eventually gonna get commoditized, we opted out and said: we're gonna ride on the shoulders of giants, enjoy smarter models and better agentic controllers, and just not play this game. And I think it was one of the best product decisions we've made. Instead of focusing on that, we focus on things that even today, and pretty safely I can say for the next few years, are gonna be very, very hard to commoditize. So, for example, we built our own Chromium, our own fork of Chrome, where we changed over 100,000 lines of code to make it very reliable for agents, very safe for agents, accessible for the entire web. These are challenges where you need a year-plus of iterating, seeing how your Chromium behaves and operates, to perfect it. And you need very, very smart researchers to build it.
So that's just one example of how we focus on things that are more research-bound, more data-bound, things that are almost not commoditizable. Another quick example of that is our VPN, or proxy, solution. We've seen in the market that proxy is this irreplaceable thing: even if you have the best code in the world, if you need safe access, actually good access, to the web, you need good IP addresses to back that up. You can't code that problem away. And that got us to the other insight, which is that all these proxy networks were built without the idea of agents in mind, because they were built five, ten years ago. So we opted into building our own; we call it an enterprise VPN that is very much agent-focused. It's focused on enterprise agents, it's much, much safer, it's faster, it's more reliable, and it's something that can't be coded away as a problem. So hopefully that makes sense.

SPEAKER_00

Yeah, yeah. I mean, I completely agree with you. Usually, when I'm struggling with building something, either for a client or for something I'm building on the side right now with a friend of mine, I'm like: okay, it's difficult, so this is a moat. I'm kind of embracing the challenge. I'm like: okay, it's a good sign. Not everyone is gonna work on this thing, because it's such a pain. And even if the models get smarter, I think there are iteration cycles that are inevitable. So the more iteration cycles that you have, the more of a moat, or edge, you have, maybe.

SPEAKER_01

Yeah, definitely. We go by that saying of: if it's hard, it's good. We want to see ourselves working on hard things, not just annoying, mundane, operational things that you just need to get through.

unknown

Yeah.

SPEAKER_00

Yeah, exactly. Which is a horrible feeling, to always be banging our heads against the wall, but it's also kind of fun. If we switch gears maybe one more time and focus on your role at Anchor Browser: I know that you've hired a couple of people, you're growing fast, people helping you with growth. You have a background in go-to-market as well. How do you use AI? Do you use Anchor Browser in your stack today to grow Anchor Browser? Can you tell us a little bit more about the different AI workflows that you've implemented?

SPEAKER_01

Yeah. I think this has been an amazing change over the past three to four months, where I was able to start using our own product, leveraging it through Claude Code, for example, to bring huge value to our own company, sort of dogfooding our own solution in the space of go-to-market. For example, for intent signals, knowing when a customer might be viable: that's something that is very, very bound to the web and web research, collecting data from LinkedIn and from other social data. And I was able to use our own platform and my own personal accounts to get that data. It's something that was probably impossible for me to build a year ago on my own, even with Anchor, just because coding agents weren't good enough. But now I have this thing, I call it a company factory, where I'm just spinning up these agents that have their own responsibilities; they iterate on themselves, they create tasks, and then they complete them. My background is in technology; I've always been a coder since I was young. So I'm always looking at problems through the lens of how to build a technology solution to solve them. And I think 2026 is the best year so far for someone to be in this builder mode, this builder perspective, because there is so much that you can actually build, and so many opportunities to gain from building solutions for yourself.

SPEAKER_00

Yeah. Yeah. So you basically save your LinkedIn session cookie on Anchor Browser and you just browse LinkedIn and look for leads.

SPEAKER_01

Yeah. And I think I'm also learning so much about our product through that, because whatever doesn't work for me probably doesn't work for our customers either. Or if something isn't obvious or isn't clear for me, then for others who onboarded themselves, it's going to be so much harder to figure out. So it's been super beneficial for our product too.

SPEAKER_00

That's amazing. Yeah, on distribution channels: I think LinkedIn automation is a channel that has been completely saturated, and people are talking about intent signals, intent-based outreach. But the problem is, even if you work with a company, and I'm not gonna name them, but there are a couple of companies that give you those kinds of signals, because everyone has access to the same signals, you're back to the drawing board and still facing the same issue. Whereas with browser agents, you can be smarter about what they type in the search bar to find more relevant results, and really create this kind of edge in filtering, making it super specific to what you're looking for, which I think is super powerful in your case.

SPEAKER_01

I think this entire market of marketing is very much arbitrage-focused. You need to know things that other people don't for your marketing to be effective. And definitely, LinkedIn automation a couple of years ago was amazing; it was huge ROI, and now it has been completely commoditized, and everyone knows about it. So we're always trying to figure out how to find opportunities that no one else leverages, or that are impossible to commoditize, for marketing. And definitely, the more nuanced you are, with your own copy, your own queries, that's the only way to go.

SPEAKER_00

Yeah. So about your stack: you're using Claude Code, and I think I read, if I'm not mistaken, that you have an Anchor Browser CLI. Is that right?

SPEAKER_01

We do have a CLI, yeah.

SPEAKER_00

Amazing. So you basically have a bunch of skills and workflows in Claude Code that kick off browser sessions, like on your LinkedIn, with your session ID, and those do the work basically.

SPEAKER_01

Yeah, we spin up micro VMs, we put Claude Code or the Agent SDK on top of them, trying to be as non-local as possible. I don't like anything running on my local computer, because I want it to be able to run any time, any day, and I want to be able to scale it up. Then we connect it to Anchor and to any other tools that we have, to run it. Yeah.

SPEAKER_00

Nice. How much time have you saved building this? Do you have an order of magnitude? In terms of, you know, is this creating more work for you and expanding your scope of responsibility, or are you genuinely saving time? I think that's kind of the misconception: we're all super busy building more stuff, but are we actually being more productive and saving time?

SPEAKER_01

Yeah, it's a really good question. I think it's very hard to measure the ROI; it's still very early for the ROI, and there is more to prove there. But I sort of force myself to put myself into this practice of building things and validating how much ROI there is, because the models are just gonna get smarter, and the ability to automate is just gonna become easier and easier. So if you're not thinking through automation and through building solutions... whoever goes down this path is eventually gonna show huge ROI, and whoever goes down the path of just doing everything manually is gonna diminish. So yeah, I think it's a long-term bet, in my perspective.

SPEAKER_00

Well, I don't know if you saw, but they announced that they released 4.7 Opus an hour ago. So let's go try it out and see what it does, and whether it's gonna make this even easier. Yeah, I'm super excited about it.

SPEAKER_01

I think with every model that came out, what we've seen is, obviously, personal productivity and everything, but then also, for Anchor customers, we're always one of the first to offer these new models and let our customers use them. So if you go into Anchor now, you can already use Claude 4.7. And every time a new model comes out, the ability, sort of the bandwidth of what you can actually do with the platform, just increases. So our market opportunity grows on every single LLM launch, which is amazing.

SPEAKER_00

Yeah, yeah. That's the kind of tailwind that you want. Cool. Is there anywhere people can find you? I'm gonna share your links. Anything that I forgot to touch on that you would like to discuss?

SPEAKER_01

Yeah, no, I think this has been great. Really, really glad to talk. I'm on LinkedIn mostly, so happy to connect on LinkedIn. My name is Idan Raman, and yeah, thank you so much.

SPEAKER_00

Perfect, yes. Go give him a follow on LinkedIn. Idan is sharing great use cases and case studies, and Anchor Browser is shipping like there is no tomorrow. So congrats on that. And yeah, we'll be in touch, hopefully next week in New York. Thanks. Thank you.

SPEAKER_01

Yeah, I appreciate it. Cheers. Ciao. Bye.