What's Up with Tech?
Tech Transformation with Evan Kirstel: a podcast exploring the latest trends and innovations in the tech industry and how businesses can leverage them for growth, diving into the world of B2B, discussing strategies and trends, and sharing insights from industry leaders!
With over three decades in telecom and IT, I've mastered the art of transforming social media into a dynamic platform for audience engagement, community building, and establishing thought leadership. My approach isn't about personal brand promotion but about delivering educational and informative content to cultivate a sustainable, long-term business presence. I am the leading content creator in areas like Enterprise AI, UCaaS, CPaaS, CCaaS, Cloud, Telecom, 5G and more!
Who Will Build the Builders? Inside Emergence AI's CRAFT Platform
Interested in being a guest? Email us at admin@evankirstel.com
The quiet revolution happening in enterprise AI isn't about flashy robots or sci-fi scenarios—it's about solving a $200 billion problem hiding in plain sight. Outdated, manual data pipelines are costing businesses astronomical amounts and keeping your smartest people stuck doing digital busywork.
Emergence AI's CRAFT platform stands at the forefront of this revolution with its "agents creating agents" approach. The concept sounds deceptively simple: AI systems that can dynamically create specialized AI tools to handle specific tasks. But the implications are profound. Data scientists spend up to 90% of their time wrangling data rather than analyzing it. CRAFT flips that equation, cutting analysis time from days to minutes through intelligent automation that requires only natural language commands.
What makes this particularly powerful is CRAFT's self-improvement capabilities. The platform continuously learns through three dimensions: knowledge acquisition (gaining domain expertise), skill acquisition (optimizing workflows), and agent acquisition (building new capabilities). Like a new employee gaining experience, the system becomes increasingly valuable over time, adapting specifically to your business context.
Real-world results already demonstrate the impact. Semiconductor manufacturers using CRAFT identify yield issues in minutes instead of days, potentially saving millions weekly. Telecommunications companies report 70% reductions in data governance tasks. An online forum reduced unsafe image postings by 60 million monthly. The platform balances this power with built-in governance features, ensuring human oversight remains for critical decisions while shifting roles toward strategic direction rather than execution.
Available now in private preview and launching broadly this summer, CRAFT represents a fundamental shift in how enterprises handle data. The question isn't whether AI will transform your business operations—it's whether you'll be among the first to capitalize on this transformation or watch competitors race ahead while you're still stuck with yesterday's manual workflows.
Listen on: Apple Podcasts Spotify
More at https://linktr.ee/EvanKirstel
OK, let's unpack this. We often talk about AI, you know, in really broad strokes, but it's rare we get a chance to actually look behind the curtain and see what's really changing how businesses work, like at the ground level.
Speaker 2Right, there's so much noise out there.
Speaker 1Exactly. The pace of AI change feels like this tidal wave of information, doesn't it? And it's genuinely hard sometimes to tell what's truly revolutionary versus what's just, well, hype.
Speaker 2Separating the signal from the noise.
Speaker 1Precisely so. Today we're diving into something that honestly looks like it could cut right through that noise, something that could deliver not just those aha moments but a proper understanding of a really groundbreaking shift.
Speaker 2Okay, I'm intrigued.
Speaker 1Yeah, think of this as a shortcut, a way for you to get properly informed on a topic that seems set to really redefine how enterprises operate, especially around efficiency. Sounds important.
Speaker 2What are we focusing on?
Speaker 1So our mission for this deep dive is Emergence AI's new platform. It's called CRAFT, and the really big idea behind it is this concept of agents creating agents.
Speaker 2Agents, creating agents.
Speaker 1Okay, and we're not just framing this, as you know, another cool AI tool. It's positioned as a direct, really powerful solution to one of the biggest, most persistent and, frankly, most expensive problems in enterprise IT.
Speaker 2Which is.
Speaker 1The whole mess of outdated manual data pipelines. Seriously, this isn't some small niche issue. It's a massive global challenge. The sources we looked at estimate it costs businesses over $200 billion annually.
Speaker 2Wow, that's a staggering number, just on managing data flows.
Speaker 1Yep, and CRAFT comes in, promising immediate real benefits. Think dramatically faster insights from your data, getting rid of all that tedious busywork involved in data prep and, of course, massive cost savings that hit the bottom line.
Speaker 2And how accessible is it? That's often the catch with powerful tech.
Speaker 1Well, this is the kicker: it's all accessible with natural language, plain English. You don't need a deep technical background, apparently, to tap into these really sophisticated AI capabilities.
Speaker 2Potentially huge.
Speaker 1Absolutely.
Speaker 2Democratizing that kind of power.
Speaker 1Exactly, and one thing that really jumped out from the materials is that this isn't just about making things a bit faster.
Speaker 2No.
Speaker 1No, it's pitched as a fundamental paradigm shift.
Speaker 2Yeah.
Speaker 1How companies manage data, analyze it, get strategic value from it. It's moving away from that fragmented, often manual, reactive way of doing things.
Speaker 2Towards something autonomous, intelligent, proactive.
Speaker 1You got it. That's the promise anyway. Interesting.
Speaker 2So where are we getting this information from?
Speaker 1Good question. So for this deep dive, we've pulled insights from a pretty detailed Ask Me Anything session, an AMA, with Emergence AI's co-founders and some of their key scientists and engineers.
Speaker 2Straight from the source.
Speaker 1Right and we've cross-referenced that with some good reporting from VentureBeat, plus information directly from the Emergence AI website. We've basically sifted through it all to pull out the really important nuggets for you.
Speaker 2Great. So the goal is to get everyone up to speed quickly on the CRAFT platform and the whole agents creating agents idea.
Speaker 1Exactly, so let's dive in. Let's start with that multi-billion dollar problem CRAFT is aiming to fix: these data pipelines. Right, what's the actual issue?
Speaker 2Okay, so think about a large company. Data isn't just in one neat place, right, it's scattered everywhere.
Speaker 1Right. Legacy systems, cloud apps, spreadsheets.
Speaker 2Exactly. Databases here, data warehouses there, maybe some sensor data over here, customer data in a CRM. It's fragmented, siloed, each piece is kind of stuck in its own corner.
Speaker 1And getting it all together for analysis is the challenge.
Speaker 2It's a massive challenge. The traditional way involves these complex, often manual and incredibly time-consuming processes just to bring that data together. Think of it like plumbing, but really old, leaky, complicated plumbing.
Speaker 1And you said manual. Like people are actually doing this bit by bit.
Speaker 2Often, yes. People are writing custom scripts for every little connection, which constantly break or need updating when data formats change. It's not just inconvenient, it's a huge drain on resources: time, money, people.
Speaker 1And that's where the $200 billion figure comes in. It sounds almost unbelievable.
Speaker 2It does, but when you break it down it makes sense. First, there's the sheer human effort teams of people just wrangling data, moving it, cleaning it, trying to make sense of it. Second, think about lost opportunities. If it takes days or weeks to get the data ready for analysis, decisions get delayed. By the time you have the insight the market might have shifted or the opportunity is gone.
Speaker 1Right, the data is stale.
Speaker 2Exactly. And third, manual processes are error prone. Mistakes creep in when you're copying, pasting, transforming data by hand or with brittle scripts. Flawed data leads to flawed analysis, which leads to bad strategies.
Speaker 1So it's not just cost, it's risk and missed value too.
Speaker 2Precisely, and it disproportionately affects the people you'd think would be doing the high-value work: the data scientists. The sources we looked at, including insights from Emergence AI's team, say data scientists can spend up to 90 percent of their time on this preparatory stuff. That's almost their entire job: data wrangling, data cleaning, data migration. They're dealing with incompatible formats, missing information, weird naming conventions, constant changes to how the data is structured.
Speaker 1So they're experts in analysis, but they're stuck being data janitors.
Speaker 2That's a blunt way to put it, but yeah. Often their skills aren't being used for deep analysis or building those cool predictive models. Most of the day is just getting the data usable, which really raises the question: what if you could just flip that?
Speaker 1Free them up to do the actual science part.
Speaker 2Exactly. Let them focus on strategy, on insight, not on the plumbing.
Speaker 1OK, so that sets the stage perfectly for CRAFT. How does it propose to flip that script?
Speaker 2Right. So CRAFT, and the acronym stands for Create, Remember, Assemble, Fine-tune and Trust, is Emergence AI's first big push into this space. It's a data-focused AI platform, and its core function sounds simple, but the implications are huge. It lets anyone, and they really stress anyone, from analysts right up to executives, turn their business goals directly into these smart, self-verifying multi-agent systems.
Speaker 1Okay, smart, self-verifying multi-agent systems. Let's break that down. Smart implies intelligence, multi-agent suggests multiple AIs working together, but self-verifying? What does that mean?
Speaker 2That's a really crucial part. It means the system doesn't just blindly execute a task. It has built-in checks to validate its own work.
Speaker 1How does that work in practice?
Speaker 2Imagine you ask it to pull data for a sales report. A simple script might just grab the numbers. A self-verifying agent, though, might automatically run consistency checks. Does this number align with last quarter's trend? Does it match data from the finance system? It might cross-reference things, flag anomalies it finds.
Speaker 1Ah, so it's trying to catch errors before they get to me.
Speaker 2Exactly. Instead of just handing you potentially flawed data, it actively tries to ensure the output is reliable and trustworthy, which you know in an enterprise setting, that's absolutely critical. You can't base million dollar decisions on dodgy data.
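To make that self-verification idea concrete, here is a minimal Python sketch. The function, the field names, and the thresholds are all invented for illustration; this is not Emergence AI's actual API, just the shape of the consistency checks described above.

```python
def verify_sales_figure(value, prior_quarters, finance_value, tolerance=0.5):
    """Flag a figure that breaks sharply with recent history or that
    disagrees with a second source (here, a finance system)."""
    issues = []
    if prior_quarters:
        baseline = sum(prior_quarters) / len(prior_quarters)
        # Flag deviations larger than `tolerance` (50% by default) from trend.
        if baseline and abs(value - baseline) / baseline > tolerance:
            issues.append("deviates sharply from recent trend")
    if finance_value is not None and abs(value - finance_value) > 1e-6:
        issues.append("does not match the finance system")
    return issues  # an empty list means every check passed

print(verify_sales_figure(105.0, [98.0, 102.0, 100.0], 105.0))  # → []
print(verify_sales_figure(500.0, [98.0, 102.0, 100.0], 480.0))
```

The point is the pattern, not the thresholds: instead of handing back a raw number, the agent returns the number together with the checks it failed.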
Speaker 1Absolutely, and the interaction model. You said plain English.
Speaker 2Yeah, that's the other revolutionary part. You interact with it using plain old English. Just describe your goal, what you need, and apparently you get results back in minutes, not days, not weeks.
Speaker 1Minutes. So no complex coding, no query languages needed?
Speaker 2That's the claim. No deep technical background required. It's about democratizing access to this kind of powerful automation. Imagine, like you said, collapsing weeks of work into minutes. It's a massive shift. It's like having this super-efficient data team on call, ready instantly.
Speaker 1Yeah, and it's not just a concept. It has concrete capabilities right out of the box.
Speaker 2Like what can it actually do from day one?
Speaker 1Well, it can talk to data in your databases and data warehouses and understands different structures, different schemas, SQL, NoSQL, data lakes, you name it. It can retrieve and process that information.
Speaker 2Okay, so it can fetch the data.
Speaker 1But more than that, it can write code. For instance, it can automate building those complex ETL pipelines. Extract, transform, load.
Speaker 2Ah, the plumbing itself. Exactly. Those processes are notoriously fiddly and error-prone for humans: moving data, changing its format, loading it somewhere else. CRAFT can automate large parts of that. Data ingestion, cleansing, transformation, a lot of that manual drudgery.
Speaker 1Which connects back to that 90% figure for data scientists.
Speaker 2Precisely, and the co-founders are quite explicit Tasks that used to take days, weeks and months are now potentially doable in mere minutes.
Speaker 1It's not just incremental improvement, then it's a whole different order of magnitude.
Speaker 2Right. It feels like it elevates AI to manage higher levels of abstraction. It solves bigger chunks of the problem that people used to handle manually, and at a speed and scale that was just unthinkable before.
Speaker 1Shifting the bottleneck from human time to compute time.
Speaker 2Yeah, that's a good way to think about it.
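For readers who have not built one, here is a toy extract-transform-load pipeline of the kind the conversation says CRAFT generates automatically. The schema, the field names, and the data are all invented; this is only a sketch of the ingestion-cleansing-loading drudgery being discussed.

```python
import csv
import io
import sqlite3

# Invented raw export: stray whitespace, a missing amount, mixed formatting.
raw = "id,amount,region\n1, 19.99 ,EMEA\n2,5.00,AMER\n3,,EMEA\n"

def extract(text):
    # Extract: read the raw CSV into dict rows.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: strip whitespace, drop rows missing an amount,
    # normalise amounts to floats.
    cleaned = []
    for r in rows:
        amount = (r["amount"] or "").strip()
        if amount:
            cleaned.append({"id": int(r["id"]), "amount": float(amount),
                            "region": r["region"].strip()})
    return cleaned

def load(rows, conn):
    # Load: land the cleaned rows in a warehouse table.
    conn.execute("CREATE TABLE sales (id INTEGER, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO sales VALUES (:id, :amount, :region)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
count, total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(count, round(total, 2))  # → 2 24.99
```

Even this tiny version shows why hand-written pipelines are brittle: every new format quirk means another special case in `transform`.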
Speaker 1Okay, so CRAFT is powerful, but you mentioned the really core innovation is this idea of agents creating agents. That sounds kind of sci-fi. What does it actually mean?
Speaker 2Yeah, it does sound a bit out there at first, but before we get to the creating agents part, let's make sure we're clear on what an AI agent is in this context.
Speaker 1Good idea, lay the groundwork.
Speaker 2So traditional software, as we sort of touched on, is mostly static. You give it commands, it follows them: step A, step B, step C.
Speaker 1That's right, it executes a script.
Speaker 2Exactly. Agents are different. They're designed to chase outcomes. They adapt. Ashish, one of the scientists at Emergence AI, gave a really neat definition. He said an agent is an entity that can sense, reason, plan and then can act.
Speaker 1Sense, reason, plan, act.
Speaker 2Sense means observing its environment, maybe reading data from a database, looking at a web page, processing a document. Reason means interpreting that information, understanding it in the context of its goal. Plan means figuring out a strategy, a sequence of steps to achieve the goal. And act, well, doing those steps.
Speaker 1So it's not just following a pre-written recipe, it's figuring out the recipe and cooking the meal.
Speaker 2That's a great analogy. Yes, and importantly, when it acts, it can change the state of the world, like writing code, updating a database, sending a notification. But, just as importantly, it can change its own state by acquiring knowledge.
Speaker 1Meaning it learns.
Speaker 2Exactly. It learns from what worked and what didn't, updates its internal understanding, its strategies. That self-modification is key.
Speaker 1Okay. And Emergence AI says these agentic systems are good at tasks needing multiple reasoning steps and using multiple tools.
Speaker 2Right, think about a complex request, maybe analyzing sales data. An agent might need to first query the sales database (tool one), then use a statistical analysis library (tool two) on the results, then generate a chart using a visualization tool (tool three), and finally write a summary report (tool four). The agent coordinates that whole workflow intelligently.
Speaker 1Got it. So they're like smart autonomous project managers for specific tasks.
Speaker 2Yeah, that's a fair way to think about it.
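The sense-reason-plan-act loop and the four-tool sales-analysis chain just described can be sketched together in a few lines. Every tool name, function, and number below is an illustrative assumption, not CRAFT's real design.

```python
from statistics import mean

# Four "tools" the agent can call (all invented for illustration).
def query_sales_db():            # tool 1: fetch raw numbers
    return [120, 130, 125, 160]

def analyse(rows):               # tool 2: statistics
    return {"mean": mean(rows), "max": max(rows)}

def chart(rows):                 # tool 3: a crude text chart
    return "\n".join("#" * (v // 10) for v in rows)

def report(stats):               # tool 4: plain-language summary
    return f"Average sales {stats['mean']:.1f}, peak {stats['max']}."

def agent_step():
    rows = query_sales_db()      # sense: observe the environment
    stats = analyse(rows)        # reason: interpret what was sensed
    plan = [chart, report]       # plan: pick the remaining tools
    # act: execute the plan, producing outputs that change "the world"
    return [tool(rows if tool is chart else stats) for tool in plan]

figure, summary = agent_step()
print(summary)  # → Average sales 133.8, peak 160.
```

The interesting part is the coordination: the agent decides which tools to chain and in what order, rather than following a single hard-coded script.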
Speaker 1Okay, now: agents creating agents. How does that work? This is where it gets really interesting for me.
Speaker 2Right. So this is the ACA, agents creating agents, breakthrough. The idea is, when you give CRAFT a task, you, the user, don't need to know if there's already a perfect, pre-built agent for it. You don't need to figure out how to chain different agents together.
Speaker 1The system handles that complexity.
Speaker 2Exactly. There's this orchestrator, which they describe as a kind of meta agent, an agent overseeing other agents, and this orchestrator figures it all out. It dynamically creates new agents on the fly if it needs to.
Speaker 1It creates them from scratch?
Speaker 2Well, it might be more accurate to say it composes them. It assesses the task, breaks it down, then looks at its library of existing agents, the big, powerful ones. If there's a specific gap, a very niche requirement for this particular task, it builds something custom for that gap. It composes existing agents and new agents to accomplish your task. It might generate a small piece of code, a specific function, essentially a mini agent, just for that one step.
Speaker 1Okay, that makes more sense than building whole complex agents from zero every time.
Speaker 2Right. Viv, who works on agent engineering there, used a really helpful metaphor. He talks about existing agents as big rocks. These are your general-purpose powerhouses: maybe a web automation agent, a complex data analysis agent.
Speaker 1Okay, the standard toolkit.
Speaker 2Yeah, but the dynamically created agents, those are the little rocks. They fill in the spaces for these really niche problems.
Speaker 1Ah, like packing sand around the big rocks to make a solid structure.
Speaker 2Exactly. So say your big data agent needs one specific piece of info from some weird internal report format it's never seen. The orchestrator might spin up a temporary little-rock agent just to parse that specific report format for that one piece of data, then plug the result back into the main workflow.
Speaker 1Wow. So the system adapts its own toolkit on the fly.
Speaker 2That's the core of it and that dynamic creation and composition is what makes it, in their words, truly agentic. It's not just executing. It's planning reasoning, breaking down tasks, composing solutions and, crucially, verifying the results and improving itself along the way.
Speaker 1And you mentioned improvement. These little rock agents, if they work well, do they stick around?
Speaker 2Apparently yes. If a dynamically created component or workflow proves effective and useful, it can be saved, remembered and reused. So the system gets smarter and more efficient over time. It's like that new employee analogy again: they learn a new skill, they become more valuable.
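A minimal sketch of the "big rocks, little rocks" composition with a registry that remembers new helpers. The class, the task names, and the parsing logic are all hypothetical; only the idea (compose a narrow helper on the fly, then keep it) comes from the discussion above.

```python
class Orchestrator:
    def __init__(self):
        # "Big rocks": general-purpose agents registered up front.
        self.registry = {
            "extract_number": lambda text: float(
                "".join(ch for ch in text if ch.isdigit() or ch == ".")),
        }

    def solve(self, task, payload):
        if task not in self.registry:
            # "Little rock": compose a narrow helper for the gap,
            # then remember it so the system improves with use.
            if task == "parse_weird_report":
                self.registry[task] = lambda text: text.split("|")[-1].strip()
        return self.registry[task](payload)

orc = Orchestrator()
print(orc.solve("parse_weird_report", "HDR|batch 17|yield 93.2"))  # → yield 93.2
print("parse_weird_report" in orc.registry)  # → True, saved for next time
```

The second print is the "agent acquisition" point: the helper built for one odd report format stays in the registry, so the next similar task skips the creation step entirely.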
Speaker 1So the big takeaway here is this AI isn't just smart, it's adaptively smart, it's constantly learning, growing, tailoring itself to my specific business needs.
Speaker 2That seems to be the fundamental idea a system that gets better with you, which is a massive edge in a world that changes so fast.
Speaker 1That adaptability brings us neatly to the next point, this idea of self-improvement and memory.
Speaker 2It's baked right into the system, absolutely. It's not an add-on; it's core to how these CRAFT agents operate. They don't just run a task and forget about it. They learn, they improve. They have self-improvement capabilities fueled by long-term memory.
Speaker 1And how do they build up this experience? Just by doing the tasks we give them.
Speaker 2That's part of it. Yes, they learn through usage, seeing what works, getting feedback, implicitly or explicitly. What's really interesting is they also learn through self-exploration.
Speaker 1Self-exploration. What does that mean? Like they practice in their downtime.
Speaker 2Kind of it implies they might proactively test different ways to solve a problem they've encountered before, even when not actively tasked, or maybe explore adjacent data sources, look for patterns, actively seeking ways to optimize their own internal methods. It's more proactive than just waiting for the next assignment.
Speaker 1Okay, that's a subtle but important distinction. It's not just passive learning.
Speaker 2Right, and Ashish broke down this self-improvement through memory into three key dimensions. First is knowledge acquisition, and it's more than just getting facts. It's about learning specialized domain knowledge. Think about industries like semiconductors or biotech. They have incredibly specific jargon, complex processes, unique physics. The agents learn this context.
Speaker 1So they speak the language of the business.
Speaker 2Yes, and beyond that formal knowledge.
Speaker 1They also pick up tacit knowledge, the stuff that's usually not written down. Like office politics?
Speaker 2Huh, Maybe not quite that, but more like the unwritten rules of how things actually get done in a specific company, Operational nuances who the real expert on System X is? Maybe even subtle approval workflows. This stuff is critical for fitting smoothly into an organization's real processes.
Speaker 1Okay, that makes sense, embedding itself in the culture almost. What's the second dimension?
Speaker 2Second is skill acquisition. This is about process. Once an agent figures out a really effective way to do something maybe the fastest way to query a particular database or the best technique for cleaning a certain type of messy data that knowledge, that skill is persistent.
Speaker 1It remembers the how-to.
Speaker 2Exactly. It internalizes that optimal workflow and reuses it, so the system gets more efficient, more reliable over time. Ashish compared it to a new employee picking up new skills: they get better, faster, more dependable at their job. It's not just remembering facts, it's remembering how to do things well.
Speaker 1Right, learning best practices. And the third dimension?
Speaker 2The third one ties back to our earlier discussion: new agent creation, or agent acquisition. This is where the system doesn't just improve existing agents or skills; it actually builds or acquires entirely new capabilities, new agents, as it runs into new kinds of problems, so its overall potential grows over time. Its scope expands, its ability to tackle more diverse and complex challenges increases dynamically. It's this hierarchy of knowledge, skills and new capabilities that creates this compounding intelligence effect.
Speaker 1Making it more valuable the more you use it.
Speaker 2That's the idea, and they have a concrete example of this in action. They used CRAFT to help build itself.
Speaker 1Right, the dogfooding example, eating your own dog food.
Speaker 2Exactly. They used CRAFT internally to automate writing code for connecting different internal systems needed for CRAFT's own development.
Speaker 1Like connecting their analytics database to their CRM, you mentioned.
Speaker 2Yeah, things like that. Instead of engineers manually scripting those connections, CRAFT learned the requirements, learned the schemas and built those data bridges automatically.
Speaker 1It saved their own team significant manual coding effort. That's pretty compelling. It shows immediate practical value even for building the tool itself.
Speaker 2It does. It demonstrates that the self-improvement and automation aren't just theoretical. They deliver real productivity gains right away.
Speaker 1So the upshot of all this learning and memory the system gets smarter, it adapts, it tailors itself to your specific business context. That feels like a significant competitive advantage.
Speaker 2It certainly seems positioned that way, an intelligence that evolves with your enterprise.
Speaker 1Okay, so we've got this powerful learning agent creating platform. Where is it actually making waves? What kind of real world impact are we seeing?
Speaker 2Well, Emergence AI is pretty clear that CRAFT is purpose-built for data-heavy environments where rapid decision-making is critical.
Speaker 1So industries drowning in data.
Speaker 2Exactly. Places where the sheer volume, velocity and variety of data make manual analysis incredibly slow, error-prone or just plain impossible. They list several key industries already seeing impact. Semiconductors is a big one, and we should definitely dig into that, but also oil and gas, telecommunications, healthcare, financial services, supply chain and logistics, e-commerce and even research environments.
Speaker 1That's a pretty broad range, all data intensive though.
Speaker 2Very much so. Places where delays mean lost money or missed opportunities, and where complexity is high.
Speaker 1Okay, let's do that deep dive into semiconductors. You said it's a prime example of ROI.
Speaker 2Absolutely. The semiconductor industry is just incredibly data intensive. We're talking hundreds of gigabytes of data per product weekly, maybe even more now.
Speaker 1Wow, where does all that data come from?
Speaker 2Every stage Designing the chip, the actual fabrication and the fabs, those super clean factories, then offshore assembly and testing facilities, plus data from the fabless companies themselves, like design specs, customer requirements, quality targets. It's a deluge from multiple, disparate sources.
Speaker 1And the challenge is making sense of it all quickly.
Speaker 2Precisely. A key problem is detecting subtle drops in manufacturing yield. We're not just talking about a simple alert if yield falls below 90%. It's often about spotting subtle downward trends hidden in noisy data, or complex anomalies that aren't just a single parameter going wrong.
Speaker 1Things that are hard for a human to spot just by looking at dashboards.
Speaker 2Very hard, especially when you need to correlate data across all those different sources the fab, the assembly plant, the design data. Maybe a tiny temperature fluctuation in one machine in the fab, combined with a specific material batch used weeks ago, is causing a slight increase in failures during final testing overseas.
Speaker 1Connecting those dots sounds incredibly difficult.
Speaker 2It is. It requires pulling data from multiple systems, often with different formats, different timestamps, different owners. Doing that root cause analysis manually is described as almost impossible for humans to be on top of efficiently. The sources say analysis typically takes days.
Speaker 1Days, and in semiconductor manufacturing days mean potentially millions of dollars lost, right?
Speaker 2Easily. If a yield issue isn't caught and fixed quickly, you could be producing thousands of faulty chips. That's wasted materials, lost production capacity, potential delays to customers. It adds up incredibly fast.
Speaker 1So how does CRAFT help here?
Speaker 2It automates that whole detection and root cause analysis process. Instead of days, the analysis time gets cut down to minutes. They give an example of offline analysis taking maybe 20, 30 minutes instead of several days.
Speaker 1From days to minutes. That's the game changer.
Speaker 2It really is, yeah, and the value is crystal clear. They talk about identifying issues that lead to millions in weekly savings, because companies can take immediate actions to counteract anomalies. Catch it early, fix it fast, save a fortune.
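One way the "subtle downward trend hidden in noisy data" detection could look, sketched as a rolling-mean comparison. The window sizes, the threshold, and the yield numbers are invented for illustration; real fab analytics would be far more sophisticated.

```python
def trend_alert(series, window=3, baseline=6, drop=0.3):
    """Alert when the short rolling mean sits more than `drop` points
    below the longer-run baseline mean."""
    if len(series) < baseline:
        return False
    recent = sum(series[-window:]) / window     # short-term yield level
    base = sum(series[-baseline:]) / baseline   # longer-run baseline
    return (base - recent) > drop

# Two invented yield histories (percent): noisy-but-steady vs. drifting down.
steady = [94.1, 93.8, 94.0, 94.2, 93.9, 94.1]
drifting = [94.1, 94.0, 93.9, 93.2, 93.0, 92.8]
print(trend_alert(steady))    # → False
print(trend_alert(drifting))  # → True
```

Note that no single reading in the drifting series would trip a fixed "below 90%" alarm; it's the relationship between recent values and the baseline that gives the signal away, which is the point made above.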
Speaker 1And crucially, what about the engineers? Are they being replaced?
Speaker 2No, and this is a point they emphasize. The agents complement the skill sets of the existing engineers. Hardware engineers are experts in chip design, materials, physics. Their core job isn't usually data science or AI programming.
Speaker 1Right, they're domain experts, not necessarily data wranglers.
Speaker 2Exactly so. CRAFT frees them up from digging through log files and spreadsheets. It gives them the insights they need quickly so they can focus on what they are best at: solving the underlying engineering problem, improving the design, tweaking the manufacturing process. It augments their expertise.
Speaker 1That makes sense. Empowering the experts, not replacing them. Are there other strong examples outside of semiconductors?
Speaker 2Yes, several. In telecom, for instance, they report a 70% reduction in the time spent on data governance tasks.
Speaker 1Data governance that sounds important, but maybe a bit dry.
Speaker 2It's critical in regulated industries. Think about ensuring compliance with GDPR or CCPA, tracking where data came from and who accessed it, data lineage, enforcing policies classifying sensitive info. It's hugely time-consuming and often manual. Automating 70% of that is a massive efficiency win.
Speaker 1Okay, yeah, I can see the value there Less risk, less overhead.
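A toy example of one governance chore just mentioned: classifying records that appear to contain personally identifiable information before anything downstream may touch them. The patterns are deliberately simplistic assumptions; production governance would use much more robust classifiers.

```python
import re

# Illustrative patterns only, not a complete PII taxonomy.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(record):
    # Return the sorted list of PII kinds detected in a free-text record.
    return sorted(kind for kind, pat in PII_PATTERNS.items()
                  if pat.search(record))

print(classify("contact: ada@example.com"))          # → ['email']
print(classify("ssn 123-45-6789, ada@example.com"))  # → ['email', 'ssn']
print(classify("order #4521 shipped"))               # → []
```

Even this crude tagger hints at why automating 70% of governance work matters: classification like this is exactly the kind of repetitive, rule-driven task that otherwise eats human hours.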
Speaker 2Then there's a fascinating case with a large online forum. They used AI agents to help reduce the number of unsafe images being posted by 60 million per month.
Speaker 1That's a staggering scale.
Speaker 2It is, and it's likely not just simple image filtering. It probably involves understanding context, maybe analyzing text associated with the image, applying complex moderation rules at a scale humans just couldn't handle. It lets human moderators focus on the really tricky nuance cases.
Speaker 1Improving safety and probably reducing moderator burnout too.
Speaker 2Very likely. And another really interesting one is the integration with Nymerson, specifically their O-plus solution for complex manufacturing.
Speaker 1What was the challenge there?
Speaker 2Aaron Rousseau from Nymerson explained it well. He said previously, manual analysis of production data was so slow that by the time analysis was complete, insights were always too late to actually influence the current process.
Speaker 1So purely reactive.
Speaker 2Exactly. With CRAFT integrated, they get real-time insights aligned with complex production workflows. This allows for improving yields and accelerating issue detection. Imagine a sensor flags something subtle. CRAFT analyzes it instantly, correlates it, maybe suggests an adjustment before defects start happening. That's proactive control.
Speaker 1That real-time aspect seems key in dynamic environments.
Speaker 2Absolutely. And beyond these industrial cases, the demos on the Emergence AI website show even more potential Research analysis, summarizing academic papers, pulling out key findings, even spotting conflicting results, and sending it all to Confluence.
Speaker 1Saving researchers hours of reading time.
Speaker 2For sure. E-commerce competitive analysis, comparing platforms like Shopify and Wix on features, pricing, market positioning. Health insurance insights. Pulling investment and M&A data from dense financial reports. Gene therapy market analysis from SEC filings. Even summarizing the huge WHO global nutrition report.
Speaker 1Wow. It seems applicable anywhere there's complex information or data that needs extracting, synthesizing or analyzing quickly.
Speaker 2That's the impression. It's acting like a highly efficient, intelligent knowledge worker across a very broad range of tasks.
Speaker 1Okay, this is undeniably powerful stuff, which naturally leads to the big question: trust, control, privacy, security, compliance. How do you manage AI agents that are becoming increasingly autonomous? That sounds like a major concern for any enterprise.
Speaker 2It's a huge concern, and rightly so. Robbie, the co-founder and CTO at Emergence AI, was very clear that governance isn't an add-on. It's designed in as a core design principle.
Speaker 1Meaning it's built in from the start, not bolted on later.
Speaker 2Exactly. The idea is that enterprises themselves should define the rules, the policies. They specify what agents can and cannot do. For example, a policy might state no agent can access personally identifiable information without specific anonymization steps, or any financial transaction initiated by an agent requires sign-off from a specific manager.
Speaker 1So the company sets the boundaries and the system enforces them.
Speaker 2Right. Mechanisms are built into CRAFT to enforce these policies: things like granular access controls, detailed audit logs tracking every action an agent takes, and limiting the action space or capabilities available to certain agents based on policy.
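The enforcement pattern described here (a declared action space, a human sign-off gate, and an audit log) can be sketched in a few lines. This is a minimal illustration with hypothetical names, not Emergence AI's actual API.

```python
# Minimal sketch of policy-based action gating for agents.
# All names (Policy, PolicyEngine, the agent and action labels) are
# hypothetical; this only illustrates the pattern discussed above.
from dataclasses import dataclass, field

@dataclass
class Policy:
    allowed_actions: set[str]                       # the agent's permitted "action space"
    requires_signoff: set[str] = field(default_factory=set)

class PolicyEngine:
    def __init__(self, policies: dict[str, Policy]):
        self.policies = policies
        self.audit_log: list[tuple[str, str, str]] = []

    def authorize(self, agent: str, action: str) -> str:
        policy = self.policies.get(agent)
        if policy is None or action not in policy.allowed_actions:
            verdict = "denied"
        elif action in policy.requires_signoff:
            verdict = "pending_human_signoff"       # human-in-the-loop gate
        else:
            verdict = "allowed"
        self.audit_log.append((agent, action, verdict))  # every attempt is audited
        return verdict

engine = PolicyEngine({
    "reporting_agent": Policy(
        allowed_actions={"read_sales_data", "initiate_refund"},
        requires_signoff={"initiate_refund"},        # financial actions need a manager
    ),
})
print(engine.authorize("reporting_agent", "read_sales_data"))  # allowed
print(engine.authorize("reporting_agent", "initiate_refund"))  # pending_human_signoff
print(engine.authorize("reporting_agent", "read_pii"))         # denied
```

Note how the denied case covers both unknown agents and actions outside the declared action space, while the audit log records every attempt regardless of outcome.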
Speaker 1What about human oversight? Is it all just automated rules?
Speaker 2No, and this is critical. They stress the concept of human in the loop. While the goal is more autonomy, human oversight is still considered essential, especially for critical decisions or actions.
Speaker 1So the system might do the analysis, maybe even propose an action, but a human gives the final OK.
Speaker 2Precisely. It's about ensuring the agent's actions always align with the company's principles, ethics and compliance needs. It's collaborative intelligence, not handing over the keys completely. The system flags things that need human review and provides the context for that decision.
Speaker 1That makes sense. It's about augmenting human judgment, not replacing it entirely.
Speaker 2And this ties into how workforce roles are likely to change. Satya, the CEO, used a really interesting analogy with self-driving cars.
Speaker 1Oh yeah? How so?
Speaker 2He pointed out that even with level five autonomy, fully self-driving, there's still likely a human in the loop somewhere, maybe overseeing a fleet from a control center, even if they aren't physically in the car. He sees a parallel in the enterprise.
Speaker 1So humans shift from doing the task to managing the system that does the task.
Speaker 2Exactly. The role evolves from doing all the coding or building the agents yourself to overseeing these systems, nudging these systems, ensuring they stay within guardrails and making sure the outputs meet the business requirements. It's a supervisory, strategic role.
Speaker 1That sounds like a different skill set, though.
Speaker 2It absolutely is. Satya called them new skills, even metacognitive skills, that people have to be taught, things like understanding how agents reason, how to formulate problems effectively for an AI, how to interpret their outputs critically and how to define and monitor those crucial guardrails.
Speaker 1It's less about execution and more about orchestration and governance.
Speaker 2Well put. And recognizing this need, Emergence AI has actually partnered with Andela.
Speaker 1Ah, Andela, the global tech talent company.
Speaker 2That's the one. They're working together to specifically train engineers and data scientists for this new reality: going from having to build agents themselves to managing, governing, and working with agents.
Speaker 1That's proactive, addressing the skills gap head-on.
Speaker 2It seems so. The training focuses on things like advanced prompt engineering for guiding agents, debugging agent workflows, setting up monitoring, and defining those governance policies effectively.
Speaker 1And the long-term vision? Is it just for techies?
Speaker 2No, the ultimate vision they articulate is that even business users with some analytical skills should be able to work with systems like this and derive value instantly. So further democratization, empowering a broader range of people within the enterprise.
Speaker 1Okay, let's dig into the tech a bit more. Interoperability is always a huge issue in enterprise IT. How does CRAFT play with existing tools? And what about all the different AI models out there: OpenAI, Anthropic, Meta? Is it locked into one?
Speaker 2Great questions and, according to Vive, interoperability was a first-class design goal right from the start. They seem very committed to not creating another silo.
Speaker 1How do they achieve that? What are the mechanisms?
Speaker 2Several things. They offer open API specifications so other systems can integrate with CRAFT. They have an SDK, a software development kit, for developers who want to build custom integrations or extensions. And, critically, they fully adopted something called MCP.
Speaker 1MCP, the Model Context Protocol. What is that?
Speaker 2Think of it like a universal translator and capability directory for AI models and agents. It allows different AI systems, even from competing vendors, to understand each other's requests, data formats, function calls, and context. CRAFT can act as both an MCP client, using other MCP-enabled tools, and an MCP server, allowing other tools to use CRAFT's capabilities.
Speaker 1Okay, so MCP is key for making different AI systems talk to each other without custom hacks.
Speaker 2That's the promise of it, yes. It enables a more open, interoperable AI ecosystem within a company.
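The "universal translator and capability directory" idea can be shown with a toy example. MCP itself is a JSON-RPC-based protocol with methods for listing and calling tools; the sketch below is a simplified in-process imitation of that contract, not the real MCP SDK, and all class and tool names are hypothetical.

```python
# Toy sketch of the MCP idea: a shared "list tools / call tool" contract
# lets any client discover and invoke another system's capabilities without
# custom integration code. Not the real MCP SDK; illustrative only.
import json

class ToolServer:
    """Exposes capabilities behind a uniform list/call interface."""
    def __init__(self):
        self._tools = {}

    def register(self, name, description, fn):
        self._tools[name] = {"description": description, "fn": fn}

    def handle(self, request_json: str) -> str:
        req = json.loads(request_json)
        if req["method"] == "tools/list":
            # Capability directory: advertise what this server can do.
            result = [{"name": n, "description": t["description"]}
                      for n, t in self._tools.items()]
        elif req["method"] == "tools/call":
            tool = self._tools[req["params"]["name"]]
            result = tool["fn"](**req["params"]["arguments"])
        else:
            result = {"error": "unknown method"}
        return json.dumps({"result": result})

server = ToolServer()
server.register("summarize", "Summarize a document", lambda text: text[:20] + "...")

# Any client speaking the same contract can discover and call the tool:
listing = json.loads(server.handle(json.dumps({"method": "tools/list"})))
call = json.loads(server.handle(json.dumps(
    {"method": "tools/call",
     "params": {"name": "summarize",
                "arguments": {"text": "Quarterly production yield report for Q3"}}})))
```

Because both discovery and invocation go through one serialized contract, a system can play client and server at once, which is the dual role described for CRAFT above.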
Speaker 1And which AI models does CRAFT actually support?
Speaker 2They're explicitly model agnostic. They support a range of the big ones right now: OpenAI's GPT-4o and GPT-4.5, Anthropic's Claude 3.7 Sonnet, and Meta's Llama 3.3. And they also support popular orchestration frameworks that developers might already be using, like LangChain, CrewAI, and Microsoft AutoGen.
Speaker 1So companies aren't locked into one specific LLM vendor.
Speaker 2Correct. They can choose the best models for the job or use models they already have access to, and Emergence AI runs continuous benchmarking activity to track how these models perform, ensure results are consistent or deterministic, as they say, even as the underlying models get updated or drift over time.
Speaker 1That model drift is a real concern, isn't it? An update to an LLM could break a workflow.
Speaker 2It absolutely can, so that continuous benchmarking and focus on determinism is really important for enterprise reliability. It raises a broader question, though how vital is this open model agnostic approach for the long-term success of enterprise AI? It feels like a smart strategy in such a fast-moving field.
Speaker 1It definitely provides flexibility and future-proofing.
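The continuous benchmarking mentioned above boils down to a regression check: re-run a fixed set of prompts against each model version and compare the answers to a stored golden run. The harness below is a hypothetical sketch of that idea, not Emergence AI's actual benchmarking system.

```python
# Illustrative drift check: fraction of fixed benchmark prompts whose answer
# changed relative to a stored "golden" run. A nonzero score flags model
# drift after an LLM update. Hypothetical harness, not a real product API.
def drift_score(golden: dict[str, str], current: dict[str, str]) -> float:
    """Fraction of benchmark prompts whose answer differs from the golden run."""
    changed = sum(1 for prompt, answer in golden.items()
                  if current.get(prompt) != answer)
    return changed / len(golden)

golden = {"sum 2+2": "4", "capital of France": "Paris"}
after_update = {"sum 2+2": "4", "capital of France": "Paris, France"}

score = drift_score(golden, after_update)
# Half the answers changed after the (hypothetical) model update.
```

In practice the comparison would be fuzzier than exact string equality (semantic similarity, tolerance thresholds), but the principle of pinning outputs against a golden baseline is the same.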
Speaker 2Okay, looking ahead, what's on the roadmap for CRAFT? They started with data pipelines, but where do they go next?
Speaker 1Right. They're clear that the current focus on the data space is strategic. It's where they can deliver high accuracy, reliability, and task completion, and demonstrate that clear ROI we talked about.
Speaker 2Focusing on tangible value first.
Speaker 1Exactly. They acknowledge that the dream of a completely general, open-ended orchestrator, like a true AGI that can do anything, is still a way off and requires major research breakthroughs.
Speaker 2So they're being pragmatic about the current scope.
Speaker 1Yes, but they have ambitious plans to expand. On the base platform layer, they're planning more connectors, especially using MCP, to talk to even more tools and data sources. They're building out agent deployment and scheduling, so you can run agents automatically on a schedule or trigger them based on events, like new data arriving.
Speaker 2Making them more proactive and integrated into workflows.
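Event-triggered agent runs of the kind described above follow a familiar publish/subscribe shape. The sketch below uses hypothetical names (AgentScheduler, the event and agent labels) purely to illustrate the pattern.

```python
# Minimal sketch of event-triggered agent scheduling: agents subscribe to an
# event type (e.g. new data arriving) and fire when it is published.
# All names are hypothetical; this is a pattern illustration only.
from collections import defaultdict

class AgentScheduler:
    def __init__(self):
        self._subscriptions = defaultdict(list)
        self.runs = []                               # record of (agent, payload) runs

    def on_event(self, event_type: str, agent_name: str):
        """Subscribe an agent to an event type."""
        self._subscriptions[event_type].append(agent_name)

    def publish(self, event_type: str, payload: dict):
        """Fire every agent subscribed to this event type."""
        for agent in self._subscriptions[event_type]:
            # A real system would launch the agent's workflow here;
            # we just record that the agent ran and with what input.
            self.runs.append((agent, payload))

scheduler = AgentScheduler()
scheduler.on_event("new_file_in_s3", "ingestion_agent")
scheduler.publish("new_file_in_s3", {"path": "s3://bucket/sales.csv"})
```

Cron-style scheduled runs would slot into the same structure, with a timer publishing a tick event instead of a data source publishing an arrival event.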
Speaker 1Right. And, interestingly, in the next three to six months they plan to add code viewing and editing capabilities, so developers can see the code the agents generate and maybe tweak it.
Speaker 2That seems to be the idea: more transparency, more control, better debugging. Plus ongoing work on improving the human-in-the-loop experience, making that interaction smoother.
Speaker 1Okay, that's the platform layer. What about the data capabilities themselves?
Speaker 2They're expanding there too, across three main areas. First, more sophisticated agentic data analysis: agents doing more advanced stats, finding trends, maybe even generating hypotheses autonomously.
Speaker 1Going beyond just fetching and cleaning.
Speaker 2Second, deeper capabilities in data governance: things like automatically building data catalogs, assessing and enriching metadata, and monitoring data quality continuously, automating more of that critical but often burdensome work.
Speaker 1Automating the compliance and quality checks.
Speaker 2And third, enhancing data engineering capabilities. This includes automating pipeline creation and optimization, maybe even self-healing pipelines that can fix themselves if they encounter problems.
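The "self-healing pipeline" idea can be made concrete with a small sketch: pair each pipeline step with a repair strategy, and retry a failing step after the repair instead of killing the whole run. The function and step names below are hypothetical illustrations, not any product's actual mechanism.

```python
# Sketch of a self-healing pipeline step (illustrative only): on failure,
# apply a repair strategy to the data and retry, rather than aborting.
def run_step_with_healing(step, heal, data, max_attempts=2):
    """Run step(data); on exception, apply heal(data) and retry."""
    for attempt in range(max_attempts):
        try:
            return step(data)
        except Exception:
            if attempt == max_attempts - 1:
                raise                        # healing failed; surface the error
            data = heal(data)                # attempt an automatic repair

# Example: a parse step that fails on malformed rows, healed by dropping them.
def parse_amounts(rows):
    return [float(r) for r in rows]

def drop_malformed(rows):
    cleaned = []
    for r in rows:
        try:
            float(r)
            cleaned.append(r)
        except ValueError:
            pass                             # discard rows that cannot parse
    return cleaned

totals = run_step_with_healing(parse_amounts, drop_malformed, ["1.5", "oops", "2.0"])
```

In an agentic version, the repair strategy itself would be generated or selected by an agent that inspects the failure, but the retry-after-repair loop is the core of the idea.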
Speaker 1Oh, it sounds like they're aiming for comprehensive agentic control over the whole enterprise data lifecycle.
Speaker 2That seems to be the direction: starting focused, then systematically expanding outwards to cover more and more ground.
Speaker 1So it feels like we're watching the early stages of a really adaptable intelligence taking shape, starting with data but with potential far beyond.
Speaker 2I think that's a fair assessment.
Speaker 1Okay, this is all incredibly exciting. For people listening who are thinking, I want to try this, or my company needs this, how can they get started with CRAFT?
Speaker 2Right now, CRAFT is available in a private preview. This is mainly aimed at developers and integrators who want early access to test it out, build on it, and provide feedback.
Speaker 1So not quite general release yet.
Speaker 2Not quite. Broader availability for general enterprise use is planned for later this summer. Interested developers or companies can go to the Emergence AI website right now and sign up for that preview program.
Speaker 1And what if a large enterprise has a really specific urgent need?
Speaker 2They mentioned that direct engagements are also possible on a case-by-case basis, so reaching out directly via the website would be the way to go. And, as we mentioned, there are demos on the site too, which give a good feel for what the platform can do.
Speaker 1Good to know. And what about pricing? Any idea how that will work?
Speaker 2They've outlined three planned tiers. There'll be a free tier, which sounds great for individuals wanting to experiment and just kick the tires.
Speaker 1Lowering the barrier to entry.
Speaker 2Then a pro tier, which will be cost-based and offer more capabilities, higher usage limits, probably aimed at power users or small teams.
Speaker 1Okay.
Speaker 2And finally, an enterprise tier. This will have custom pricing, likely based on usage scale and specific needs. It will include advanced governance features, maybe dedicated infrastructure support, tailored solutions for bigger organizations.
Speaker 1That tiered approach makes sense. Start free, scale up as needed.
Speaker 2Seems designed to encourage adoption and learning.
Speaker 1All right. As we start to wrap up this deep dive, it leaves me with a thought. We live in a world that's just drowning in information, right? Data everywhere, complex workflows slowing everyone down.
Speaker 2The daily reality for many businesses.
Speaker 1So maybe the biggest productivity unlock isn't just finding ways to do our current tasks a bit faster. Maybe it's about getting smarter at designing the systems that do those tasks for us. What if the real edge comes from creating and managing these evolving fleets of AI agents that learn your business, adapt to your needs and free up your people for the really high-impact strategic thinking?
Speaker 2That's a powerful idea, and it raises a really important question, doesn't it? As these agents get better at creating and managing other agents, essentially building layers of autonomous capability, what does...
Speaker 1...that free us up to do?
Speaker 2What new frontiers of human creativity, innovation and strategic planning does that unlock? When we're freed from the drudgery, how do our roles shift from task execution to, maybe, system conceptualization and orchestration, focusing on the truly human skills of innovation and complex problem solving at a higher level?
Speaker 1That's definitely something to ponder. Wow, that was quite the journey into the world of agents creating agents with Emergence AI's CRAFT. I hope everyone listening feels much more informed now, and maybe even a bit inspired to think differently about where AI is heading in the enterprise.
Speaker 2Indeed, it's fascinating stuff. Knowledge is always most valuable when you can actually understand it and think about how to apply it. Hopefully this conversation has provided some powerful insights for you to do just that.
Speaker 1Couldn't agree more. Thank you for joining us on this edition of the Deep Dive. We look forward to exploring more fascinating topics with you again soon.
Speaker 2Stay curious.
Speaker 1And keep diving deep.