Block by Block: A Show on Web3 Growth Marketing

Michael Heinrich with Peter Abilla on 0G as the First AI Layer 1 Blockchain

Peter Abilla

In this conversation, Michael Heinrich discusses his journey into the crypto space and the founding of 0G (Zero Gravity), the first AI Layer 1 blockchain. He elaborates on the unique features of 0G, including its developer-friendly ecosystem, key infrastructure components, and the importance of community engagement. Michael also addresses challenges related to data availability and scalability, discusses blockchain marketing with Peter Abilla, and shares strategies for growth and building a vibrant community around 0G. The conversation highlights the significance of verifiability in AI and concludes with insights on the recent Kaito campaign and its impact on brand awareness.

Takeaways

Michael Heinrich's journey into crypto began with his interest in Bitcoin during graduate school.

Zero G aims to be the first AI Layer 1 blockchain, designed specifically for on-chain AI applications.

The team behind Zero G has strong backgrounds in computer science and engineering.

Zero G offers a one-stop shop experience for developers transitioning from Web2 to Web3.

Community engagement is crucial for the success of Zero G, with a focus on mission alignment.

Data availability and scalability are key challenges that Zero G is addressing.

The Zero G ecosystem includes a service marketplace for various AI applications.

Michael emphasizes the importance of building unique experiences for users on Zero G.

The Kaito campaign was a significant success for Zero G, boosting community involvement.

Verifiability in AI is a major concern, and Zero G is exploring practical solutions. 

Timeline

00:00 Introduction and Background
03:02 The Birth of 0G
05:53 Understanding AI Layer 1 and Zero G's Unique Position
14:17 Key Components of Zero G's Ecosystem
25:16 Growth Strategies and Mainnet Launch
27:02 Building on Zero G: Ecosystem and Support
29:58 Strategic Approaches to Application Development
32:42 Community Engagement and Growth Strategies
34:58 The Zero G Panda: A Fun Mascot Story
36:53 Community Building: From Curiosity to Engagement
39:08 Visionary Thinking: Trust and AI
41:09 Data Verifiability: Approaches and Challenges
43:47 The Kaito Campaign: Community Rallying and Growth
48:03 Future Goals: Leveraging the Kaito Leaderboard

Follow me @papiofficial on X for upcoming episodes and to get in touch with me.

See other Episodes Here. And thank you to all our crypto and blockchain guests.

We're rolling. Michael Heinrich from Zero G. Welcome. Great to be here. Thanks for having me. Absolutely. First off, congratulations on winning the Kaito leaderboard battle royale. Zero Gravity will be listed on the Kaito Connect, I think on Wednesday or next Friday or this Friday. That's exciting. I definitely have some questions around that, but before we get into questions around marketing, positioning, branding, we'd love to first hear about your origin story. How did you get into crypto and, more importantly, why do you stay? Yeah, I've known about Bitcoin probably since 2011, but didn't really pay much attention to it at that time. It was really when I started going to graduate school, about 10 years ago, more than 10 years ago at this point, that I started to learn about Bitcoin, because I heard Marc Andreessen talk about it. I heard Tim Draper talk about Bitcoin. I was a DFJ fellow at that time. I was at Stanford. That was really my initial introduction to it. And basically, like many people, I just invested in some Bitcoin on Coinbase and then read the white paper and totally got hooked. When I really got hooked was during the ICO stage, because in 2016, 2017 I just saw this explosion of creativity. And I knew I wanted to be part of the industry from an operator standpoint, eventually. At the time I was running a very fast scaling Web2 company in the health and wellness space, technology health and wellness space, and even tried to see if there were applications of blockchain within that company. For example, is there a way of aligning different suppliers via blockchain and then openly sharing where different aspects of the supply chain are moving? But it was way too early at that point. Everybody was like, what's a token? I can't hold this on my balance sheet. I'm not going to share information with my competitors. And I was like, okay, too early. So I decided to revisit this a little bit later.
And then I scaled that company quite a bit. But then in late 2022, I got a call from my classmate Thomas, and he basically said, hey, you know, we've invested in crypto and a bunch of things together. I know you wanted to do something in this space. I invested in a company called Conflux. Ming and Fan, two of the co-founders, want to do something more global scale. Are you interested in meeting with them? And I basically said, yeah, why not? You know, I'm open to something new. I met with them, and six months of co-founder dating later, I basically came to the conclusion that, wow, they're like the best engineers and computer scientists I've ever worked with, and I don't care what we start, we have to start something together. And so that was the origin of Zero G. That was in May 2023. Yeah, when you first met your co-founders, did you have the idea at that time to start Zero G as the first AI L1? Was that the initial product that you guys sought to build, or was it something else? Well, it really started with the team. So then it went from, how do we take this amazing team of great computer scientists and kind of business backgrounds and then build something in the space? Just to give you a sense, Fan has two Olympiad gold medals in informatics. He has an MIT computer science PhD and wrote many top academic conference papers together with Ming. He's a professor at the University of Toronto as well. And he was actually Ming's intern initially at Microsoft Research. That's how they met. And Ming's been at Microsoft Research for 11 years and wrote some of the key papers in distributed storage and scalability. He wrote some of the first AI algorithms for Microsoft Bing as well. So really, really strong kind of backgrounds. And so we took that expertise and said, we want to find something that's at the intersection of: what are we passionate about? Where do we have domain expertise? And what's the major unlock for the space?
And it took us probably a good six months to really figure that out, just because we started speaking with builders in the space, and initially we were thinking, well, maybe there's something on the interoperability side that needs to be solved, because the user experience was still kind of lacking at the time in Web3. But we eventually settled on AI, which is really where our passion lies and where we actually see the biggest impact that blockchains can make as well. And so that was kind of late 2023. And since then, I mean, the mindshare of AI plus crypto has just taken off. I mean, the majority of conversations in crypto have been around AI. And so Zero G has really kind of, you guys are in a perfect sweet spot right now. And really, I feel like there's a lot of momentum. Tell us about Zero G and kind of the key aspects of what makes it different from other L1s. You know, this idea of the first AI Layer 1 is a new category, and Zero G is clearly the first in that category, and with the first in any category, you can make a bet that that's going to win. Tell us about that. First of all, tell us about the category, AI L1, and then what's different about Zero G versus, you know, your other Layer 1s. Yeah, we purposefully wanted to build a Layer 1 that's specifically designed for on-chain AI applications. And so we basically designed every part of the stack so that it's suited for AI types of workloads. And so we basically have a few components. One is the chain component. So that's the Layer 1 that has the consensus and so on. The data availability piece is kind of built on top of that. Then we've got a storage layer, and then we've got a compute layer for things like inference and fine-tuning and eventually training as well. And so all of those components are basically there so that you have a one-stop-shop experience.
So if you're a builder, let's say you're coming from Web2 into Web3, and you're like, okay, well, how do I even start? I want to build an agent, let's say. I have to figure out, what agent framework do I use? What do I use as my backend? What inference methodology do I use? Do I use ZKML? Do I use TML? I need to fine-tune my agent. Where do I go for that? Where do I even put a, you know, vector database? How do I then make this all attestable on-chain? And so before you know it, you have like 10 different services you need to chat with. And we just try to make it super simple so that it's a one-stop-shop experience by putting all of the best of Web3 AI at your fingertips. And we can't do this alone. We work with more than 300 projects building on top of our chain, as well as integrating with our infrastructure layer, to make that a very seamless experience for builders. And so that's kind of where you see a lot of that design thinking come into play. The idea of vertically integrating all the services that developers need, what was the thinking behind that in some of the conversations you had? I'm sure you did a lot of customer development as you spoke with potential developers and customers about the types of services and apps that they needed. Tell us about those conversations that led to the product decisions you eventually came to. Yeah, I mean, we want to target both kind of Web2 and AI developers coming into Web3, as well as Web3 builders. And so there are generally two separate ways of thinking about the world. For the Web2 builders, it's very much like, hey, why should I switch if I have really simple, you know, API calls and I have everything under the hood with OpenAI? It's very simple. So for them, it was about simplicity initially. And then the other factor that plays into it is really, how do I build my application in a very customizable way?
And from there, open source is generally superior, because then you can really dial in: which model do I use? How do I, you know, fine-tune the models? What type of context windows do I need? And then recently, with, for example, DeepSeek's emergence, also the cost piece. I forget exactly, it's like 500x more cost-effective than OpenAI o1, for example, with the same performance. And so now certain applications actually also become possible as a result of that. And so that's kind of the Web2 builder perspective coming into Web3. For the Web3 builder perspective, it's very much about, how do I create something that's performant enough so that I can actually get up and running very quickly? So if I'm trying to build an agent, for example, or let's say a multi-agent system that's chatting on X with each other, how do I reduce the friction and make that performant enough so that I don't have any kind of performance bottlenecks? So a little bit of a different perspective from a Web3 builder. I looked at the ecosystem; you have 300 vertically integrated applications and services. It's really incredible what you guys have built over the last, really, two years or so. Tell us about the work involved in doing the outreach to all of these projects, getting their buy-in to integrate and become part of the Zero G ecosystem. That's incredible work that you guys have done. Tell us about that. Yeah, appreciate it. It's both inbound and outbound. So the inbound happened because we position ourselves very clearly, like, we're purpose-built for AI. And so as a result, we had, you know, data labeling companies come in inbound, we've had agent-type companies come in inbound. And then the outbound piece is very much being present at events, being present on social media, being present on podcasts like this. It very much helps with getting a little bit of that mindshare.
And then when we reach out to a specific project that we're really interested in, they're like, oh yeah, I've heard of you guys. I've heard of you here or there. And that's been super helpful. But we have a BD team of about three people as well, so they've been pretty active on sourcing as well as handling inbound requests. Lately, the inbound has definitely been much more than the outbound. Well, your team of three, they're definitely punching above their weight, being able to close 300-plus partners. That's really incredible. And really big names too. Now, if I were a developer and I was building an app, the value prop that Zero G gives me is that you guys built Zero G from the ground up based on first principles. Like, if you were to create a Layer 1 with all the services that AI applications need, what would they have? What would it look like? How performant does it need to be? It's really valuable, what you guys have done. You started from scratch. If you were to build this thing from zero, what would it look like? And that, I imagine, is a huge value proposition for developers. When a developer is looking at building on Zero G versus another Layer 1 or another chain, what do those conversations look like, and how do you convince them to build on Zero G versus another chain? It really depends on where they are, let's say, in the AI stack. So the conversation can be quite different, because sometimes what they really need is really fast storage, for example, because they're building a data labeling service and they need a place to store things in a very decentralized and verifiable way. Other times it's about, hey, I just need a very simple way for our users to be able to use your inference service. And so it depends a little bit on the kind of situation.
Every situation is a little bit different, but being aligned with an AI-specific chain, that alone is very helpful, because then you're part of this broader ecosystem, and that leads others to want to be part of it as well. So it's kind of a self-fulfilling prophecy, if you will. I think that alone has been a very strong draw. And then the second draw is the kind of technological superiority in many instances. And then the final one is just around how we think about the cost perspective as well. So if you're switching from, say, an Amazon S3, we can be up to 80% less than that at the same performance levels. So for example, on the storage side, we've tested our storage at about 2 gigabytes per second in throughput. And that's the fastest ever recorded in kind of decentralized storage. So we're very proud of a lot of those types of results. That's great. It looks like, from a product perspective, there are kind of four key offerings or four main parts of the Zero G ecosystem. So there's the chain, there's compute, and then storage, and then data availability. Could you take us through each of those and maybe share kind of what's unique about it, keeping in mind the AI developer specifically? Yeah, I'd say there's also a service marketplace component, and I'll get into that in a second, because we consider data availability a bit more part of the chain, and I'll explain why in a second. So on the chain part, right now we're essentially on kind of a version one of the chain, just to get things going, so that we can actually utilize the benefits of, you know, the consensus layer and so on. Our goal, however, is to build a second version of this chain later this year, where we've essentially figured out a way to get to infinite TPS, if you will, so that the chain itself scales depending on the workload.
So one of the issues you have today, for example on Solana, is that Pump.fun makes up something like 70% of the transactions. You speak to Pump.fun and they're like, well, we want to move off the chain because we have too many failed transactions. We want to build our own infrastructure. And so we can't let that happen for specific AI workloads. So for example, if you're running training on-chain and you start having a lot of failed transactions, then you're never going to finish training that particular model or fine-tuning that particular model. And so that's the big issue. So how do we make sure that the chain consistently scales itself? And so the way we figured that out is we built this data availability layer where every node that gets added to the network adds to the overall throughput. And so you can get to about, let's say, 5,000 nodes with a custom consensus layer. And so that scales to about that. And every node, depending on what you use, can get to about 30 megabytes per second in throughput. And so you've got gigabytes per second of throughput now, which, by the way, no other DA layer is capable of doing. And then what happens is that the consensus layer itself becomes the bottleneck. And so we've basically then figured out, what happens if we also horizontally scale the consensus layers themselves? And so that then leads to infinite scalability, because you can just spin up an arbitrary number of consensus layers. So as the DA nodes increase, you can increase the consensus layers that you match with them as well. Now, what you can then do is you can also modularize the execution layer. So then you can combine the execution layers with different DA layers. And so then you can essentially drive the transaction throughput, you know, however much you want. And so transactions per second is no longer an element or a blocker to actually building fully on-chain applications.
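The scaling arithmetic Michael describes can be sketched with a quick back-of-the-envelope calculation. The per-node and per-layer figures are the ones he quotes in the conversation; the function name and the structure are illustrative, not part of 0G's actual tooling:

```python
# Back-of-the-envelope sketch of the horizontal-scaling idea described above.
# Figures quoted in the conversation: up to ~5,000 DA nodes per custom
# consensus layer, each node contributing ~30 MB/s of throughput.

MB_PER_NODE = 30          # approximate per-node DA throughput, MB/s
NODES_PER_LAYER = 5_000   # DA nodes served by one consensus layer

def da_throughput_mb_s(num_consensus_layers: int) -> int:
    """Aggregate DA throughput if every consensus layer is fully populated."""
    return num_consensus_layers * NODES_PER_LAYER * MB_PER_NODE

# One fully populated layer: 5,000 * 30 MB/s = 150,000 MB/s (~150 GB/s).
single_layer = da_throughput_mb_s(1)

# Because the consensus layers themselves are scaled horizontally, total
# throughput grows linearly with the number of layers spun up.
four_layers = da_throughput_mb_s(4)

print(single_layer, four_layers)  # prints: 150000 600000
```

The point of the design, as described, is that no single consensus layer ever has to absorb the aggregate load: each new consensus layer brings its own pool of DA nodes, so the total scales with the number of layers rather than hitting a fixed ceiling.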
So there's a lot of engineering that goes on behind the scenes to make that happen. The other thing we're trying to figure out is, how do we go below 100 milliseconds in latency on a Layer 1? Because usually, if you have a three-shot consensus and, let's say, you have validators in all parts of the world, the fastest you could ever get to would be 100 milliseconds, and that's assuming, you know, speed of light. Now, how do we get to 50 milliseconds? How do we get to 30 milliseconds without giving up some of the decentralization aspects? And so we're doing some research into it with, let's say, local consensus and then getting to global consensus in different pieces, which would be the first time something like that gets figured out on a Layer 1. So those are some of the kind of hardcore engineering problems that we're trying to solve for this next version of the chain. Once that's done, you can basically run any application fully on-chain, whether it's AI, whether it's on-chain gaming, and so on. Everything's possible. The storage piece we touched a little bit upon, but the idea is to have a competitor to something like an S3 or kind of a Google Cloud, but fully decentralized. So you have censorship resistance, and you have disaster recovery built in, because you generally have multiple replications across the network. And at a cost, again, that's 80% less than centralized solutions. Then the compute layer is verifiable inference, verifiable fine-tuning, and eventually fully on-chain training. The training piece is something we're still doing research on; that's not available. Fine-tuning should be available at the end of the month, which we're really excited about. So that's there.
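The physics behind the ~100 millisecond floor Michael mentions can be checked with a rough calculation. The assumptions here are mine, for illustration: validators spread worldwide, so a worst-case one-way hop is about half of Earth's circumference, and messages travel at the speed of light in vacuum (real fiber is roughly a third slower, so actual latencies are worse):

```python
# Rough physics behind the ~100 ms consensus latency floor mentioned above.
# Assumptions (illustrative): worst-case validators sit near antipodes, and
# signals travel at the vacuum speed of light (fiber is ~30% slower).

EARTH_CIRCUMFERENCE_KM = 40_075
SPEED_OF_LIGHT_KM_S = 299_792

# One-way antipodal hop: half the circumference at light speed, in ms.
one_way_ms = (EARTH_CIRCUMFERENCE_KM / 2) / SPEED_OF_LIGHT_KM_S * 1000

# A multi-round ("three-shot") consensus chains several such exchanges per
# block, which is why the global floor lands around 100 ms even under these
# ideal conditions.
print(round(one_way_ms, 1))  # prints: 66.8
```

This is why the local-consensus-first approach he describes matters: if most exchanges happen between geographically close validators, each round avoids paying the full antipodal hop, which is the only way to target 30 to 50 milliseconds without dropping globally distributed validators.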
And then finally, there's the service marketplace layer, because if you think about using iOS or macOS or Android or any of those types of operating systems, you usually have something like an app store built in. And the app store, what does it handle? It handles registration of different providers, and it handles payments to different providers. So we basically enabled that service layer as well, so that anybody, whether they're like an Akash or an Aethir or a Phala and so on, can register themselves on the network, and then other people can utilize that from the service marketplace. So those are the whole components, and that's what we call the decentralized AI operating system. So you've got all that power basically at your fingertips building in this space. That's exciting. Question around the data availability piece. Is that something you guys built from the ground up, or is that serviced through a partner? No, we built that from the ground up, because that's the key component to actually get to these massive amounts of scale to put AI fully on-chain. Got it. Now, if another data availability project wanted to partner with Zero G, so that developers that build on Zero G would have an option to either build on Zero G DA or another partner, is that something possible? You would give developers an option, but they may not have the benefits of a from-the-ground-up DA that is native to Zero G. Yeah, I mean, it's certainly possible, because in this kind of architectural design, because it's modular in nature, you could have different consensus mechanisms. We could use another L1, for example, to be a consensus mechanism as well. We could use another DA layer to actually be this DA layer too. Now, the downside of that is that most DA layers usually don't go beyond 10 megabytes per second in throughput. And even a single node on our network does more than 30 megabytes per second in throughput.
Actually, I think Celestia may be the fastest, at probably doing like 27 megabytes per second for the entire network. And so why utilize something that won't meet the needs of fully on-chain AI applications? If you think about training, for example, NVIDIA InfiniBand, I think, is in the hundreds of gigabytes, if not now in the terabytes, of throughput. So if you want to replicate that level of performance, yeah, even stringing together all of the other DA layers and utilizing them won't get you that level of performance. But yeah, that's kind of the long answer. No, that makes sense. Here's another hypothetical: if I already have an application that needs a data availability service, but it's built on another chain, could I call the data availability service on Zero G? Would I be able to use that even though I've built my application on another chain, or do I have to build it on Zero G? That particular chain would have to use our DA layer, essentially. So there are a few options. You'd have to either use our DA layer, or you move onto an app chain and then use that DA layer. Another way is to basically have the chain run in alt-DA mode, depending on what kind of chain it is. So you could have a primary and then a fallback DA. So there are different options for how to structure it. But yeah, the chain itself would have to run the DA layer. So it sounds like the key offerings that Zero G has, at least the DA piece, could also be kind of chain-agnostic; it could be used by other chains. That's pretty exciting from a business development perspective and from a growth perspective. Is there a strategy to get the key services that Zero G has onto other chains? Yeah, I mean, it comes from this design philosophy that we don't want to dictate how you build. And so we wanted it to be modular enough so that if you just needed to use storage, or if you just needed to use DA, you can definitely do that.
And so it's like an introduction to the ecosystem, if you will. And that provides flexibility in terms of how you build and what you build. And even from our L1 perspective, if you natively deploy a dApp, we would want you to have the choice of what language you build it in. So we're fully EVM-compatible, but if you wanted to use the Move language, we want to enable that in the future. Or if you wanted to use SVM, we would want to enable that in the future. So even at the execution layer, the idea is to be modular enough so that the developer can make choices. Because in Web2, I mean, many people write in Python, and another set of people write in TypeScript, and others write in Rust, and so on. So there are all these different preferences in terms of how people build. And so we just want to make it really seamless and easy to get up and running in whatever shape and fashion you want. And maybe you have very specific requirements. You need a really fast DA, so you use our DA as well. But then you want to use a different chain architecture, because customization of the token mechanics is really important. So you have your app chain. So it really depends on the end user's use case. I think that developer-first approach is really admirable, and I appreciate you saying that and designing Zero G in that way. As Zero G approaches mainnet, and we'd love to hear more about what that looks like and maybe any timing, I'm curious about your growth strategies. When Layer 1s move to mainnet, having those first few sets of applications that kind of act as a magnet for users is really key. We'd love your thoughts on what that looks like for Zero G. Yeah, it's kind of coming. A good analogy is, you go to an amusement park and there are no rides. You just bought a ticket, but there are no rides. And so very similarly, if you come to a chain and there aren't some really interesting, good applications, then it has the same type of issues.
So generally, an L1 needs to have some kind of basic applications: bridging, a DEX, lending types of protocols, some type of launchpad as well. Those are kind of the basics that need to be fulfilled. And so we do that through a combination of acceleration, where we kind of handpick teams to work with, as well as builders that are just really excited and want to deploy on our chain because of, let's say, the kind of AI components that we offer. And so we believe more in quality versus quantity, even though sometimes meeting long-tail needs is also very important. So the quantity aspect needs to be there, but from a kind of focused perspective, we're very much like, how do we really attract the best builders to the chain, and how do we create really interesting experiences? So for example, on the DEX side, the team that's essentially working on the DEX is looking at, how can we include ZK dark pools, for example, or how can we include DeFAI agents as part of the trading experience, to really utilize the capabilities that Zero G offers, so that it's not just like, you're coming to our L1 and it looks like every other L1. There needs to be something unique as well, so that you can actually utilize the capabilities that we've built. So will every app have an AI component, or can I build an app and not have an AI component? I mean, it's really up to the builder, but I think the builders that have chosen to come to our chain so far, they really want to utilize all of the capabilities. And that makes sense. I was just trying to differentiate between kind of a general-purpose L1 versus the very specific kind of AI purpose-built piece. That's really interesting that this project, or this team that's building the DEX, has a, what is it, DeFAI component to it. We'd love to hear more about that. And that's going to have its own kind of growth strategy in terms of bridging assets over and then participating on the DEX.
How does Zero G support these developer teams that are building apps, and how do you help them grow? Yeah, we have a few kind of ecosystem programs that we're going to be releasing pretty soon. We're going to have a big ecosystem fund initiative, where essentially we will have both an investment component as well as a kind of grant component to it. We have an accelerator where the teams get a lot of my time, as well as the team's time, to craft very clear go-to-market strategies, very clear investor structuring approaches, all of that type of perspective. So a lot of attention and time is really spent by the team to make sure that whoever is building on our chain or with our infrastructure is ultimately very, very successful. So that's from the team side. And then there are the external components. We're building a very strong ecosystem of builders that can help each other. Whether, you know, you've come through the accelerator, or you're just building on-chain, or you're building with some component, you get to be part of this community and then basically exchange ideas and learnings together. And so we really create a very strong builder community as well. So those are the kind of internal and external components that we support with. That makes sense. I remember I led growth at Harmony Protocol, and one of the key apps that really helped it grow was DeFi Kingdoms. We didn't expect it at the time. We knew we saw something that was really special. And then all of a sudden we started seeing TVL go up. All of a sudden TVL was over a billion dollars bridged to Harmony. Our RPCs were dying, and we didn't know what to do. And it was one of those things where, you know, some of it was in our control, some of it was planned, but a lot of it was just kind of luck. And so as you're looking for these applications to build on Zero G, I guess, what's the worldview
you look at, that you use as you evaluate these applications and which ones will become the magnet to really draw in, you know, assets to be bridged to Zero G, people, eyeballs, attention. It's a really hard problem, but I'm curious about the approach that you're taking. It's a very hard problem, because you have to take multiple bets in a way. It's just like if you're a VC and you say, with 100% certainty, I'm going to tell you all of these portfolio companies are going to return and they're going to be successful. Nobody can guarantee that, right? Even the best, let's say on the Web2 side, I think it's still close to a coin flip, basically, from a probability outcome standpoint. If I could tell you with a hundred percent certainty, here are the 10 companies that are going to do amazingly well, don't trust me. Just statistically, it sounds very, very infeasible. But given kind of human nature, and taking a first-principles approach, we can say, well, there are certain things that people really enjoy doing. One is entertainment. And entertainment can come in different types of forms. It can come in a game form. It can come in more of a form like a launchpad, like Pump.fun, for example. It can come in the form of, I want to feel a sense of belonging. So based on kind of these core human needs, we can then map out and say, okay, here are the likely types of applications, with an additional kind of overlay of AI, that can do well. And so then, based on kind of this map, we can say, okay, there are probably these 50 different application areas that we should look at. Then we can rank-order and prioritize them and say, okay, based on this, we're going to focus on this. So the way we've approached this is to say we need to get the basics right. So let's focus on kind of the DeFi components first, get all of the kind of key liquidity elements in, and then we can focus on the more entertainment pieces second. That's how we've approached it in general.
I think that strategic approach, that boils down to the tactical pieces around which app categories make the most sense, and then even the sub-segments below that, I think is really wise. And it's really the only way to do it. Otherwise, you'll just be guessing. But it sounds like you guys have a clear approach there that makes a lot of sense, and I think that you'll find a lot of success there. We hope that a more structured approach basically leads to a higher probability of success, versus one that's like, let me just try a bunch of things, type of approach. Yeah. No, absolutely. You know, another approach that I've seen other Layer 1s take is they, um, you know, they court current apps that are built already on other Layer 1s. And the problem that I've seen, especially with EVM chains, is that they just clone the exact same app. They port it over, but then they don't give it full support in terms of, uh, you know, supporting developers that are using it, or users, or trying to build a community on the new chain. And that could be a problem too. And I've seen that actually happen quite a bit, and I've committed that same issue, that same problem, as I've done business development and worked with other applications and other chains to get them over to my chain, or the chain that I'm supporting. But then if the development team behind the application doesn't also build the community on that chain, you could have some issues. Yeah, and the thing that we also want to avoid, that kind of brings up this point: we don't just want to be like, whoever's the highest bidder, come to my chain. We want much more of a kind of mission-type alignment. Like, I'm here because I'm excited about the future of AI. I'm here because I want to kind of stress-test the limits of what decentralized AI is able to offer. I'm here because I want to make AI a public good. So there's this mission alignment.
And then I also want to build really cool technology that makes a lot of people happy. So that's what we want to stress, versus this idea of, let's be super mercenary and just give tons of grants to everyone so they start building on our chain. We want something that's very long lasting, basically. And I think there was a time when the mercenary vampire approach of just giving grants to everyone worked, and then it didn't work. And so I think that having built Zero G from first principles in a way that is very AI focused, that key differentiator will weed out exactly who ends up on Zero G. They've already self-selected, and I think that's really wise. And being first in the category is really key. I think you'll see a lot of growth there, and I'm excited for you guys. Let me ask a kind of fun question. Tell us about the Zero G panda mascot. What was the thinking behind that? Because when you think of AI, it's highly technical. It's a lot of work, right? How did you end up with a mascot like a panda? We wanted to humanize the technology, or just make it a little bit more fun from that perspective. The way it came about is somebody in the community caught wind that one of my nicknames is Kung Fu Panda, which my wife lovingly gave me because I love doing Shaolin Kung Fu and, I don't know, I guess I must look like a panda when I'm doing it. I have a bit of a bigger frame, I guess, from that perspective. So somebody caught wind of it and they're like, why don't we draw a space panda or something like that? And then it was this naturally, organically created community initiative. And we're like, okay, this seems super cool. We should just adopt this as our mascot. I love that. Okay, that brings up several questions. I do Wing Chun, so that's really cool that you do Kung Fu also. Tell us about the community. I understand it's a really lively community.
I'm in the Discord and it's quite active. As a very developer-focused platform, with a community where some people are not developers, how did you bring together a community where those people still have stuff to do and feel like they're part of something, even though they're not developers? Tell us about the community building aspect of it. Yeah, there are many things we still have to build around it as well. But it really started when we came out of stealth last year, around the March timeframe. All of a sudden we were just everywhere, kind of all at once. We launched at ETH Denver and did a bunch of social campaigns. And then people were just like, whoa, what is Zero G? I need to check this out. And that curiosity led to a lot of people coming into the Discord and trying to figure out, what is this about? Who's doing what? Who's the team behind this? This seems super fun. Let me be a part of it. So it came from this drive of curiosity, is what we noticed. And then we started building up very quickly, like the moderators and the ambassador program. We even hired somebody from the community that helped us build the Discord, Jaan, for example. He was one of the first to raise his hand and say, hey, you're getting a big influx of people. We should put some structure around this. Maybe we should have a poker night. Maybe we should have a quiz piece to it. And so it really came organically from the community itself saying, we want to engage, and we want to engage on different levels. And a lot of our team members are pretty active in the Discord, whether it's a validator asking, why wasn't I selected for this active set on the testnet, or, you know, what else can I do, or can we do some marketing thing together?
So it became this kind of self-fulfilling prophecy again, a natural vibe, which we were really excited about. So yeah, this initial boom and expansion led to a lot of excitement. And you can really tell: the communities that grow and build organically have so much more staying power than those that feel contrived and fake. And you can really tell the Zero G community is really interested, and they're driven by this mission of AI. When you first came out of stealth last year, I guess it would be late February, right, around ETH Denver, were you worried that AI might be too early? Because in the last 12 months, it's really taken hold. But last February, I'm guessing it probably felt early. I mean, we've heard you were starting to think about AI back in November before that. And that's really what we settled on because, to us, it didn't matter. We took a very first principles approach. We said, what are we really worried about in the future? Like, can we trust AI if we can't verify it? We had, you know, GPT-3.5 and GPT-4 as they were coming out, very powerful systems. But what's under the hood? Nobody can verify it. You don't know where the data came from. You don't know who labeled it. You don't know how the model was trained. I don't even know which version of the model I'm being served. And on top of that, are you utilizing my data to inform future models? Yeah, very likely. In most cases, yes. And I don't even have control over that. So how can we trust this type of system when we start running bigger societal-level systems with AI? Let's say, you know, airports, for example, or it could be as benign as trash collection, or large swaths of androids building buildings for us. How can we trust that if we can't verify it?
And so then we started getting really worried about the future and saying, hey, there's a potential Terminator situation that can actually happen here without the right oversight. That was the initial line of thinking. And then it didn't matter if we were early or not, because we knew we had to build this future. I think that's pretty visionary. On the verifiability piece, I spoke with another project that is infrastructure for projects with data that needs trusted execution environments. So I'm curious about data verifiability and the approach that Zero G is taking with it. Are you using TEEs underneath? For now, it's the most practical approach. If you want to have basically no overhead in your inference and be comparable to, say, OpenAI-level performance, then TEEs are still the way to go. ZKML is just too far off from a practicality standpoint for large-scale LLMs. Anything above, say, 10 billion parameters has significant slowdowns. Another approach would be OPML. It still suffers from quite a bit of lag. So really, TEE-based ML is the most practical. I'm hoping that in the future, with better hardware and better algorithms, ZKML becomes the overall solution, or even something like FHE-ML. But we're just not there yet from a development standpoint. So TEE-based ML is still the most practical right now, though it comes with its own set of issues. Many academic papers talk about how it can potentially be exploited. But for now, it's, I would say, the 98% solution that works the best. Yeah, I led growth at Oasis Labs, which is known for trusted execution environments, and that was a common question we had around these TEEs, that they're potentially exploitable. It was a question we had answers for, but definitely a lot of academic papers question it. Still, I think it's really the smart thing to do.
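The TEE-based verification flow Michael describes boils down to: a client only trusts an inference result if it arrives with an attestation whose enclave measurement and signature check out. The following is a minimal self-contained sketch of that shape, not 0G's actual implementation: real systems use hardware-rooted attestation quotes (SGX/TDX, AWS Nitro, etc.), and here an HMAC simply stands in for the hardware-backed signature so the example runs anywhere.

```python
# Illustrative TEE attestation check. HMAC stands in for the hardware-rooted
# attestation signature; measurements and keys are made-up values.
import hashlib
import hmac

TRUSTED_MEASUREMENT = hashlib.sha256(b"llm-serving-enclave-v1").hexdigest()
ATTESTATION_KEY = b"hardware-root-of-trust"  # stand-in for the vendor's signing key

def attest(result: bytes, measurement: str) -> dict:
    """Enclave side: bind the inference result to the enclave's code measurement."""
    payload = measurement.encode() + b"|" + result
    return {
        "result": result,
        "measurement": measurement,
        "signature": hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify(report: dict) -> bool:
    """Client side: check the measurement against the expected value, then the signature."""
    if report["measurement"] != TRUSTED_MEASUREMENT:
        return False  # wrong or tampered model-serving code
    payload = report["measurement"].encode() + b"|" + report["result"]
    expected = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["signature"])

good = attest(b"inference output", TRUSTED_MEASUREMENT)
bad = attest(b"inference output", hashlib.sha256(b"unknown-enclave").hexdigest())
print(verify(good))  # True
print(verify(bad))   # False
```

This also illustrates the trade-off in the conversation: the per-request overhead here is just a hash and a signature check, which is why TEEs stay close to native inference performance, whereas ZKML and OPML add proof generation or challenge periods on top of the inference itself.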
And it sounds like Zero G also looks at other, what are they called, privacy-enhancing technologies: when they become available and don't impact latency, they become available on the network. And there are major strides being made. We talked with an FHE team in Bangkok, for example, and they figured out a way to get something like a 1000x improvement versus, let's say, the leader in the space, Zama. Those types of things are super exciting. Of course, it needs to be more than 1000x in order to be practical, but it's already a huge step of the way there. That's huge. Let me ask a couple of final questions. Tell us about the Kaito campaign. How did that go for you, and what are your expectations now that you'll have a Kaito leaderboard? It was super fun. It was kind of this epic Olympic battle, I would say. The first day we basically couldn't even vote because the website kept breaking down. So it's like, I can't even place my own votes in my own Kaito campaign. And so we started in probably, I don't know, the 18th position or something because of that. Then once we understood all the mechanics, we really got our community rallied around it: this is going to be important. Once we have our Kaito leaderboard, we can also tell who's really talking about Zero G, who is really excited. And so it started with an upswell of the community. Then within, I think, a few days, we started being number five or number four or something. And then it was really, how do we get the last bit? What I basically looked at: MegaETH, I think, was already leading at that time. I looked at their growth curve and estimated that they'd end up at roughly around 20 million votes. And I was like, okay, how do we get beyond that? How do we get to 21 or 22 million votes?
And so I tried to basically write to all of our investors and all of the communities that I'm part of and be like, hey, we need to lobby for all these votes. Let's make sure we get Zero G the number one spot. It was basically three, four days of just doing that and talking with a lot of different communities. And there was a bit of a joke as well that basically the cabal stepped in and helped us out. So a lot of top influencers came over to our side. And then, I think probably 20 hours before it was over, we eclipsed MegaETH for the number one spot. Then every 10 minutes, I was checking to see whether we were decreasing our lead or increasing our lead. The last 20 hours were just literally like that, and I couldn't go to sleep. I was just like, okay, how are we doing? How are we doing? Okay, looks like we're on a good track. We're increasing our lead. And then, oh no, somebody just put in 50,000 votes for MegaETH and we're decreasing our lead. So that's what it felt like. And I just rallied the community around it as well. I said, believe in something, go all in. And then there were so many posts on X with different community members just saying, I only have 300 votes, but I went all in for Zero G. It had to be Zero G. Just hundreds of those posts. It was super fun. Super, super fun. Well, I definitely pointed all my Yaps and smart followers to Zero G. What are the goals now that Zero G will have its own leaderboard? And how will that help with growth and brand awareness? It's really helpful for our visibility, because with that data we don't have to be in the dark. I actually chatted about this with Kaito's founder and CEO. Basically, what many projects end up having to do is figure out, okay, I need some KOL round. I'm going to get these 200, 300 KOLs or whatever on board. And how do you even know if they're effective?
Like, are they even doing any work? Because some might just say, I just want to get in and invest, and then forget about doing anything because I'm busy with my other stuff. So this actually gives us visibility, not only into influencers, but also into who the community members are that are actually really engaged. Who cares about us? Who cares about writing consistent stories? Who is consistent over time? And it's a much more fair way of saying, okay, well, if you really care about what we stand for and the mission, how do we make sure you're adequately rewarded as a community member? So the Kaito leaderboard really gives that amazing visibility from a lot of signals that they collect on X. Now, I wish they could also do this for other social media platforms like YouTube or Instagram or TikTok in the future, but this is a great start, and we'll definitely give some product feedback to the Kaito team as well. Awesome. And the leaderboard goes live on Friday. That's exciting. Well, I look forward to that, and I hope the Kaito leaderboard will help raise brand awareness for Zero G, both for fans and also for developers. I think developers are interesting because they really follow momentum, and they're always asking themselves, why should I build on your chain? And they look at the community size, not only the features of the chain itself and its offerings, but also the liveliness of the community. And I'm hoping Kaito will help with that. So that's exciting. It should help quite a bit. Yeah, it was a great mechanism. Actually, the managing director of ecosystem made a kind of joke on X as well, basically saying that now that she's done playing CMO for Kaito, she can actually resume her real job. But it's been an amazing mechanism for Kaito as well, getting their brand awareness out and, in turn, helping others with their awareness too. Amazing.
Well, Michael Heinrich, thank you so much. And we look forward to watching Zero G and you guys grow. Yeah, super excited. Thanks for being part of the journey. Thank you.