
Code with Jason
259 - Chris Chilek and John Cunningham, Founders of LegiPlex
In this episode I talk with Chris Chilek and John Cunningham of LegiPlex about their AI-enhanced legislative monitoring platform. We discuss how they identified the market opportunity, the technical challenges of processing government data, and their approach to building beyond simple AI prompts.
Hey, it's Jason, host of the Code with Jason podcast. You're a developer. You like to listen to podcasts. You're listening to one right now. Maybe you like to read blogs and subscribe to email newsletters and stuff like that to keep in touch.
Speaker 1:Email newsletters are a really nice way to keep on top of what's going on in the programming world, except they're actually not. I don't know about you, but the last thing that I want to do after a long day of staring at the screen is sit there and stare at the screen some more. That's why I started a different kind of newsletter. It's a snail mail programming newsletter. That's right. I send an actual envelope in the mail containing a paper newsletter that you can hold in your hands. You can read it on your living room couch, at your kitchen table, in your bed or in someone else's bed, and when they say what are you doing in my bed, you can say I'm reading Jason's newsletter. What does it look like? You might wonder what you might find in this snail mail programming newsletter. You can read about all kinds of programming topics, like object-oriented programming, testing, devops, ai. Most of it's pretty technology agnostic. You can also read about other non-programming topics like philosophy, evolutionary theory, business, marketing, economics, psychology, music, cooking, history, geology, language, culture, robotics and farming.
Speaker 1:The name of the newsletter is Nonsense Monthly. Here's what some of my readers are saying about it. Helmut Kobler, from Los Angeles, says thanks much for sending the newsletter. I got it about a week ago and read it on my sofa. It was a totally different experience than reading it on my computer or iPad. It felt more relaxed, more meaningful, something special and out of the ordinary. I'm sure that's what you were going for, so just wanted to let you know that you succeeded, looking forward to more. Drew Bragg, from Philadelphia, says Nonsense Monthly is the only newsletter I deliberately set aside time to read. I read a lot of great newsletters, but there's just something about receiving a piece of mail, physically opening it and sitting down to read it on paper.
Speaker 1:That is just so awesome. Feels like a lost luxury. Chris Sonnier, from Dickinson, Texas, says: just finished reading my first Nonsense Monthly snail mail newsletter and truly enjoyed it. Something about holding a physical piece of paper just feels good. Thank you for this. Can't wait for the next one. Dear listener, if you would like to get letters in the mail from yours truly every month, you can go sign up at NonsenseMonthly.com. That's NonsenseMonthly.com. I'll say it one more time: NonsenseMonthly.com. And now, without further ado, here is today's episode. Hey, today I am here with John Cunningham and Chris Chilek. Gentlemen, welcome.
Speaker 2:Thanks for having us.
Speaker 1:Yeah, so we used to work together a long time ago. Let's see, this was like 2008 or 9 or something like that, which, as we speak, is like 15 years ago.
Speaker 3:Yeah.
Speaker 1:You guys both still live in Austin?
Speaker 3:Both still in Austin, still living downtown.
Speaker 1:Wonderful.
Speaker 1:So yeah, we worked together at a place called MyEdu doing PHP. Not everybody listening to this podcast might know that I used to do PHP. It's kind of a Ruby on Rails podcast, also kind of a technology-agnostic podcast, but I've been doing Rails now for like the last 12 years. But you guys just started a new thing, right?
Speaker 2:We did.
Speaker 1:That's great.
Speaker 2:We started a new company, and it's one of those things that's been in the works for about a year. We just finally announced it about two weeks ago, I believe. It's called Legiplex.
Speaker 1:Okay, and what is it?
Speaker 3:Legiplex is an AI-enhanced legislative monitoring and research platform. Its main goal is to cut out a lot of the grunt work that's currently required to figure out what's going on in politics and to track legislation, so that there's more time to concentrate on strategy and the other aspects of the legislative process beyond just data crunching.
Speaker 2:That's for all 50 states as well as federal legislation.
Speaker 1:Okay. And this is pretty different from what we were working on at MyEdu. That was ed tech, and so sometime between then and now, something in your life led you in this direction, to go down this path with this particular startup. How did that come about?
Speaker 2:Well, it's not that different actually, Jason. One of the early ways I described it when I was talking to some folks I knew is that, to a certain degree, it's MyEdu, but for grownups and in a different situation. It still draws on the skill sets we built in that part of our lives: pulling together publicly available data from multiple sources and creating ways to use that data that hadn't previously been available to people.
Speaker 2:One of the biggest strengths that MyEdu had was the schedule planner, where you could see, all in one place, your class schedules, how professors grade, what people think of their professors, all that kind of thing. It was an app that worked through a web browser and brought everything together in one place. That same idea is part of Legiplex: you can see everything that's going on in your world, however far that world stretches, whatever bills, members, or committees you're tracking, whatever keywords, whatever states you're tracking them in. Everything comes together in one place around what's important to you.
Speaker 1:Okay, so it's different.
Speaker 2:It is, but it's more like a shift in topic than anything else.
Speaker 3:You could think about it this way: in both cases, it's national data collection from multiple sources. As you know from the MyEdu experience, that was something like 800 campuses at some point, or something ridiculous like that. And then it's creating useful applications with that data. As John just said, in the previous life it was putting all of that into a schedule planner so you could make decisions about what you're going to take each semester. In this case it's: how do I pull all that data together so that I know what's going on at any given point in time?
Speaker 1:Okay. And why this domain? What led you to that? The name? No, no, not the name, but the domain, the type of business.
Speaker 2:Why do we? Why are we looking at?
Speaker 3:Oh, we were actually approached by some people who were in government. Loosely, they had asked us to do vendor selection and said: hey, we have some ideas in this space, there are a couple of people playing in it, what makes sense? And we went out and looked at what was currently available in the market and said, nothing's really quite hitting right. We loosely grouped the current offerings into two categories: one is power catalogs and one is toolboxes.
Speaker 3:And the power catalogs-
Speaker 1:I thought you were going to say something like: we divided them into two categories, shit and complete shit.
Speaker 3:Well, I wouldn't be that harsh. But the power catalogs are just kind of like Google: you go, you search for the thing, they have the data, and you piece together what you can by searching for various subjects, but it's not really aggregated in a meaningful way. And then you have the toolboxes, which are: oh, here's an AI prompt, here's another AI prompt, here's a tool that'll do one thing, but none of the tools really talk to each other, so there's no integration or data carryover between them. You still end up having to do a lot of things manually. And so when we looked at this problem, we said: actually, we have a lot of experience making things that not only have a fundamentally better UI, UX, and overall experience for the person using them, but are also far more productive in terms of what you can get out of the platform when you're using it to execute legislative activity.
Speaker 1:Got it. Okay.
Speaker 2:One of the interesting things on the journey of this over the last year, as we've been building it, is how much AI has evolved during that timeframe.
Speaker 2:I mean, you know, it was really the beginning of 2023 that it kind of came to the masses with ChatGPT and everything else, and this was kind of the end of 2023 and through 2024.
Speaker 2:So there'd been a lot of movement already. But even just as we were building and developing and figuring out how to integrate AI correctly into something like the workspace, and how to utilize it in different places, that changed almost completely from January through even June, and then there was another big set of changes from there into October and November, in terms of how we use it, how it integrates, and the capabilities it has. And unlike anything else you do in development, AI is one place where you don't always know that you're going to get the same answer out when you ask the same question over and over again. So how do you test it? How do you validate correctly? How do you ensure that it's working the way you want it to when you're using it from this angle? That's been a really interesting part of this trip as well.
Speaker 3:I mean, you can only imagine: between different model versions, the same prompt will give you entirely different responses. From the user experience standpoint, we looked at the current state of how people are applying AI to things, and it's literally just throw a prompt at it: add a prompt to this, add a prompt to that. There's an old quote, I don't think he actually said it, that's something to the effect of: if I had asked people what they wanted, they would have said faster horses. I think it's usually attributed to Ford. If we'd asked the market what they wanted, they would have said better prompts. And so we intentionally deviated from that mindset to create a cohesive platform instead of just another prompt that you can throw things into. There are obviously prompts involved as well, and private agents and things like that so your data is kept safe, but we wanted to get a little bit beyond the prompt.
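To make John's testing question a bit more concrete: one common way to test a non-deterministic step, and this is purely illustrative rather than anything Legiplex has described, is to pin the model and temperature, ask for structured output, and assert on structure and key facts instead of exact wording. The model name, the prompt, and the summarize_bill helper below are assumptions made up for this sketch.

```python
# Minimal sketch: testing a non-deterministic LLM step by asserting on
# structure and key facts instead of exact wording. Model, prompt, and the
# summarize_bill() helper are hypothetical, not anyone's production code.
import json
from openai import OpenAI

client = OpenAI()

def summarize_bill(bill_text: str) -> dict:
    """Ask the model for a structured summary so tests can check fields."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",          # assumed model; pin one version for tests
        temperature=0,                # reduces (does not eliminate) variability
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Summarize the bill. Reply as JSON with keys "
                        "'summary' (string) and 'topics' (list of strings)."},
            {"role": "user", "content": bill_text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

def test_motorcycle_bill_summary():
    bill = "Relating to safety standards for child passenger seats on motorcycles..."
    result = summarize_bill(bill)
    # Assert on structure and key facts, not on exact phrasing.
    assert set(result) >= {"summary", "topics"}
    assert any("motorcycle" in t.lower() for t in result["topics"])
    assert len(result["summary"]) > 0
```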
Speaker 1:Yeah, I want to get deeper into that. First, I want to go back to something from earlier for a second, because it's something I have context on, because I know you guys, but maybe the listener doesn't. I'm always curious about business origin stories, because I've learned that it's really hard to come up with a business idea. I think a lot of developers are in that place where they're like, I really want to start an online business, but I don't have an idea. So I've always been super interested in how people come up with these things. And you said that somebody approached you. I'm guessing that was like a CTO consulting client situation or something like that.
Speaker 2:No, this was a personal friend.
Speaker 1:Okay, and they knew you as somebody who knew stuff about this?
Speaker 2:Yeah, somebody who's been in our circle for, you know, 20 years, a good old friend, and he kind of said, these are the tools we have to work with. And I said, why didn't you tell me about this three years ago? Because some of them are really bad and some of them are just pretty bad.
Speaker 1:So some of them were shit and some were complete shit.
Speaker 2:Some were shit, some were complete shit, and some were shit that was way overpriced.
Speaker 2:Yeah, and that's part of the journey with this too. The way these things are traditionally sold is completely different from how we're doing it, and we're looking to make a change and flatten the market a little bit in the process.
Speaker 1:Interesting. Yeah, something I learned about online business is that it doesn't just matter that there's a problem, and it doesn't just matter whether you have the solution to that problem. A big part of the picture is how those two things connect. When people have this problem, are they even looking for something to solve it? Do they think they have a problem? I don't know if you guys know this, but I made software for hair salons back around 2011.
Speaker 2:I do remember that, yeah.
Speaker 1:A lot of salons used pen and paper. I thought that was a problem. They didn't think it was a problem. So they're not going to buy what I'm selling if they don't even think they have the problem. And the other thing is, in the case where people do have a problem, they want a solution for it.
Speaker 1:How is that bought and sold? In that particular case, it was bought and sold on an in-person basis. People would go to beauty conventions, or go door to door at salons, stuff like that. I didn't have the bandwidth for that kind of thing with a full-time job, doing this nights and weekends, and so I've been very sensitive to that with later business attempts. The product I'm building now is a continuous integration tool, and to me that feels like a much safer bet, because I know how to reach these people, I know how this product is bought and sold, and I can see myself doing that. In fact I've been doing it already, because these are my people. I know how to reach them, they're receptive to me, that kind of thing. So I'm curious about that: how is this bought and sold now, and how do you intend to change that?
Speaker 3:I want to address something you said just a half second ago before we get to that, and then I'll kick that over to you, John.
Speaker 2:Sure.
Speaker 3:The first thing you mentioned is what I refer to as market validation: is there a market for this that people know about, where people are actively making purchases in this space, as opposed to having to inform them that they need it? Obviously we did pretty extensive market research before we kicked this off to validate that the market is real. Always a good thing.
Speaker 3:And the second thing you mentioned was scalable marketing, more or less: how do you get from sale one to a fully productized mass market and get the word out? We approach that in sort of a two-stage method. Previously I had a lot of national marketing experience at MyEdu et al, so we have a really good playbook in terms of mass market channels we can take advantage of, and some really great partners in paid media and things like that we can also leverage. But a lot of the initial work, to your point, is: let's talk to the people who are at arm's length and start getting the ball rolling, then lean in with some PR and other air cover, and then, once you get momentum, you can start looking at paid advertising and things like that to really get the word out. So it's a hybrid, staged approach, and we're figuring out how to do the mass marketing stuff.
Speaker 3:Simultaneously, we're doing the personal marketing. And to answer your other question: John?
Speaker 2:So, as far as the market for this product goes, what's really interesting about it is that you have two flavors of offering right now. You have the big national companies, and there are really three of them, that sell a wildly overpriced and generally almost identical product. It's just that power catalog: here's the most recent information. And of course they're all doing an AI thing, because as soon as ChatGPT came out, every marketing professional on the planet became an expert in AI. But their AI is what Chris laid out earlier: here's a bill, here's a prompt next to it, ask the prompt things about the bill. It has no context beyond that. It's only looking at that one bill.
Speaker 2:Those guys are, I want to say, wildly overpriced: $1,000 a month per license, and they require two-year contracts, and they don't talk about their pricing until you're three demos in and you've gotten to know the salesperson really well, that kind of thing. It's that old school enterprise sales mentality. And then the other side of it is the locals. There's a small number of folks who have figured out how to offer a little bit better of a service in, like, a single state, but they're hampered in the same way by that old school mentality. They're also all looking for two-year contracts. The two-year contract is especially prevalent in Texas because we only do session every other year, so they're trying to capture the revenue for the period when we're not in session in Texas.
Speaker 2:And, you know, they've got to do three demos with you, and similarly, they don't talk about pricing anywhere, but you can kind of figure it out. So we decided early on that we're not going to do that. We're going to kind of fly into the market and offer this at a better rate than anyone else is offering it, with no commitments necessary. You get a discount if you buy a year in advance, but other than that you can be month to month. And we want to provide the very best product on the market and make it a clear and easy choice. That's the flattening-the-market mentality we're taking into the process of launching.
Speaker 1:Okay, and who are the people who buy this product? Like, what role are they in?
Speaker 2:It's a broad group. State agencies are a large number of them. Somebody like a governor's office will buy 50 to 100 licenses; I saw an order from Texas DPS for 30 to 50, that kind of thing. Professional lobbyists: this is a tool designed very much to lean into the professional lobbyist as well. We have a close friend who's been testing it, who was actually a very early partner in Pick-A-Prof, which is what predated MyEdu. And there's a national PAC called EveryLibrary, which is the only political action committee specifically aimed at local libraries, and they're very excited looking at this. They said, this is going to save us 30 to 40 hours a week, every week, just in understanding how we work through the legislature. They have five or six people on staff. So it's a tool for folks like that as well.
Speaker 3:You can actually include in that group pretty much everybody in the Fortune 500.
Speaker 2:Yeah, I didn't get to that. Any company that's keeping an eye on legislation. Every lobbyist that has a big company as a client will tell you that the client would probably also want 20 or 30 licenses.
Speaker 1:Okay.
Speaker 2:Because they're all keeping an eye on what's going on.
Speaker 1:My great strength as a podcast host, and great weakness as a human being, is that I need to have things explained to me about 75 times before I understand them. So if I were, I don't know, a lobbyist or whoever, and I buy this product and I'm going to start using it, what exactly does that look like? What answers am I trying to get out of it? Or is that even the right question?
Speaker 3:It is, and it's kind of a life cycle. As you know, a legislative term generally starts on January 1st and runs through the end of the year. So give me a subject that you, Jason the lobbyist, would be interested in.
Speaker 1:Motorcycles.
Speaker 3:Okay, wonderful. So at this point of the year, right now, here in December in Texas, they're introducing a lot of bills, and that'll continue for the first couple of months of next year as well. They're basically registering all the new bills that they want to try to put through the process.
Speaker 3:So you would log into Legiplex, create a new workspace, and say: here are the loose keywords I'm looking to monitor. You'd say motorcycles, maybe certain highway laws you're interested in following, regulations on exhaust, I don't know what it would be, but you'd basically put in, let's say, half a dozen keywords to get started. You can add more later. And you would say go. Then, as the state is registering all these new bills for consideration as part of the next legislative term, it'll automatically find the ones that are relevant to what you're looking for, using contextual search, and put all of those into one workspace for you. You'll just see, basically, an activity feed, like you would see on LinkedIn or Facebook or anywhere else, of those bills flowing past you as they're being registered, and you'd say: I do want to follow this one, I don't want to follow this one.
Speaker 3:It's essentially an inbox of information for you. As you add those bills to your workspace, it begins tracking them, and when we move into the actual session and they talk about one of the bills you're following in the Senate or the House or in a committee, our AI monitors will automatically find that information and pull it into that same activity feed.
Speaker 3:So so that as you move through the session, all of the information, regardless of where any of those bills are or where they're talked about, will automatically be aggregated in one place and, instead of having to watch a 12-hour house floor session, it'll automatically pull the direct spots in that video where that bill is talked about. So you just click one button and you skip to hour 10 minute five, whatever it happens to be, when they actually announce your bill and talk about it. The current the art is you actually have to watch that entire video to figure out when you're talking about it, and this is why our friends in various places have been very excited about it. It's it's saving them literally, not figuratively literally 20 plus hours a week of sitting there watching things that they don't care about oh, wow and in.
Speaker 2:And just to contrast that with how that process would have worked previously: let's say you're a lobbyist in Texas and you're trying to ensure that you can have safe child seats on motorcycles. That's the thing your client, say Harley-Davidson, wants: to put child seats on motorcycles, and you're trying to watch this for them. Every day you can receive a report, or just go look up on the Texas legislature website, what are the new bills today? And you'll see 500, you'll see a thousand.
Speaker 2:There were 13,000 bills in the last Texas session. There's no indication, other than one line, one sentence, of what each bill is about. The best you can get without this system is a daily report with every single one of those bills, and you're going to sit there and read that line for every one of them and then say, okay, I need to watch HB 57, I need to watch SB 224. Then you can go put those into a tracking system that the state offers for free, which just lets you know: hey, a new draft came out. Hey, it's been voted on. It was voted on three days ago.
Speaker 3:Please check all brand references from the record.
Speaker 1:I'm sold. I'd like to sign up, please. I don't even do this stuff, but it sounds so compelling, you know, just in case. Okay, and dear listener, if you're interested in educating yourself on how the legislative process works, especially if you're located outside the United States, Schoolhouse Rock provided a wonderful video. I think it's called I'm Just a Bill, something like that, on Capitol Hill. Or if you were born after 1995 or something like that, you might not have seen it.
Speaker 1:Yeah, okay. And you guys have been working on this for roughly a year and just went public with it recently, is that right?
Speaker 3:That's correct. We have a free trial running right now, and I think I'm allowed to say that, to date, including the first two or three weeks of having the free trial, we've already had over 300 registrants. We obviously hadn't been public with it before, so we'd been having a lot of conversations with people we know in various places and giving demos. So the message is resonating. The idea of getting beyond just a prompt, and the other things we can do, is really catching on.
Speaker 1:Yeah, wow. What was the milestone you guys were waiting for before you went public? Just maturity of the product, or what?
Speaker 3:A combination of maturity of the product and the timing. Bills are starting to be introduced in several states right now. We wanted to be able to watch those roll through and have conversations with the relevant people as those bills are coming in, so we can actually show them live. Our demo is fantastic. We ask them, not unlike what I did just now, what are you interested in? And then we just show them how the platform works, and they're like: oh, that's everything that would have taken me a week to do. Fantastic. There are no real questions here. It makes a lot of sense. There are two different principles, right? There's the KISS method, which is keep it simple, stupid. And then there's the other one, for UI and UX, what's it called? Basically, don't make them guess, just give them what they expect.
Speaker 1:Oh yeah, Don't make me think.
Speaker 3:Don't make me think, there we go.
Speaker 1:Yeah, excellent book by Steve Krug. Okay, let's maybe get deeper into the technical details. Can you tell me about the tech stack that you're using?
Speaker 2:Sure. We're still PHP-based, and I've been a big fan of PHP for, as you know, 20-plus, almost 30 years at this stage. My team works on the back-end stuff and Chris's team takes everything, as we always say, PHP-back and PHP-forward. So I'll talk about the back-end part of the stack, and Chris can talk a little bit about the front-end part as well, because it gets really interesting on the workspace and how those pieces work. We are 100% serverless, utilizing AWS Lambda and a wonderful piece of kit called Bref, which is an add-on for Laravel. Laravel is probably, in my opinion, the best PHP framework you can use these days, and Bref allows you to take that and run it completely serverlessly. It's a great setup because you get all the really great benefits of serverless: you instantly scale both in and out.
Speaker 2:The primary database is serverless as well, and we also take advantage of Elasticsearch in a variety of ways, both as a search engine and as a cache store, that kind of thing. I'm trying to think what else to lay out on this. Data collection happens constantly. We get data from a variety of sources: there are state websites, there's the federal side, some people have APIs, some people give you full database dumps.
Speaker 2:It's really kind of a smorgasbord of ways that all the data comes in, and then it all gets processed, filtered, and run through our system so that we get it all into one consistent methodology: tested, validated, AI-improved where necessary, and made available to users. And the nice thing about doing all of that completely serverless is that when a push comes through with extra stuff, you don't even have the wait of, you know, ten machines having to scale up, process all this stuff, and then go away. All of a sudden you had 57,000 invocations happen, and they might take a few minutes to go through, but it's really fast and really smooth, and it creates a good user experience.
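For a feel of the event-driven shape John describes, here is a rough sketch of a normalize-then-enrich ingestion handler. The real stack is PHP/Laravel on AWS Lambda via Bref; this Python version, and every field name and event shape in it, is a hypothetical illustration of the pattern, not their code.

```python
# Rough sketch of an event-driven ingestion step: each short invocation
# normalizes one small batch, so a big push fans out into many invocations.
# All field names and the event shape are assumptions for illustration.
import json
from datetime import datetime, timezone

def normalize_bill(raw: dict, source: str) -> dict:
    """Map one source's quirky format onto a single consistent schema."""
    return {
        "bill_id": raw.get("bill_number") or raw.get("id"),
        "state": raw.get("state", "US"),
        "caption": (raw.get("caption") or raw.get("title") or "").strip(),
        "source": source,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

def handler(event, context):
    """Lambda-style entry point for one batch of raw bill records."""
    records = json.loads(event["body"])          # assumed event shape
    normalized = [normalize_bill(r, event.get("source", "unknown")) for r in records]
    # Next steps (omitted here): persist, index in Elasticsearch, queue AI enrichment.
    return {"statusCode": 200, "body": json.dumps({"processed": len(normalized)})}
```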
Speaker 1:Interesting. And some of this, even from the top of the conversation when we were first talking about what this is, reminds me a little bit of something a friend of mine is working on. Her name's Colleen Schnettler. She was on the podcast a few episodes before this.
Speaker 1:She's working on something called Hello Query, where you can plug in your database, basically just give it read-only user access, and if you're, like, a BI, business intelligence type person, you're not that technical, you don't know SQL, don't want to learn SQL, but you still want answers out of your database, you can ask a natural language question and it will basically give you the SQL result. Initially it showed you the query, but then Colleen realized people don't even want the query; they just want to ask in plain language and get the answer back.
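For readers who haven't seen a tool in that shape, here is very roughly what such a natural-language-to-SQL flow can look like: translate the question into a read-only query, run it, and show only the rows. This is not Hello Query's actual code; the schema, prompt, model, and safety check are all made up for illustration.

```python
# Very rough sketch of the general shape of a natural-language-to-SQL tool.
# Schema, prompt, model, and connection details are hypothetical.
import sqlite3
from openai import OpenAI

client = OpenAI()
SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, created_at TEXT);"

def answer(question: str, db_path: str = "readonly_copy.db"):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model choice
        temperature=0,
        messages=[
            {"role": "system",
             "content": f"Translate the question into a single read-only SQLite "
                        f"SELECT statement for this schema:\n{SCHEMA}\nReply with SQL only."},
            {"role": "user", "content": question},
        ],
    )
    sql = resp.choices[0].message.content.strip().strip("`").removeprefix("sql").strip()
    if not sql.lower().startswith("select"):
        raise ValueError("Refusing to run non-SELECT SQL")      # crude safety check
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)  # read-only connection
    try:
        rows = conn.execute(sql).fetchall()
    finally:
        conn.close()
    return rows   # show the answer, not the query

# Hypothetical usage:
# print(answer("What were total sales last month?"))
```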
Speaker 1:And I thought that was really smart, because a lot of AI-enhanced features in a lot of products are just something sprinkled on top where the value added is really dubious. I saw this TikTok or something where the caption was like, companies adding AI to their products. Maybe you guys have seen this too. Somebody at a restaurant was preparing this box of food, and they squirted this mayonnaise sauce on the food, then they closed the box and squirted mayonnaise on top of the box, and then they wrapped it up in a bag and squirted mayonnaise on top of the bag. It's like, yeah, this is not needed in this place.
Speaker 3:That's, jokingly, what I call the add-a-prompt-to-it approach. Add a prompt to this, add a prompt to that.
Speaker 2:Honestly, we seriously debated whether to show any prompts at all. There's a lot of value in the prompt, especially in the workspace, where you get your own AI that is trained specifically on your workspace as well as the laws and regulations of the states you're looking at. So you get this very narrowly trained AI, and we did ultimately decide that was valuable enough to put the prompt out there. But at first the plan was not to put any prompts out at all, and instead to use AI to create a better overall research tool.
Speaker 1:Yeah, I think these use cases, like what you're doing and what Colleen is doing, are going to be the more valuable use cases, rather than just slapping a prompt on something. And I applaud you for not putting AI in the name of the company. It's going to be so dated in five or ten years. It's going to be like, oh, they have AI in the name, they must be from 2024 or something like that.
Speaker 2:We did buy that AI name just to make sure we covered it. But yeah, I agree. It was another serious set of conversations: do we lean into that fad or not?
Speaker 1:Yeah, because it's like: what does your product do? Oh, it uses AI to blah, blah, blah. It's kind of like saying, what does your product do? Oh, it uses computer code to do such and such. Yeah, we get it, every program uses code. In the future it's going to be the same with AI: every program uses AI, you don't need to say that part. Nobody cares about that part, they already assume it's going to use AI. It's like, where is the value that's created?
Speaker 3:Like putting the word website in the name of the website, right?
Speaker 1:Yeah.
Speaker 2:I have a couple of nieces and nephews that are now in college, or on their way out of college at this stage, and I've pulled some of them aside and said: AI is going to be like spellcheck for you guys, and you need to learn it. You need to figure it out. You need to understand that this is going to be part of your regular flow, and not just how you ask ChatGPT for a recipe or something like that. Really understand how you can use it to enhance what you do. I use it every day to enhance everything I'm doing in subtle ways, whether it's GitHub Copilot or Aider or one of those pieces of tech that directly integrates with what you're doing, or even just, gosh, I'm stuck on this problem, maybe I can get a little help and a boost. And yeah, it sure works pretty well.
Speaker 3:There are certain marketing applications that will be heavily disrupted by AI, beyond just using AI: it's about anticipating how AI will impact markets. SEO is a huge one. Nobody really understands what impact AI browsers will have on the traditional SEO market, whether you do or don't have more or less coverage, or the same algorithms. I know OpenAI just released a browser, or a search of some kind, I should say. A lot of markets will be impacted by AI in ways we don't even fully understand yet. So beyond just, do you use it on a daily basis, there's also, especially since I'm, as John said earlier, PHP-forward, which includes all the marketing and UX, the question of how traditional marketing channels like SEO will be directly impacted, beyond our control, and how we present our data to the world so that we can best take advantage of that. There are still a lot of questions about what exactly that means.
Speaker 1:It's interesting. There's obviously the question that people want to think about all the time, including myself, which is: what impact is AI going to have in the future? And I don't even know what impact AI is having now. Forget about what's going to happen five years from now, I don't even know what's happening now. Everybody has some idea of some of the stuff that's happening, but it's all moving so fast, and obviously you don't have a crystal ball into what everybody in the world is doing. So not only is it hard to predict the future, it's even hard to know what's going on in the present. I often fear that I'm just behind and not taking advantage of everything I can, because honestly, in my work as a consultant, observing a lot of other developers working, a lot of people are totally behind.
Speaker 1:Like, John, you mentioned you use AI all the time. I do too. I use it on and off all day, every day, for all my coding work. For any coding work where the client doesn't prohibit it or something like that, I'm using AI. And when I have to switch back to not using AI, it's just awful. It feels so stupid. It's like, why am I using a handsaw to cut down this tree when I have a chainsaw right here? But a lot of developers are not using AI, or they use AI but they don't really get how to harness it to make it really productive.
Speaker 1:Same way, you know, I'm sure you guys have observed people formulating Google queries where it's like, why is that the thing you're copy-pasting into Google? Don't copy your unique file name and paste that into Google, because nobody else is going to have that file name. It's a skill to do that well, and it's an even more sophisticated skill to use AI to help you with programming. Sadly, a lot of people aren't using it effectively yet.
Speaker 2:You know, what I'm seeing, to a certain degree, is a number of different products where companies are adding AI onto what they're doing for an extra 20 bucks a month. Microsoft, Google in the workspaces. I get an email every day from Google asking me to turn on Gemini for every single person in the organization for an extra $20 a month, and I played with it a little bit. I just didn't see how it really made my life $20 a month times everybody better, necessarily. But the first one of those that really is incredibly impactful is GitHub Copilot. If you use any kind of IDE where you can get a plugin for it, GitHub Copilot is the best thing since sliced bread. And you can go deeper: I mentioned other tools like Aider, where it's much more interactive and a little more geeky and command line.
Speaker 1:What is that?
Speaker 2:It's called Aider, A-I-D-E-R. David Haefeli, actually, a friend of ours, introduced me to it. It's a command line tool that you can set up with whatever LLM you want. You can even run your own, or use keys from OpenAI or whatever, and you describe to it what you want to do.
Speaker 1:It's a coding...
Speaker 2:It's like a coding buddy. You describe to it: hey, I need to make a new model and I need the database table to do this and that, and it'll actually create the code and put it in a commit. Then you go and review the commit and accept it.
Speaker 1:Interesting.
Speaker 2:It's a neat little piece of kit. It's a little more interactive. But with something like GitHub Copilot, I'm now to the point where I start typing and I'll stop and wait a second to let it finish filling things in. And you're talking about when you can't use it: the other day I was debugging something on a utility box and I pulled up Vim and did the same thing, sat there for a second, and I was like, wait, this is not going to fill anything in.
Speaker 1:Yeah, that's happened to me before. My internet wasn't working the other day and I was waiting and I'm like, shit. A couple comments. I tried out GitHub Copilot some time ago and found it to be, on balance, actually a slight net negative productivity-wise, because the suggestions were not high enough quality enough of the time for me to feel like it was a net positive. So I tried it out and I'm like, this is no good.
Speaker 1:And then I didn't think about it much again for a while. I'm like, okay, I've made my opinion about that, that's my opinion, blah, blah, blah. But then sometime later somebody encouraged me to try out Cursor, the editor, which I had tried out before and had a similar experience with, but they're like, try it again, it's better now. And so I did, and it was better. It was so much better. The only thing I didn't like about it was that it wasn't Vim. But then my friend Trey introduced me to a tool called Supermaven, and that gave me everything I wanted. The suggestions were very high quality, because it seems that GitHub Copilot, at least before, maybe it's different now, wouldn't take your whole code base into account, but Cursor and Supermaven seem to take your entire code base into account, so they can make suggestions that use that context, and they're more usable more of the time, in my experience. What I would really love to see, and I'm sure is inevitable, is more voice integration, because typing is kind of a bottleneck. I conceive of the idea in my mind, and now I have to type it in order for it to get into the computer.
Speaker 1:It's like, why should I have to type it? AI can understand my voice, so why can't I just tell my computer to do something and have it do it? I hope and expect that we'll be seeing stuff like that.
Speaker 3:I would expect that very soon. No crystal balls over here either, but you can only imagine the direction the entire market's moving: Apple et al with the next version of whatever iPhone's coming out, and I'm sure everyone else as well. I mean, we already know agents are coming early next year.
Speaker 1:That's exciting. Yeah, wait, tell me about that. I don't know about this.
Speaker 2:Oh, OpenAI has talked pretty openly about this as kind of the next major milestone. Right now you have a chatbot that has contextual understanding going back a certain amount of time. And then, through OpenAI, they have a concept they call Assistants, where you can train something for you just by giving it some direction, and you can do that through ChatGPT, through their website. You can just create an assistant and tell it: you're an expert in X, and you need to always speak to me with a New Jersey accent, and it'll respond that way.
Speaker 1:Lends a bit of credibility to it.
Speaker 2:Yeah, there you go.
Speaker 2:But those have a limited amount of contextual understanding and a limited amount of distance they can travel in like a timeline.
Speaker 2:So say you want something to help you plan a trip. Let's say my wife and I want to go on a road trip, and we need to plan four places to stop, and we might want to rent a car, and we want to get an Airbnb in a couple of different places, that kind of thing, and we don't agree about anything easily when it comes to that. An agent would actually be able to take on that job from the beginning, come back to it, and execute the job all the way through to the end, even if it takes weeks or months to do so. I'd give it the task of planning this trip, and it would contact my wife and ask her what she wanted to do, and it would lay out maps and that kind of thing, and actually contact the places where you're thinking about staying and find you a good VRBO or whatever.
Speaker 1:What I'm taking away from this is that you can use AI to be a third-party arbiter between a husband and wife, and it can side with you, and you can say: hey, look, it's not me, the computer's saying this is the right thing to do.
Speaker 3:So here's a simple and practical example of what John's trying to illustrate.
Speaker 3:If you had a list of 100 URLs and you said, visit each of these URLs and write a summary of the page, the current AI prompts would get through about the first three and then kind of wander off after that. They wouldn't have the persistence of thought to say: now get the next URL and do the same thing, and now get the next one. Agents will go through the entire list, keep context of where they are in that list while executing your command, and actually complete the entire list for you. It's almost like the difference between ADD and, I don't know, whatever.
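Chris's example maps onto a simple pattern: keep the task list and its progress in ordinary program state and call the model once per item, rather than hoping a single prompt stays on task for 100 steps. A minimal sketch, with an assumed model and fetching details, not anyone's actual agent:

```python
# Minimal sketch of the "agent keeps its place in the list" idea: the loop and
# its progress live in ordinary program state, and the model is called once per
# item, so nothing depends on one prompt staying on task for 100 steps.
import urllib.request
from openai import OpenAI

client = OpenAI()

def fetch(url: str, limit: int = 20_000) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read(limit).decode("utf-8", errors="ignore")

def summarize_all(urls: list[str]) -> dict[str, str]:
    summaries: dict[str, str] = {}            # persistent progress across steps
    for url in urls:                           # the "agent" never loses its place
        page = fetch(url)
        resp = client.chat.completions.create(
            model="gpt-4o-mini",               # assumed model
            temperature=0,
            messages=[
                {"role": "system", "content": "Summarize the page in two sentences."},
                {"role": "user", "content": page},
            ],
        )
        summaries[url] = resp.choices[0].message.content
    return summaries

# Hypothetical usage:
# print(summarize_all(["https://example.com", "https://example.org"]))
```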
Speaker 1:The opposite, yeah. The exciting thing about a lot of this stuff is that, in order to make things that are dramatically more useful and valuable, we don't even need advances in the technology. The building blocks are already there.
Speaker 1:Voice-controlled programming, for example: we already have all the building blocks. There's just what I'll call traditional coding needed to connect those things. We don't need the AI to advance any more in order to build them. And I don't know anything about this agent stuff, but a lot of this comes just from harnessing the AI technology in a little bit of a different way. I guess what you guys are doing is a good example too.
Speaker 1:Like, you can go to an LLM and ask it, hey, tell me about legislation in Texas or something like that.
Speaker 2:You can do that, but that's not really the thing. I'll tell you, ensuring that you correctly pick out the location where any one of 150 bills is discussed in an eight-and-a-half-hour-long session meeting is not one prompt.
Speaker 1:I can imagine.
Speaker 2:Yeah. So that was a significant piece of engineering, just making sure you could consistently and accurately work through a full understanding of everything that happened in one legislative meeting. And when I say an eight-hour meeting, that's more the rule than the exception. The floor meetings will just have the camera rolling all day long, and people come in and go out and that kind of thing. Committee meetings, especially as they get into later sessions and things are getting a little intense, will go 10, 12 hours, almost non-stop.
Speaker 3:Well, this touches back on what you were saying before as well: the state of the art has changed since we started this project. One decision we made very early on, knowing that the velocity of AI was only accelerating, was to build the platform in such a way that, as new things come up, we can very easily plug them into the back-end architecture. So we're not beholden to any one given platform, LLM, et cetera. We can literally leverage the best of the best as it becomes available.
Speaker 2:Yeah, and if they all go offline, our system still works. You just wouldn't have a problem.
Speaker 1:Yeah, and, dear listener, that's an excellent technical takeaway there: loose coupling. It sounds like you designed the architecture so that it's not only not tightly coupled to any particular product, it's not even tightly coupled to access to the network at all. It'll still work if those things have to change. Quite a lot of applications have a lot of very tight coupling, making things hard to change, so that's a wise choice. It's probably about time for us to head toward a conclusion here, but I'm curious a little bit more about what we were talking about when you said it's not a single prompt to identify the location of such and such in a video and stuff like that.
Speaker 1:I've thought about this a little bit. How do you deal with the non-deterministic nature of AI responses? My naive thought, without ever having done it, is: if you get a response and you want to see how good it is, you could feed that response back into a subsequent prompt, do a kind of meta-prompt on it, and say, hey, tell me about what this says. Does this actually talk about such and such? Is there anything in there that tells me this is barking up the wrong tree? But how do you do all that?
Speaker 3:Are you talking about vectors? Checking vectors?
Speaker 2:That approach is useful in one set of circumstances, and it's a good way of thinking. The set of circumstances it's useful in is: did you or did you not correctly prepare the information that you sent in to the AI? To use a simpler example than a video: a new bill comes out and we have the text of the bill. The brand new ones are one page, a page and a half, and we create a set of summaries, impacts, that kind of thing, just a high-level analysis of that bill. If there's a bug somewhere and you're not sending the actual bill in, the AI might decide to go ahead and give you a summary anyway, because it's trying to solve the problem for you, and you've given it enough context that it decides it's going to give you a summary even though you did something stupid and the bill was left out.
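One way to catch that failure mode, together with the second-pass check that comes up next, is sketched below purely for illustration; the guard, the judge prompt, and the model choice are assumptions, not Legiplex's pipeline.

```python
# Illustrative only: guard against the "bill text was never sent" bug, then have
# a second model call judge whether the summary is supported by the bill text.
import json
from openai import OpenAI

client = OpenAI()

def summarize(bill_text: str) -> str:
    if not bill_text or not bill_text.strip():
        raise ValueError("Refusing to summarize: bill text is empty")  # the guard
    resp = client.chat.completions.create(
        model="gpt-4o-mini", temperature=0,
        messages=[{"role": "system", "content": "Write a two-sentence summary of this bill."},
                  {"role": "user", "content": bill_text}],
    )
    return resp.choices[0].message.content

def summary_is_grounded(bill_text: str, summary: str) -> bool:
    """Second pass: ask a model whether the summary is supported by the bill."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini", temperature=0,
        response_format={"type": "json_object"},
        messages=[{"role": "system",
                   "content": "Does the summary only state things supported by the bill text? "
                              "Reply as JSON: {\"supported\": true or false, \"reason\": \"...\"}"},
                  {"role": "user",
                   "content": f"BILL:\n{bill_text}\n\nSUMMARY:\n{summary}"}],
    )
    return json.loads(resp.choices[0].message.content).get("supported", False)
```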
Speaker 1:Having another AI read that summary and try to identify whether it's a proper summary of the thing that was summarized is one way to validate it. Yeah, kind of like doing math homework: you find the answer and then work backward from the answer to the original question. If they agree, it's probably right, and if they don't, you know it's wrong.
Speaker 2:Yeah, so for that. But really what it comes down to is: how do you prepare the data that you're sending in? You've got to think about everything from vector stores and how they deal with data, to how you structure the data.
Speaker 1:What does this mean, vector stores?
Speaker 2:Vector storage is the main way that AIs like to get data. Chris actually can explain it better than I can. I don't think we have enough time left. It's a different storage methodology. Elasticsearch is actually building out their own whole vector store system that they've been releasing, which you can incorporate if you want to. When you're using the back end and you're loading it up with data, you're putting it into a vector store. Every time you use ChatGPT and drag and drop a document in, it's going into a vector store, and that's how the AIs understand how to read that data from you.
Speaker 1:Okay, and I think the parents listening to this episode will know that a vector is a force with direction and magnitude.
Speaker 3:That is correct. And very loosely, and I'll probably have to call myself out on not getting this exactly right, but very loosely: the way a lot of the computation happens in LLMs, it uses the gravity between words, or the force between words, to connect what you're talking about. Usually when we talk about vectors in that context it's two dimensions, but think about it as a million dimensions. Basically you have weighted maps of gravity wells in certain directions, so that if you start at point A, the gravity from point A to point B, the vectors, point this way. I'm sorry the listeners at home can't see that I'm using my hands to draw in the air.
Speaker 3:Now, loosely, it creates a certain amount of gravity in a direction, or a force in a direction, and then you overlay that onto other word combinations. I'm trying to think of the example I always see, but it's basically like: kitten is to cat as puppy is to what? There's a vector between kitten and cat. That's an association between a small one and a large one, and it's an animal. That's loosely a vector. If you remove cat and apply the same type of thinking to puppy, well, puppy is small, and therefore dog is the larger, older version of the same kind of animal. It has that sort of parallel idea: this object is related to this object in this way. And so when you're using it in natural language processing or in LLMs, et cetera, it's that same sort of vector math that says, this is how these things are associated.
Speaker 1:Got it.
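Chris's kitten/cat/puppy analogy can be made concrete with word embeddings: the offset from "kitten" to "cat" roughly parallels the offset from "puppy" to "dog", and cosine similarity measures how aligned those directions are. A toy sketch assuming an OpenAI-style embeddings endpoint; real vector stores do much more than this, so treat it as the intuition only.

```python
# Toy illustration of the kitten:cat :: puppy:dog idea with embeddings.
# Assumes an OpenAI-style embeddings API; purely a sketch of the intuition.
import numpy as np
from openai import OpenAI

client = OpenAI()

words = ["kitten", "cat", "puppy", "dog"]
resp = client.embeddings.create(model="text-embedding-3-small", input=words)
kitten, cat, puppy, dog = (np.array(d.embedding) for d in resp.data)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "small animal -> grown animal" direction, computed two different ways.
print(cosine(cat - kitten, dog - puppy))        # expect a clearly positive similarity
# The analogy trick: start at puppy, add the kitten->cat offset, land near dog.
print(cosine(puppy + (cat - kitten), dog))
```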
Speaker 1:Well, that's probably a more accurate explanation than my quote from Despicable Me 2, or whatever it was. But it's helpful even to understand that it's not a simple thing you can just answer in one sentence. This is something I should maybe go look up and read about. Okay, before we go, where should people go online to learn more about Legiplex, and is there anything else you want to share?
Speaker 2:Legiplex.com. The front page of the site has explanations of the system. You get a seven-day free trial right now just for registering, so please feel free to do so. There's also info@legiplex.com, which will come to people who will happily answer your questions. And we'd love any feedback: we're in early-stage launch mode and love anything we can get.
Speaker 1:All right. Well, John and Chris, thanks so much for coming on the show.
Speaker 3:Thanks, Jason. Appreciate it. Great to see you.
Speaker 2:Really appreciate it. Thank you.