Code with Jason

271 - Hotwire with Radan Skorić

Jason Swett

In this episode I talk with Radan Skorić about his book Master Hotwire, the challenges of Hotwire documentation, blogging in the AI age, how AI affects content creation, the Chinese room thought experiment, consciousness and computation, trust versus critical thinking, and why quality content that goes deeper than AI can produce still matters.

SPEAKER_02:

Hey, it's Jason, host of the Code with Jason podcast. You're a developer. You like to listen to podcasts. You're listening to one right now. Maybe you like to read blogs and subscribe to email newsletters and stuff like that to keep in touch. Email newsletters are a really nice way to keep on top of what's going on in the programming world. Except they're actually not. I don't know about you, but the last thing I want to do after a long day of staring at a screen is sit there and stare at the screen some more. That's why I started a different kind of newsletter: a snail mail programming newsletter. That's right. I send an actual envelope in the mail containing a paper newsletter that you can hold in your hands. You can read it on your living room couch, at your kitchen table, in your bed, or in someone else's bed. And when they say, what are you doing in my bed? you can say, I'm reading Jason's newsletter. You might wonder what you'd find in this snail mail programming newsletter. You can read about all kinds of programming topics: object-oriented programming, testing, DevOps, AI. Most of it's pretty technology-agnostic. You can also read about other, non-programming topics like philosophy, evolutionary theory, business, marketing, economics, psychology, music, cooking, history, geology, language, culture, robotics, and farming. The name of the newsletter is Nonsense Monthly. Here's what some of my readers are saying about it. Helmut Kobler from Los Angeles says: Thanks much for sending the newsletter. I got it about a week ago and read it on my sofa. It was a totally different experience than reading it on my computer or iPad. It felt more relaxed, more meaningful, something special and out of the ordinary. I'm sure that's what you were going for, so I just wanted to let you know that you succeeded. Looking forward to more. Thank you for this. Can't wait for the next one.
Dear listener, if you would like to get letters in the mail from yours truly every month, you can go sign up at nonsensemonthly.com. That's nonsensemonthly.com. I'll say it one more time: nonsensemonthly.com. And now, without further ado, here is today's episode. Hey, today I'm here with Radan Skorić. Radan, welcome.

SPEAKER_01:

Hi, thank you for having me. It's great being here.

SPEAKER_02:

Thank you for being here. You recently wrote a book called Master Hotwire. Tell me about that.

SPEAKER_01:

Yeah, that's right. In December 2024, I released part one as a beta. It's not yet finished, but part one is most of the book, and it's a rounded whole: it's not just a halfway point, it's a part that stands on its own. My take with it is that I wanted to write a book about Hotwire specifically for experienced Rails developers. There's quite a lot of material out there, but most of it is aimed at capturing people entering the field. Obviously, if you're experienced, you can pick that material up and learn from it, but it's not tailored for you. So I cut things out: I don't spend time explaining things I know experienced developers already know. I get to the point, which makes the text shorter. I can explain more in less text, and then I also have room to go deeper. I explain how it all works under the hood, how it's actually implemented in Hotwire.

SPEAKER_02:

Okay, yeah, this is actually really interesting to me, and I think I'm going to buy your book, because on a product I've been working on, my CI platform, I'm using Hotwire a lot, but I've never really used it all that deeply before. This app is kind of a single-page-app type thing, even more than I realized it would be. If you think about Gmail as an email client, it's kind of that level of single-pageness. So I'm using Hotwire quite heavily, but I don't always understand what I'm doing. And unfortunately, AI doesn't have great ways of knowing a lot about Hotwire yet, so it often gives somewhat inadequate answers. It's still a little helpful, because it's correct some of the time, at least, and when it's not, I can at least say, okay, this isn't working, I'll go read about this certain area. It can give me a general idea. But I've been thinking to myself as I've been doing this: man, somebody needs to write some better documentation on this, because, no offense to 37signals, but the official docs feel a little bit fragmentary or unfinished. I can't get a full understanding of the whole thing from just that. So I'm really glad that you're writing about this.

SPEAKER_01:

I've noticed that complaint about the official documentation; it's very common. I scoured Reddit and the forums looking for the problems people run into before I even wrote anything for the book, and this one came up again and again. It's been my experience as well. When I started, I went to the official docs, because of course that's where you start, and I was totally lost. Now that I have experience and know what I'm doing, I reference the docs a lot, but I go to a specific place and look up a thing that I already know is there, to remind myself. They don't do a great job of explaining things. In one way that's not great; in another way it opens up space for a lot of Hotwire books and courses, and there have been quite a few recently. I think there's a lot of interest, and quite a few people are making materials for learning Hotwire.

SPEAKER_02:

Yeah, interesting. Not to be mean, but frankly, most content out there is not that great. By definition, the average blog post is average quality. So when I've Googled various Hotwire things, I'll land on some page, open it up, and just from the first glance at the blog post I'm like, this isn't going to hand me the answer on a silver platter like I want; the writing's not going to be all that clear. Sometimes you can tell just by the way the site looks that it's not a high-quality affair. I don't know how much blogging you've done on Hotwire, but that's an area where there definitely seem to be opportunities: taking those common things people look for and having a really nice answer to them.

SPEAKER_01:

Yeah, I do have an active blog. I've had the domain for a while, but I started actively writing about a year and a half ago, and I've really enjoyed the experience. Quite a few of the articles are specifically on Hotwire, written as I was getting deeper into it, and some of them are things I started writing for the book and then realized didn't fit the flow of the book, so I ended up polishing them and putting them out as blog posts, freely available on the internet. By the way, it's radan.dev. It redirects to radanskoric.com, but whenever I tell English speakers radanskoric.com, they can't spell my last name, so I registered radan.dev and made it redirect.

SPEAKER_02:

That's a good idea. By the way, dear listener, that's r-a-d-a-n.dev.

SPEAKER_01:

Yeah, that's right. So, on blogs: I'm a little afraid that the average quality will start to trend down because of all the AI content as well. But yes, quite often a post just rehashes the official docs or the changelogs; a lot of blog posts just reword the changelogs. It's very rare to find a blog post that breaks something down thoroughly and completely, in a succinct, short manner.

SPEAKER_02:

Yeah, and this raises an interesting question: what is a blogger to do in light of AI? I pretty rarely use Google now; I go to AI first for all my programming questions. So, is blogging even still relevant anymore? Something I talk about sometimes is the distinction between things that are possible or not possible currently, based on the way technology is right now, and things that are possible or not possible in principle, which won't change no matter what happens to the technology. And a thought I had right away when AI came out is: for questions where there's a factual, objective answer and AI is good at it, we don't need a blog post. What's the circumference of the earth? We don't need a blog post about that, because AI can answer it for us. And if it can't, the way things are trending, it'll be able to answer more and more factual, objective questions as time goes on. So even in principle, people who are in the business of writing dry, objective, factual blog posts aren't really needed anymore, unless it's something new that isn't written down anywhere else. What becomes more valuable is content that only a human could have come up with.

SPEAKER_00:

Yeah.

SPEAKER_02:

And I think it's maybe not obvious what that is, because an AI can pass a lot of Turing tests that somebody could devise. You can tell an AI to write an opinion piece about something, and it can. But, and I hesitate to even say this, I think there are maybe certain things that in principle will never be true of an AI.

SPEAKER_00:

Like what?

SPEAKER_02:

Like having a really strong, controversial position on something.

SPEAKER_01:

Oh, because it's the average of the training corpus.

SPEAKER_02:

Yeah, but the reason I hesitate to say that is that I can't think of any necessary reason why somebody couldn't create an AI that does have a specific opinion on something, even a controversial opinion on some issue. So I'm not sure about that. But at least currently, there's a lot of stuff an AI won't tell you. For example, if I have a question about Node and ask how I should do something, it's not going to say, bro, don't use Node.

SPEAKER_01:

Yeah, I mean, that's the conditioning, right? That's all post-filtering. If they put out a raw model without that kind of conditioning, first of all, it would be the average of the internet. Apparently, when they trained it initially, from what I understood it took a long time to make it a pleasant conversationalist, because it picked up the attitude of the internet, which is very bad. If you consume the entire internet, you'll randomly spew out racist stuff, because you read racist stuff on the internet, and things like that. So there's a lot of conditioning in front of it to make it pleasant and agreeable. It's like how, when you're using one of these things and you say something is incorrect, a lot of them immediately jump to "Certainly!" with an exclamation mark, and then they agree. Humans very rarely do that. From what I understood, a lot of that is the conditioning.

SPEAKER_02:

Interesting. I want that unfiltered version. Like I ask it a question, it's like, you fucking idiot. You don't do that, you do this. It would be entertaining.

SPEAKER_01:

It would be, yeah, it would be entertaining. I guess if you run a model locally, you could get something like that. By the way, I want to get back to blog posts in the age of AI, but on the Turing test: are you familiar with the Chinese room?

SPEAKER_04:

Yeah.

SPEAKER_01:

Yeah, so just for anyone listening who's not familiar: after the Turing test was proposed as a theoretical test, there was also a theoretical argument for why the Turing test is not a good test of intelligence, and that was the Chinese room. The idea is: let's say you're talking in Chinese to somebody in another room. They don't speak Chinese, but they have these giant volumes of all the possible phrases in Chinese, each with an appropriate answer. So as you're talking to them, they look up what you're saying in Chinese, find the answer, and read it back without understanding it. And I think you can make a case that the current LLMs are passing the Turing test because they're actually a real-world implementation of the Chinese room concept.
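The thought experiment being described can be sketched as a toy program: replies come from a lookup table, with no understanding anywhere in the system. This is my own illustrative sketch, and the phrases in it are arbitrary examples, not from any real rule book.

```ruby
# A toy "Chinese room": the person in the room just looks up a reply
# in a rule book; nothing in the process understands Chinese.
# These phrases are made-up examples for illustration.
RULE_BOOK = {
  "你好吗？"       => "我很好，谢谢。", # "How are you?" -> "I'm fine, thanks."
  "你会说中文吗？" => "当然会。"        # "Do you speak Chinese?" -> "Of course."
}.freeze

def chinese_room(input)
  # Unknown input gets a canned fallback: "Please say that again."
  RULE_BOOK.fetch(input, "请再说一遍。")
end

puts chinese_room("你好吗？") # looks fluent from outside the room
```

From outside the room, the replies look fluent; whether a lookup like this "understands" anything is exactly the point of contention.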

SPEAKER_02:

That's really interesting, and I've thought about that exact thing before. My gut wants to call bullshit on the Chinese room, because, and this is what I thought when I first read about the thought experiment: let's say you pass the guy the Chinese characters, and he's so fast that he instantly passes back the answer. From your perspective, that's completely indistinguishable from the guy actually knowing Chinese. And is a guy instantly retrieving a translation on paper really that different from instantly retrieving that information somewhere in his mind? But it doesn't even matter; that part doesn't matter, because if you give an absolutely perfect appearance of understanding, is there any meaningful difference between that and actual understanding? I don't think so.

SPEAKER_01:

I see what you mean. Yeah, that's the scary thing with the LLMs, isn't it? The scary thing is not that they sound human; the scarier thing is that maybe there's nothing more to us. That's the possibility. It's a little bit scary that we might just be pattern-matching machines. Energy-wise, we're significantly more efficient than the current LLMs: a human brain uses significantly less energy to accomplish the same thing as an LLM. But yeah, it's scary to think about.

SPEAKER_02:

Humans are machines. If we're not machines, what are we? And consciousness is nothing but an emergent phenomenon. We're machines with consciousness, and I don't think there's any reason, in principle, why we can't build machines with, what's the word? I just said the word. Consciousness.

SPEAKER_01:

Consciousness, yeah. There is one idea, and it's not mine, it's from a significantly greater mind than mine: Penrose. He said, well, maybe the mind is not a Turing machine. Theoretically, mathematics allows for uncomputable functions. Anything a Turing machine can calculate is a computable function, and computable functions are not all possible functions. It's just that, practically, anything we tend to write down and define turns out to be computable. But uncomputable functions exist. And Penrose said, well, what if the human brain is not actually a computable function? That would mean nothing we do on computers can ever fully simulate the brain.

SPEAKER_02:

What's an example of an uncomputable function that we can carry out?

SPEAKER_01:

No, there isn't. That's the problem. In mathematics, and my background here is that I work as a software engineer, but my university degree is in mathematical engineering, there are constructive proofs, where you actually construct an example that, say, counters a certain statement. But there are also a lot of proofs that are not constructive, where you prove that entities satisfying certain conditions exist outside a set without exhibiting any of them. And here we'd say, well, it's just a function; it maps from one set to another. It exists, but you can't produce an example.

SPEAKER_02:

How can you prove that it's true without producing an example?

SPEAKER_01:

Typically, in mathematical proofs, the way you do it is: let's assume the opposite. Let's assume that all possible functions are computable functions. Then you proceed logically until you reach a contradiction.

unknown:

Okay.

SPEAKER_01:

A very clear contradiction. Then you say, well, I've reached a contradiction, therefore the original premise is false: not all functions are computable, therefore there are uncomputable functions.

SPEAKER_02:

Okay, so you start with certain premises which you treat as true, and then you follow a series of logically sound steps until you reach a contradiction. And since all of the steps are logically sound, the contradiction must mean that one of the premises was false.
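As a concrete instance of this proof shape (my addition, not from the conversation), here is the classic counting argument that uncomputable functions must exist, even though it exhibits no specific one:

```latex
% Programs are finite strings over a finite alphabet \Sigma, so there
% are only countably many programs, hence countably many computable
% functions:
\[
  \bigl|\{\, f : f \text{ is computable} \,\}\bigr| \;\le\; |\Sigma^{*}| \;=\; \aleph_0 .
\]
% But by Cantor's diagonal argument, the set of all functions from
% \mathbb{N} to \{0,1\} is uncountable:
\[
  \bigl|\{0,1\}^{\mathbb{N}}\bigr| \;=\; 2^{\aleph_0} \;>\; \aleph_0 .
\]
% Assume every function is computable. Then an uncountable set injects
% into a countable one: contradiction. So uncomputable functions exist.
```

This matches the shape described above: assume the opposite premise, derive a contradiction, and conclude the premise was false, with no example ever constructed.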

SPEAKER_01:

Yes. Closer to software engineering is the halting problem: you can't have a program that will deterministically determine, for any other program, whether it finishes or not. That's the halting problem.

SPEAKER_02:

Okay. Okay.

SPEAKER_01:

And the way you typically prove that is, again, you assume that such a program exists, that the halting problem is solvable, so there's a program which, given any input program, can determine whether it finishes or not. And then you feed it a program that calls the halting program on itself, with a few things around it. It's hard to get the details right in the air, but basically, by feeding the decider to itself with some extra logic, you reach a contradiction: if it exists, it can't give the right answer for this constructed program.

SPEAKER_02:

Interesting. It can't determine it for itself. Why? Because it doesn't know what it's going to get? You have the halt-detector thing, and you feed it itself, so the outermost one has a definite program that gets fed to it, but the nested one doesn't?

SPEAKER_01:

Ah no, wait. Here's how it goes. You feed the detector a program that says: if the detector says that I halt, then enter an infinite loop; otherwise, halt.

SPEAKER_02:

Okay.

SPEAKER_01:

It messes with itself, basically.
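The construction being half-remembered here can be sketched in a few lines. This is my own sketch with hypothetical names: given any claimed halting decider, build a program that does the opposite of whatever the decider predicts for it, so the decider must be wrong somewhere.

```ruby
# Sketch of the halting-problem diagonalization. `claimed_halts` is any
# alleged decider: a lambda that takes a program and returns true if it
# claims the program halts. We build a program g that contradicts the
# decider's own verdict about g.
def make_counterexample(claimed_halts)
  g = nil
  g = lambda do
    if claimed_halts.call(g)
      loop {}  # decider said g halts, so g loops forever: decider wrong
    else
      :halted  # decider said g loops, so g halts at once: decider wrong
    end
  end
  g
end

# Try it with a decider that claims nothing ever halts:
pessimist = ->(_prog) { false }
g = make_counterexample(pessimist)
g.call # => :halted, contradicting the decider's verdict on g
```

Whatever decider you plug in, it's wrong on its own counterexample, which is why no correct, total halting decider can exist.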

SPEAKER_02:

Okay.

SPEAKER_01:

Why why why am I why am I proving a halting problem on a podcast? This is such a bad idea.

SPEAKER_02:

That's funny.

SPEAKER_01:

I've not looked at that proof in like eight years, and I'm trying to reproduce it from memory. Now I have to write a blog post explaining it or something, to redeem myself, because I guarantee you there's somebody listening who knows this proof really well, and they're laughing right now because I'm getting it wrong. And also I'm on a podcast, so I'm kind of self-conscious.

SPEAKER_02:

For me, it's like my IQ drops by 20% when I'm on the podcast. Okay, so how did we get here? Where were we just before this?

SPEAKER_01:

We were talking about blogs and AI, and we went to the Chinese room, the Turing test, and then Penrose and his idea that maybe the brain is uncomputable, and then we were talking about how you'd prove that, and the example was how the proof that the halting problem is unsolvable involves a similar method. And before all of that, I wanted to say something about blogs in the age of AI. Why don't we switch to that?

SPEAKER_02:

Sure. Coming all the way back up to the surface.

SPEAKER_01:

All the way back up to the surface for air. One of the worries in the age of AI, when you write blog posts, is: well, the AI can summarize the blog post, so people will just use AI to summarize it and never read the original. And to me, that suggests: why don't you write the blog post so that it's already as summarized as possible, so that it gives exactly the needed information? Especially with a technical blog post, the person is there to either solve a problem or get an insight. So this is what I try to do, and I hope I succeed most of the time: when I write a blog post, I try to make it as short as possible, and I try to front-load the most useful point as close as possible to the beginning. Then there's no value in running it through AI; it can only summarize, and it's not going to get to the point faster than me, because I've put in the human work of getting to the point as fast as makes sense for the reader to still learn it.

SPEAKER_02:

Although, to be fair, even if the AI spits out your blog post verbatim, um, people don't have a reason to leave the AI and go find your blog post.

SPEAKER_01:

That is true, but if the AI spits out my blog post verbatim, the only other benefit the reader misses is the part that says, subscribe to get more content from me. And if the AI transferred that as well, and they subscribed, then the only thing that misses out is my website, my domain, and that doesn't have feelings.

SPEAKER_02:

Well, let's talk about this for a second. Writing a blog post has certain benefits. Let's not even worry about whether the AI uses your blog post as the source; let's say the AI can answer the same question as well as, or maybe even better than, the blog post you wrote. Of the benefits that come from writing a blog post, if an AI can duplicate your content, which of those benefits are lost?

SPEAKER_00:

If it can do it just as well, right?

SPEAKER_02:

Basically, the AI makes your blog post redundant and irrelevant.

SPEAKER_01:

One potential benefit is that when I write, and when anyone writes, they write in a certain way, and not everyone approaches the problem the same way, right? The same problem can be explained in multiple ways, and for some people, some of the approaches will be better. Now, typically people will say, well, with AI you can ask it to explain things better. The AI is there for you; it's going to tailor the explanation specifically to you.

SPEAKER_02:

Okay. Yeah, well, that too. I also want to talk about what benefits remain, but right now I'm asking about what benefits are lost.

SPEAKER_01:

Well, what is lost is the incentive for people to write, right? You sit down, you write, you have some motivation. At some point, if you put in enough practice, you should be getting above AI. If we can't raise our quality above AI's, then AI has superseded humans, and we've got a whole other set of problems. But currently it can't; it gets to a certain level, and any work that is lower quality than AI becomes worthless, because you can just get AI to do it for cheaper. On the other hand, anything that is higher quality than AI should be, and is, in higher demand.

SPEAKER_02:

Okay, because

SPEAKER_01:

Because everybody who's looking for content wants the best content. You've been satisfied by AI for the lower-level stuff, and then when you go for the higher-quality stuff, it's so much more valuable to you.

SPEAKER_02:

I don't know if I should apologize or say you're welcome for this, but I'm going to hold you to a very strict level of thinking here. Okay, you said the incentive goes away, but that's a different thing. What I'm asking about is the reason the incentive goes away. The incentive goes away because some benefit is lost. What exactly is the benefit that's lost when the AI kind of steals your content?

SPEAKER_01:

The readership. If you put your writing out, yeah, some of it you do for yourself. I like to write, and I've read on your blog that you also write mainly for yourself; it helps clarify your thoughts. But part of the reason is somebody else reading it and giving you feedback: the human connection. We need human connection. So if AI steps in between, and especially if it replicates your content and removes you from the equation, it removes the human connection.

SPEAKER_02:

Right. I'm okay with that. When I get content from AI, there's no human involved, and I'm okay with that; I just need the information. When you write... what's that?

SPEAKER_01:

When you write, do you do it at least in part for the human connection, knowing that you've produced something that has an effect on another human, and that they're perceiving, oh, Jason really helped me?

SPEAKER_02:

Yeah, so okay, what you said, paraphrasing, is that the benefit that's lost is that people won't find and read your content. They might read the AI's version of your content, but they won't visit your website and read your content and have an opportunity to subscribe to your newsletter and buy your book and all that stuff. So that's lost. And then I want to think about: okay, that benefit goes away, but what benefits still remain? You mentioned a couple of them. There's the human connection, and there's the particular way you present things and word things. I think it's probably always going to be the case that an AI is not going to present the same thing in the same exact words you would have. And, at the risk of sounding conceited, I think the way I word things is almost always going to be better than the way an AI would word the same stuff. So there's that. But there are also a number of other benefits that aren't lost. I think there's a time gap between when a blog post is first published and when it gets taken up by AI. I expect that gap will get shorter and shorter as technology improves, but I'm guessing it will never be zero, not even in principle, because even if the AI picks it up immediately, people won't start looking for it in the AI until after you put it out.

SPEAKER_01:

Well, the problem with that line of reasoning is the search engines with their current AI summary features, which they're all rolling out. Essentially, you go to a search engine and ask it for an answer, and it finds a blog post that was published even half an hour ago. It reads through it and might give you the answer right there in the search engine, and you never click on the link, even though you see it right below.

SPEAKER_02:

That's a good point. Well, I guess if the AI summary of your blog post is just as useful to people as the blog post itself, then you fucked up.

SPEAKER_01:

Yeah, I'm with you on that. I think that's true: you need to raise the bar. If the AI's summary is making your blog post better, then why did you publish that version? Why didn't you improve it?

SPEAKER_02:

Yeah, and I'm just trying to think through right now what this means for writers exactly; I don't have all these answers pre-thought-out or anything. It's also the case that when I write a blog post, for example, I'll post it on the Ruby subreddit, and people will have a chance to read it there. That's a separate channel from Google or AI or anything like that. So maybe there's an expiration date on the content, because it'll get picked up by the AI summary or by ChatGPT or whatever, but I can still feed it through that channel and get some use out of it before it expires.

SPEAKER_01:

Yeah, possibly. It's also, like you said before, that the incentive is to write about harder problems, right? The low-hanging fruit, where you just explain something simple, is much more likely to be answered immediately by an LLM. If you go for harder problems, where the reader actually needs to increase their own knowledge and understanding, you can set yourself apart from AI much, much more. There's a limit to all of the LLMs, and the limit keeps being raised, but when I use coding assistants in my day-to-day work and ask one something, sometimes it gets it right and I can see that it's the right thing; sometimes it just goes off the deep end and starts producing shit. I've learned to recognize that, and very quickly I stop even reading its answer and switch to looking for content created by a human. Because, okay, this is clearly not a simple problem; I need to actually understand it. I need somebody who has been in my position, who didn't understand it and then understood it and has explained it. I need to go read something they made so that I get the same understanding. And there, I think, if you attack those kinds of problems, you have a better edge against AI.

SPEAKER_02:

Yeah, I think that's true. And obviously we can expect LLMs to get better and better, and they'll be able to answer harder and harder questions, but we're not there yet. So you or I could probably write a blog post that beats AI, or that goes after something it's not going to go after, something out of its reach one way or another.

SPEAKER_01:

Yeah, and of the blog posts that I've written, the ones which go deep into the details, those do better for me. In general, I enjoy writing those more: really explaining how something works, commenting on the trade-offs that were made, and trying to do that in a kind of short and compressed format. Those have done well. I see traffic to those and I get comments from people saying, oh, thank you, I found this blog post, it helped me solve the problem.

SPEAKER_02:

Yeah, I've found a similar phenomenon. Like, sometimes when I write a post, I make it my goal to write the best piece of writing that humanity has ever produced on that topic. I pick a narrow thing, you know, like Ruby blocks, for example. It's like, all right, I'm gonna write a big-ass blog post. And actually, I discovered that this took a whole series of blog posts. But I'm gonna write about Ruby blocks, and I'm gonna go deeper, take it slower, be more clear, and explain every little nook and cranny of it better than anyone else ever has. And so I put out those blog posts, and I like to think that I achieved the objective I was going after. And those posts did very well. I think that was true before AI, and I think it's still true now.

SPEAKER_01:

Yeah, I think so, because you're working above the line of quality of what AI can accomplish, right? It becomes that much more valuable, especially because the lower-quality stuff is just multiplying now that people are producing articles with AI. If you have a sea of crap and you're still above it, I think your work stands out more. The problem is it's harder to find. But it increases the chance that when somebody finds it, they're going to see, okay, this person solved my problem. I'll remember that. Or maybe they'll subscribe, or they'll remember it and come back and check for more content, because they know it's a good place.

SPEAKER_02:

Yeah, that's another thing I was gonna say: people can become fans of a certain person, and then when they see something new by that person, they'll be like, oh, Radan wrote something new. I know I always like his stuff, so I'll go check that out.

SPEAKER_01:

Yeah, yeah, exactly. I mean, when we say "fans," it's a little different. "Fans" evokes, at least for me, the idea of fans of a band or somebody like that. But if you like somebody's writing and you come back for more, then in some small, specific way, yeah, you're a fan: you like what the person is producing and it's been useful to you. And I think that has value. Even just knowing that you can do that, I think that's already valuable in itself.

SPEAKER_02:

Yeah. And it's maybe a bit of a chicken-and-egg problem, because it's like, well, if people are just using AI, then how do you become a known author to somebody? So I think maybe that part gets harder. But once you achieve that and become a known entity, then you can have those fans who will read everything that you write. Another benefit that still exists comes from the fact that, at least currently, the way things work, you'll only get out of AI what you ask for. Like, I've never received an email from ChatGPT saying, hey, check out this fun fact about eggs or something like that, you know. I'd probably love some kind of fun fact about eggs, but it just doesn't come to me randomly and tell me stuff that I'm not seeking. But you, Radan, can write a blog post about fun facts about eggs, and I might see it and go, whoa, I wasn't even looking for that, but okay, tell me these facts.

SPEAKER_01:

This, I think, is a call-out to all the people listening: you should send egg facts to Jason. There are people listening, you know, your fans. I think you're gonna get a bunch of egg facts.

SPEAKER_02:

Uh Jason at codewithjason.com, send me your egg facts.

SPEAKER_01:

Yeah, that's true. And that's a very good point, because there are multiple types of articles on blogs, or, let's focus on technical blogs. Some of them are, you know, I have a problem, solve the problem. But quite often, very valuable blog posts are more like, this is a different way to look at a problem that you didn't think of before, or this is a different approach you didn't think of before. Like, how many times have you read a blog post that explained something you weren't asking about, and then you realized, oh, I can actually make good use of this in my work?

SPEAKER_02:

Yeah. Yeah. Hmm. Yeah, so I think we'll be okay.

SPEAKER_01:

I think we'll be okay. So yeah, I think it's one of the bigger problems of our age, but I think we solved it.

SPEAKER_02:

And you know what else? You can write things down, just on a note in your pocket, and never show it to a computer. Just be like, hey man, look at this.

SPEAKER_01:

And then at conferences, you call people over and, you know, you open your trench coat.

SPEAKER_02:

Hey man, want to buy some blogs?

SPEAKER_01:

And you take out, yeah, like a piece of paper, and it's there, and you go, but watch out for the cameras, man.

SPEAKER_02:

Exactly. They have to like hold it really close to their face and read it, and they have to give it back to you when they're done.

SPEAKER_01:

Yeah, exactly. Yes. That would be fun to do. Well, until you get arrested, then it's not so much fun. But until that point, that would be very fun to do.

SPEAKER_02:

Yeah, I can imagine some dystopian future where, you know, writing something down like that is illegal. Just like, I don't know, what was it, in communist Russia or something? Owning a certain kind of printing machine was an offense punishable by death.

SPEAKER_01:

I'm not sure if that's from, um, what was it, Fahrenheit 451? Yeah, yeah, 451. I don't know. So, actually, I wasn't born in Soviet Russia. I was born in Yugoslavia, a country that doesn't exist anymore. I was born in 1984 in Yugoslavia, which fell apart. There was none of that there. There were things happening in Yugoslavia, but nothing quite so radical. So I don't know if it's from the book, but the book is certainly rooted in some real things that happened in the Eastern Bloc.

SPEAKER_02:

This, I understand, is a real fact: in Russia it was illegal to own a printing machine, because you weren't allowed to make your own news. There were people who made underground newspapers at great risk. But if they caught you with one of these machines, and the only reason to have one is to make a newspaper, then you'd get killed. So I can imagine a future where, you know, there are cameras covering basically every square inch of whatever country this is, there are microphones everywhere, and all of your computer communication is surveilled by the government. So basically anything that you ever do or say is detectable by the government, and if you try to subvert any of the surveillance, then that's a crime.

SPEAKER_01:

I think you've described the plot of uh George Orwell's 1984.

SPEAKER_02:

Yeah, exactly. Yeah, yeah, I guess that is pretty much the way it is, huh?

SPEAKER_01:

Yeah, because in 1984 the TVs are two-way, right? I think I remember the main character has this one place in the apartment where, if he really scrunches down and turns around, that's the one spot he knows the camera isn't covering. And then later, I think it turns out there was actually a camera he wasn't aware of that was also covering that spot.

SPEAKER_02:

Yeah, he had this notebook that he thought nobody knew about, but it turns out that actually, yeah. By the way, have you ever watched that show Severance?

SPEAKER_01:

No, I've heard it's really good. I've heard from multiple places that it's really good, but I've not watched it.

SPEAKER_02:

I heard the same thing, and then I started watching it, and I expected it to be good based on what I heard, but it's like mind-blowingly good. And it really reminds me of 1984, and then I looked it up on Wikipedia because I was curious whether I was the only one who thought that. So I searched "Severance 1984," and it turns out the guy who wrote it was partly inspired by 1984 for the premise of the show.

SPEAKER_01:

I might have to check it out. That sounds very interesting. I like the book.

SPEAKER_02:

Yeah, yeah, it's really interesting. And just a side note, I think 1984 is a book that everybody should read, because, I hate to say it, but it's still applicable.

SPEAKER_01:

Yeah, yeah. George Orwell is, uh, depressingly applicable, both 1984 and Animal Farm.

SPEAKER_02:

I don't think I've read Animal Farm, but I know we have it on the shelf at home, so I'm gonna go read that.

SPEAKER_01:

It's right up there with 1984. Okay. And 1984: he wrote it in 1948, and he switched the digits, so that's why it's 1984. Oh, really?

SPEAKER_02:

I heard a different story about that. I heard that there was this socialist party or group or something, and they were called the 1884 Club or something like that, and that's why it's 1984. Because I think the book came out in, like, 1939 or something like that.

SPEAKER_01:

Okay, yeah, I might be wrong. I'm not gonna contest you on that. I don't know. But what I do know is that the book was written as a kind of warning. It was kind of like, well, if we don't do things wisely in developing the world, this is where we might end up. And the fact that it's relevant in 2025 is maybe a little bit depressing.

SPEAKER_02:

Yeah, yeah, and I've learned certain things recently. Like, DHH actually had a post recently called something like "Europeans don't have or understand free speech." That's kind of a sweeping statement, but, you know, in England people are getting arrested for social media posts, similar things are happening in Ireland, and I guess in Denmark they have similar laws. In Romania, apparently, if you use the word "gypsy," you can get put in jail just for saying the word. And what was the other thing recently? Oh, in Germany, apparently, it's illegal to criticize politicians. Like, I had no idea that so many countries in Europe had these anti-free-speech laws.

SPEAKER_01:

So the thing is, when there's a statement like that, there's the statement and there's what's actually there in reality. I've not personally looked at the laws, but quite often the laws are not actually as egregious as they sound at first, because there are a lot of disclaimers and nuances. Now, in this case, I've seen DHH's post, but I've not actually gone and checked. But there have been statements like that made before where somebody did the fact-checking, and it turned out it wasn't actually quite what the person presented, right? So I can assure you, as a citizen of the European Union, it's not a police state. I don't go around being scared of what I say. Certainly in private, you can say whatever the fuck you want. There are things where you can get in trouble, like if you state something as a fact but you don't have cover for it and it produces damage for the person. There are some laws around that, but there are a lot of nuances, and the laws are very different in the US and Europe. It doesn't necessarily mean that one is bad and one is good; it's probably very nuanced, case by case. So, putting that aside, I have a kind of respect for DHH. I think he's a very intelligent person. It's just that I'm kind of hesitant when he goes into politics, because he's not an expert on politics, and neither am I. But his words carry a lot of weight. In terms of expertise on politics, he's not that much more experienced or more researched than your average citizen, but when he speaks up on Twitter, the effect is significantly greater than if I write something, or my neighbor, or anybody else. So I would wish that he maybe put a little bit more thought in before making those statements on Twitter. Yeah, interesting.

SPEAKER_02:

Well, going back to what you said a minute ago, that is a very good point. Because you see news articles like, oh, some guy was arrested just for saying something at some protest or something like that. But then it turns out, like, oh yeah, he said that thing, but he also punched some guy in the face right before that.

SPEAKER_03:

That's what he was really arrested for. Yeah, yeah.

SPEAKER_02:

Yeah, so you gotta be careful about that. And it's funny, speaking of AI, it sounds like some people kind of refuse to use it, or at least they're very wary, because it gives wrong answers a lot. But to me, it's like, well, hang on a second, how much credence are you giving to everything else that enters your eyes and ears, you know? As humans, we just need mechanisms for telling what's true and what's not true. If AI gives wrong answers, the answer isn't to not use it, because literally everything that comes into your senses is potentially wrong, and so we need to have good ways of telling what's wrong and what's not.

SPEAKER_01:

This is how I'm thinking about it. Let's say, you know, would you take medical advice from AI? That's a question for you, for you specifically.

SPEAKER_02:

Yeah, that's a good question. I have before. Um, but it it really depends on what.

SPEAKER_01:

Let's say it's something serious. You ask AI, and I think most people would be like, no, I'm not gonna take that. If you go to your doctor, you know, if it's something really serious you might get a second opinion, but for most things, you're gonna trust your doctor, right? In either case, you don't know; you can't judge for yourself whether something is a good answer or not. So you kind of assume that, well, the doctor knows more, right? They have the credentials, they've been checked, so you trust them. I think the problem with AI is that the tendency is to trust it more because of the way it speaks: it speaks with high confidence, in the way a human with high confidence would speak, especially when that confidence is justified. But we can't judge AI the same way, because it's not human; it's mimicking justified high confidence, even if it's talking complete bullshit. Yeah, so I think that's the problem. When I program, because I have a ton of experience, I have high confidence that I can judge whether an answer is good or not; I can do that very quickly. And if I was, let's say, working with you, you have a ton of experience, so I would trust you to make a judgment when using AI. But if I'm talking to somebody who's very junior, my tendency is to say, okay, use it only to shorten your research. Don't actually ask it to write the code or to explain how to use something, because you will not be able to judge whether it's good or bad. You need to get your knowledge higher first.

SPEAKER_02:

Hmm. Interesting. I'm very wary of the idea of trusting what you hear from a certain source based on the credentials or authority of that source. One of my favorite quotes is from Richard Feynman: "Science is the belief in the ignorance of experts." And so, you know, by default I'm gonna give more credence to what I hear from a doctor than what I hear from ChatGPT, partly because at the current state of the technology, I just know that ChatGPT is very likely to be wrong kind of a lot of the time. But at the same time, whenever a doctor tells me something, I always treat it with a combination of open-mindedness and scrutiny. It's like, okay, what you're telling me may well be true, but also I'm not gonna just uncritically accept it, because I try not to uncritically accept anything. Yeah, there's this thing that has bothered me so much, this meme tweet that was making the rounds a lot during the pandemic: "trust the science." Which is such an unscientific thing to say, because science isn't about trust at all. Anyway, I think I got completely off track. Well, yeah.

SPEAKER_01:

No, I think you're right, but there's also this question of time, right? Whatever you do in life, on a daily basis, at some point you trust another human without actually satisfying your curiosity, because you can't go and understand everything. No human can. No human could since, you know, Leonardo da Vinci, who was maybe the last Renaissance man. You have to draw the line somewhere, and at some point credentials or credibility have to come into play. It also depends on what the risk is, right? If it's something really benign, what's the risk if the person is wrong? If it's something medical but not something serious, well, maybe you're a little bit skeptical, but you go with it. If it's something more serious, you ask for a second opinion, third opinion, fourth opinion. That's why you can do that, especially in the medical profession, because there the stakes are very high.

SPEAKER_02:

Yeah, yeah, and you can also go on a person-by-person basis. Like, I've talked with some doctors who I would trust very much, and some who I would trust very little. You know, they're all doctors with basically the same credentials, but person to person they're different, and based on their history or whatever, or just my judgment of them, however I came about it, I might give them different credence.

SPEAKER_01:

Yeah, yeah. And I mean, it depends on a lot of factors. But when you fly somewhere, you're kind of blind; you don't even see the pilot. There have been cases of pilots being bad and crashing the plane, right? You kind of trust that the system has put a pilot there. You have to do that. You're not going to go and study aviation mechanics before the flight and then come with a questionnaire and quiz the pilot. You know: sorry, can I just get into the cockpit for a moment? I've got these 20 questions, and I need at least 18 correct answers to actually sit in this plane, right?

SPEAKER_02:

Yeah, and actually, is "trust" the word for that? Is that what's happening? When you get on a plane, are you trusting the pilot? I think there's a case to be made that that's not trust, it's just a gamble. A very good, safe, sensible gamble.

SPEAKER_01:

Yeah, but your risk assessment is heavily influenced by your trust in the system that put the pilot there. If one company said, yeah, our pilots have to go to pilot school and so on, and another company was like, we have the same planes, but the pilot is just a guy who came and said he can fly, right? That would significantly alter your risk assessment. Like, if you got on a plane and they were like, yeah, we didn't do any exams, he said he can fly, he seems honest.

SPEAKER_02:

Yeah, no, that's not the thing that I base my assessment on. Obviously that would be a bad way to do it. But the thing I base my assessment on isn't trust in the system that produces pilots and stuff like that; it's the end result. The statistic is something like one plane in every two million flights crashes. And so if, theoretically, there was some system where they picked pilots by randomly grabbing people off the street, but it was still the case that one plane in two million crashed, I would still fly with just as much confidence, you know.

SPEAKER_01:

That's a good point. It's just that you have to put that aside if you want to be an early adopter of anything. Yeah. Somebody went on the first commercial flight, and that person probably had a very, very high tolerance for risk. Is that also how you approach things in life? Is that also related to why you're organizing a conference in Las Vegas?

SPEAKER_02:

Oh, the the gambling aspect?

SPEAKER_01:

Yeah, yeah, yeah.

SPEAKER_02:

Yeah. No, I actually don't gamble. I hate gambling. I've tried it a couple times and it's just like this sucks.

SPEAKER_01:

I don't like gambling either, though. I don't get too thrilled about it.

SPEAKER_02:

I get anxious more than thrilled. Yeah, it's like, I have nothing against making bets, because people make bets all the time. I just want to make good bets.

SPEAKER_01:

Yeah, I've made some silly bets. They were very low-stakes, though. But yeah, I made some bets where I'm like, why did I make that bet? There was no way I'd win it.

SPEAKER_02:

Yeah, and also there's that rule: never bet more than you can afford to lose. Yeah. Sometimes it's like, okay, there's a 1% chance that I'll win this bet, but I'll make the bet anyway, because I'm not gonna lose that much even if I lose.

SPEAKER_01:

Yeah, yeah. I mean, how much would somebody have to offer you to, you know, try Russian roulette? Ha! What's your risk appetite for Russian roulette? Like, I'll give you a payout that makes it worth it. Like, no, sorry, there's no payout.

SPEAKER_02:

Is there a big enough dollar amount that would get you to play Russian roulette? No, no, no. Yeah, me neither. No. We should probably wrap up soon. Even more than a lot of other episodes, this episode went in a whole bunch of different directions, which is great. I love it. But I do want to make sure to bring up at least one more thing before we go. Just because, in these 60 minutes of knowing you or whatever, I think you would enjoy these two books. I've been talking about one of them incessantly, so, dear listener, I apologize if you've been hearing me talk about this on and on. There are these two books by David Deutsch, The Fabric of Reality and The Beginning of Infinity. And the reason I thought of those, specifically The Fabric of Reality, is because in the beginning he talks about, like, caveman times or whatever, and how in modern times people are such hyper-specialists. Like, many, many years ago, if you were a doctor, you were just a doctor, a general practitioner, because that was the only kind of doctor there was, and now you're a cardiologist or whatever, and there are even subspecialties inside of that that I don't even know what they are. And it's like now there's so much more stuff to know that you couldn't possibly be a Renaissance man like Leonardo da Vinci or whatever. But then he goes on to kind of refute that idea, which I thought was really interesting. My memory of it is foggy, but basically he talks about the distinction between collecting facts and gaining understanding. You can collect all these superficial, disconnected facts about whatever, and this is just me talking and paraphrasing now.
These aren't the words of the author, but you can just collect all these facts, like, I don't know, the Sears Tower is 1700 feet tall or whatever. But that doesn't really give you any understanding; you just know these facts. Or you can gain some broad-reaching pieces of understanding, and then you don't have to know nearly as many facts, because that bit of understanding has so much reach that on balance it's way more useful than these collections of disconnected facts. I thought that was really interesting. And so, even before encountering that, I've tried to be a generally educated person all around, not just in programming. And, you know, there's that concept of a T-shaped person, where you're really deep in one area and also a bit broad. I guess I try to be that, but maybe a T-shaped person with a really thick top line of the T. Because very few people, and I include myself, even though I thought I was, very few people are actually scientifically literate. I don't even know if I could consider myself scientifically literate, although I'm continually working to get more so, and statistically literate and stuff like that. And those pieces of understanding are so helpful. So that, for example, when you talk to a doctor or something like that, if you have a deep, strong understanding of the fundamentals of the universe and stuff like that, you're going to have a better chance of putting whatever you hear through a filter of critical thinking and ending up with the correct answers.

SPEAKER_01:

Yeah, yeah. That makes a lot of sense. That's um that's very interesting.

SPEAKER_02:

Yeah.

SPEAKER_01:

I'll check out the book.

SPEAKER_02:

Yeah, and The Beginning of Infinity talks a lot about computation, not really about programming, more just computation and physics and all this stuff. Again, based on knowing you for the last hour, it seems like stuff you would be interested in.

SPEAKER_01:

It does, it definitely sounds like stuff that I would be interested in. It sounds right up my alley. So thank you for the recommendations.

SPEAKER_02:

Yeah. Well, I was intending for us to talk more about Hotwire. I'd love to have you back again sometime and we can get deeper into that, if you're up for it.

SPEAKER_01:

I've really enjoyed the conversation. I would definitely be very happy to come back anytime; just ping me. And believe it or not, after writing the book, I also had the intention of talking about Hotwire, but this was a very interesting conversation. It went off on a tangent, but quite a lot of your episodes that I've listened to go like that, right?

SPEAKER_02:

Yeah, yeah, and people tell me they enjoy the tangents, so I'm not apologizing one bit about that. Okay, before... you know what?

SPEAKER_01:

Yeah, AI would not go on a tangent, I guarantee it. So we've got an upper hand on that. Yeah.

SPEAKER_02:

That's true. AI wouldn't have talked about, I don't know, the Chinese room experiment or whatever tangents we got off on. I don't even remember; we went so many places. Before we go, do you want to mention your book again and where people can find it and all that?

SPEAKER_01:

Yeah, so the book is Master Hotwire, and it's at masterhotwire.com. Very simple. That's where the book is. And actually, do you mind, can I give a little gift to the listeners? Please do. If you use the code JASON, you'll get a 30% discount, to commemorate this. This is actually my first podcast appearance ever. So yeah, I don't know how much it shows, but I was actually quite nervous.

SPEAKER_02:

Uh oh, not at all.

SPEAKER_01:

So yeah, use the code JASON for 30% off until... I don't know, when do you think this is gonna be out?

SPEAKER_02:

Oh, I don't know. Um maybe March or even April of 2025. I don't know why I said 2025. I sure hope it'll be this year.

SPEAKER_01:

I'll put it until the end of May. So that's: use code JASON for 30% off until the end of May on masterhotwire.com.

SPEAKER_02:

Yeah, and from one author to another, if you just make it open-ended, I think the risk is very low that you'll lose your shirt on all the people coming and using it.

SPEAKER_01:

You've convinced me. I'll make it open-ended. Yeah. You have more experience than me: you've published a complete book, and I'm still finishing this one. And actually, as you know, books rarely make it rain. So, uh, the other benefits.

SPEAKER_02:

Big time. Awesome. Well, I've really enjoyed this conversation. I look forward to us talking again. Radan, thanks so much for coming on the show.

SPEAKER_01:

Thank you so much for having me.