UX for AI

EP. 93 - Why Intentional Creators Will Win in the New Era of Building w/ Robin Jose

Bonanza Studios

Send us a text

Ready to build smarter, not louder? Robin Jose breaks down how AI tools are leveling the playing field—and why true creators win by building with intention, not just chasing clout. 

Interested in joining the podcast? DM Behrad on LinkedIn:
https://www.linkedin.com/in/behradmirafshar/

This podcast is made by Bonanza Studios, Germany’s Premier Digital Design Studio:
https://www.bonanza-studios.com/

1
00:00:00,000 --> 00:00:02,440
[MUSIC PLAYING]

2
00:00:02,440 --> 00:00:05,300
Welcome to UX for AI.

3
00:00:05,300 --> 00:00:07,380
Robin, we finally made it happen.

4
00:00:07,380 --> 00:00:08,180
Really happy.

5
00:00:08,180 --> 00:00:11,680
Finally, it's been a while that we've been thinking about this.

6
00:00:11,680 --> 00:00:13,140
Yeah, I've been persistent.

7
00:00:13,140 --> 00:00:14,060
I keep moving it.

8
00:00:14,060 --> 00:00:14,880
But yeah.

9
00:00:14,880 --> 00:00:16,560
Yeah, I've been persistent.

10
00:00:16,560 --> 00:00:18,880
I really wanted to have you on.

11
00:00:18,880 --> 00:00:21,540
Although, time is short, but I think

12
00:00:21,540 --> 00:00:25,720
it's worth just having you in the room locked in

13
00:00:25,720 --> 00:00:29,320
and just tapping into your thoughts about,

14
00:00:29,320 --> 00:00:35,040
where this thing is going, 2025, one quarter in.

15
00:00:35,040 --> 00:00:37,640
So maybe start from there.

16
00:00:37,640 --> 00:00:41,360
How have you perceived the first quarter?

17
00:00:41,360 --> 00:00:44,720
What's the progress that you've been monitoring?

18
00:00:44,720 --> 00:00:48,520
And for all of you who are not following Robin on LinkedIn,

19
00:00:48,520 --> 00:00:52,040
the link is in the description, you should follow him.

20
00:00:52,040 --> 00:00:54,640
His content is top notch when it comes to AI.

21
00:00:54,640 --> 00:00:59,760
So yeah, maybe give us a basically 360 degree analysis

22
00:00:59,760 --> 00:01:04,960
of where this thing is going and is that surprising you?

23
00:01:04,960 --> 00:01:09,020
Is that aligned with your prediction of this year?

24
00:01:09,020 --> 00:01:09,520
Yeah.

25
00:01:09,520 --> 00:01:10,000
Thank you.

26
00:01:10,000 --> 00:01:12,120
First of all, thanks for the kind words on my LinkedIn

27
00:01:12,120 --> 00:01:12,620
content.

28
00:01:12,620 --> 00:01:14,040
I really appreciate that.

29
00:01:14,040 --> 00:01:17,640
Look, the progress has been absolutely breakneck, right?

30
00:01:17,640 --> 00:01:20,760
So every time I would say that, look, this is great.

31
00:01:20,760 --> 00:01:21,600
I love it.

32
00:01:21,600 --> 00:01:22,920
They come up with something new.

33
00:01:22,920 --> 00:01:25,840
And same as the case with OpenAI, for example, this week, right?

34
00:01:25,840 --> 00:01:27,060
So we've got 4.1.

35
00:01:27,060 --> 00:01:31,660
We've got o4-mini and o3 as well.

36
00:01:31,660 --> 00:01:35,240
And every time I feel like there is a benchmark that's

37
00:01:35,240 --> 00:01:38,360
beaten, there's a new thing that comes along and just surprises.

38
00:01:38,360 --> 00:01:43,560
But to me, I think the biggest one in 2025 has been Gemini 2.5.

39
00:01:43,560 --> 00:01:44,640
Yeah.

40
00:01:44,640 --> 00:01:47,540
Honestly, I'd written Gemini off, right, especially with two.

41
00:01:47,540 --> 00:01:48,920
I mean, Flash was good.

42
00:01:48,920 --> 00:01:51,520
It was good to build some agent code, especially because it

43
00:01:51,520 --> 00:01:53,800
was so fast and cheap.

44
00:01:53,800 --> 00:01:56,160
But 2.5 came and changed all that, right?

45
00:01:56,160 --> 00:01:59,560
So it's currently the best coding tool available, as can

46
00:01:59,560 --> 00:02:01,800
be seen on SWE-bench, right?

47
00:02:01,800 --> 00:02:04,060
And there's, of course, hundreds of benchmarks.

48
00:02:04,060 --> 00:02:04,800
This is a problem.

49
00:02:04,800 --> 00:02:06,720
We'll talk about the benchmarks in a minute.

50
00:02:06,720 --> 00:02:09,760
But it is the best current coding tool available, although

51
00:02:09,760 --> 00:02:12,560
Claude 3.7, for example, is very close.

52
00:02:12,560 --> 00:02:14,120
And it was surprising.

53
00:02:14,120 --> 00:02:17,160
It was surprising because a lot of people, including me, have

54
00:02:17,160 --> 00:02:20,840
been saying that, look, Gemini is never there with the top

55
00:02:20,840 --> 00:02:23,080
models, although they've been making all these claims.

56
00:02:23,080 --> 00:02:25,640
But for the first time, they actually came and did it, and

57
00:02:25,640 --> 00:02:27,360
it's been phenomenal.

58
00:02:27,360 --> 00:02:31,220
And it's like my favorite coding tool, coding model today.

59
00:02:31,220 --> 00:02:32,500
It's very creative.

60
00:02:32,500 --> 00:02:33,560
It's very creative.

61
00:02:33,560 --> 00:02:36,440
And of course, there's a huge advantage of 1 million

62
00:02:36,440 --> 00:02:41,000
context window, although now GPT-4.1 has come out, API

63
00:02:41,000 --> 00:02:43,440
only, and it also has a 1 million context window.

64
00:02:43,440 --> 00:02:45,280
I've not done extensive tests for it.

65
00:02:45,280 --> 00:02:48,520
Although I actually just downloaded Windsurf only for

66
00:02:48,520 --> 00:02:51,320
this, and I was curious to test out the 4.1.

67
00:02:51,320 --> 00:02:53,920
That seems to be the easiest way currently for anyone, and

68
00:02:53,920 --> 00:02:57,440
this is useful for the listeners as well.

69
00:02:57,440 --> 00:03:01,320
If you ever want to try out 4.1 for free, it's available for a

70
00:03:01,320 --> 00:03:02,800
week with Windsurf.

71
00:03:02,800 --> 00:03:04,720
Just download Windsurf, and you're good to go.

72
00:03:04,720 --> 00:03:06,800
You'll get it for at least the next five days.

73
00:03:06,800 --> 00:03:07,840
So go ahead and try it.

74
00:03:07,840 --> 00:03:08,840
It's absolutely good.

75
00:03:08,840 --> 00:03:10,320
So it has 1 million context.

76
00:03:10,320 --> 00:03:14,400
So overall, the real trend that I'm seeing is, number one, the

77
00:03:14,400 --> 00:03:17,880
models, especially in terms of benchmark and in terms of

78
00:03:17,880 --> 00:03:20,520
practical usage, are getting better and better.

79
00:03:20,520 --> 00:03:23,760
So Gemini 2.5 Pro to me is currently the best model

80
00:03:23,760 --> 00:03:25,400
available right now if you're into coding.

81
00:03:25,400 --> 00:03:28,480
And actually, in many of the other tasks as well, Claude 3.7

82
00:03:28,480 --> 00:03:29,320
is pretty good.

83
00:03:29,320 --> 00:03:32,160
GPT-4.1 is pretty good, API only.

84
00:03:32,160 --> 00:03:34,080
And the tools are getting better.

85
00:03:34,080 --> 00:03:36,720
All of these systems that we are using, I know you've done

86
00:03:36,720 --> 00:03:38,920
quite a bit of experiments with vibe coding.

87
00:03:38,920 --> 00:03:40,880
I see your posts, and some of them are quite amazing.

88
00:03:40,880 --> 00:03:42,360
I haven't tracked all of them.

89
00:03:42,360 --> 00:03:46,280
Although it seems like now you're definitely going to be a

90
00:03:46,280 --> 00:03:48,760
much better developer than I ever will be with all the

91
00:03:48,760 --> 00:03:49,840
things you're doing.

92
00:03:49,840 --> 00:03:51,760
And look, there are so many of them, right?

93
00:03:51,760 --> 00:03:53,560
So I've not used all of them, to be honest.

94
00:03:53,560 --> 00:03:56,880
So I tried experimenting a little bit with Lovable,

95
00:03:56,880 --> 00:03:59,320
but I'm more of a VS code person.

96
00:03:59,320 --> 00:04:00,720
So this is what I've been using.

97
00:04:00,720 --> 00:04:04,840
So I'm a little bit more biased towards Cursor or Windsurf

98
00:04:04,840 --> 00:04:09,160
because they are closer to the VS code experience that I have.

99
00:04:09,160 --> 00:04:10,020
But they're good.

100
00:04:10,020 --> 00:04:10,880
Replit is good.

101
00:04:10,880 --> 00:04:12,000
Bolt is great.

102
00:04:12,000 --> 00:04:13,240
Then there's v0.

103
00:04:13,240 --> 00:04:14,680
There's so many of these systems.

104
00:04:14,680 --> 00:04:18,880
And there's never been a better time to learn or use coding.

105
00:04:18,880 --> 00:04:20,400
And that's amazing.

106
00:04:20,400 --> 00:04:21,480
It's a great time to be building.

107
00:04:21,480 --> 00:04:25,000
And this is just the tip of the iceberg, as I see.

108
00:04:25,000 --> 00:04:29,000
The next quarters will bring even

109
00:04:29,000 --> 00:04:30,520
more capabilities.

110
00:04:30,520 --> 00:04:31,680
So that's one side.

111
00:04:31,680 --> 00:04:35,120
The second side is that it's been getting a lot cheaper,

112
00:04:35,120 --> 00:04:37,840
right, in terms of using these LLMs as well.

113
00:04:37,840 --> 00:04:41,200
So when o1 came along, I said, well, o1 is great.

114
00:04:41,200 --> 00:04:41,920
This is good.

115
00:04:41,920 --> 00:04:43,560
It has really good capabilities.

116
00:04:43,560 --> 00:04:45,840
But can I really use that?

117
00:04:45,840 --> 00:04:46,580
Expensive.

118
00:04:46,580 --> 00:04:48,220
But that's not the case anymore, right?

119
00:04:48,220 --> 00:04:50,440
o4-mini has come out, and it's really, really cheap.

120
00:04:50,440 --> 00:04:54,480
So it's, I think, 4x cheaper than o1 ever was.

121
00:04:54,480 --> 00:04:56,200
And it's just a trend.

122
00:04:56,200 --> 00:04:59,440
If you look at GPT 4.1, same story.

123
00:04:59,440 --> 00:05:03,800
It's actually $2 per 1 million input tokens

124
00:05:03,800 --> 00:05:06,320
and $8 for output tokens.

125
00:05:06,320 --> 00:05:08,240
It's really, really cheap compared

126
00:05:08,240 --> 00:05:11,600
to what it was when GPT-4 Turbo came out.

127
00:05:11,600 --> 00:05:13,880
And it's a significantly more capable model.
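
Taking the pricing quoted here at face value ($2 per million input tokens, $8 per million output tokens; check OpenAI's current price list, since these numbers change), the cost of a call is simple arithmetic:

```python
# Back-of-envelope cost estimate using the GPT-4.1 pricing quoted in
# this conversation (figures as stated here, not authoritative).

INPUT_PRICE_PER_M = 2.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 8.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single API call."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Feeding a 200k-token codebase and getting a 10k-token answer:
print(round(request_cost(200_000, 10_000), 2))  # 0.48
```

So a whole-codebase prompt that would have been prohibitively expensive in the GPT-4 Turbo era now costs well under a dollar at these rates.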

128
00:05:13,880 --> 00:05:15,480
And the whole thing, probably that's

129
00:05:15,480 --> 00:05:18,440
where we have to come to the last part of what got me excited,

130
00:05:18,440 --> 00:05:20,720
is the whole open source thing.

131
00:05:20,720 --> 00:05:23,240
The whole Llama 4 controversy aside,

132
00:05:23,240 --> 00:05:25,360
and I think that was a really bad launch.

133
00:05:25,360 --> 00:05:26,360
It was a horrible launch.

134
00:05:26,360 --> 00:05:27,520
Absolutely bad launch.

135
00:05:27,520 --> 00:05:30,440
But other than that, I think the open source momentum

136
00:05:30,440 --> 00:05:32,560
gained a lot of--

137
00:05:32,560 --> 00:05:35,500
I mean, they are now frontier models.

138
00:05:35,500 --> 00:05:37,760
With DeepSeek and Qwen and everything,

139
00:05:37,760 --> 00:05:41,520
it's really gone to this level where they are highly respected.

140
00:05:41,520 --> 00:05:42,840
As top tier models.

141
00:05:42,840 --> 00:05:45,480
And this is maybe a little bit worrying to OpenAI,

142
00:05:45,480 --> 00:05:46,840
and Anthropic maybe.

143
00:05:46,840 --> 00:05:50,640
But for us, I mean, the little folks,

144
00:05:50,640 --> 00:05:52,160
we should be very excited.

145
00:05:52,160 --> 00:05:54,680
There is a lot of different places

146
00:05:54,680 --> 00:05:56,880
I could take this conversation to.

147
00:05:56,880 --> 00:06:00,880
But let's just stay on the open source versus closed source

148
00:06:00,880 --> 00:06:03,520
for a minute, I think that's very interesting.

149
00:06:03,520 --> 00:06:07,160
Although open source models are catching up really fast,

150
00:06:07,160 --> 00:06:10,320
that's what we see, at least in my experience.

151
00:06:10,320 --> 00:06:11,520
My experience is limited.

152
00:06:11,520 --> 00:06:13,640
That's why I would like to pick your brain on it.

153
00:06:13,640 --> 00:06:16,760
The closed models that we have, which is, I'm not sure,

154
00:06:16,760 --> 00:06:18,880
but is Gemini open source?

155
00:06:18,880 --> 00:06:20,020
It's not.

156
00:06:20,020 --> 00:06:23,720
OK, so then Gemini, Claude Sonnet, and GPT,

157
00:06:23,720 --> 00:06:29,280
still, they are providing the most reliable, useful outputs

158
00:06:29,280 --> 00:06:31,720
that I have experienced.

159
00:06:31,720 --> 00:06:34,900
Although open source is leveling the playing field,

160
00:06:34,900 --> 00:06:37,660
but when it comes to output and reliability,

161
00:06:37,660 --> 00:06:39,960
the closed model is still, in my opinion,

162
00:06:39,960 --> 00:06:42,140
in my experience, again, you probably

163
00:06:42,140 --> 00:06:46,080
have deeper knowledge in there, is the one I could rely on.

164
00:06:46,080 --> 00:06:49,080
So what's your take on, OK, open source

165
00:06:49,080 --> 00:06:50,640
is leveling the playing field.

166
00:06:50,640 --> 00:06:52,240
It's catching up.

167
00:06:52,240 --> 00:06:55,000
But still, the closed models are the ones

168
00:06:55,000 --> 00:06:57,440
to go to when you want to actually build an application.

169
00:06:57,440 --> 00:06:58,560
First of all, I agree.

170
00:06:58,560 --> 00:07:00,400
And I think I mentioned it as well, right?

171
00:07:00,400 --> 00:07:03,920
Gemini 2.5 Pro is my favorite model right now for coding.

172
00:07:03,920 --> 00:07:05,280
And this is a closed model.

173
00:07:05,280 --> 00:07:07,360
There are some open models they've released,

174
00:07:07,360 --> 00:07:10,640
like Gemma by Google, but 2.5 is their premium model.

175
00:07:10,640 --> 00:07:11,880
And that's closed source.

176
00:07:11,880 --> 00:07:14,300
And the other ones, which are currently at the top,

177
00:07:14,300 --> 00:07:17,640
Claude 3.7 Sonnet is also a closed-source model.

178
00:07:17,640 --> 00:07:18,880
So you're absolutely right.

179
00:07:18,880 --> 00:07:23,220
In terms of the top tier models, they are still closed source.

180
00:07:23,220 --> 00:07:25,880
For now, we're just talking about coding models, right?

181
00:07:25,880 --> 00:07:29,920
And if you think about it from a large perspective,

182
00:07:29,920 --> 00:07:32,560
let's say, for example, like DeepSeek, right?

183
00:07:32,560 --> 00:07:35,840
Not necessarily as a coding model, but as a frontier LLM

184
00:07:35,840 --> 00:07:39,640
model, which is used widely for many things, it's pretty good.

185
00:07:39,640 --> 00:07:42,740
I won't say it's at the top, but the real question is that,

186
00:07:42,740 --> 00:07:44,520
is that 99% as good?

187
00:07:44,520 --> 00:07:47,120
Or maybe let's give it a little bit more wiggle room, right?

188
00:07:47,120 --> 00:07:50,000
95% as good as the top tier models?

189
00:07:50,000 --> 00:07:53,240
And then I would absolutely say yes, without any worries.

190
00:07:53,240 --> 00:07:56,260
And, of course, there's questions about, where does the data go

191
00:07:56,260 --> 00:07:59,160
and everything, those we can handle later.

192
00:07:59,160 --> 00:08:03,360
For 95% of the users, it's almost as good as the closed

193
00:08:03,360 --> 00:08:04,160
source models.

194
00:08:04,160 --> 00:08:06,440
For coding specifically, I do agree with you, right?

195
00:08:06,440 --> 00:08:09,400
So I use Gemini, I use Claude, I've tried a little bit.

196
00:08:09,400 --> 00:08:14,080
As I said, I downloaded Windsurf two days back and I'm just

197
00:08:14,080 --> 00:08:16,760
testing the 4.1, it's definitely better.

198
00:08:16,760 --> 00:08:18,960
And of course, one of the biggest problems with coding

199
00:08:18,960 --> 00:08:21,880
is also that it needs a larger context, right?

200
00:08:21,880 --> 00:08:25,600
So it needs, I mean, one million is kind of absolutely needed.

201
00:08:25,600 --> 00:08:28,160
This is one place I think Cloud will do better,

202
00:08:28,160 --> 00:08:31,720
because currently, 3.7 Sonnet is still at 200k.
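
As a rough illustration of why window size matters for whole-repo coding, one can estimate whether a codebase fits a given context with the common (and crude) four-characters-per-token heuristic; a real count would need the model's own tokenizer, e.g. tiktoken for OpenAI models:

```python
from pathlib import Path

# Crude check of whether a codebase fits a model's context window.
# Assumes ~4 characters per token, which is a rough heuristic only.
CHARS_PER_TOKEN = 4

def estimate_repo_tokens(root: str, exts=(".py", ".js", ".ts")) -> int:
    """Sum up characters in source files and convert to tokens."""
    total_chars = sum(
        len(p.read_text(errors="ignore"))
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix in exts
    )
    return total_chars // CHARS_PER_TOKEN

def fits(tokens: int, window: int = 1_000_000) -> bool:
    # Leave ~20% headroom for the prompt and the model's answer.
    return tokens < window * 0.8
```

By this estimate, a mid-size project easily blows past a 200k window while still fitting comfortably inside 1 million, which is the difference being discussed here.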

203
00:08:31,720 --> 00:08:35,400
Typically, if you ask, for example, let's talk about

204
00:08:35,400 --> 00:08:36,160
SWE-bench, right?

205
00:08:36,160 --> 00:08:39,360
SWE-bench I really like. I mean, I'm not a huge fan of benchmarks

206
00:08:39,360 --> 00:08:41,540
because benchmarks can be very deceiving.

207
00:08:41,540 --> 00:08:45,880
We saw the whole Llama controversy, right?

208
00:08:45,880 --> 00:08:51,120
They just released a highly customized version of Llama 4

209
00:08:51,120 --> 00:08:54,720
only for LMArena, and this is not even the real model, right?

210
00:08:54,720 --> 00:08:57,840
And they just said, every time somebody asks a question, it's

211
00:08:57,840 --> 00:08:59,560
like, that's an amazing question.

212
00:08:59,560 --> 00:09:02,560
You ask, you know, what should I have for dinner?

213
00:09:02,560 --> 00:09:03,860
What a wonderful question, right?

214
00:09:03,860 --> 00:09:05,000
This is so brilliant.

215
00:09:05,000 --> 00:09:06,580
Let me tell you that answer.

216
00:09:06,580 --> 00:09:10,300
Naturally, people like it, but it's just gaming the benchmarks.

217
00:09:10,300 --> 00:09:15,920
If you look at SWE-bench, it provides GitHub source code, right?

218
00:09:15,920 --> 00:09:21,720
And then it asks for a code fix on a real GitHub repository, and then it's

219
00:09:21,720 --> 00:09:27,440
running the tests and checking, you know, how well the

220
00:09:27,440 --> 00:09:33,280
LLM has solved the problem. And a larger context window keeps all the code, you know,

221
00:09:33,280 --> 00:09:36,640
in context and, you know, proposes a more holistic solution.
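
The evaluation loop described here, applying the model's proposed fix to a real repository and running the project's tests, can be sketched roughly like this. It's a simplification of the real SWE-bench harness (which pins environments and runs specific fail-to-pass tests), and the function name is hypothetical:

```python
import subprocess

def evaluate_patch(repo_dir: str, patch_file: str) -> bool:
    """Rough sketch of a SWE-bench-style check: apply the model's
    proposed fix to a repository, then run the test suite to see
    whether the issue is actually resolved."""
    applied = subprocess.run(
        ["git", "apply", patch_file],
        cwd=repo_dir, capture_output=True,
    )
    if applied.returncode != 0:
        return False  # the patch didn't even apply cleanly
    tests = subprocess.run(
        ["pytest", "-q"], cwd=repo_dir, capture_output=True,
    )
    return tests.returncode == 0  # "solved" only if the suite passes
```

A model's SWE-bench score is then just the fraction of benchmark instances for which this kind of check succeeds.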

222
00:09:36,640 --> 00:09:39,860
And 2.5 has a 1 million context window.

223
00:09:39,880 --> 00:09:42,800
In certain cases, a 2 million context window makes a huge difference.

224
00:09:42,800 --> 00:09:48,640
And not surprisingly, it also has the highest rating on SWE-bench.

225
00:09:48,640 --> 00:09:50,640
I don't remember the exact numbers.

226
00:09:50,640 --> 00:09:52,760
I think it's 65% or something.

227
00:09:52,760 --> 00:09:54,120
So, I agree.

228
00:09:54,160 --> 00:09:58,120
I also use closed-source models, but open-source models are catching up.

229
00:09:58,120 --> 00:10:02,400
And I'm pretty sure, hopefully, they'll have a newer version of Llama.

230
00:10:02,480 --> 00:10:06,600
I think it's a capable model, despite all this, you know, all the things that have

231
00:10:06,600 --> 00:10:10,080
happened, which is very unfortunate, they should have a newer version, which is very

232
00:10:10,080 --> 00:10:13,800
capable because in the end, it still has a 1 million context going up to a 10 million

233
00:10:13,800 --> 00:10:18,600
context. And I'm sure they will figure out a way to make that work much, much better,

234
00:10:18,600 --> 00:10:23,680
right? Probably they'll even close the gap or, you know, even overtake the

235
00:10:23,680 --> 00:10:24,560
closed-source models.

236
00:10:24,560 --> 00:10:25,480
So, I'm waiting for that.

237
00:10:25,600 --> 00:10:30,560
I have high confidence in that 10 million context window eventually will lead to that

238
00:10:30,560 --> 00:10:32,600
capability. Yeah, it's not.

239
00:10:32,600 --> 00:10:33,800
The jury is still out, right?

240
00:10:33,800 --> 00:10:34,720
So, we need to wait for that.

241
00:10:34,800 --> 00:10:37,120
So, let me just look ahead.

242
00:10:37,880 --> 00:10:43,880
Right now, actually, I started, I was actually coding till like last night, midnight, with

243
00:10:43,880 --> 00:10:47,840
4.1, Robin. It's actually really good.

244
00:10:47,960 --> 00:10:50,440
It's very systematic.

245
00:10:50,440 --> 00:10:58,360
So, I've been coding an MVP with Claude Sonnet 3.7, and have now handed it off to ChatGPT.

246
00:10:58,400 --> 00:11:05,480
It's really approaching my build, actually doing a lot of robust refactoring that makes

247
00:11:05,480 --> 00:11:08,400
my application a lot faster and smoother.

248
00:11:08,560 --> 00:11:11,200
So, I actually coded with Gemini 2.

249
00:11:11,640 --> 00:11:14,760
Gemini is very creative, absolutely creative.

250
00:11:14,760 --> 00:11:21,800
Sonnet 3.7 is a bit like a raging bull, in my opinion. It just tries to do things.

251
00:11:21,800 --> 00:11:28,200
Sometimes it does really well, but doesn't have that systematic, "OK, let me look at

252
00:11:28,200 --> 00:11:32,320
your code and build very systematically" approach that 4.1 has. Right?

253
00:11:32,520 --> 00:11:37,640
So, these models are strong, but it's going to get to a point, let's say three months,

254
00:11:37,640 --> 00:11:44,240
six months, nine months, a year from now, that I really think that Claude Sonnet 4 is

255
00:11:44,240 --> 00:11:49,120
going to make it possible for anyone to build applications and launch them.

256
00:11:49,280 --> 00:11:55,280
Right? Let's say we are at that point in the future that either closed model or open

257
00:11:55,280 --> 00:12:03,600
models get to a point that allows a business owner with a very primitive coding understanding

258
00:12:03,800 --> 00:12:05,920
to launch their application.

259
00:12:05,960 --> 00:12:07,680
What would be the difference?

260
00:12:07,760 --> 00:12:14,680
Like, basically, in that future, you don't have to do so much except giving a brief and

261
00:12:14,680 --> 00:12:16,720
just going back and forth. Right?

262
00:12:16,800 --> 00:12:18,880
Everything else will be done with LLM.

263
00:12:19,040 --> 00:12:26,080
In that scenario, in that future, how would you... Obviously, you have an edge as a developer,

264
00:12:26,080 --> 00:12:26,920
as a CTO.

265
00:12:27,200 --> 00:12:27,680
What...

266
00:12:27,680 --> 00:12:31,200
Like, how would you be...

267
00:12:31,720 --> 00:12:41,040
Given that the LLM in that future does 99% of the work, what gives you the edge compared

268
00:12:41,040 --> 00:12:45,680
to a non-technical business owner that wants to build their application?

269
00:12:45,880 --> 00:12:49,600
Like... Because, like, you don't... You won't be doing any coding anymore.

270
00:12:49,600 --> 00:12:52,840
And also, the business owner won't be touching the code anymore.

271
00:12:52,840 --> 00:12:55,080
Just everything else will be done with LLM.

272
00:12:55,120 --> 00:12:58,680
But there is an edge for you because of your background.

273
00:12:58,680 --> 00:13:05,800
In that future, what will your role be, interfacing with the LLM, that gives you the

274
00:13:05,800 --> 00:13:07,800
edge compared to the business owner?

275
00:13:08,520 --> 00:13:09,640
I don't know if it makes sense.

276
00:13:10,400 --> 00:13:11,280
That's a great question.

277
00:13:11,280 --> 00:13:13,320
Actually, there's a lot of things to unpack in there.

278
00:13:13,320 --> 00:13:14,880
So it's going to be a long answer.

279
00:13:14,960 --> 00:13:16,880
Look, it's a question I've been asking myself.

280
00:13:16,880 --> 00:13:21,000
And I'm pretty sure it's a question a lot of people are asking themselves currently, right,

281
00:13:21,000 --> 00:13:21,960
in the current scenario.

282
00:13:22,040 --> 00:13:24,080
But before we go there, just a quick clarification.

283
00:13:24,080 --> 00:13:26,040
You said you really like 4.1, right?

284
00:13:26,040 --> 00:13:28,880
Because I've been experimenting just a day with 4.1.

285
00:13:28,880 --> 00:13:33,840
I really haven't done a solid evaluation to compare between 3.7 and 4.1.

286
00:13:34,200 --> 00:13:35,760
But from your experience, it's pretty good.

287
00:13:35,880 --> 00:13:36,560
It's good.

288
00:13:36,600 --> 00:13:41,280
Claude Sonnet is really fast at laying out the MVP, in my experience.

289
00:13:41,720 --> 00:13:44,840
But it doesn't approach the build systematically.

290
00:13:45,000 --> 00:13:51,280
But now, since I've been using 4.1, 4.1 scores less in creativity.

291
00:13:51,280 --> 00:13:57,120
In my opinion, I think Gemini 2.5 is ahead of Sonnet and 4.1.

292
00:13:57,320 --> 00:14:01,920
But 4.1 is really good at making my application smoother.

293
00:14:02,400 --> 00:14:08,640
And because, for example, I have a model I use in three different places in my application.

294
00:14:08,840 --> 00:14:13,120
Claude Sonnet did not suggest to me that, hey, you basically need to systematize it.

295
00:14:13,120 --> 00:14:15,880
You need to have one model used in three different places.

296
00:14:16,080 --> 00:14:17,880
And I had three different models.

297
00:14:17,960 --> 00:14:21,040
So whenever I wanted to update it, I have to update it three times.

298
00:14:21,080 --> 00:14:27,240
GPT-4.1 from the get-go said, you basically need to create one model and use it in three places.

299
00:14:27,440 --> 00:14:28,560
I was like, oh, thank you very much.

300
00:14:28,560 --> 00:14:29,440
That makes sense to me.
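
The refactor being described is the classic "single source of truth" move: define the model once and import it everywhere it's used, so a field change happens in one spot instead of three. A minimal sketch, with hypothetical names rather than anything from the actual codebase:

```python
from dataclasses import dataclass

# Before: the same shape copy-pasted into three screens, so every
# field change has to be made three times. After: one shared model
# (say, in models.py) imported by all three places that need it.

@dataclass
class User:  # hypothetical shared model, defined exactly once
    id: int
    name: str
    email: str

def profile_view(u: User) -> str:    # place 1: profile screen
    return f"{u.name} <{u.email}>"

def admin_row(u: User) -> tuple:     # place 2: admin table
    return (u.id, u.name)

def export_record(u: User) -> dict:  # place 3: data export
    return {"id": u.id, "name": u.name, "email": u.email}
```

Adding a field now means editing the `User` dataclass once, and every consumer picks it up.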

301
00:14:29,560 --> 00:14:29,880
Cool.

302
00:14:29,920 --> 00:14:30,240
OK.

303
00:14:30,440 --> 00:14:31,800
Yeah, that's exciting.

304
00:14:31,800 --> 00:14:33,880
So I'm actually trying to do a few more.

305
00:14:33,880 --> 00:14:37,760
And I'm actually vibe coding a new product.

306
00:14:37,960 --> 00:14:38,320
Right?

307
00:14:38,320 --> 00:14:42,120
And I'm planning to do it over the weekend, using 4.1 exclusively.

308
00:14:42,120 --> 00:14:44,480
And I'll be happy to report back on my experience as well.

309
00:14:45,000 --> 00:14:48,040
How many businesses are you running at the same time?

310
00:14:48,040 --> 00:14:48,600
My god.

311
00:14:48,720 --> 00:14:50,440
Well, they're not businesses.

312
00:14:50,440 --> 00:14:53,200
I'm just vibe coding different things and I'm learning a lot.

313
00:14:53,200 --> 00:14:53,560
Honestly.

314
00:14:53,680 --> 00:14:53,880
Right?

315
00:14:53,880 --> 00:14:58,760
So seriously, I have learned so much, and I have like two decades of coding experience.

316
00:14:58,880 --> 00:15:02,640
I've never learned so much in three months.

317
00:15:02,640 --> 00:15:03,480
Like it's crazy.

318
00:15:03,480 --> 00:15:04,760
It's crazy how much you can learn.

319
00:15:04,760 --> 00:15:05,960
I don't think I've learned

320
00:15:06,200 --> 00:15:09,400
what I learned in these three months even in, like, five years, right?

321
00:15:09,400 --> 00:15:09,960
It's just great.

322
00:15:10,040 --> 00:15:10,360
OK.

323
00:15:10,400 --> 00:15:12,880
I need to ask you before we jump into that.

324
00:15:13,040 --> 00:15:13,200
Yeah.

325
00:15:13,200 --> 00:15:15,160
What are you learning that's new?

326
00:15:15,160 --> 00:15:20,880
Because I just went through your LinkedIn profile and all the stuff that you have done in the past years.

327
00:15:21,120 --> 00:15:23,760
I don't think anything else is there for you to learn.

328
00:15:24,000 --> 00:15:31,680
So as a non-coder myself, when you said that, I asked myself immediately, what's there for Robin to learn?

329
00:15:31,840 --> 00:15:32,800
Well, you're very kind.

330
00:15:32,880 --> 00:15:35,640
Programming is like a vast, crazy thing, right?

331
00:15:35,640 --> 00:15:37,960
For the past few years,

332
00:15:37,960 --> 00:15:40,880
I'm not exactly what I'd call a hands-on programmer.

333
00:15:40,880 --> 00:15:42,600
So I, of course, know my way around.

334
00:15:42,600 --> 00:15:50,760
I have, you know, pretty good database experience, pretty good with data tools, but I'm not good at frontend.

335
00:15:50,760 --> 00:15:51,560
And I never tried it.

336
00:15:52,320 --> 00:15:53,280
Was never interested.

337
00:15:53,280 --> 00:15:53,560
Right.

338
00:15:53,560 --> 00:15:53,880
Interesting.

339
00:15:54,080 --> 00:15:58,400
And now I'm building applications, you know, using React.

340
00:15:58,400 --> 00:16:01,800
I'm using Supabase, and I never did React before.

341
00:16:01,880 --> 00:16:03,600
I still don't do React that much.

342
00:16:03,600 --> 00:16:03,840
Right.

343
00:16:03,840 --> 00:16:05,560
But at least I understand JavaScript very well.

344
00:16:05,560 --> 00:16:06,240
I know Java.

345
00:16:06,480 --> 00:16:09,480
So I have a little bit of an idea about JavaScript.

346
00:16:09,480 --> 00:16:12,720
So now with React, I can actually code good frontend.

347
00:16:12,720 --> 00:16:20,680
I also don't need a solid Node background, because I can do it, especially on the app side of things, where I've never built an app before directly.

348
00:16:20,760 --> 00:16:27,880
Of course, indirectly, yes, I ask my teams to build apps and I understand what's happening there, but this is the first time I'm actually doing this without the help of any team.

349
00:16:27,880 --> 00:16:33,880
And for example, last week, I've been trying with Supabase and Expo to just build an app myself.

350
00:16:33,880 --> 00:16:34,080
Right.

351
00:16:34,080 --> 00:16:39,520
I'm just making sure that it runs as a web app as well as on iOS and Android.

352
00:16:39,560 --> 00:16:41,200
Of course with React Native.

353
00:16:41,200 --> 00:16:44,480
And yeah, doing the frontend was something that I'd not done before.

354
00:16:44,480 --> 00:16:45,640
Now I can do that pretty easily.

355
00:16:47,240 --> 00:16:48,360
That reminds me, right.

356
00:16:48,360 --> 00:16:59,640
So one of the things we should remind ourselves is that you and me are not typical founders who are just coming in and starting to vibe code from day one.

357
00:16:59,640 --> 00:16:59,960
Right.

358
00:17:00,120 --> 00:17:02,440
We do have a technology background.

359
00:17:02,720 --> 00:17:10,680
So if it throws an error at us, we can, even without the help of the system, still figure out where the problem is.

360
00:17:10,720 --> 00:17:11,680
Maybe it's faster.

361
00:17:11,680 --> 00:17:13,680
I mean, I still use the help of the system, right.

362
00:17:13,720 --> 00:17:15,800
Rather than just trying to figure it out myself.

363
00:17:15,800 --> 00:17:19,600
I just copy paste the error into the system and it just tells me.

364
00:17:19,920 --> 00:17:21,600
But even with that, we can do that.

365
00:17:21,600 --> 00:17:22,800
We can solve that problem.

366
00:17:22,800 --> 00:17:24,880
That gives us a little bit of advantage.

367
00:17:24,880 --> 00:17:27,760
And then we may be thinking that this is very simple.

368
00:17:27,760 --> 00:17:33,360
Maybe that's not the case, because I've seen people who vibe code systems; up to the MVP it's fine.

369
00:17:33,480 --> 00:17:36,680
The moment you push that into production, things change.

370
00:17:36,720 --> 00:17:37,040
Right.

371
00:17:37,240 --> 00:17:39,880
And then we start talking about maintaining the code base.

372
00:17:39,880 --> 00:17:43,200
We start talking about scaling the production and so on and so forth.

373
00:17:43,200 --> 00:17:44,600
And it's clearly a different ball game.

374
00:17:44,960 --> 00:17:46,200
So that's where I want to start.

375
00:17:46,320 --> 00:17:49,840
And even if you look at any of these benchmarks, let's talk about a few benchmarks.

376
00:17:49,840 --> 00:17:50,080
Right.

377
00:17:50,080 --> 00:17:57,840
So we talked about SWE-bench; now SWE-bench is about taking a GitHub repository and making some fixes in that repository.

378
00:17:57,880 --> 00:18:02,360
And then it runs the testing to make sure how good this fix has been.

379
00:18:02,560 --> 00:18:07,720
And if you look at the top tier ones, even Gemini 2.5 Pro as an example, it's at 65%.

380
00:18:07,920 --> 00:18:11,440
Which means that it can only solve 65% of the problems.

381
00:18:11,440 --> 00:18:13,680
There's still about 35% which is unsolved.

382
00:18:14,120 --> 00:18:17,240
And that's an important thing to keep in mind. And who solves them?

383
00:18:17,760 --> 00:18:18,560
The human developers.

384
00:18:18,560 --> 00:18:18,840
Right?

385
00:18:18,840 --> 00:18:22,240
So they are not limited the way the models are.

386
00:18:22,240 --> 00:18:23,080
They can experiment.

387
00:18:23,080 --> 00:18:25,200
They can go to the internet.

388
00:18:25,200 --> 00:18:27,600
They can search and they ask people.

389
00:18:27,720 --> 00:18:28,440
They figure it out.

390
00:18:28,440 --> 00:18:31,600
They spend time on it and they actually solve these problems.

391
00:18:31,680 --> 00:18:33,040
So that's the first thing to keep in mind.
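For readers following along, the resolved-rate arithmetic behind a SWE-bench-style score can be sketched in a few lines of Python. The task IDs and pass/fail outcomes below are invented for illustration, not real benchmark data; only the scoring logic is the point.

```python
# Sketch of how a SWE-bench-style "resolved rate" is tallied:
# each task is a GitHub issue, and a fix counts as resolved only if
# the repository's test suite passes after applying the model's patch.
# Task names and outcomes here are made up for illustration.
results = {
    "repo-a/issue-101": True,   # model patch made the tests pass
    "repo-b/issue-202": False,  # tests still fail -> left for humans
    "repo-c/issue-303": True,
    "repo-d/issue-404": False,
}

resolved = sum(results.values())
resolved_rate = resolved / len(results)

print(f"resolved {resolved}/{len(results)} tasks = {resolved_rate:.0%}")
print(f"unresolved share: {1 - resolved_rate:.0%}")
```

With a top model at a 65% resolved rate, the same arithmetic leaves 35% of tasks in the unresolved bucket that, as discussed above, human developers still have to close out.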

392
00:18:33,080 --> 00:18:33,240
Right?

393
00:18:33,240 --> 00:18:34,480
Some of these benchmarks.

394
00:18:34,800 --> 00:18:36,840
So, for example, SWE-Lancer.

395
00:18:37,000 --> 00:18:48,080
So this is a benchmark that's created by OpenAI to showcase the capabilities of an LLM to solve not necessarily just code related problems, but also design problems.

396
00:18:48,320 --> 00:18:57,360
So the way that this is done and forgive me for telling you something that's obvious to you, but I guess, you know, at least hopefully that's useful for the listeners to keep that in context.

397
00:18:57,760 --> 00:19:00,480
So SWE-Lancer was built by OpenAI in February.

398
00:19:00,480 --> 00:19:03,840
It takes about 1,400 or 1,500.

399
00:19:03,880 --> 00:19:04,960
I don't remember exactly now.

400
00:19:05,080 --> 00:19:08,800
Software engineering tasks from Upwork, real tasks, real tasks.

401
00:19:09,000 --> 00:19:12,000
And these tasks are valued at around 1 million in payouts.

402
00:19:12,000 --> 00:19:15,000
So all of this 1400, 1500, whatever that number is.

403
00:19:15,040 --> 00:19:15,280
Right?

404
00:19:15,320 --> 00:19:17,960
It just adds up to about a payout of 1 million.

405
00:19:18,160 --> 00:19:19,240
And these are all from Upwork.

406
00:19:20,040 --> 00:19:25,160
And then they give them to the models and see how much they solve and how much money they gain.

407
00:19:25,320 --> 00:19:25,560
Right?

408
00:19:25,560 --> 00:19:28,080
That's an interesting benchmark because I really like how they do it.

409
00:19:28,440 --> 00:19:33,440
And the result of SWE-Lancer is that you get a number, right?

410
00:19:33,440 --> 00:19:39,240
So, for example, GPT-4.1 earned a certain amount out of the possible 1 million in terms of solving these problems.

411
00:19:39,520 --> 00:19:41,360
And these problems are not just coding problems.

412
00:19:41,400 --> 00:19:45,800
They could be simple bug fixes, but it could be total full-fledged feature implementations.

413
00:19:46,040 --> 00:19:47,920
It could also be management decisions.

414
00:19:47,920 --> 00:19:51,520
Should I go for React Native or should I go for Angular or...

415
00:19:52,280 --> 00:19:54,240
I mean, should I go for React, not React Native?

416
00:19:54,240 --> 00:19:56,080
React or Angular or Vue.js?

417
00:19:56,440 --> 00:19:59,200
So that's, those are architecture level problems.

418
00:19:59,200 --> 00:20:04,040
Should I use Supabase, or should I just stick to classic Node plus other stuff, and so on?

419
00:20:04,520 --> 00:20:06,120
So this could be management decisions.

420
00:20:06,120 --> 00:20:07,800
This could be complex feature implementations.

421
00:20:07,840 --> 00:20:09,160
This could be simple bug fixes.

422
00:20:09,400 --> 00:20:10,960
And now some of the best tools.

423
00:20:11,080 --> 00:20:17,880
Gemini, I don't know the Gemini number, but I think if I look at GPT-4.5,

424
00:20:17,880 --> 00:20:21,560
I think it solved around 186K, out of the possible 1 million.

425
00:20:22,200 --> 00:20:25,680
These are problems that are already solved by humans on Upwork.
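The payout-based scoring comes down to simple arithmetic; here is a minimal sketch using the rough figures quoted in this conversation (about 186K earned out of a possible 1 million in task payouts) — the numbers are approximate, and the variable names are the sketch's own.

```python
# Sketch of SWE-Lancer-style scoring: every task carries a real
# freelance payout, and a model "earns" the payout of each task it
# solves. Figures mirror the rough numbers quoted here (~$186K earned
# out of a possible $1M) and are illustrative, not official results.
total_payout_usd = 1_000_000   # combined value of all tasks
earned_usd = 186_000           # payouts of the tasks the model solved

earned_fraction = earned_usd / total_payout_usd
print(f"earned ${earned_usd:,} of ${total_payout_usd:,} "
      f"= {earned_fraction:.1%} of the possible payout")
```

So the model captures a bit under a fifth of the available payout, which is why the speakers later round it to figures like "one tenth" and "180K" in conversation.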

426
00:20:25,840 --> 00:20:31,400
So while it's great, it's great to have these systems, because, you know,

427
00:20:31,520 --> 00:20:34,080
I can build like I've never built before.

428
00:20:34,120 --> 00:20:36,920
And the speed to production is phenomenal.

429
00:20:37,240 --> 00:20:40,400
We should accept the fact that it still has huge limitations.

430
00:20:40,640 --> 00:20:43,520
What happens if I just go out of their comfort zone?

431
00:20:43,520 --> 00:20:44,320
How do I solve it?

432
00:20:44,320 --> 00:20:46,200
How do I put things into production and so on.

433
00:20:46,520 --> 00:20:50,480
That being said, let's really talk about the key question that you asked, right?

434
00:20:50,480 --> 00:20:51,920
Like, what is your edge?

435
00:20:52,080 --> 00:20:57,480
And I think one of the things that's happening is that traditionally we look at, we look at software

436
00:20:57,480 --> 00:21:02,680
development in terms of product and software engineering, and product would look at, are these features

437
00:21:02,680 --> 00:21:03,520
customers want?

438
00:21:03,520 --> 00:21:04,760
Would we have PMF?

439
00:21:04,960 --> 00:21:06,320
Is that experience great?

440
00:21:06,320 --> 00:21:07,360
Do they come back?

441
00:21:07,480 --> 00:21:09,080
And how much should I spend on this?

442
00:21:09,080 --> 00:21:10,920
Which one should I prioritize and so on?

443
00:21:11,400 --> 00:21:15,720
And the engineering side is primarily looking at how to implement it in the best possible way.

444
00:21:16,080 --> 00:21:17,920
How fast do I implement it, and at what quality?

445
00:21:19,360 --> 00:21:23,040
I think those are going to merge, at least for the smaller organizations and even for larger

446
00:21:23,040 --> 00:21:28,680
organizations. Because, I mean, you might start by thinking, I'm like a UX

447
00:21:28,680 --> 00:21:29,480
expert, right?

448
00:21:29,480 --> 00:21:31,280
Or I'm a product manager, as an example.

449
00:21:31,720 --> 00:21:34,600
But you're not a product manager or a UX expert anymore.

450
00:21:34,640 --> 00:21:35,680
We are actually building stuff.

451
00:21:36,040 --> 00:21:37,480
You are an end to end.

452
00:21:37,600 --> 00:21:42,800
So like a full stack developer, I would say a full stack builder or something like that, right?

453
00:21:43,080 --> 00:21:48,880
Which means that you're not just looking at, you're creating those UI in, like, I don't know where

454
00:21:48,880 --> 00:21:51,040
it is, maybe Figma, maybe somewhere else, right?

455
00:21:51,400 --> 00:21:57,280
But you can also go and just copy three of those, you know, three of those Figma, you know,

456
00:21:57,320 --> 00:22:01,360
screens into ChatGPT and say, I want to create a new interface, right?

457
00:22:01,400 --> 00:22:05,560
Create an interface which is very similar to this and give me some ideas on how the screen would

458
00:22:05,560 --> 00:22:08,240
look like. And ChatGPT's image generation can actually do that for you.

459
00:22:08,360 --> 00:22:09,920
I'm not saying that they're replacing people.

460
00:22:10,200 --> 00:22:11,200
Not yet. Not yet.

461
00:22:11,360 --> 00:22:13,040
Right. But at some point of time, it will.

462
00:22:13,040 --> 00:22:16,680
And currently I'm using them for some of my MVPs that I'm building.

463
00:22:16,760 --> 00:22:18,720
I'm not a great Figma designer, by the way.

464
00:22:18,720 --> 00:22:24,080
I know my way around it, but I'm not exactly great at it, nor do I enjoy it particularly.

465
00:22:24,240 --> 00:22:29,880
Right. It's a lot easier for me to just take some of the screens and bring them to ChatGPT and

466
00:22:29,880 --> 00:22:34,320
say, I want to build something new, just keep the same theme and same colors and everything.

467
00:22:34,440 --> 00:22:39,080
And it just builds a few new suggestions for me in terms of creating the screen.

468
00:22:39,360 --> 00:22:40,400
It does a pretty decent job.

469
00:22:40,520 --> 00:22:42,120
It's not always sticking to the colors.

470
00:22:42,240 --> 00:22:46,960
I've seen that the color scheme is actually pretty bad even in the newer version, but it does that job

471
00:22:46,960 --> 00:22:51,720
pretty decently. And then it's a great place for me to start and then just copy paste that over.

472
00:22:52,080 --> 00:22:56,360
And then I can just go to V0 and then say, just build this for me as an interface.

473
00:22:56,360 --> 00:22:57,360
And it does a great job.

474
00:22:57,600 --> 00:23:05,680
So I think the various parts of building a startup or a company, in terms of there is product

475
00:23:05,680 --> 00:23:10,920
management, there is user experience, there is prioritization.

476
00:23:10,960 --> 00:23:13,080
And then there is actually the part of building.

477
00:23:13,280 --> 00:23:16,080
They're all merging, especially for the smaller organizations.

478
00:23:16,080 --> 00:23:22,040
Right. They're merging. I guess, two years back, Sam Altman said,

479
00:23:22,040 --> 00:23:25,600
look, the next billion-dollar company in the future could be just one or two people.

480
00:23:25,600 --> 00:23:28,120
Right. We are a lot closer to that.

481
00:23:28,120 --> 00:23:31,000
At that point of time, people probably would have thought bullshit.

482
00:23:31,000 --> 00:23:32,360
Right. It's still not there.

483
00:23:32,880 --> 00:23:35,600
But I think we are going a lot closer to that.

484
00:23:35,920 --> 00:23:37,200
So what does it mean for me?

485
00:23:37,200 --> 00:23:40,800
I would just say one more thing, which is an important one, which came out last week.

486
00:23:40,800 --> 00:23:46,440
You probably read this as well: that was Tobi Lütke, the CEO of Shopify, right?

487
00:23:46,440 --> 00:23:49,680
His message to his team, which got leaked out.

488
00:23:49,840 --> 00:23:52,920
And then he just went to X and said, OK, you don't need to leak it.

489
00:23:52,920 --> 00:23:54,720
I'll leak it myself. He put that on X.

490
00:23:54,720 --> 00:24:01,760
And he said, before hiring a person, you have to come and prove it to me and your leadership,

491
00:24:02,040 --> 00:24:03,640
why this cannot be done by AI.

492
00:24:03,840 --> 00:24:06,480
And that's a very strong statement from a CEO.

493
00:24:06,480 --> 00:24:09,800
Of course, Shopify is going through this place where they have operational challenges.

494
00:24:09,800 --> 00:24:14,280
Maybe they want to optimize because their growth is slowing, or they're under pressure.

495
00:24:14,280 --> 00:24:17,920
But that playbook is going to be adopted by a lot of CEOs.

496
00:24:17,920 --> 00:24:22,320
A lot of CEOs. And Tobi, he's a great CEO, but he is a conservative one.

497
00:24:22,600 --> 00:24:28,080
So if you want to rank it, like if Elon Musk was on the extreme side of the spectrum,

498
00:24:28,080 --> 00:24:33,920
he's in the middle. So if he's saying that, it means that a lot of other CEOs are already saying it.

499
00:24:34,160 --> 00:24:35,680
But it doesn't get leaked.

500
00:24:35,680 --> 00:24:38,760
Absolutely. Tobi came out and said that very openly.

501
00:24:38,800 --> 00:24:41,880
My feeling is that a lot of other CEOs are actually implementing it.

502
00:24:41,880 --> 00:24:45,720
And hence all these layoffs that we're seeing, because there's been a lot of tech layoffs.

503
00:24:46,040 --> 00:24:48,960
They're not saying it, because that would make them look like bad people.

504
00:24:49,280 --> 00:24:55,440
They are doing it. And so the organizations are going to be a lot leaner in the future.

505
00:24:55,480 --> 00:25:01,520
It's not that AI is going to fire anybody, but there are going to be job displacements with AI.

506
00:25:01,560 --> 00:25:04,760
What he has mentioned is essentially a hiring freeze. Right.

507
00:25:04,920 --> 00:25:07,600
So he said, unless you convince me, you can't hire a person.

508
00:25:07,600 --> 00:25:11,440
And that's effectively a hiring freeze. What happens if all the companies adopt that?

509
00:25:11,440 --> 00:25:14,440
And so that's a problem that we really need to think about.

510
00:25:14,480 --> 00:25:17,040
And as of now, I really don't have an answer to that.

511
00:25:17,080 --> 00:25:24,000
I think, you know, my hypothesis here is that there is a point of no return, meaning that these models

512
00:25:24,000 --> 00:25:32,680
are going to get so good that, to the brilliant point you made, there are going to be one-man

513
00:25:32,800 --> 00:25:36,720
businesses or one-woman businesses that are going to take off.

514
00:25:36,760 --> 00:25:46,280
And you are already seeing it, like Cursor: a team of less than 12, a team of less than 10, a small team, a very small team.

515
00:25:46,720 --> 00:25:54,440
So I think we are trending towards that future very fast that there is going to be a point of no return.

516
00:25:55,680 --> 00:26:02,640
That one-man or one-woman businesses are going to take off, because they have all the tools, like right now.

517
00:26:02,640 --> 00:26:05,800
For like two weeks now,

518
00:26:05,800 --> 00:26:11,920
I've been using GPT-4o for all of our graphic design stuff, and it does a really good job.

519
00:26:12,160 --> 00:26:21,080
It's not mind-blowing, it's not perfect, but we are not at, like, Nike level, where we want everything to be perfect and on brand.

520
00:26:21,400 --> 00:26:25,240
We are good. We are at the good enough stage. It's good enough.

521
00:26:25,440 --> 00:26:28,600
So my bet here, Robin, and I'm being very transparent:

522
00:26:28,680 --> 00:26:36,720
If I don't get to the bottom of becoming a builder, knowing all the tools I need, experimenting with them enough.

523
00:26:36,760 --> 00:26:44,200
So if that point of no return comes and I am not very well versed with my tool stack,

524
00:26:44,200 --> 00:26:47,200
I'm going to miss that gold mine opportunity.

525
00:26:47,280 --> 00:26:55,240
And it's going to be a very big divide between people that can build smoothly from A to Z with these tools and people

526
00:26:55,240 --> 00:27:02,040
that are still experimenting and exploring and asking what to do with this, and which tool they should be using.

527
00:27:02,240 --> 00:27:08,080
So my best bet is not to miss that gold rush, which is going to come at some point.

528
00:27:08,320 --> 00:27:12,360
Right now, it's already there, though in my opinion it's still a bit more fiction than reality.

529
00:27:12,520 --> 00:27:18,040
I love that example you gave of the test that OpenAI ran with the Upwork tasks.

530
00:27:18,080 --> 00:27:24,680
Right now, it's like one tenth, but there is going to be a point where it covers three fourths of all the tasks.

531
00:27:24,680 --> 00:27:29,240
And that's huge. That's huge. That's game over. That's a done deal.

532
00:27:29,520 --> 00:27:37,240
There is no point for bigger companies. In my opinion, there is going to be a rush of one man, one woman businesses that are going to take off.

533
00:27:37,240 --> 00:27:41,640
They're going to take off. It's like overnight success stories.

534
00:27:41,800 --> 00:27:44,400
We already see it. It is going to accelerate even further.

535
00:27:44,480 --> 00:27:47,480
100% agree. I think the point of no return is already here.

536
00:27:47,480 --> 00:27:49,720
I think you alluded to that. It's already here.

537
00:27:49,720 --> 00:27:51,560
It's just a matter of time.

538
00:27:51,720 --> 00:27:55,880
Frankly, if we had run that example six months back, what would the answer be, right?

539
00:27:55,880 --> 00:28:00,160
Just these 1,400 or 1,500 Upwork tasks, and how much does it gain?

540
00:28:00,240 --> 00:28:05,680
I'm pretty sure it would have been very close to zero six months back.

541
00:28:06,120 --> 00:28:16,200
Right. But now we are at 180K. I'm pretty sure in another six months that would cross 300K or 500K or something like that.

542
00:28:16,200 --> 00:28:21,760
That means it could do 50% of the tasks, which, if you think about it, means the progress has been crazy.

543
00:28:21,760 --> 00:28:30,760
In terms of the capability to replace tasks that are done by humans, at this kind of scale,

544
00:28:30,760 --> 00:28:35,120
I mean, there's never been such an incident in our history.

545
00:28:35,480 --> 00:28:38,440
And when something like that happens, we are at crossroads.

546
00:28:38,640 --> 00:28:42,360
And like you said, you can pretend that this is not happening.

547
00:28:42,360 --> 00:28:45,800
You can just try to hide away. But that's only going to take you so far.

548
00:28:45,800 --> 00:28:54,320
The real opportunity is to be aware of it, to really understand what's happening.

549
00:28:54,320 --> 00:29:01,120
And when it's gaining even more momentum, then you are in a place where you really have experience, really have.

550
00:29:01,120 --> 00:29:06,040
I'll just go back to one of the things that Steve Jobs said, I guess in the Stanford speech, right?

551
00:29:06,040 --> 00:29:10,320
You cannot connect the dots looking into the future.

552
00:29:10,320 --> 00:29:15,320
You can only connect the dots looking back at whatever you did in the past.

553
00:29:15,320 --> 00:29:21,400
You can't connect the dots right now. I'm probably doing a bad job of the quote.

554
00:29:21,400 --> 00:29:27,960
But he gave an explanation of why he was interested in fonts, typography and many of these things.

555
00:29:27,960 --> 00:29:31,320
And at that point of time, they seemed like irrelevant things.

556
00:29:31,320 --> 00:29:45,200
Right. It was just a hobby, nothing more. But when he built Apple and really built that user experience, it all came together, because all of that collective experience helped make Apple one of the best user experience companies.

557
00:29:45,200 --> 00:29:51,120
And this is why people still love Apple, right? Because of the user experience. And it's the same thing here.

558
00:29:51,120 --> 00:29:56,760
You will be able to connect those dots in the future, as long as you are building.

559
00:29:56,760 --> 00:30:05,200
That's my recommendation to pretty much everyone, because look, it's a challenge as a junior engineer currently to search for a job.

560
00:30:05,200 --> 00:30:11,280
The thing is, most of these companies work the traditional way, like when I started my career. Right.

561
00:30:11,280 --> 00:30:17,640
So, I started as a junior engineer writing code, and it's like a huge code base because you're working.

562
00:30:17,640 --> 00:30:33,000
I was working for Siemens, and we had this whole network management system. I don't know how many millions of lines of code, but there was a lot of code. Right. And with the good old systems, if you had to compile the entire code base, it would sometimes take a few hours. It was all in C++.

563
00:30:33,000 --> 00:31:02,600
So, when you start as a junior engineer, the first one to one and a half years, you're primarily just fixing bugs, because the others are making sure that you don't screw up something badly. So, they're giving you things so that you get familiar with the code base. You understand the different components of the code base and everything. No microservices at this point; I'm really talking about a while back. And they're just ensuring that you're learning. So, there's a learning curve, and then you become a mid-level engineer and a senior engineer and so on. And this was useful. Right.

564
00:31:02,600 --> 00:31:32,560
So, it was useful because the senior engineers didn't want to spend time on repetitive bug fixes and so on. So, the junior engineers were helpful in terms of taking that away from them. But what happens now is that with LLMs and everything getting so good at this, right, the repetitive tasks can easily be done by the LLMs. Which means that you don't need junior people, which means that it's going to be a big challenge, especially for people who are starting their careers. And it's not just for programmers. You mentioned design, right?

565
00:31:32,560 --> 00:32:02,480
So, it's the case for artists. If you want to create some art, some marketing material, most of the image generation and video generation tools can do a pretty good job of imitating a junior-level artist. Which means that the jobs for junior-level artists are going to go down. So, it cuts across fields. This is a problem, right? Because if you don't train the junior-level artists or junior-level programmers, how will they ever become senior-level? Right? So, there are some things that we need to solve. But at the same time, for everyone to experiment,

566
00:32:02,480 --> 00:32:07,600
to build and really understand what, what are the opportunities out there that's priceless.

567
00:32:07,600 --> 00:32:12,640
That's going to help them connect the dots in the future. And if they're not doing that,

568
00:32:12,640 --> 00:32:17,040
it doesn't matter if you are like, you're a dinosaur like me, who has, you know, two

569
00:32:17,040 --> 00:32:20,800
decades of experience, which is practically at this point of time, people should say, stop coding

570
00:32:20,800 --> 00:32:26,480
and just retire and go, go somewhere else, but it doesn't matter, right? So it gives all of them

571
00:32:26,480 --> 00:32:33,520
superpowers. You don't have to necessarily be great, great at a specific language or a specific

572
00:32:33,520 --> 00:32:39,040
library, because a lot of that, you know, a lot of that work can be done by the LLM for you.

573
00:32:39,040 --> 00:32:42,400
And that's amazing. That's powerful. We just don't know how to use it.

574
00:32:42,400 --> 00:32:48,720
I really love how you started with Steve Jobs and the thing he said about

575
00:32:48,720 --> 00:32:53,440
Apple, that you don't know where you're going, you just need to work with it. And then the dots will

576
00:32:53,440 --> 00:32:58,800
be connected as a result and brought it on the individual level and basically turned into a

577
00:32:58,800 --> 00:33:04,720
message for juniors and the ones that are starting or under like mid-senior level. You don't really

578
00:33:04,720 --> 00:33:09,360
need to connect the dots where it's taking you, but your best bet is to experiment. And learning.

579
00:33:10,240 --> 00:33:15,840
A lot of learning. And Robin, I really appreciate finally you made it happen. I know your time is

580
00:33:15,840 --> 00:33:21,440
short. Thanks a lot for being here and engaging with me. It's been a very, very insightful

581
00:33:21,440 --> 00:33:24,800
conversation. Thanks for watching. Awesome. It's been great talking to you, Behrad. Thanks for having me. Thank you.

582
00:33:25,520 --> 00:33:31,120
Thank you for listening to UX for AI. Join us next week for more insightful conversations

583
00:33:31,120 --> 00:33:39,040
about the impact of artificial intelligence in development, design and user experience.