Real World Serverless with theburningmonk

#42: Real-World Serverless with Ant Stanley (part 1)

December 23, 2020 Yan Cui Season 1 Episode 42

You can find Ant on Twitter as @IamStan.

Links to things we discussed in the episode:

Senzo workshops in January:

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.

Opening theme song:
Cheery Monday by Kevin MacLeod


Yan Cui: 00:11  

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real world practitioners and get their stories from the trenches. Today I'm joined by a good friend of mine, Ant Stanley. Hey, man. 

Ant Stanley: 00:24  

Hey, thanks for having me on. You are [inaudible] up. What's taking you so long?

Yan Cui: 00:28  

I was waiting for you to release Interrupt before I got you on here to talk about it. So I guess maybe we can start with a quick introduction of what you've been doing and how you got into the serverless space. I mean, you were one of the earliest people I know that got really deep into serverless. I remember you coming into the Just Eat office and giving a talk, I think it was 2015, about how you guys were doing everything with serverless at A Cloud Guru, and you were doing like an experiment and built something in like a day. So maybe, yeah, talk about your history and how you got into serverless.

Ant Stanley: 01:06  

Yeah. So, the serverless thing was almost accidental. So, mid 2015 myself, Ryan Kroonenburg and his brother Sam Kroonenburg started A Cloud Guru. At the time I was the only person full time on the business. Ryan and Sam, because they had families and mortgages, which I didn't have at the time, couldn't really afford to take the risk at that point to go full time on the business until it was more stable. So basically we didn't have a lot of time to do things like set up servers, set up infrastructure and all of that. So Sam built the majority of the platform and decided, let's try this Lambda thing, and this was just after API Gateway became available. Lambda was announced in 2014; API Gateway got announced, I think, June or July of 2015. So, July, August, Sam built the first version of our school, the A Cloud Guru school. We went Lambda purely because it looked like we'd have very little operations burden; the idea of not having to run a server was very appealing. I wrote a blog post at the time saying API Gateway would be a transformative release, purely because now you could use Lambda without any kind of workaround to access Lambda functions publicly. You could access them in a secure, efficient way. And it would allow us to work in a serverless way. It was the first time I'd ever used the word "serverless", and I used it with the dash, because back in the day serverless was server-less. But yeah, we basically adopted it purely because we didn't have time. It was more out of necessity than trying to be cutting edge. And then later on, when we started blogging about it, we realised that not many people were blogging about it. We'd also get to the point where we'd hit a bug, go to Stack Overflow and do a search for Lambda or serverless or whatever. Well, not even serverless, serverless wasn't a thing yet, just Lambda, and there'd be almost nothing.
And then you'd put your problem into Stack Overflow and you'd have Tim Wagner or AJ respond to your issue, the GM and product manager for Lambda at the time. So that's kind of how we got into it. We started to speak about it because of two things: one, we had a business called A Cloud Guru, so we needed to prove to the world that we were cloud gurus, so we wanted to show what we'd done. The other bit was that, you know, we realised no one else was talking about it at the time. Sam gave the first serverless talk in November 2015, at the AWS user group in Dublin, Ireland, and I gave another serverless talk at the end of November 2015 to the London user group about how we built the platform. And then a few weeks later I gave it again at Just Eat, after Pete Mounce grabbed me at that meetup: "Hey, do you want to give this talk again?" Yeah, so that's kind of how we got into it. It's been a bit of a ride since then. I've done all kinds of things. I left A Cloud Guru in July 2016, took a bit of time off, and then started to do a bit of serverless consulting, and have ended up using every major serverless platform out there. I still use Lambda actively today and, yeah, it's changed a lot. It's been a journey.

Yan Cui: 04:52  

Yeah, let's come on to the different platforms in a minute. But let's talk about what you've been doing since you left Cloud Guru. Are you building anything nowadays? You know, we've been working together on the Senzo side of things. I guess that's all serverless as well, right?

Ant Stanley: 05:10  

Yeah, it's all serverless. So what happened since I left Cloud Guru: I took a few months off, because running a startup is quite stressful, particularly when you start with three people. So I took a break for a couple of months. In 2017 I still carried on running the London user group, which I'd started mid 2016. In 2017 there was supposed to be a serverless conference in Amsterdam that got cancelled. So myself, Paul Johnson and James Thomas decided, let's run a small conference in London, and it took us about six weeks to organise the first JeffConf, which later became ServerlessDays. So part of what I've been doing since then has been helping run ServerlessDays globally, and ServerlessDays London. And then I've done a few consulting gigs in between all of that. And then this year I started Senzo Homeschool. Senzo was intended to be a classroom-based learning and events business, the idea being, you know, working with people like yourself, instructors who have a reputation in the industry, to help them run physical, in-person, classroom-based training. The idea being we looked after everything: we booked the venue, we handled all the finances, all the administration around it, and the instructor just turned up and taught, and the idea was scaling that out to the world. And also running small niche tech conferences with that, and I started that with Hana Civik. And that started to go well. And then the world shut down, and doing anything in person has been less than ideal, so we did a very rapid pivot to Homeschool, which is an attempt to recreate that online, without having someone stuck in front of Zoom for eight hours a day, with these workshop-based platforms. And that's all serverless. So it's been good to get my hands dirty again with serverless workloads.

Yan Cui: 07:02  

Yeah, for any listener who is interested in checking out some of the courses that are available: Senzo, or rather Senzo Homeschool, has got a number of workshops in January for both containers and serverless. And they're going to be taught by people like myself, Vlad, who's also an AWS Container Hero, as well as Slobodan and Alexander Simovic, who are also Serverless Heroes. And I guess, Ant, you've got a new course as well on Node.js for serverless.

Ant Stanley: 07:31  

Yeah, yeah. So the Node.js for serverless course is effectively in beta now, so if you buy any of the other courses, you'll get it for free. It's going to be available from mid September. It's focused on folks who already know how to write code but don't know Node.js. As you know, Node.js has the largest ecosystem of plugins and tutorials and everything for serverless. The majority of serverless functions out there are written in Node. That doesn't mean you have to write Node, but Node is kind of the easy path for serverless. What I've found is that a lot of people want to get into using serverless functions, and they find that most of the tutorials out there, most of the tools out there, are on Node.js. So they want to learn Node, and the course is aimed at those folks. It's a journey I went on very painfully back in 2015, before Node had promises, never mind async await; trying to figure out how those callbacks worked was interesting. So the idea with that course is to ease that transition into learning Node. It'll be about 10, 11 hours of content in total. And, yeah, it should be a short, quick course that you can do to get yourself up to speed really quickly with Node.js, so when you go and do other courses, or you want to work on production, you understand what people are talking about when they talk about Node.js-specific things and you're equipped to use Node.

Yan Cui: 09:02  

Yeah, I've really enjoyed the Homeschool format, and I think the conversation we've had on Discord, certainly the last two times we've run the workshop, has been great. There are just so many different topics and questions from people who are working with serverless and have specific questions that are outside the scope, but there are some really good, engaging conversations going on in those servers. And hopefully we have a good group again in January, so we can get those conversations going. So, you talked about Node.js, and it's certainly one of the major trends in the serverless space, being by far the most popular language people use. Are there any other trends in the industry where you've seen major movement in adoption patterns?

Ant Stanley: 09:47  

Yeah, I think there's a couple of things. One of the major ones we're seeing is adoption by front end developers; that's accelerating. You see, for example, the front end framework Svelte. The new version of Svelte, version four or whatever it is, coming out, not too sure when, is going to adopt a thing called adapters, which are essentially plugins for your infrastructure, and they're all going to be serverless based. You know, they're saying Svelte is going to be a serverless-first front end framework with this next iteration. That's super interesting because that's one of the upcoming frameworks. On the other side you've got Next.js; there are multiple plugins and libraries out there that enable you to run Next.js server side rendering either in edge functions or within your kind of central Lambda function. So I think front end adoption of serverless platforms is just going to keep accelerating; that's not going to slow down. I think that's one really big one. The other one, slightly related to that, is definitely going to be the adoption of edge functions. So things like Lambda@Edge, CloudFlare Workers. I did a quick check recently and counted nine different CDN providers who provide edge functions at this point in time, in various flavours, with Node.js or WebAssembly or whatever. Those two trends, which are both front end related and related to each other, are going to be big pretty soon, and they will dramatically change the way you build applications going forward, the way you think about your users, what latency looks like, these kinds of concerns.

Yan Cui: 11:30  

Yeah, so I released my AppSync Masterclass course on early access, and so far I'm seeing a lot of front end developers sign up to the course, which is slightly surprising to me, given that most of the people I know are back end developers. And one of the things I'm also noticing is that you have all these frameworks or libraries that are geared towards front end developers, like Amplify, Netlify and Vercel, that try to hide, or at least abstract away, a lot of the underlying complexity of AWS. Do you see a lot of adoption of these frameworks? And how do you see them comparing with each other, Amplify versus Netlify versus Vercel, and some of the other ones you mentioned which are, I guess, more focused on front end hosting as opposed to building an entire full stack application?

Ant Stanley: 12:24  

Yeah, definitely. So, yeah, edge functions will typically be about server side rendering on the edge, and I think you'll start to see some edge API type patterns appear. But a little bit away from that, on your question, in terms of these frameworks or platforms that essentially abstract away AWS to a large extent, because Netlify Functions is on top of AWS, Vercel is on top of AWS, Amplify is obviously also on top of Lambda, I think that's going to continue. There's this tension, because the AWS Lambda team want to consistently increase the use cases where Lambda can be used. You know, they want to remove all the objections people have for not using Lambda, and not just front end use cases, every use case. So folks will say, I can't use it for long-running workloads, or I can't use it for data science, whatever, and AWS will release something for Lambda that removes that objection. The problem is, every time they add a feature to remove an objection and make Lambda usable to a broader audience, they add complexity to Lambda. It's another toggle, another button or thing that needs to be changed or monitored, or that you have to do something to in order to use this new feature. So Lambda in 2014, 2015 was a very basic service; now it can be super complex, and there are a lot of features and buttons that people don't need. If you're a web developer, you don't need 80% of what Lambda is going to give you.
So having an abstracted platform like Vercel, which makes opinionated decisions about what you do and don't need, geared towards your particular use case, is great, because then you don't need to go into the depth of learning the AWS stack, which can be quite intimidating, particularly if you're a front end dev and all you want to do is stand up an API quickly with a single data source in the back end, that kind of thing. So I think, yeah, Vercel, Netlify, Amplify, even Apex Up from TJ Holowaychuk, there's a bunch of these services, and the fact that they simplify things, take away all the buttons you don't need to press and make it easy, is going to be huge, and they're going to keep getting bigger.

Yan Cui: 14:48  

Yeah, so I've seen a lot of people adopt serverless technologies through some of these frameworks. I guess one question I always have is, well, functions are just one small part of your overall application, and at some point you're going to be touching lots of other services, especially as your application becomes more feature rich and you need more and more of the capabilities that Amazon is able to offer through EventBridge and all of these other services. If you start off with something like Netlify or Vercel, how do you then escape from that? How do you graduate from that, so that you're able to use more of the capabilities that AWS offers? But also, how do you educate people about these other services they could use, and all the different features where they now realise, "oh, maybe you do need them after all"?

Ant Stanley: 15:39  

Yeah, that's an interesting one. Actually, I saw someone on Twitter the other day talking about how they were using, I can't remember which one of the services, but essentially they had a concurrency limit imposed on them by the service, and they were tweeting that Lambda can't go higher than this, and everyone said, actually you can, just put a request in. And then this person responded, "well, actually I'm using Vercel", and the answer was, "well, you need to ask Vercel to lift that for you, or you have to move off it". That's a case where you'd potentially have outgrown Vercel. I think the concurrency limit was almost 50,000, something like that, which is very high; most people won't hit that. But those folks who do will need to move to their own platform. I think from a code perspective there's not much to change, because it's all running on the same underlying platform. What will change is your deployment, how you deploy, and you'll have to do a little bit more on the infrastructure side, because Vercel stands up a bunch of infrastructure for you that you don't have to think about. So you need to understand things like CloudFormation, CDK or Terraform, whatever infrastructure-as-code tool. So there's definitely a learning curve to move off these things, because Vercel, Netlify, Amplify all take away a lot of pain for you. But if you outgrow them, you can't avoid that learning. So yeah, the big thing is you'd have to learn how infrastructure-as-code tools work. The other big one you'd probably have to learn is IAM, Identity and Access Management, to make sure you use least privilege. And also potentially implementing your own monitoring solution, or at least learning how the AWS monitoring suite works: X-Ray, CloudWatch, CloudWatch Logs in particular. Those are the three big areas you'd probably have to look at if you wanted to move off one of those platforms.
You know, those platforms give you a lot because they save you a lot of pain in those areas, but if you do outgrow them, you're going to have to invest and do that kind of stuff. That said, if you do outgrow them, you'll be at a scale where you should, in theory, be able to afford to hire someone to do it, or to take the time to do it yourself; that investment will be justified, shall we say.
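To illustrate the least-privilege idea Ant mentions, here is a minimal sketch of an IAM policy for a function that only reads and writes one DynamoDB table. The table name, account ID and region are placeholders, not anything from the episode:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LeastPrivilegeTableAccess",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/my-app-table"
    }
  ]
}
```

Instead of a wildcard like `dynamodb:*` on all resources, the function is granted only the three operations it actually performs, on one named table; this is exactly the kind of decision a platform like Vercel or Amplify makes for you and that you inherit when you move off it.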

Yan Cui: 17:53  

Okay, so on the front end side of things you've got a lot of frameworks to choose from, as well as all this different tooling like Amplify, Netlify and Vercel. How do you go about deciding which one to go with? Amplify has got its own magic, Netlify has got its own magic, and, I guess, Vercel must have its own front end integration as well to make building certain types of applications easier. What are some of the decision points that you'd go through when you're looking at, "okay, I'm building this Homeschool platform, I should choose one of them because of what?"

Ant Stanley: 18:31  

Yeah, it's interesting. I think, you know, choosing a front end framework is not a small task, to be honest. It's one of those things where, if you have skills, React skills for example, you'd want to stick with React. You don't want to let your back end dictate your front end framework. All of these providers, Vercel, Netlify, Amplify, etc., are opinionated to a certain extent, but not in terms of front end frameworks; they want to be as interoperable with all the different front end frameworks as possible. So Vercel, for example, are the core maintainers of a framework called Next.js, which is a React-based framework and the most popular React framework. React is a relatively foundational set of libraries, and Next.js is opinionated configuration on top of React. And so Vercel's hosting for Next.js is excellent, because they actually like the framework and they optimise Vercel for Next. But Next.js can be run anywhere, so, for example, Netlify has just released a build plugin for Next.js. So if you're running Next.js, you can use Netlify Functions to do server side rendering in a few clicks using Netlify. As I said, Svelte has kind of gone the other way. They're not waiting for providers to optimise for them; they're going to be building a bunch of build plugins that make it really easy for you to run your front end framework on any provider, and they're still working on that. I think they've got a Netlify plugin and one other available at the moment. But yeah, what you should look at, for me, is not really the technical aspects. It's more the community aspects. You know, how many blog posts are out there? If you asked a question on this framework, would you get an answer on Stack Overflow? Or is there an existing answer on Stack Overflow? Those things for me are actually bigger than the technical merits of individual platforms. It's just one of the pain points I've hit with Homeschool, which I built myself.
I don't know React; I know Vue to a certain level, and I've done some work with Svelte. I just found Svelte easier to pick up. But then I hit an issue with a library, a GraphQL library I was using with Svelte, and there were like no resources out there on it at all. It was quite a major pain point for me at the time. I was able to get it up and working, but particularly around how to do authenticated queries with this library, there was nothing. The docs were awful; well, there was no documentation on how to do authenticated queries, the documentation only covered unauthenticated queries. There was almost nothing on Stack Overflow, no blog posts. It was quite painful, actually. And that's because it wasn't a super popular library I was using; it was maintained by one person, and it got me. I've probably spent a bit too much time focusing on it. I'm actually swapping it out in the next couple of weeks for something that didn't exist 12 months ago but does exist now, and is significantly better documented, because there's a commercial company behind the library we're going to be using. So that's probably the biggest thing I'd look at: how much documentation, how many community resources are there on the configuration you're looking at? So if you want to run Next.js, for example, on Lambda@Edge, are there plugin libraries to help me? Are there good docs out there? Are there good blog posts from people who have done this kind of thing? That would probably be my biggest piece of advice: the technical stuff is secondary to the community stuff.

Yan Cui: 22:19  

That's such a great point. I've run into so many problems in the past myself when using technologies that are new, where there's just not a lot of help. You have to do a lot of your own legwork to figure out how to actually make the thing work the way you want it to. So on the last thing that you mentioned, Lambda@Edge, maybe let's switch gears a bit and talk about edge functions, because you mentioned earlier that a lot of the CDN networks already support some kind of edge functions. How do you see that particular market right now? Is CloudFlare Workers the clear leader right now? Their platform seems to be a lot more capable compared to Lambda@Edge. There are still pain points involved with using Lambda@Edge, and CloudFlare Workers has got all this other really cool stuff that you get as well, like the persistent workers, as well as the data store. Where do you see that space evolving?

Ant Stanley: 23:20  

Yeah, so I think this space is in very, very early days. I would have hoped Lambda@Edge would have kept up with CloudFlare Workers, but I would agree that CloudFlare Workers is definitely the leading platform. And it's the leading platform inasmuch as it's more than just Workers: you've got the KV store, which is a really basic key-value store that persists on the edge, and then you've got the persistent workers, which kind of give you state on the edge as well, within a worker, which is really interesting. It's also a super easy model, and it supports JavaScript and Wasm via the V8 engine. I've done a little bit of work on it, building some prototype stuff at the moment with it. It's super easy to use; it's almost too easy. I had a moment where, for example, if you use the KV store, you have to create bindings outside of the worker. There's no SDK to load or anything like that. In your worker configuration you bind your function to the KV store, and then you can just reference that KV store by name. There are no initialization steps or anything else like that, and that really confused me, because typically, if you're running Lambda or anything else, Azure Functions or Google Cloud, you bring in your library, you pass your library the connection details and all of that; you've got two or three lines of initialization-type code. With the KV store there's none of that. You create this binding outside of your code, and you just reference your KV store and do put and get operations on it. I think the confusion was it was almost too easy. It is a very basic service, but just being able to have storage on the edge does open up lots of things for you.
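The binding model Ant describes can be sketched roughly like this. This is a hedged illustration, not official Cloudflare sample code: in a real Worker, the `MY_KV` namespace would be injected by the platform from a binding declared in configuration (e.g. `wrangler.toml`), with no SDK import or connection setup. Here the binding is stood in by an in-memory mock so the handler logic is self-contained and runnable anywhere with a `Response` global (Node 18+ or a browser):

```javascript
// Stand-in for a Workers KV binding. In a real Worker, MY_KV would simply
// exist as a global, created by a binding outside the code; there are no
// initialization steps. The name "MY_KV" is illustrative.
const store = new Map();
const MY_KV = {
  async get(key) { return store.has(key) ? store.get(key) : null; },
  async put(key, value) { store.set(key, value); },
};

// A minimal handler: PUT stores the request body under the path,
// GET reads it back, all via plain put/get calls on the binding.
async function handleRequest(request) {
  const url = new URL(request.url);
  const key = url.pathname.slice(1) || "default";
  if (request.method === "PUT") {
    await MY_KV.put(key, await request.text());
    return new Response("stored", { status: 200 });
  }
  const value = await MY_KV.get(key);
  return value === null
    ? new Response("not found", { status: 404 })
    : new Response(value, { status: 200 });
}
```

The point of the sketch is what is missing: no client library, no connection details, no two or three lines of initialization, just named `get`/`put` calls on a binding that the platform wires up for you.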
I wouldn't ever use it as your source of truth, but, you know, locally cached storage, for data you're going to be accessing regularly, having that on the edge makes loads of sense. So they're definitely in the lead. The other interesting one is what Fastly is doing. Fastly's edge functions platform, it's not called Workers, I can't remember the exact name, uses Wasm, WebAssembly. Now, as much as CloudFlare Workers can use WebAssembly, it uses WebAssembly via the V8 engine; essentially all they've done with CloudFlare Workers is take V8 out of Chromium and run that on the edge, and V8 supports WebAssembly. What Fastly are doing is running their own WebAssembly runtime. There are two WebAssembly runtimes: one's called Wasmtime and the other, I think, is called Lucet. Fastly developed Lucet, and they've recently taken on the Wasmtime team from Mozilla, so I think those will combine and become one, and they run that Wasm runtime on the edge. Where this makes a difference is that a Wasm runtime's startup time is incredibly fast. The thing about V8 is that at startup it's loading a JavaScript engine and a Wasm engine, whereas a Wasm-only runtime just loads a Wasm engine. So you've seen folks getting nanosecond response times on Fastly's edge functions. And that looks really, really interesting. But, you know, shaving off a few nanoseconds to run a simple script only goes so far; it doesn't really matter if, for example, you're fetching data from a central database, because saving a few nanoseconds against a 10, 20 millisecond database call isn't going to change much in the end. But that's super interesting, and I'd like to see where that goes. Maybe we'll see more WebAssembly-based functions platforms. There are still maturity issues with WebAssembly, though, particularly on the community side.
At the moment, if you want Rust that compiles to WebAssembly, there's quite a bit out there, but if you want any other language to compile to WebAssembly, the toolchains aren't quite there, the documentation isn't quite there. So that still needs to mature. But yeah, what Fastly are doing with WebAssembly looks super, super interesting. So those are the two platforms I'd look at, really: CloudFlare and Fastly, two platforms for different reasons.

Yan Cui: 27:44  

Okay. And for the audience who haven't heard about WebAssembly, or at least haven't been keeping up to date on what it is, can you maybe quickly explain what it is and why it matters? Why should you bother with WebAssembly at all?

Ant Stanley: 27:59  

Yeah. So, WebAssembly... that's not the easiest question. Essentially, you get a compiled binary, but it's not a platform-specific binary file; a WebAssembly file can run anywhere you have a viable WebAssembly runtime. It's a little bit like a scripting language: the way a Python or JavaScript runtime interfaces with the local environment, with WebAssembly the WebAssembly runtime interfaces with the local environment, so in a browser that's V8, and on a server that would be Wasmtime or Lucet. But the key bit is you take your higher-level language, Rust, TypeScript, whatever it is, and there are a few WebAssembly compilers out there now, and it compiles down to WebAssembly, which, if you open up a WebAssembly file, looks a little bit like assembly. And then you can run that anywhere; you don't need to cross-compile WebAssembly for different platforms. And the big thing about it is that it's significantly more efficient than a typical scripted language, so it's got huge performance benefits. What was really interesting was TensorFlow.js, for example, the JavaScript implementation of TensorFlow, three months ago switched out its back end to WebAssembly. So to use TensorFlow.js, all the bindings, all your API calls, are in JavaScript, but then it hands those off to this WebAssembly back end that can now run anywhere, and they don't need to cross-compile it. You don't need to compile it for Linux, you don't need to compile it for Windows; it's the exact same WebAssembly code that can run anywhere as long as there's a compatible runtime, which is what's pretty powerful about it.
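The portability Ant describes can be shown with the classic minimal "add two integers" module, hand-written as raw Wasm bytes and instantiated through the standard WebAssembly JavaScript API. The exact same bytes run unchanged in a browser's V8, in Node.js, or in a server-side runtime, with no per-platform compilation:

```javascript
// A minimal WebAssembly module as raw bytes. It exports one function,
// "add", taking two i32 parameters and returning their i32 sum.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  // Type section: one function type, (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // Function section: one function, using type 0
  0x03, 0x02, 0x01, 0x00,
  // Export section: export function 0 under the name "add"
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // Code section: local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

// Compile and instantiate with the standard WebAssembly API,
// then call the exported function from JavaScript.
const wasmModule = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(wasmModule);
console.log(instance.exports.add(2, 3)); // 5
```

In practice you would compile these bytes from a higher-level language rather than write them by hand, but the calling side, JavaScript handing work off to a Wasm export, is exactly the pattern the TensorFlow.js back end swap relies on.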

Yan Cui: 29:53  

Yeah. I remember when it was first announced, someone did a demo showing how they could compile Doom to run on WebAssembly, and it was smooth; the frame rate was perfect. So I guess another question I had about what you talked about earlier was the KV store. I remember I spoke with Paul from StopForumSpam a while back, and he talked about the KV store quite extensively as well, because his whole API was hosted on the edge. And I remember he said that the data in the KV store was still coming from a us-east-1 data centre, even though it's accessible from all the functions on the edge. So, do you know how that works? Has it changed now, is the data actually available on the edge?

Ant Stanley: 30:39  

So what it looks like to me is that the KV store is essentially a cache. CloudFlare give you a guarantee that your data will be up to date within 60 seconds, so if you change one of your items in the KV store, it'll be globally available within 60 seconds. My guess is that there's a central data store somewhere, and the KV store is essentially a cache of that data on the edge. So a use case where you're going to have a lot more reads than writes is the right use case for it. If you've got a very high throughput database where you're going to have lots of writes, it's probably not a great use case, particularly where those writes have to be available to everyone else. So, for example, something like a chat app, using the KV store as your saved state for chat, is probably not the place to go. There, on the other hand, CloudFlare's persistent workers are probably the better route, because that's more like in-memory, on-the-edge storage of your state. So yeah, the KV store looks like a central store that's essentially replicated, or cached, on the edge.

Yan Cui: 31:57  

Okay, got it. And I guess, if it's just an edge cache, then that probably also means it's not great for data that is constantly being overwritten. So you read from the edge, you try to update it, and you have different actors at different edge locations all trying to update the same records. I guess they don't have CRDT support on the KV store, or do they?

Ant Stanley: 32:19  

No, no. They definitely don't have that today. Like I said, 60 seconds is the guarantee they give you on your data being up to date, which, for data that's highly dynamic and going to change a lot, is not fantastic. But like I said, if you've got a product catalogue, for example, you put that in your KV store, because that's not going to change, whereas for something like the quantities on a particular product, you make a call back to your central database. So...
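That read-path split, slow-changing catalogue data from the edge KV, fast-changing stock from the central database, might look something like the following sketch. The classes and method names here are hypothetical stand-ins, not a real Cloudflare or retailer API:

```python
class InMemoryKV:
    """Stand-in for an edge KV namespace (eventually consistent, ~60s stale)."""

    def __init__(self, data):
        self.data = data

    def get(self, key):
        return self.data.get(key)


class OriginDB:
    """Stand-in for the central, strongly consistent database."""

    def __init__(self, stock):
        self.stock = stock

    def get_stock(self, product_id):
        return self.stock[product_id]


def product_page(kv, db, product_id):
    # Slow-changing details: cheap edge read, staleness is acceptable.
    details = kv.get(f"product:{product_id}")
    # Fast-changing stock count: authoritative call back to the origin.
    stock = db.get_stock(product_id)
    return {**details, "in_stock": stock}
```

The design choice is simply to route each field by how often it changes: the catalogue entry tolerates a minute of staleness, the stock count does not.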

Yan Cui: 32:44  

Okay, got it. Do you remember there was this company called, I think, NearCloud? They had this thing called Kuhiro a few years ago, where they had essentially edge functions with CRDTs on the edge, so that you can do any updates you want, so long as they're permitted by the CRDT data structure, and they're guaranteed to be globally eventually consistent, because that's the guarantee you get with CRDTs. Any idea what happened to them? That idea seems so powerful, but it just never seemed to take off.

Ant Stanley: 33:14  

Yeah, I don't know. You see, one of the problems generally with functions is that it's not about how good the functions are themselves, it's about the platform they exist in. AWS's big strength is that you've got 100-plus event sources around it. So I guess, if you want to get adopted, you kind of need to live within a bigger ecosystem, because folks in that ecosystem adopt you first, as opposed to adopting you for how great your platform is. And on the CRDTs, I'm wondering how Tim Wagner's new company Vendia is working, because that's supposed to be distributed ledgers, distributed serverless computing. I'm wondering if that's using CRDTs. They've mentioned it's more distributed ledgers, so I'm guessing it's more blockchain-type tech, which isn't miles apart. I'm also wondering if Cloudflare's new Durable Objects are using CRDTs, because it kind of looks like they are. I'm not 100 percent sure, they haven't publicly said they are, but it kind of looks like they are. CRDTs do look fascinating though.
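For anyone unfamiliar with CRDTs, the simplest example is probably the grow-only counter: each replica increments only its own slot, and merging takes the per-replica maximum, so any two replicas that exchange state converge to the same value without coordination. A minimal Python sketch:

```python
class GCounter:
    """Grow-only counter CRDT. Each replica increments only its own entry;
    merge takes the element-wise maximum across replicas, so replicas that
    have exchanged state always converge to the same total."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Per-replica max: commutative, associative, and idempotent.
        for rid, count in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), count)
```

Because merge is commutative, associative and idempotent, replicas can gossip state in any order, any number of times, and still agree, which is the "globally eventually consistent" guarantee Yan mentioned.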

Yan Cui: 34:29  

Yeah, totally agree. And since we're talking about all the different vendors, let's go back to what I was going to ask you about earlier, in terms of the major differences between Lambda, Azure Functions and Google Cloud Functions. Where do you see Azure doing better, and where is Lambda doing better, for example?

Ant Stanley: 34:51  

Yeah, I think so. I've used most of the major cloud providers. I've used Google Cloud Functions, I've used Azure Functions, and obviously Lambda as well, all in paid gigs and in production, so my thoughts on a lot of them really come down to the ecosystem within which they exist. You know, some of the first ever functions platforms, like Auth0's Webtask, existed around when Lambda first existed, and they don't really exist anymore, because adoption is often driven by the ecosystem. Folks who were using AWS adopted Lambda first; folks didn't go to AWS because of Lambda, definitely not in 2015. And for me, the quality of the different functions platforms is as much about the quality of the overall platform, like the number of event sources. Google, for example, is not super event-driven. They don't have anywhere near as many event sources as AWS or Azure, and that's a major failing.

Yan Cui: 36:06  

So that's it for part one of my conversation with Ant Stanley. Please come back next week as we compare the functions offerings by the major cloud providers and take a look at the future of serverless and containers, as the two technologies continue on their path to convergence. To access the show notes and to see the transcript for this episode, please go to And if you want to learn how to build production-ready serverless applications, please check out my upcoming courses at I'll see you guys next week.