Real World Serverless with theburningmonk

#53: Fighting DDoS at Fathom with Jack Ellis

May 26, 2021 Yan Cui Season 1 Episode 53

You can find Jack on Twitter as @JackEllis.

Links:

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.

To learn how to build production-ready serverless applications, check out my upcoming workshops.


Opening theme song:
Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0

Transcript


Yan Cui: 00:12  

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real world practitioners and get their stories from the trenches. Today I'm joined by Jack Ellis from Fathom. Hey man, good to have you.


Jack Ellis: 00:25  

Good to see you.


Yan Cui: 00:27  

Yeah, so I read your blog post about denial of service attacks and some of the things that you guys had to do to get around that a while back, and I have to say it was really interesting. It's something that I've always kind of wondered about, because in the serverless world, we kind of call denial of service attacks denial of wallet attacks. And I'm quite interested to hear about some of the things that you guys had to do to get around that denial of service attack you were under. But before we get to that, can you maybe tell the audience about yourself and Fathom and what you guys are building over there?


Jack Ellis: 01:00  

Yeah, so I'm the co-founder of Fathom Analytics. It's a privacy-first Google Analytics alternative. We invented privacy-first analytics. So you might have heard of privacy-first analytics by now, but we invented it a while back. GDPR compliant, no cookie notices, you know the drill. And we're leading the space pretty much right now. And we're kind of just smashing out features, we just launched version three early access, and lots is going on. So if you're sick of Google Analytics, you're bored of it, it's too complicated, you're sick of the cookie banners, come on over to usefathom.com. And that's the only pitch I'll do.


Yan Cui: 01:35  

And I guess web analytics is one of those areas where you can expect to have a lot of traffic. Why, in that case, did you decide to use serverless?


Jack Ellis: 01:47  

Yeah, so it's really a case that we can have spikes at any moment, we have some truly huge customers. And when they launch a big project... so I can talk about this customer, a customer of ours launched a universal basic income campaign. So it's effectively sign up for some free money. Now, you can imagine why that's popular. And they were flooded with traffic. And we need to have the capacity, or the ability, to scale up and actually deal with that traffic, because we could be comfortably sitting with, you know, some EC2s provisioned and thinking we can take it. But when you have those kinds of spikes, you just can't take it, and things come to a halt, or they slow down. And we have so many customers that if one customer going viral took up all of our server resources, everyone would be impacted, and we'd be dealing with thousands and thousands of support requests. So by having the Elastic Load Balancer with Lambdas, we don't have to worry about that, because Lambda can scale up as and when we need it, and then it can scale back down. And then yes, it does cost more to run Lambdas. But at the same time, you're not having to over-provision to handle these ridiculous spikes. And it's not just one customer that can have a spike, you can have one customer having a spike here, and then 10 minutes later another customer, and so on and so forth. So by going serverless, we basically protect against server overload.


Yan Cui: 03:08  

Okay, I guess in that case, you still have some shared capacity limits around the regional limit for Lambda concurrent executions. Do you guys do anything special to segregate the traffic for different customers, so that if you get one really big customer and they're having a really big traffic spike, it doesn't impact everybody else?


Jack Ellis: 03:29  

No. So we've got a really, really good upper tier limit on Lambda, so we're pretty comfortable with that. You know, we haven't gotten to the point where we're having to... so for example, if we had someone who's doing a billion page views a day, we would have to look into that kind of thing. We haven't got to the point where we're dealing with, you know, a billion a day. The most we're really dealing with is probably like 50 to 100 million a day, maybe, for a single customer. That's very rare, that's probably been seen a couple of times. Typically, we're dealing with the 100,000 range, or perhaps a few million a day. But obviously, as you get more and more customers, that certainly adds up. So it's interesting you asked that question, and we may be in that position where we have to start thinking about that. And it could come down to the fact that, you know, we might have to do dedicated setups for individual customers, but we're not there yet.


Yan Cui: 04:16  

Yeah, I hope you get there, because that'd be quite interesting, once you get to that level of throughput. And I guess you said that you're using an ALB with Lambda, instead of API Gateway. Is that a cost saving...


Jack Ellis: 04:30  

Yeah, okay. So I'm happy to talk about this. So our setup is done through Laravel Vapor, and you have a few options for setup. Basically, we don't manage any of the actual configuration. Laravel Vapor is almost like a platform as a service that provisions serverless infrastructure behind the scenes. You might have heard of Bref if you're in the PHP community. Laravel Vapor is just something that handles all of that for you. Now, as for why we use ALB: API Gateway is expensive, and API Gateway is so, so powerful, as I'm sure you know. And we just weren't using the features that it has. So it came down to, are we going to pay extra for API Gateway, which is more expensive than the ALB for our use case? Are we going to pay extra for all these features that we're not using? We're not using authentication, and there's so many things it can do that we're not using. So it actually made more sense to go with the Elastic Load Balancing approach. And then, I think at the moment we're going direct, but in the next few weeks to a few months, we're going to be putting a CDN in front of that to reduce that time to first byte. So that'll be fun.


Yan Cui: 05:37  

Yeah, I remember reading some post that went viral on Hacker News a while back. It was someone who was building an API, it was pretty high traffic, I think it was about 10,000 requests per second on average. And then at the end of the month, he found that he's got a big AWS bill. And when he drilled down into it, it turns out most of that was API Gateway, because like you said, API Gateway is pretty expensive, much more so than the Lambda invocations that he had. So I guess in this case, given the traffic you're dealing with, I think it makes sense to use ALB.


Jack Ellis: 06:12  

And even if you're going to move between versions, you know, they have that newer version, where it's single re... single availa... no, I forget what it is, but it's not globally available, right? There's different versions of API Gateway. So you go to the cheaper one, and you just think, well, that's still more expensive than ALB, so I might as well just jump to ALB. Unless you're using all of the features, or some of the special features, on API Gateway, I wouldn't push to use that personally.


Yan Cui: 06:39  

Yeah, that makes sense. You're talking about the HTTP APIs on API Gateway, which are like 70% cheaper than the REST APIs. But when I crunched some numbers before, I think once you hit a couple of thousand requests per second, it's still going to be maybe 5 or 10 times more expensive than what you will pay on ALB for the same amount of traffic. So I think from the cost point of view, like you said, if you're not using it, why pay so much more for API Gateway?
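To make that comparison concrete, here is a rough back-of-the-napkin sketch of the request charges involved. The per-million prices are illustrative us-east-1 figures rather than anything quoted in the conversation, and the ALB side is deliberately left as a comment because its LCU-based pricing depends on connection and payload patterns; check the current AWS pricing pages before drawing conclusions.

```python
# A rough back-of-the-napkin comparison, NOT exact pricing: the per-request
# figures below are illustrative us-east-1 numbers and ignore free tiers and
# tiered discounts. Always check the current AWS pricing pages.

REQUESTS_PER_SECOND = 2_000
SECONDS_PER_MONTH = 60 * 60 * 24 * 30
requests_per_month = REQUESTS_PER_SECOND * SECONDS_PER_MONTH  # ~5.2 billion

# Assumed request prices (per million requests)
REST_API_PER_MILLION = 3.50   # API Gateway REST APIs
HTTP_API_PER_MILLION = 1.00   # API Gateway HTTP APIs (roughly 70% cheaper)

rest_api_cost = requests_per_month / 1_000_000 * REST_API_PER_MILLION
http_api_cost = requests_per_month / 1_000_000 * HTTP_API_PER_MILLION

print(f"{requests_per_month / 1e9:.1f}B requests/month")
print(f"API Gateway REST API : ~${rest_api_cost:,.0f}/month")
print(f"API Gateway HTTP API : ~${http_api_cost:,.0f}/month")
# An ALB for the same traffic is billed per hour plus LCUs rather than per
# request, and for small payloads it typically lands at a small fraction of
# the HTTP API figure, which is the gap being described here.
```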


Jack Ellis: 07:07  

Unless you're using the free tier, then you can make an argument for it, right? If you've got a small application and you fit into that free tier, then there's the argument, and I'll shut up about not using it. But if you're going past that free tier, then I definitely recommend that people take a second look.


Yan Cui: 07:23  

Right. And I guess another question around that, which brings us into the conversation around that denial of service attack, because the blog post you wrote was amazing. I will put a link to it in the show notes. But for listeners who haven't read it, can you maybe give us a recap of what happened and some of the things you guys had to do to get around that attack?


Jack Ellis: 07:46  

Yeah, sure. Okay. So I'll first talk about my experience with this kind of thing. I came into serverless in, what is it, late 2020, I forget. We moved from Heroku. I am not a server person, I spend all my time writing application code. I loosely describe myself as full stack, but you know, I can work with it, but it's definitely not my speciality. So coming into this serverless thing, I wasn't thinking about things like firewalls and DDoS attacks, it's just not my experience. So we were running nicely. We were running on Lambda, probably API Gateway at the time, I forget what we were using. And we were getting more and more attention, more and more hype, bigger companies were using us, and people were learning about us. And as you become bigger and bigger, you become a target. I'm not going to talk about who was targeting us, but you do become a bigger target. And we were just starting to get hit with these spam attacks. And we thought, okay, someone's hitting us with spam attacks, they're trying to ruin our customers' data and that kind of thing. And we built some things to combat that. But we didn't necessarily think of it as a DDoS attack, because it wasn't a DDoS attack at that point. They weren't trying to deny service, they were just sending us spam. Then at some point, they just ramped up, and what was happening is they started targeting us when they thought we'd be in bed. They'd hit us at about midnight, I think, or midnight UTC or something like that, and they would just flood us with requests. And it would actually overload the Lambda, because obviously we have a limit on the Lambda, and it's a high limit, but we were getting absolutely whacked. And at that moment in time, we didn't have anything like WAF configured, right? So everything was just passing through, hitting the application, the Lambdas were running. And this was before Aurora had the new version of their serverless database. They had it so that, you know, when your database got hit, it took time to scale up. And because of that, we went with an over-provisioned, fixed-size database, which you can definitely look into and say maybe you shouldn't have done that, but I just didn't get on with Aurora for multiple reasons. So what was happening is the Lambdas could take it, but because it was hitting that fixed-size MySQL database, it was taking ages to actually finish the request. So we were getting billed for Lambda runtime, because we had quite a high timeout, and the database was just getting destroyed. So everything was impacted in huge ways. And that happened over and over, and I think at one point it was happening for multiple days. And again, like I said at the beginning of this explanation, I'm not a server guy, I'm not an expert at this kind of stuff. But unfortunately, I had to become a bit of, not an expert, but I had to learn about this stuff. So then we had to speak with the DDoS response team, the DRT, through AWS Shield Advanced, or whatever it's called, and we had to get them to come and help us with these attacks. And it all boiled down to, once we actually looked at the traffic, we had to identify the patterns of the attacks and block those patterns, and they adapt all the time, right? But we had to then speak with them and get a decent WAF configuration in place. And that's how we effectively solved it. So we were sitting ducks because we hadn't done the appropriate security configuration.
And you know, a lot of people might sit back and go, ha, you should have done that. Well, guess what? There are tens of thousands, hundreds of thousands of people who've never dealt with this stuff, and right now I could spin something up, go to the black market, buy something and get a DDoS attack to take your site offline. So don't be complacent about this stuff. I mean, we certainly won't be complacent, but you don't know what you don't know. So do read my blog post and do read what we learned. But definitely look at the WAF instructions. And also Amazon has this best practices thing people talk about. I don't know if you're familiar with that, Yan. It's something that you can find online. Do you know about that? Have you heard about that?
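For listeners wondering what a starting point for "a decent WAF configuration" might look like, here's a minimal sketch of a rate-based rule using boto3 and WAFv2. The web ACL name, rate limit and scope are placeholder assumptions, not Fathom's actual setup, and you would still need to attach the web ACL to your ALB or CloudFront distribution afterwards (for example with associate_web_acl).

```python
"""
A minimal sketch, assuming a regional web ACL in front of an ALB. Names and
limits are hypothetical; tune them to your own traffic.
"""
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="collector-web-acl",      # hypothetical name
    Scope="REGIONAL",              # REGIONAL for an ALB, CLOUDFRONT for a distribution
    DefaultAction={"Allow": {}},   # allow by default, block only what matches rules
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 1,
            # Block any single IP that exceeds ~2,000 requests per 5 minutes.
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit-per-ip",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "collector-web-acl",
    },
)
```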


Yan Cui: 11:33  

Yeah, it's the Well-Architected Framework. 


Jack Ellis: 11:36  

That's it. Yeah. So if you're listening to me now, definitely review that. When I was on a podcast talking about this before, someone said to me, you know, you should have read this. And I absolutely should have read that, but it wasn't something I was aware of. So anyone listening to this, check that out and make sure you're protected, because your wallet will appreciate it.


Yan Cui: 11:53  

Yep, I will put that in the show notes. So the Well-Architected Framework covers a number of different pillars, including security, performance, cost, and so on. It gives you a lot of, I guess, actionable advice on how you should architect your system, and some of the attack vectors you should think about. And the Serverless team, or I guess the Solutions Architect team, or maybe it was the Well-Architected team, has also put together an online tool that you can use from the AWS console, which will essentially run you through a questionnaire. You answer different questions, and it will tell you at the end, oh, you need to do this, you need to do that, you haven't configured WAF, you haven't configured other things like that, which is super useful as a checklist before you go live with something. I guess in this case, the other thing you mentioned there is that you contacted AWS through the Shield Advanced team, which gives you access to the rapid response team, I think it was the DDoS response team or something like that? Was there any specific advice that they gave you that was the most effective in terms of fighting off the attack?


Jack Ellis: 13:01  

So this is a tricky one to talk about, because I've got to be careful about what is generic and what is specific to us, because I can't reveal certain things that we've done. Obviously, the thing I can say is, okay, look, rate limiting is really important. Everyone should be doing rate limiting of some sort. And it's tricky for us, because we're a privacy-first solution, we can't actually store things like the IP address, the path name, the user agent, because we're processing data for tens of thousands, hundreds of thousands of websites. So we're getting all this information coming in, and if we kept access logs, that's problematic for everything we stand for. So let's pretend you're not an analytics company, okay, you're just a normal website where keeping access logs is fine, because it's your website, you're not profiling people across the web. The piece of advice we got was, you know, are people trying to load resources directly without loading the homepage, or things like that, behaviour you wouldn't expect? You can match that in the access logs using Athena, actually. It's really, really powerful. Once you've got it all set up, and you've got Kinesis Firehose, or whatever it's called, you can set all these access logs up. And then you can query it and find these patterns, and you can then look for offending IPs that you should be blocking based on the weird patterns they're making. But honestly, the rate limiting is huge. Definitely, everyone should have some kind of rate limiting, because that's going to help you a lot. As well, a common thing we see in the Laravel space is people allow the X-Forwarded-For header with no protection on it, because you don't know the IP address of the load balancer that's forwarding it to your application. In the Laravel code, it kind of has a wildcard, so you can actually spoof an IP address from the request. Just be careful of that kind of stuff. We had this with our attacks, the attacker was spoofing the IP address. So just be mindful of that X-Forwarded-For header, and make sure you're rate limiting on the IP address the server actually sees and not anything else. So if you're doing application-level IP detection, just be careful if you're using Laravel, and perhaps other frameworks, okay?
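As a sketch of the Athena approach, here is roughly what a query like that could look like, run against ALB access logs via boto3. The table and column names follow the standard ALB access log table from the AWS docs, while the database name, the S3 output location and the thresholds are placeholder assumptions, not Fathom's actual queries.

```python
"""
A minimal sketch: find IPs hammering a single URL far more than a real
browser would, using Athena over ALB access logs. Adjust thresholds to taste.
"""
import boto3

athena = boto3.client("athena")

# IPs with a huge number of hits but only ever requesting one URL.
QUERY = """
SELECT client_ip,
       count(*) AS hits,
       count(DISTINCT request_url) AS distinct_urls
FROM alb_logs
WHERE elb_status_code = 200
GROUP BY client_ip
HAVING count(*) > 10000 AND count(DISTINCT request_url) = 1
ORDER BY hits DESC
LIMIT 100
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "access_logs"},               # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)
print("Started query:", response["QueryExecutionId"])
```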


Yan Cui: 15:13  

That's good advice, and it's actually very applicable, looking at the behaviours that you don't expect to see and then finding bad actors that way. Once you've identified a bad actor, how do you then add the IP to the firewall? Do you make an API call to WAF to add the...


Jack Ellis: 15:30  

Great question. That is not currently set up, that is on the list. So what we're doing is we're actually building a security dashboard. And we've had talks about whether we'll publish this or whether we'll sell it, I don't think we will. We're building a security dashboard that links into things, and you know, it's nothing special, it's not artificial intelligence, but we write code to try and identify things. And then what happens is it keeps track of the bad IPs, and then it can sync that through to WAF. At the moment, we're just copying a big list of IPs and blocking them manually. You can do that, but it's very dramatic. It'd be nice if you could kind of play whack-a-mole, you know, and just press a button on your security dashboard, and then utilise that beautiful API and sync it through. So that's definitely on the list, but we're not doing that right now.
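For anyone wanting to see what "press a button and sync it through" could look like against the WAF API, here is a minimal sketch that pushes flagged IPs into a WAFv2 IP set, which a blocking rule would then reference. The IP set name, ID and addresses are placeholders, not anything from Fathom's dashboard.

```python
"""
A minimal sketch, assuming a pre-existing WAFv2 IP set referenced by a block
rule on your web ACL. All identifiers below are hypothetical.
"""
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

def block_ips(ip_set_name: str, ip_set_id: str, bad_ips: list[str]) -> None:
    # WAFv2 updates are optimistic-locked, so fetch the current state first.
    current = wafv2.get_ip_set(Name=ip_set_name, Scope="REGIONAL", Id=ip_set_id)
    addresses = set(current["IPSet"]["Addresses"])
    addresses.update(f"{ip}/32" for ip in bad_ips)  # WAF expects CIDR notation

    wafv2.update_ip_set(
        Name=ip_set_name,
        Scope="REGIONAL",
        Id=ip_set_id,
        Addresses=sorted(addresses),
        LockToken=current["LockToken"],
    )

# Example: sync whatever the security dashboard flagged (placeholder values).
block_ips("blocked-ips", "00000000-0000-0000-0000-000000000000", ["203.0.113.7"])
```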


Yan Cui: 16:16  

Okay, cool. It's good to know that that's an option. Because again, you want to minimise the amount of manual stuff that you have to do whenever a bad actor pops up, because like you said, it's whack a mole, right? You take one IP down, they're going to bring up another one.


Jack Ellis: 16:30  

Yeah, for sure. One piece of advice I actually got from Fola, from the DDoS response team, or whatever they call it, the AWS Shield Advanced team, he made a really good point. He said, you know, they've only got so much money for this attack. And that really stuck with me, you know, there's only so many resources. Our situation was different, and I can't talk in huge detail about what ended up happening and what we learned. But yeah, they've only got so many resources, right? So you can fight back. As well, I was thinking, oh, I shouldn't pass a 403 back to them, because then they're going to ramp things up and get angry and keep trying to attack. Fola said, no, no, no, they are trying to get something from this attack, you need to actually send a 403 and say, no, we've got you, we know what you're doing. And hey, you know what, even me being a little bit cocky on this podcast... the attacker could listen to this and go, you know what, we're gonna just try and get them again and again and again. So again, I've got to be a little bit careful, and this is why people don't talk about DDoS in public. But you know what, the cloud vendor can do it, we can do it, right?


Yan Cui: 17:35  

Yeah, absolutely. And I think these kinds of lessons are really valuable for more people to find out about, because, again, like you said, once bad actors start noticing you because you're doing well, that's when you become a target.


Jack Ellis: 17:51  

And let me add one thing as well. I'm talking here about a layer seven DDoS attack. Clearly, if someone's got a huge, huge botnet, and they do, I mean, Amazon will absorb a lot of it, right? But if someone's got a huge, huge botnet, and they're so damn motivated, and their budget is so big, you know, there are going to be problems and you're going to have to reach out to professionals again, and again, and again. So I'm not saying that you can't be DDoSed, that's not the narrative here. I'm just saying that if you've got these kinds of script kiddies, or you know, people with questionable resources, then you can fight back.


Yan Cui: 18:28  

Yeah, I remember, back in the day, I forget who it was now, I think it was David Fowler, maybe, from the Microsoft team. He wrote this, I think it was a white paper about security. It's basically Mossad or no Mossad. So basically, if something like Mossad is going after you, well, you know, you're done anyway. But for everything else, you can do something about it. Like you said, if it's a script kiddie, then you can do something about that. Deal with the low-hanging fruit, don't make yourself an easy target.


Jack Ellis: 18:58  

Yeah, I'll tell you what, people were saying to me that Cloudflare is the solution to layer seven DDoS attacks. What does Cloudflare do? They throw up a captcha a lot of the time and say prove that you're not a robot. Okay, that's fine. But as an analytics company, we need to continue serving legitimate traffic and everything else. And their rate limiting is really expensive. I was showing a friend, because we use Cloudflare, and we were looking at it, and it's expensive compared to WAF. So in the end I just said, look, you've got to go to WAF, it's going to be much cheaper than using Cloudflare. But the point is that Cloudflare doesn't really do layer seven that well, in my opinion. You know, people can hate on me for that, but that's how I feel.


Yan Cui: 19:41  

Okay, that's fair. I haven't used Cloudflare very much myself, so I'll take your word for it. I guess another thing I want to bring out and ask you about is that, because you said you come from, I guess, more of a full stack, front-end development world, coming to serverless and not being, like you said, an expert on the server-side stuff, what are some of the biggest challenges you found in terms of adoption? Was it the tooling? Is it some of the practices that you're used to, like writing unit tests and things like that, becoming quite difficult to translate?


Jack Ellis: 20:14  

Yeah, good question. The entry level, for sure. Before Laravel Vapor existed, I mean, maybe Bref did it, but there wasn't an easy way to get a Laravel application onto Lambda and everything else. So the beauty of Laravel Vapor is that it actually orchestrates everything and brings it all together. I have never had to configure Lambda. I could probably get it working, right? But I don't want to spend time learning this, I haven't got time to learn this, right? So actually learning how all this tooling links together and setting it all up, that is the thing that stopped me. Whereas I could go into something like Heroku and deploy my application, nothing that simple and that beautiful existed for Laravel. And then Laravel Vapor came along. And so many people... I mean, I have my Serverless Laravel course, I don't know if you know about that, where I teach people how to use Laravel Vapor, and I've sold over 1,000 copies of it. And I know that Vapor is very popular. So you've suddenly got this huge community of Laravel developers who've got this really, really Heroku-like entry into the serverless ecosystem on AWS, and people are loving it. So now we've got that, there aren't these problems at the entry level. But before that, you just thought, oh, look at this Lambda, this isn't what I'm used to. You're used to pushing and uploading over SFTP, or however you do your deploys. I'm not used to all this Lambda stuff, compile my function, upload my function, and an API, and linking it all together. I'm not familiar with that, and a lot of people aren't either. So the entry level has now been reduced, and that's a huge thing for serverless.


Yan Cui: 21:51  

Oh, that's cool. So I will share the link to your course in the show notes so that more people can check it out. It's great that you're giving back to the community as well. I was aware of Bref, and I spoke to the guy that created Bref about some of the things that he's seeing. He's also seeing a lot of adoption and interest in serverless, thanks to the fact that there are now easier entry points into Lambda for PHP developers. I guess I'll have to look at Laravel Vapor. It sounds like it's also quite a useful tool for people that want to get into Lambda from Laravel.


Jack Ellis: 22:30  

Well, the thing is, Laravel Vapor is created by the Laravel team, right? So Bref exists, and Bref is really good, and from what I've heard, if you're doing more advanced stuff in the serverless world, Bref is just a fantastic option. The thing that gave me the confidence to move to serverless was when Laravel Vapor came out, and it's backed by the Laravel team, right? So that means it's actually going to be supported going forward. And I like Bref as well, by the way, I'm not saying you shouldn't use Bref. But what gave me the confidence was the fact it was backed by Taylor, Taylor Otwell, and his team, and that's what made us move. Bref is really good too. That's not me taking a dig at Bref.


Yan Cui: 23:07  

Well, you can have two very good tools in the space. There’s no problem with that.



Jack Ellis: 23:10

Exactly.


Yan Cui: 23:12  

What about other things, in terms of some of the practices, like testing? Do you find that a struggle? And what about monitoring, does Laravel Vapor give you some built-in monitoring integration that you can use?


Jack Ellis: 23:28  

Yeah, so Laravel Vapor certainly gives you the basics. I mean, I think they're just linking into CloudWatch and surfacing it in an easier-to-read way. Anything more complex, you're going to have to learn CloudWatch, and that's just how it is. With regards to testing, we just use a CI tool, we run our tests, and our tests are pretty good, pretty good coverage. I don't think I do anything differently with serverless deployment versus server deployment for the testing.


Yan Cui: 23:58  

Okay, so I guess that's one of the benefits of being able to take Laravel and put it into Lambda as it is, so that you don't have to change the way you write code and write tests for it.


Jack Ellis: 24:10  

Yeah, I mean, like I say, Taylor and his team at Laravel have done all of this stuff. The thing that Taylor does really well is he takes these complex things... like recently, there's something called Octane, and it's basically Swoole, I don't know if you know Swoole, and he's made it really simple. It's a bit like how Go works, where it's multiple different processes spun up, and it's just faster. It's a long-running process, rather than spinning it up and then killing it. It's a long-running process, and it handles all these requests over and over. He takes these things and makes them simpler, basically. And that's what he did with serverless here. So he's done all of his testing, and things just worked. We deploy things, and it just works. It's just fantastic. So we really haven't had to tweak our tests. It's been fantastic.


Yan Cui: 24:56  

That's great. That's one of the things that I've had to change quite a bit since I moved to Lambda, just how I run my tests, how I do a lot of things. But I guess I've come from maybe quite a different background to you.


Jack Ellis: 25:11  

What language do you write?


Yan Cui: 25:12

Right now, these days, I'm using Node.js. And I'm not running an Express app in the Lambda function, for performance reasons. So some of the ways I'm used to writing tests have had to be adapted to the way I write Lambda functions nowadays.


Jack Ellis: 25:32  

So you're writing functions, you have multiple functions for your application? 


Yan Cui: 25:37  

Yeah, so imagine you've got an API, I've got one function per endpoint, well, per endpoint and method, as opposed to one function that handles everything.


Jack Ellis: 25:46  

Okay, and that's a big difference. So we have our Lambda function for web requests, and then we have a Lambda function for commands, so things that run in the cron job, you know, if I'm sending out emails every day, and then we have a Lambda for our background queue. And the background queue is linked to SQS, and the whole point is a job goes into SQS and then comes through to the queue Lambda. So there's only three Lambdas per project, we definitely don't have it like that. It's really simple. And that might change in the future, as more and more people say things like, you know, we want different priorities on our queue, we don't want the queue to be drowned with various jobs. And that might change, who knows, right? But things are very, very simple still. I've seen that setup with multiple Lambdas, one Lambda per API endpoint, but we're definitely not there with that kind of stuff. That's interesting, though.
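As a rough illustration of the third function in that setup, here is a minimal sketch of an SQS-triggered queue worker. Vapor and Laravel handle all of this wiring for you, so this is just the shape of it if you wrote it by hand; the job payload format and the process_job helper are hypothetical.

```python
"""
A minimal sketch of an SQS-fed queue Lambda, assuming jobs are JSON-encoded
messages. Not Vapor's actual implementation.
"""
import json

def handler(event, context):
    # SQS delivers a batch of records per invocation.
    for record in event["Records"]:
        job = json.loads(record["body"])
        process_job(job)  # placeholder for the actual queued job logic
    # Raising an exception here would return the whole batch to the queue
    # for retry (unless you report partial batch failures).

def process_job(job: dict) -> None:
    print("processing job of type:", job.get("type"))
```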


Yan Cui: 26:35  

Yeah, there are some inherent trade-offs with the two different approaches. The approach you've taken is definitely a lot simpler, and also the fact that you've got tooling that allows you to just take your existing Laravel application as it is and run it in Lambda makes life a lot easier. You don't have to change the way you write tests, you don't have to change the way you structure your code, everything just works as before. And you get all the benefits Lambda gives you, in terms of the fact that there's no management of servers, the scaling, all the benefits from the Lambda platform. But there are also trade-offs in terms of performance, in terms of cold starts, and also in terms of security, whereby, if you've got multiple functions, you can be more granular in terms of how you issue the IAM permissions to individual functions. So in the case of a compromise, you can limit your blast radius. But it all depends on what is important to you and what you're used to. Certainly, I think the fact that there's this easy entryway for people to get into Lambda and get the benefit from it is great. As you want to do more and more interesting stuff, and you want to pay more attention to security around Lambda permissions and things like that, then maybe you start looking at breaking your API into smaller functions. But then again, it all depends on what's most important: getting a product out there and the scaling, and then dealing with other things. Optimisations can come later, right?


Jack Ellis: 28:05  

No, for sure. And cold starts is an interesting topic, isn't it? It's always been probably the number one criticism of Lambda. And they released this ability to provision concurrency, right? And I found that interesting. A lot of people are sort of scared of it. And I think, well, you know, just go into your Lambda page, look at what you typically use, and you can get an idea, and you can provision that. And it is faster. I mean, I like using it. We're probably going to use it. We're working on v3 right now, and I think we've switched back, because when you deploy, you have to wait for it to actually provision, and when you're doing regular updates, it's annoying, right? So we've switched back to a warming system, so it gets pinged to be kept warm, and that's done by Vapor. But I like provisioned concurrency. I think if you're not using provisioned concurrency, take a look at it, because it's quite exciting.
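For anyone who wants to try the "look at what you typically use and provision that" approach, here is a minimal sketch of turning on provisioned concurrency with boto3. The function name, alias and number are placeholders; note that provisioned concurrency must target a published version or alias, not $LATEST.

```python
"""
A minimal sketch, assuming a hypothetical function "pageview-collector" with
an alias "live" pointing at the current version.
"""
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_provisioned_concurrency_config(
    FunctionName="pageview-collector",   # placeholder function name
    Qualifier="live",                    # alias pointing at the deployed version
    ProvisionedConcurrentExecutions=50,  # roughly your typical steady-state concurrency
)

# Check when the provisioned instances are actually ready. This is the
# "wait for it to provision" delay after each deploy mentioned above.
status = lambda_client.get_provisioned_concurrency_config(
    FunctionName="pageview-collector", Qualifier="live"
)
print(status["Status"])  # e.g. IN_PROGRESS, then READY
```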


Yan Cui: 28:48  

Yeah, especially if you've got steady traffic as well. With provisioned concurrency, you've got that nice, warm instance that's running all the time. And we actually worked out that if you can hit about 60 or 70% utilisation on the Lambda instances you've provisioned, it's actually cheaper than running on-demand Lambdas.
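Here is a back-of-the-napkin version of that 60 to 70% claim. The per GB-second prices are illustrative us-east-1 figures and will change over time, so treat this as a sketch of the reasoning rather than a pricing reference; the per-request charge is the same in both modes, so it cancels out.

```python
# Illustrative break-even calculation for provisioned concurrency vs on-demand.
# Prices are assumed per GB-second figures; check current Lambda pricing.

ON_DEMAND_GB_S = 0.0000166667       # duration price, on-demand
PC_DURATION_GB_S = 0.0000097222     # duration price when running on provisioned concurrency
PC_PROVISIONED_GB_S = 0.0000041667  # price for keeping capacity provisioned, per GB-second

# At utilisation u, provisioned capacity costs PC_PROVISIONED + u * PC_DURATION
# per GB-second, versus u * ON_DEMAND purely on demand. Break-even where equal:
break_even = PC_PROVISIONED_GB_S / (ON_DEMAND_GB_S - PC_DURATION_GB_S)
print(f"Break-even utilisation: {break_even:.0%}")  # roughly 60%
```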


Jack Ellis: 29:10 

60 or 70, that's amazing. 


Yan Cui: 29:12 

Yeah, because when you look at the pricing for provisioned concurrency, there's a per-hour cost, but then the duration cost per invocation is cheaper compared to on demand. So if you've got enough invocations for those functions that you are provisioning, then you actually end up cheaper on your bill compared to, you know, running just on-demand functions. And plus, you get the benefit of...


Jack Ellis: 29:32  

60 or 70, that's a surprise to me, but that's really [inaudible].


Yan Cui: 29:35  

Yeah, it's just the back-of-the-napkin calculation we did when it was announced, but it's around there, I think. Okay, so I think that's all the questions I had for today. Jack, thank you so much for joining us. Is there anything else that you'd like to share with the audience before we go, maybe upcoming changes to your course or other things you're doing?


Jack Ellis: 29:58  

Oh, I said I wouldn't do another plug, but I'm working with Alex DeBrie. Just after we finish version 3, we're going to be launching a course called DynamoDB for Laravel, for if you're coming from a kind of relational database background and you're looking to move to something like DynamoDB. I'm working with Alex, and my role is to extract what he knows and put it in a way that relational database developers can understand, because Alex is a DynamoDB genius, and it's ridiculous how much he knows. So my role is the extractor, I'm the beginner's-mind extractor. And we're packaging up a course and we're going to be releasing that. So that's dynamodbforlaravel.com. And then, yeah, Fathom Analytics v3 is heating up right now. Do check us out at usefathom.com. It's really, really good. My co-founder designed it and it is beautiful. So you've got to come and try that, okay?


Yan Cui: 30:52  

Nice. And yeah, I just had Alex on this podcast, the last episode, as well. Alex is a great guy.


Jack Ellis: 31:02  

He is sharp. Yeah, it is going to be a good course. He's sharp, and it's very interesting. I mean, you just had him on, you spoke about all of this single table design stuff. It's so fascinating. It sounds like something different and it sounds a bit crazy, but don't ignore it. It is crazy, some of the stuff that he talks about. Buy his book as well, it's a really good book. When I read his book, I was reading it just going, wow, wow! This doesn't make sense, you can't do this. No, you can. Wow! So definitely check out Alex DeBrie's book. It's very good.


Yan Cui: 31:33  

Yep, you'll find the links to his book, as well as to the last episode, where I had a chat with Alex about single table design, the why and the why not. So check it out. And again, Jack, thank you so much. It's been a pleasure having you here.


Jack Ellis: 31:48  

Thanks so much for having me. I appreciate it.


Yan Cui: 31:50  

Good luck with version 3. And see you soon. 


Jack Ellis: 31:53 

Thank you. 


Yan Cui: 31:54  

Bye, bye. 


Yan Cui: 32:08  

So that's it for another episode of Real World Serverless. To access the show notes, please go to realworldserverless.com. If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at productionreadyserverless.com. And I'll see you guys next time.