Real World Serverless with theburningmonk

#62: Solve the challenges with serverless caching with Khawaja Shams

June 01, 2022 | Yan Cui | Season 1, Episode 62

Links from the episode:

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.

To learn how to build production-ready serverless applications, check out my upcoming workshops.


Opening theme song:
Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0

Yan Cui: 00:12  

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real-world practitioners and get their stories from the trenches. Today, I'm joined by Khawaja, who used to run the DynamoDB team and is now building a solution for serverless caching at Momento. Hey man, welcome to the show.


Khawaja Shams: 00:30  

Hey, thank you so much for having me, Yan.


Yan Cui: 00:32  

So before we start, can we maybe just get a quick introduction about your background? Because when we spoke before, you were telling me about some of the work you had done at AWS on the DynamoDB service, some of the problems that you see with serverless and caching, how it's still a bit of an underserved area for serverless applications, and why you decided to go ahead and build this new offering. So yeah, tell us about yourself.


Khawaja Shams: 00:58  

Sure. I've been an AWS and cloud computing fanboy from the very early days. I started deploying production applications on AWS in the mid-2000s as a customer, when I was working on image processing pipelines for the Mars Rovers, and then helped bring a variety of cloud computing capabilities to workflows across the US government. In 2013, I joined Amazon as a full-time employee. I spent about seven years there, and one of my favourite parts of the job was running the engineering team for DynamoDB. DynamoDB is arguably the best NoSQL database out there. It's super high scale, mission critical, performance sensitive. Building that was an incredible journey, just watching how large-scale systems are built and how to make them incredibly available, and also carrying this burden of availability, because if something like this goes down, the impact is global in so many ways. So where I come from, my whole career has been working on mission-critical systems: how to make them perform, how to make them scale, how to make them just never go down. And that is something I've really, really enjoyed working on.


Yan Cui: 02:10  

So DynamoDB has really been a godsend for me. I've been working with AWS since 2010. I remember back then it was just SimpleDB, and, you know, to get to a certain level of scale, you couldn't rely on SimpleDB's built-in scaling. We had to build this multi-layered solution of caching plus persistence on top of SimpleDB, just to be able to build Facebook games with a few million users playing our games regularly. And when DynamoDB came along, everything just became so much simpler. It literally took care of maybe 70% of my codebase, which was just dealing with persistence. It just became: use the SDK to call DynamoDB and that's it, job done. We didn't need to worry about all this scaling, all this persistence, or worry about losing data somewhere along the line. It has been one of the most amazing services I've ever used on AWS. And I certainly remember it was the fastest-growing service when it came out, and it remained the fastest-growing service for many, many years after that. I think nowadays it's the de facto database that people use when they're building on AWS.
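
For illustration, a minimal sketch of what "just use the SDK" looks like with boto3, the AWS SDK for Python; the table and attribute names here are made up:

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("player-state")  # hypothetical table name

    # Persist a player's game state; DynamoDB handles the scaling behind the scenes.
    table.put_item(Item={"player_id": "p123", "level": 42, "score": 98765})

    # Read it back with a simple key-value lookup.
    player = table.get_item(Key={"player_id": "p123"}).get("Item")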


Khawaja Shams: 03:24  

You know, SimpleDB was actually one of my favourite services too, when it first came out; it was just seamless. And I remember the scaling problems you're talking about. There was a really nice blog done by the Netflix folks, I think, who talked about how to actually slowly ramp up the load on SimpleDB to handle the scenarios that you mentioned. DynamoDB, to your point, took care of a lot of the scaling problems that people ran into. It basically took away that boundary, that ceiling that you had on scale, and it just kind of disappears into the ether. And one of my favourite parts of DynamoDB is that when you're running in production, when you're running at a large scale, it just kind of disappears. You end up not having to think about the database; you're not micromanaging the nuances of the large-scale distributed system behind the scenes. There's a team that is taking care of all that complexity for you, presenting you an awesome layer of abstraction where you're just calling CREATE TABLE, you throw millions of TPS at it, and it just works behind the scenes. And that, to me, is the essence of what serverless has to be. The word serverless is new, but this notion of something that kind of just, you know, blends into the background, that you never have to think about, that you never have to worry about the capacity behind the scenes, that's just beautiful. And this particular notion has been around from the very, very early days, even before the word serverless started to get coined or tossed around. The first AWS service, SQS, was serverless. You never had to worry about, you know, instances for SQS, or running out of capacity on SQS necessarily; you just threw data at it. Same thing for S3: you never have to worry about S3 running out of gigabytes of storage. You just create a bucket, you throw data at it, and when you ask for it, it's there. And that's my favourite part about serverless services, the way they just kind of blend into the background.
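
A sketch of that "just call CREATE TABLE" experience in boto3, using on-demand billing so there is no capacity to provision; the table name is made up for illustration:

    import boto3

    client = boto3.client("dynamodb")
    client.create_table(
        TableName="game-sessions",  # hypothetical table name
        KeySchema=[{"AttributeName": "session_id", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "session_id", "AttributeType": "S"}],
        BillingMode="PAY_PER_REQUEST",  # no shards, replicas or instance types to pick
    )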


Yan Cui: 05:22  

Yeah, exactly. And S3 also gives you an 11 9's durability guarantee, so, you know, no small feat there. And I guess one of the things I really love about DynamoDB was the fact that it's really restrictive, but it means that anything you do can safely scale to a million ops per second. I've used MongoDB and a few other services in the past, and they let you do so many different things; they're so easy to use. But very quickly you run into a scaling problem, because you're doing things that are just never going to scale if you think about it. That sort of simplicity kind of gives you this false sense of security: you start building the application, you're doing all these crazy things, but the moment you get some traction, some traffic, suddenly your application stops working. So DynamoDB's constraints actually help you build better, more scalable and more robust systems at the end of the day.


Khawaja Shams: 06:15  

Absolutely. I think some of the limitations that DynamoDB puts in place, in the heat of the moment they often feel like they're slowing you down. But what it's really doing behind the scenes is enforcing some best practices and minimising the ease with which you can shoot yourself in the foot. Right? I mean, there are caches, for instance, where you can store objects that are half a gigabyte. And that's allowed, it's great, the versatility is amazing, until somebody stores an object that's half a gigabyte and then starts doing gets on it, and, you know, things kind of just fall apart. And this was the core differentiator for DynamoDB: to do a very small set of things, but do them really, really well, and maintain that tight, high-percentile latency for the customers, so that they can rely on the limited functionality that it has for their most critical workloads. And if you have expensive queries, if you want to do analytics, there are ways to just kind of decouple that from the key-value lookups, with some additional indexes that DynamoDB provides as well. But that decoupling, as well as those limitations, allows you to really rely on the capabilities that do exist, and saves you from the distractions that will come if you put expensive queries in the critical path of your data plane.


Yan Cui: 07:33  

So talking about expensive queries and storing half a gigabyte of data in a cache. Caching is still one of those areas that's, I guess, not been well served for serverless applications, for a number of reasons. You've got all the solutions out there already: you've got Redis, you've got Memcached, and you can use ElastiCache in AWS, which is all great. But a lot of how it works is still built for serverful applications, in the sense of: I've got these big EC2 machines, each one of them is going to have a pool of connections to my cache cluster, and they're going to be doing a lot of concurrent operations on those connections. Whereas with Lambda, you've got the opposite scenario: you've got lots of small workers running your Lambda function, each maintaining one connection. That's all it needs. But they're not doing a lot on those connections. In total, you have a lot of active connections to your cluster, which means there's some connection management that needs to be done, either on the client side in your Lambda function, or on the cluster side. It's a similar problem to the one we have with relational databases, when you're using Lambda to connect to RDS. Beyond that, there are also other operational things you've got to worry about: having to pay for uptime for the cache cluster, and so on, so forth. And this is the area that you are now focused on tackling. So tell us about what you guys are doing, and tell us about your new company.
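
A common client-side mitigation, sketched below, is to create the connection once per execution environment, at module scope, so warm invocations reuse it; but every concurrent Lambda still holds its own mostly-idle connection, which is exactly the fleet-wide connection pressure described here. The endpoint is hypothetical, and redis-py stands in for whichever client you use:

    import os
    import redis

    # Created once per Lambda execution environment, outside the handler,
    # so warm invocations reuse the same connection.
    cache = redis.Redis(
        host=os.environ.get("CACHE_HOST", "my-cache.example.internal"),
        port=6379,
        socket_connect_timeout=1,  # fail fast instead of hanging the invocation
    )

    def handler(event, context):
        # One mostly-idle connection per concurrent Lambda; multiplied across
        # thousands of sandboxes, that adds up on the cluster side.
        value = cache.get(event["key"])
        return {"cache_hit": value is not None}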


Khawaja Shams: 08:55  

Yeah, I'd love to. You know, there are two aspects to this. There is caching for serverless applications, as well as having serverless caches. So I'll start with caching for serverless applications first. We often run into customers that have just drunk the serverless Kool-Aid. They love the fact that they don't have to worry about servers: they've got their Lambda and API Gateway there, they've got CloudFront for doing additional caching, for edge caching and whatnot, they've got S3 for storage, they've got DynamoDB for their persistence of, you know, key-value lookups and so forth. And then it's time to accelerate things with a cache, and the entire serverless dream falls apart, because you cannot do caching today without introducing servers into the mix. So the entirety of this, you know, beautiful architecture that you set up now kind of has to be perverted by throwing in specific servers that are doing caching. And why is that bad? The same things we love about DynamoDB, the fact that there are no machines to manage, the fact that you're not managing the number of shards, the number of replicas, all of those decisions start to come into play. So when you go in to provision a cache, using any of the available options, you're still thinking about: well, what machine type do I want? How many machines do I want? Do I want multi-AZ? Do I want single-AZ? How much RAM does this machine have? How does that translate into the memory available to my working set? There's a whole lot of questions that you have to ask yourself, and that's just the provisioning experience for the cache. In my own teams, you know, integrating a cache turns into a sprint; it's measured in sprints of work. Whereas when we create a DynamoDB table, we don't even think about it, we just say create table and we kind of move on. When you're provisioning a cache, you're thinking about capacity management, you're thinking about alarming, you're thinking about, you know, what the best configurations ought to be, you're thinking about auto scaling, you're thinking about replication. There's a whole lot of work. And, you know, it adds a lot of barriers to productivity, which today causes developers to just defer this really easy, simple opportunity to improve their performance and their scale. When we were thinking about serverless caching, we worked completely backwards from the customer and asked: what does it mean to have an awesome caching experience? And to us, it's a developer issues an API call to create a cache, and the cache has to be ready in under a second. Our API call is a blocking call, and the cache is ready in under a second. And, you know, we set out with a vision of having zero configuration. We failed. We have one configuration, so, you know, it's not zero: you still have to tell us what you want to call the cache. But that's it, give it a name. And behind the scenes, we make all the best-practice decisions on your behalf, whether it's replication, going multi-AZ, which machine type, how many shards are needed, heat management. All of that just kind of gets taken care of. The other problem with serverful caching is that you're doing very direct capacity management. You have to think hard about, you know, your peaks coming up for any events that are happening and whatnot, and you have to scale up proactively.
And then you either have to scale down proactively, or you just kind of absorb the cost. And you're thinking about scale in units of instances, which is not how you think about scale on S3 or DynamoDB. With DynamoDB, you're thinking about TPS; that's all that matters. It doesn't matter how many i3en or C6i instances you need. All of that is completely abstracted away. You just tell Dynamo, hey, I'm going to do this many TPS, and it just kind of works. So for us, we're aiming for a very similar vision, where the customers don't have a lot of knobs and we bake in all of the best practices that we would build ourselves. It's limiting in some cases, but it's a really big boost to productivity, because developers can get started instantly and get a cache integrated into their systems.
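
To make that concrete, here's a sketch of the experience being described. The module, client class and method names below are illustrative stand-ins, not the actual Momento SDK:

    from momento_sketch import CacheClient  # hypothetical module and client

    client = CacheClient(auth_token="...")  # hypothetical constructor

    # The one and only configuration: the cache's name. A blocking call that
    # returns in under a second, with replication, multi-AZ, shard count and
    # heat management all decided behind the scenes.
    client.create_cache("my-cache")

    # From then on it's just gets and sets.
    client.set("my-cache", key="user:42", value="profile-json", ttl_seconds=60)
    hit = client.get("my-cache", key="user:42")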


Yan Cui: 13:03  

There are also some interesting differences compared to more serverful applications, where there's a lot more local caching in your application: you've got lots of concurrent operations being handled by your code, so you can use local caches; you can even store data in, say, SQLite if you want to. Whereas with Lambda functions, there are a lot more concurrent units, each doing one transaction at a time, so you can't really take advantage of that local caching as much as you could with a more serverful application. And I think there are also a few hard lessons that you're going to learn when it comes to operating the cache cluster, things like fallbacks and, you know, failover when a node goes down. Those are the kinds of things that people don't think about until they absolutely need to, and then they realise that, oh, we don't actually know how this works. I remember a couple of years ago, I was working for a company and we were using Redis with ElastiCache. And at the time, there was no Redis Sentinel support. And we learned the hard way that ElastiCache used DNS failover, which meant that when the primary node went down, it took a few minutes to switch to the secondary node, which meant our app was essentially down for a few minutes. So we eventually ran Redis ourselves, with Sentinels, so that we got that sub-second failover time. Again, those are the sorts of things that you often have to think about, that put you back into the mindset of thinking about servers, thinking about uptime, thinking about networking, because you also need the VPCs and all that stuff. All the things that I want to get away from having to worry about when I'm building my serverless applications.
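
For contrast, a sketch of the kind of in-process caching a serverful app leans on, and why it helps less in Lambda: the module-level dictionary only serves requests that land on this one sandbox, and with no background refresh thread an entry can sit stale across a freeze. The TTL and names are illustrative:

    import time

    _local_cache = {}   # survives only warm invocations of this one sandbox
    TTL_SECONDS = 30    # illustrative time-to-live

    def get_with_local_cache(key, load_fn):
        entry = _local_cache.get(key)
        if entry is not None:
            value, cached_at = entry
            if time.time() - cached_at < TTL_SECONDS:
                return value  # hit, but each concurrent sandbox has its own copy
        value = load_fn(key)  # miss: load from the source of truth
        _local_cache[key] = (value, time.time())
        return value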


Khawaja Shams: 14:42  

Yeah, I mean, you hit on a lot of really important points. And I think at a high level, what we observe is customers keep having outages with their caching, or with the setup of their servers, and they learn from these mistakes one outage at a time, and it is painful to watch some of these things. There's a really awesome blog by the folks at Tinder, where they did incredible engineering to deal with maintenance windows. They subscribe to SNS notifications, and when the notification comes in that a maintenance event is happening, they refresh the state on their application servers much more frequently, so that they can absorb that outage. And even with that, they had to increase their capacity by like 10 or 12% to deal with, you know, idle connections, because they were getting back pressure from connections that were starting to time out. And while this is an impressive engineering feat, really impressive work, it's painful to see customers have to make that investment. And Tinder had to do it, because, I think there's a quote on that blog that at one point maintenance windows were the leading cause of outages for the Tinder application. And just think about the impact of that on society, right? Like, people couldn't get dates because of maintenance windows. That's a real, you know, problem. But jokes aside, when you're thinking about caching in the serverless world, some of the things that you take for granted in the serverful world kind of go away. So if you are, you know, an old-school developer that was building things on Tomcat, where you had hundreds of threads simultaneously sharing a cache, your cache hit rates are going to be meaningfully higher, because you've got hundreds of threads that all have access to this local cache. And, you know, whether you're bursting from one request per second to hundreds of requests a second, they all kind of get to coalesce the lookups and minimise them. Now, you suddenly go into the Lambda world, where if you get 1000 concurrent requests, like, simultaneously, especially if you're using provisioned concurrency, you might actually spin up 1000s of Lambdas, and each one of them is going to start with a cold cache. So the value of the lookaside cache is more important than ever in this, you know, serverless model. And on top of that, you also cannot rely on the local caches. Even if you have a steady rate, even if you're doing one or two TPS, you might get into situations where a Lambda might get suspended for a little bit and then resume with a value that might be stale. You don't have this notion of background threads that can refresh the cache periodically for you anymore, the way you could in a serverful environment. So a lot of these, you know, assumptions and things that we took for granted have to be challenged. So naturally, you move to lookaside caching. And that's when you run into the other problem you mentioned, which is connections. So if you have 1000 Lambdas that just spun up, and each issues a brand-new connection, they're going to cause pain on your caching server. One, because you suddenly have to establish a connection each time.
So every time a new Lambda starts, you're establishing a new connection. And two, you might run out of connection count on your cache node. And some of the things that we've noticed with some of these cache products is that they don't handle lots of concurrent connections really well; they're optimised for a few connections. And this gets customers into situations where the cache actually slows them down or causes outages, because they're kind of waiting for these connections to be established, or just kind of sitting in a backlog waiting for that node to take on the request. Now, this particular problem of connection management is not new. Facebook and Twitter solved this problem, you know, over a decade ago, by putting proxies in front of the cache. So Twitter, a very long time ago, open-sourced something called twemproxy. Facebook uses mcrouter internally. Pinterest also uses mcrouter internally. And what this proxy is doing is absorbing a lot of the connections in one place, and then multiplexing them onto existing, sustained connections to the Memcached nodes or the Redis nodes behind the scenes. And that layer of indirection is super helpful in minimising the impact on the cache node, which then gets to focus solely on serving the cache. And this is something that just doesn't exist today; this layer of proxy is just not there for the existing serverful caching solutions.
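
Here's a sketch of the lookaside (cache-aside) pattern being discussed: check the cache first, fall back to the database on a miss, then populate the cache so later readers hit. The cache client is redis-py-style and the table is a boto3 DynamoDB table; names and TTL are illustrative:

    import json

    def get_user(user_id, cache, table, ttl_seconds=300):
        key = f"user:{user_id}"

        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)  # cache hit

        # Cache miss: fall back to the database of record.
        item = table.get_item(Key={"user_id": user_id}).get("Item")

        # Populate the cache so subsequent lookups are served from memory.
        if item is not None:
            cache.set(key, json.dumps(item, default=str), ex=ttl_seconds)
        return item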


Yan Cui: 19:13  

You know, with the problems that we've talked about, in terms of the overhead and the challenges of managing a cache for your Lambda functions, I actually see a lot of people end up using DynamoDB as the caching layer, because it's just so much easier. And sure, it works when you don't have a lot of operations, when you don't have a really high TPS. But in those really high-scale scenarios, or when you're dealing with fairly large cache payloads, DynamoDB is just not the right solution anymore. So in this space, there have been some other companies looking at building caching solutions suitable for serverless applications. The one that I saw a couple of years ago has, I think, rebranded to Upstash, and now you guys are building Momento. So tell us about Momento, and also, how do you guys differ from, say, something like Upstash?


Khawaja Shams: 20:06  

Yeah. DynamoDB being the wrong tool for the job, by the way, I absolutely agree with that. You know, one of the jobs of a cache is to absorb spikes. And the second is that it helps you reduce cost on your expensive databases. Dynamo is not meant for either of those. With Dynamo, if you're doing a lot of reads, it becomes expensive. So if you treat Dynamo as a cache, especially for larger objects, you end up spending a lot of money, you don't get the performance that you would out of an in-memory cache sitting on EC2 directly, and you have throughput limits: you can't read a key from Dynamo at more than 3000 TPS. Now, that's not bad-mouthing Dynamo at all; it's just the wrong tool for this specific problem. So that's why it's not a direct replacement for a cache. But it is as close as it gets to a serverless cache for our customers. Now, what we're doing at Momento is, again, we started working backwards from the customer. We started with a custom protocol that is gRPC-backed. It allows customers to do a lot with even a single socket; you can do many, many TPS on a single socket. And one of the problems that we observe with regards to the outages customers are having is misconfigured clients: clients that have unbounded connection pools, clients that have untuned timeouts, clients that have memory leaks in them, and so forth. And we realised that we can't just offer a, you know, Redis-compatible or Memcached-compatible service and ignore the clients; we have to solve that problem. If we're really going to help our customers improve performance, cache hit rates and availability, we really need to fix the client. So we built this protocol that actually reduces the load on our customers' application servers. But we also hand-built our SDKs, so that all of these tuning parameters for connection pooling and timeouts and retries, all of that is baked in. That's something AWS does really, really well, by the way: if you use the Dynamo SDK, a lot of the best practices for connection management are baked in. But when you go to, like, the Memcached or Redis clients that you download off of GitHub, a lot of them don't have good defaults that are optimised for availability and performance. So we started there, and we baked that in. The second thing that we did is we built in this proxy layer that absorbs the, you know, 1000s of concurrent connections that you may have coming at us from Lambda and so forth. But it also gives us the ability to do deployments and scale out without impacting customers' cache hit rates, without customers having maintenance windows. So we can actually seamlessly scale up, scale down, and do deployments without the customers ever even noticing. And this is why Dynamo and Momento don't have planned outages; there are no maintenance windows. We spend a lot of effort making sure that all of this kind of fades away into the background, and you never have to think about it. And then lastly, we deal with a lot of problems. One of the things that Daniela and I have been doing is we study caching outages for a living. We find caching outages, we meet people who have had caching outages, and we just interview them, and we learn from what happened and what they did. And then we go talk to people who have run some of the largest caching clusters and learn the techniques that they have developed over time.
And we try to bake all of that into this automated service, so that our customers don't have to learn these lessons one outage at a time. So, some of the things that go wrong: when you have a multi-node caching cluster, you run into hot shards. We monitor for hot shards, and we automatically split your shards when they start to get hot. People run into hot-key issues, where a single key topples over a single node in the cluster. There's a really fun outage to read about... well, not fun, but a really good lesson to learn from. There's a company called Shortcut; they did an RCA. A single node ran out of bandwidth and went down, and there was a cascading failure that took them over a month to detect and fix, over the course of five outages. We baked those kinds of lessons in, so that our customers aren't, you know, dealing with these outages, and they can focus on just the gets and sets, while the scaling, the heat management, the capacity management and the deployments all kind of happen behind the scenes.
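
On the misconfigured-clients point above, here's a sketch of the kind of tuning a hand-built client bakes in, expressed with redis-py for concreteness: a bounded connection pool, explicit timeouts, and no retry storms. The endpoint and the specific values are illustrative, not recommendations:

    import redis

    client = redis.Redis(
        host="my-cache.example.internal",  # hypothetical endpoint
        port=6379,
        max_connections=50,          # bounded pool: a burst can't exhaust the node
        socket_connect_timeout=0.5,  # fail fast when establishing connections
        socket_timeout=0.2,          # cap per-request latency instead of hanging
        retry_on_timeout=False,      # don't pile retries onto an already-hot node
    )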


Yan Cui: 24:33  

Okay, so I guess for now you guys are focusing on the basic operations around gets and sets. Is there any sort of view or plan to support some of the things that you get with, say, Redis in terms of data structures? I think one of the things that Upstash was promoting was the fact that they support the Redis API, so that you can do a lot of the really cool things that Redis lets you do. Is that something that you guys are also looking to do in the future?


Khawaja Shams: 25:01  

Yeah. Upstash is doing a great job. I think they're meaningfully advancing the state of serverless caching by making Redis completely serverless, and that is a really sound approach to making Redis available. Redis is an incredibly versatile Swiss Army knife. It's also a great, you know, a great database. But as Redis evolves, it's turning more and more into a database and less and less into a cache; it's got lots of capabilities. And our take is slightly different. We are very limited in the number of capabilities that we offer, but we are designing for instant elasticity, instant scale, and high availability with predictable performance. You can optimise for, you know, lots of features, or for instant elasticity, consistent performance and high availability, but doing all of that together is incredibly tough. So while we remain humbled by how many features Redis has, we are very slowly adding capabilities into our cache offering, and doing it in ways that are limiting but enforce the best practices, so that our customers will continue to have consistent latencies and high availability. Now, some of the things that are, you know, very close to the top of our roadmap are things like hash maps, sets, you know, lists and things of that nature; they're going to be out on Momento pretty soon. The things that are a little bit harder are things like stored procedures. Redis has Lua functions, and those are really, really powerful. They're also really hard to implement in a serverless environment, because they can have a lot of variability. So those, I think, fall into the break-glass category of: I know what I'm doing, let me just have at it with the server. It's not something we offer today. But the rest of the capabilities we're just going to incrementally keep adding, in ways that make sense, with an API that would also make sense if we were building it from the ground up, rather than just supporting an existing API that's there.


Yan Cui: 27:05  

Okay, sounds good. That actually feels very much like the approach that you guys took with DynamoDB: a constrained set of operations in terms of what you can do, but everything that you're allowed to do can scale. Basically, like you said, it forces you to think about best practices, making sure that whatever you're doing is going to be scalable, resilient and safe, rather than letting you shoot yourself in the foot in six months' time when you've got more traffic.


Khawaja Shams: 27:33  

That's right. And with microservices, it just becomes easier and easier to have that one microservice that just shows up, runs a bad query or does something funky, and boom, everybody who was relying on that cache kind of just suffers.


Yan Cui: 27:46  

Yeah. I mean, Redis is great, and I do like Redis. But like you said, there are just so many things you can do with it nowadays. I've seen people even use it for pub/sub, which, you know, in the right cases is really useful. But then again, you go from relying on this thing that is supposed to be my caching layer, that's supposed to be ephemeral, that I shouldn't have to worry about going away because it's just a caching layer, to something that I'm actually depending on being live all the time, so that I can get my updates from some background process, so that my user-facing APIs or whatnot can do their thing, whether that's refreshing the cache, refreshing some data, or sending notifications to customers. So again, like you said, it becomes a Swiss Army knife piece of infrastructure in my architecture, and now I've got this one single point of failure: if it goes away, then a lot of things are going to break.


Khawaja Shams: 28:36  

Yeah, I mean, if you really know what you're doing, and if you understand the ramifications of some of the choices that are available in these large Swiss Army knives, these are great tools, and I have a lot of admiration for each of them. It's just, you know, we're trying to aim for that zero-config world, where you can just integrate and rely on this thing to be there, without necessarily having to turn into an expert distributed systems engineer to run a multi-million-TPS service.


Yan Cui: 29:08  

Okay, so where are you guys at with Momento now? Can people actually go and sign up and try it out for themselves?


Khawaja Shams: 29:15  

Yeah, we're in private beta. You can download the CLI, you can, you know, sign up for an account right from the CLI, and start playing with it right away. It's completely self-service. One of the internal metrics we have is that we want a brand-new developer to be able to download, install, create a cache and start doing gets and sets against it in under five minutes. If you do try it, we would love to know how long it took you, because it's really important to us. And this is not about provisioning time; it's about ramp-up and onboarding. It is incredibly important to both me and Daniela that customers can onboard onto the service in a delightful manner, quickly. And then, once they're onboarded, creating subsequent caches has got to be, you know, a one-second thing, where you just call create cache and it just works, the way a bucket works in S3, or a queue works in SQS, or a table works in DynamoDB.


Yan Cui: 30:09  

Okay. I'll put the links in the show notes, so that anyone who's interested in trying it out can go ahead and sign up, and then maybe let you know how long it took them to sign up and get their first API call going.


Khawaja Shams: 30:22  

Absolutely, we'd love that. And we have SDKs in a lot of popular languages. We support Rust, Java, JavaScript, Go, Python, C#. I think somebody asked us for a Perl SDK. We don't have one yet, but, you know, you can certainly do a lot of bash scripting with our CLI, which is written in Rust.


Yan Cui: 30:42  

Yeah. Rust is another language that is gaining a lot of popularity, certainly amongst Lambda users.


Khawaja Shams: 30:48  

Absolutely. I mean, it's better for cold starts, it's super high performance, and it's just a joy to write in. Our caching engine is written entirely in Rust, and we have been building up our Rust expertise internally as well. It's a beautiful language and we really enjoy it.


Yan Cui: 31:05  

Cool. Yeah, thank you so much for taking the time to talk to us today. I'm looking forward to trying Momento myself, and maybe giving you my feedback and telling you how long it took me to get signed up and get going.


Khawaja Shams: 31:17  

Yeah, thank you so much for having me. I really enjoyed the conversation. Feel free to reach out if you have any questions at all.


Yan Cui: 31:22  

Thank you. And thank you, everyone, for listening today. And go to momentohq.com...


Khawaja Shams: 31:26  

And give Momento a test drive.


Yan Cui: 31:28

Cool. Take it easy, everyone. See you guys next time. 


Yan Cui: 31:43  

So that's it for another episode of Real World Serverless. To access the show notes, please go to realworldserverless.com. If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at productionreadyserverless.com. And I'll see you guys next time.