Real World Serverless with theburningmonk

#58: Building a serverless crypto platform with Louis Quaintance

January 12, 2022 Yan Cui Season 1 Episode 58

You can find Louis on LinkedIn here.

Links from the episode:

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.

To learn how to build production-ready serverless applications, check out my upcoming workshops.


Opening theme song:
Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0

Yan Cui: 00:12  

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real-world practitioners and get their stories from the trenches. Today, I'm joined by Louis Quaintance from AQRU. Hey, man.


Louis Quaintance: 00:23  

Hey Yan, it's great to be here. 


Yan Cui: 00:26  

Yeah, Louis. So can you tell us about your platform, which, as you explained to me earlier, is kind of like a bank savings account for your cryptocurrencies? It's really interesting how you've built this whole platform using mostly serverless components. So before we dive into the architecture side of things, maybe just explain to us: what is AQRU? How does it work?


Louis Quaintance: 00:47  

Sure, sure. I mean, before I do that, Yan, I must say it's really an honor to be on your show. I remember seeing the talk you gave in London at the JavaScript Exchange, I think it was something like that, and at the time I was really blown away by serverless, and I wasn't using it. So it's great to be here, and thanks very much for having me on. Yeah, so I work at AQRU. I've been there about six months, and it's been a real journey. What we're looking to do is really simplify the process of buying and investing your crypto for the crypto novice: someone who's maybe got a bit of money to spare, but doesn't have a track record of investing in crypto, and wants things really simplified. So we're looking to make it known to users that they can take the crypto they already have, or they can deposit real money on our platform, and we will go and invest that for them and earn them a yield. Part of that is because a lot of people don't realize that just by holding crypto, they can earn money: they can stake that money on crypto exchanges in return for a reward. It's really simple with our products: you deposit your euros or pounds, or you deposit the existing cryptocurrency that you bought elsewhere, and we then invest that for you and pay a yearly return of 12% on your stablecoins, 6% on Bitcoin, and 6% on Ethereum. We're really trying to make it simple, but also open up this avenue for people to earn money on crypto without having to buy and sell and speculate all the time.


Yan Cui: 02:35  

That's really interesting, because from the way you describe it, it really feels like a savings account where you're getting some yield from the money you just have sitting in the bank. But I guess in that case, are there any financial regulations that you guys need to adhere to? Anything that, as a customer, I should be thinking about in terms of, I don't know, the financial regulators in the UK, things like that? In terms of protection if a bank goes bankrupt, I guess there's some protection around the money I have in the account. Does that come into play for things like cryptocurrencies?


Louis Quaintance: 03:11  

Yeah, I mean, obviously, on the one hand, cryptocurrency at the moment is unregulated. But we do have very, very strict regulations in the way we handle cryptocurrency in our company. We go through a whole series of checks to verify who the person is when they come onto the platform. One of the first things they do is go through a familiar know-your-client process where they scan their passport and take a selfie. That enables us to scan them against a series of money laundering databases and financial crime systems, so that we can verify that the people coming onto the platform are actually here to do good business; they're not here to try and hack the system. But also, when you do deposit, the money's insured. We work with a third party that enables us to integrate and work with the blockchain, and they keep the customers' money in a vault, which allows us to really lock down and provide a very high level of security, but also a level of insurance. So when we do any investments, that money is insured as well, similar to how it would be in a bank. So there's a very high level of security, and there's also a provider who actually insures the crypto as well. So there's that sort of double layer, similar to what you would get in a normal savings account or a normal banking process, really.


Yan Cui: 04:46  

Okay, that's really good to know. And I guess in that case, you need some kind of process that handles the whole validation flow of getting your passport photos, having someone verify them, and maybe calling some other government APIs to make sure that, you know, your passport number is not on some naughty list of people who shouldn't be doing these kinds of things. I guess in that case, what does the architecture look like on your backend? I imagine you've got maybe some kind of workflow engine to handle that particular validation process, and you've got maybe some APIs. And you mentioned that this is mostly all serverless. Can you maybe just paint us a high-level picture of what the architecture looks like for your system?


Louis Quaintance: 05:29  

Yeah, sure, sure. So we've built two kinds of front ends to begin with. One of them is obviously the web, so that's in React, and we have a mobile app as well, which is in React Native. And we use a third party, a well-known identity verification third party (I think Monzo uses it as well) called Onfido. Onfido allows you to embed their SDK into your web app or your mobile app, and essentially those drop-in components allow us to capture a scanned image of the passport and the user's selfie, if you like, and that data is then pushed to their API. And really, in the case of that third party, and pretty much all our other third parties, our integration is via webhooks. One of the things that made serverless really nice in this case was that we were able to really quickly spin up an API Gateway with a Lambda attached. A lot of these processes are very asynchronous (well, they obviously are by definition), but they're also not happening all the time; we don't need a permanently running server to process them, because it's not happening every minute, it's happening throughout the day at different times. So just being able to fire up those applications when needed was really helpful. A lot of it is a case of us receiving the webhook event and auditing it in S3. We're typically using SNS at the moment as a kind of event backbone, and we found the performance of that was well within what we needed. So we typically publish the events to SNS, and then we have different SQS queues that listen out for certain types of events, and then Lambdas that read off the queues and process the events as they come in, typically talking to Dynamo, or Aurora Postgres in the case of the more numerical, financial type of transactions.
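A minimal CDK sketch of that SNS-to-SQS-to-Lambda fan-out might look something like the following; the topic, queue, and event names here are illustrative, not AQRU's actual code:

```typescript
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as sns from 'aws-cdk-lib/aws-sns';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import * as subs from 'aws-cdk-lib/aws-sns-subscriptions';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

export class EventBackboneStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // One SNS topic acts as the event backbone.
    const events = new sns.Topic(this, 'PlatformEvents');

    // A queue per event type, subscribed with a filter policy so it only
    // receives the events it cares about.
    const kycQueue = new sqs.Queue(this, 'KycEvents', {
      visibilityTimeout: Duration.seconds(60),
    });
    events.addSubscription(new subs.SqsSubscription(kycQueue, {
      filterPolicy: {
        eventType: sns.SubscriptionFilter.stringFilter({ allowlist: ['kyc.checked'] }),
      },
    }));

    // A Lambda reads off the queue and processes events as they come in.
    const kycProcessor = new lambda.Function(this, 'KycProcessor', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'kyc.handler',
      code: lambda.Code.fromAsset('dist'),
    });
    kycProcessor.addEventSource(new SqsEventSource(kycQueue, { batchSize: 10 }));
  }
}
```

The filter policy is what does the "queues that listen out for certain types of events" part: each consumer only sees the message types it subscribed to.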


Yan Cui: 07:50  

Okay, so I guess in that case, are you using the S3 notification mechanism to push an event into SNS whenever an event has been captured via the webhook by the Lambda function and written to S3 for audit? And then that SNS message goes into SQS and triggers some other Lambda function to pick up that payload and do some work with it?


Louis Quaintance: 08:14  

Sure. We're still at roughly the early stages, in the sense that we only started six months ago, and at the moment we're just using S3 as an audit, really. So we're not doing that with the notifications. The Lambda sits behind the webhook, behind the API Gateway, and is just writing to SNS directly at the moment, and we're using S3 just as an audit, as a backup really.
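In other words, the webhook handler itself does both steps. A sketch of what such a handler might look like (the bucket, topic, and event names are assumptions for illustration, using the Node AWS SDK v2):

```typescript
import { S3, SNS } from 'aws-sdk';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const s3 = new S3();
const sns = new SNS();

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  const body = event.body ?? '{}';

  // Audit the raw webhook payload to S3 first, keyed by time.
  await s3.putObject({
    Bucket: process.env.AUDIT_BUCKET!,
    Key: `webhooks/onfido/${Date.now()}.json`,
    Body: body,
  }).promise();

  // Then publish the event onto the SNS backbone for the downstream queues.
  await sns.publish({
    TopicArn: process.env.EVENTS_TOPIC_ARN!,
    Message: body,
    MessageAttributes: {
      eventType: { DataType: 'String', StringValue: 'kyc.checked' },
    },
  }).promise();

  return { statusCode: 200, body: 'ok' };
};
```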


Yan Cui: 08:38  

Okay. And have you guys looked at using perhaps EventBridge for this instead of SNS? Because that seems to be quite the trend nowadays: people are moving away from using SNS and SQS and moving towards using EventBridge, and having a centralized event bus for the entire system.


Louis Quaintance: 08:56  

Yeah, I looked into it. Under the pressures of time, I had some working examples of how to get SNS going fast, so I used that. It's definitely on our roadmap to look into it. We're only using EventBridge at the moment for triggering scheduled events, really; we're using the sort of event sources module you get with the CDK, which I think uses EventBridge under the hood. So yeah, it's definitely something that we'll look towards in the future.
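For reference, a scheduled Lambda invocation in CDK is a small rule (an EventBridge rule under the hood, as Louis says); the function and rule names here are hypothetical:

```typescript
import { Duration, Stack } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';

// A daily schedule that invokes a Lambda, e.g. a hypothetical
// daily yield calculation.
export function addDailySchedule(stack: Stack, fn: lambda.IFunction): events.Rule {
  return new events.Rule(stack, 'DailyYieldRule', {
    schedule: events.Schedule.rate(Duration.days(1)),
    targets: [new targets.LambdaFunction(fn)],
  });
}
```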


Yan Cui: 09:29  

I guess that's quite an interesting point there, that you've built this whole thing within just a few months, which is quite an accomplishment. I guess the fact that it's serverless probably helps you a little bit with that: you don't have to worry too much about infrastructure; you can just rely on AWS to provide the infrastructure for you. And it sounds like cost is maybe a concern as well, where you don't have to run a web server 24/7 just to handle the callbacks from time to time.


Louis Quaintance: 09:56  

Yeah, that's spot on. I mean, although we've been blessed as a company in the last few months, we've been bought out by a bigger entity and listed in the UK, when we first started, you know, we didn't know how much cash we'd have and how long that would last. And we only had myself and a good friend of mine, Gary, who I'd worked with before, and who had a lot of experience with mobile apps and these types of systems. So we didn't have the budget to, say, hire a big platform team to manage the infrastructure, and I suppose that was a big win with serverless, really. In my previous company, we were using Kubernetes, and, you know, it had some merits and so on, but it was really expensive to run and to operate, and it requires a number of platform engineers to manage it. In a previous role that I'd been in for about six, seven months, I used the AWS CDK heavily, and that's just been absolutely fantastic, really, for managing all this stuff. I'd historically used a bit of CloudFormation, which I could never really get my head round, all the syntax, and writing that fast enough, you know what I mean. And I'd used the Serverless Framework as well, which had been good, but maybe for the breadth of things we wanted to do it was a little bit restrictive and didn't give us all the freedom. I didn't want to be hand-cranking YAML or JSON files; I wanted to write it in code. And I've never really liked Terraform as a sort of language or, you know, scripting thing, although it's very powerful. But I really liked CDK. CDK allowed us to set up all this infrastructure really quickly when we didn't have a platform team, so that's been massive for us. And tied to that, although it's not directly related to serverless, is that we were able to really simply deploy our software with GitHub Actions as well. Using GitHub and GitHub Actions, and using the CDK, we were able to orchestrate all the deployments really nicely. And again, that ties into the fact that, in a previous job, and in many jobs, people were always using Jenkins, which is great, but it was a self-managed Jenkins, so that was one thing. And also, a lot of companies historically have used Bitbucket Pipelines, which are fine, but my personal view is that I found using GitHub Actions to be a lot easier. I don't know what you think, but that's been my experience.


Yan Cui: 13:02  

Yeah, I've used GitHub Actions on quite a few projects. It is quite easy to set up, and I have to say I quite like the fact that with those, I forget what they call them now, the GitHub built-in environments, you can just bring in Node, you can bring in Python quite easily, without having to find the right Docker image with all your dependencies installed. And, you know, otherwise you've got to worry about: okay, open source Docker images may not be safe, they may have vulnerabilities that you don't know about, so you have to build your own image, and then you have to find places to put that image, using ECR and stuff like that, which just adds more and more layers of things you have to think about and do. Whereas with GitHub Actions you just go: oh yeah, just use this action to bring in Python, so I can use the AWS CLI as part of my pipeline, and things like that. I do like it a lot; it's quite nice. And I really like the point that you made earlier about how Kubernetes is expensive because it requires a team to operate. I think that's one of the costs a lot of people don't think about. They look at containers, they look at Lambda, and they think: oh yeah, if you're handling X number of requests per second, Lambda seems quite expensive. But if you need a whole team that you have to pay for to manage your Kubernetes environment, then suddenly you've got to think about the total cost of ownership, which just drastically changes the picture, because engineers who know Kubernetes well and have operated it in production at scale are hard to find, and they're quite expensive to recruit and retain as well. So that's a really important point that a lot of people just don't think about. They only think about the operational costs, you know, what AWS charges you, because that's an easy-to-measure number that you see every month.


Louis Quaintance: 14:49  

Yeah, absolutely. And, you know, at a previous company with Kubernetes, without a huge amount of traffic, our AWS bill was easily 20,000 pounds a month. And we're nowhere near that with Lambda, with maybe even more traffic than we had at that previous job. So yeah, it's a no-brainer for me. And I think one of the things that's really at the crux, the core of the system, is, well, we use all AWS services, but one of them being Cognito, which has been a really good experience. And I mainly say that because in almost every job I've worked in, there's always been a tendency to hand-crank the authentication system yourself. So using something that's off the shelf there, again, which is so easy to set up, has been a big win, I think, with Cognito.
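For a sense of how little setup that takes, here's a minimal CDK sketch of a Cognito user pool with an app client and a user group; all names are illustrative, not AQRU's actual configuration:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as cognito from 'aws-cdk-lib/aws-cognito';

export class AuthStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const userPool = new cognito.UserPool(this, 'Users', {
      selfSignUpEnabled: true,
      signInAliases: { email: true },
      autoVerify: { email: true },
      passwordPolicy: { minLength: 12 },
    });

    // A client for the React / React Native apps (used via Amplify JS).
    userPool.addClient('AppClient', {
      authFlows: { userSrp: true },
    });

    // A group that can later be used to authorize AppSync queries/mutations.
    new cognito.CfnUserPoolGroup(this, 'BackOfficeGroup', {
      userPoolId: userPool.userPoolId,
      groupName: 'back-office',
    });
  }
}
```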


Yan Cui: 15:54  

Okay, so let's maybe talk about, I guess, the app side of things, because it sounds like you are using Cognito for user authentication, which means that you've got a mobile app or web app people can log into. On the front end, you've got React and React Native. And I guess, on the backend, are you also using API Gateway for a REST API, or are you using more of a GraphQL API with AppSync?


Louis Quaintance: 16:16  

Yeah, so we're using a little bit of API Gateway, but the majority is using AppSync. And again, that ties in perfectly with Cognito, because I love the way you can easily create roles and groups in Cognito and then assign those groups to different mutations or queries in AppSync. That, again, has been super easy for us to add in, because we've also got a back office system we need to maintain, where we have different permissions, different user groups, that sort of thing. On the front end, we're using Amplify JS to integrate with the likes of Cognito and handle that authentication from the front-end perspective. But yeah, we're using AppSync, and that's been a positive experience, I think, on the whole. The only tricky thing, maybe, and there's probably something I'm not using, is that, for me, the optimum developer experience I've had with Node or TypeScript has been using an Express server, with the ability to run it locally and test it with SuperTest or whatever you're using. That is just a fantastic experience. You can create a similar thing with AppSync, I guess, but what we're typically doing when running it locally, or testing it, is running the tests against the UAT Amazon services. So if we've got, say, an AppSync Lambda resolver that's talking to Dynamo, we just hit the Lambda locally, mocking out the event object and passing it in like it would be passed in a real-world scenario, and talking to the real UAT Dynamo. I don't know if there's something you'd recommend, in terms of whether you can create the full AppSync environment locally, or whether that's worth doing.
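A sketch of that style of test, assuming a Jest-style test runner; the resolver module, field names, and table name are hypothetical, and only the event object is mocked:

```typescript
import { handler } from '../src/resolvers/getWallet';

// A stub of the event AppSync passes to a Lambda resolver (direct invocation).
// The identity claims mimic a Cognito-authenticated caller.
const event = {
  arguments: { walletId: 'wallet-123' },
  identity: { sub: 'user-abc', claims: { 'cognito:groups': ['customer'] } },
  info: { fieldName: 'getWallet', parentTypeName: 'Query' },
  source: null,
};

test('getWallet resolver reads from the UAT table', async () => {
  // The handler talks to the real UAT DynamoDB table via the AWS
  // credentials in the environment; nothing else is simulated.
  process.env.TABLE_NAME = 'wallets-uat';
  const wallet = await handler(event as any);
  expect(wallet.id).toBe('wallet-123');
});
```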


Yan Cui: 18:26  

Yeah, usually I don't find that's really worthwhile. The only time I find it worthwhile simulating the whole API locally is when I'm doing server-side rendering. Because, even for API Gateway, I can inspect the REST response, the JSON response, and figure out whether it's returning the right thing or not, but I can't render HTML and CSS in my head. So when I'm doing server-side rendering, then yeah, I do find it useful to have the ability to run API Gateway locally, so I can point a browser at localhost port 3000 or something like that. For a lot of the AppSync APIs I've developed, I've used something similar to what you're describing, which is what I call an integration test, where I essentially run the function locally but have the function talk to the real DynamoDB tables and other AWS services, to make sure it's actually doing the right thing, and trigger the function with the right payload that looks like it's coming from AppSync or API Gateway. With AppSync, you can also do without Lambda; you can just write VTL. Amplify's got a couple of open source libraries, smaller modules that they publish to npm, that you can use to simulate, I guess, running VTL code. I've used those to write unit tests for cases where I'm writing more interesting, more complicated VTL code than I'd normally like to write; sometimes it's just not worthwhile putting a Lambda function behind a resolver when you can write VTL code and do some simple, I don't know, array manipulation or something like that. And I also tend to write quite a few end-to-end tests for my GraphQL APIs, just to make sure that the whole thing is actually working end to end once I've got an API deployed.
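An end-to-end test against a deployed AppSync API can be as simple as a signed HTTP call; a minimal sketch, assuming the API uses Cognito User Pools auth and that the endpoint URL and a test user's ID token are supplied (the query, field, and variable names are made up):

```typescript
import fetch from 'node-fetch';

// Hit the deployed AppSync endpoint the same way a client would.
async function queryWallet(idToken: string): Promise<unknown> {
  const res = await fetch(process.env.APPSYNC_URL!, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: idToken, // Cognito User Pools auth mode
    },
    body: JSON.stringify({
      query: `query GetWallet($id: ID!) { getWallet(id: $id) { id balance } }`,
      variables: { id: 'wallet-123' },
    }),
  });
  const { data, errors } = (await res.json()) as any;
  if (errors) throw new Error(JSON.stringify(errors));
  return data.getWallet;
}
```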


Louis Quaintance: 20:14  

Sure, yeah, perfect. 


Yan Cui: 20:17  

And I also quite like, I guess, CDK. I'm not sure if you're using the Serverless Stack framework, which is, I guess, like a layer on top of CDK. It gives you some other constructs, and I think it's got some tools that allow you to invoke a Lambda function locally and watch the file as it changes, things like that. That seems to be getting a lot of traction nowadays. What I also sometimes do, with the Serverless Framework, is: I'm using Lumigo for a lot of the monitoring stuff, so when something happens, I can see the alert from Lumigo, which writes to my Slack channel. I can go to Lumigo and capture the invocation event, which I can then put into a JSON file locally, and I can then rerun that event against my function locally and use VS Code to put breakpoints into my code, so that I can step through the code line by line, if it's something that's a bit more interesting, a bit more difficult to debug. But yes, with CDK and the Serverless Stack framework, I think you can do something similar. I haven't tried it myself, but I've seen quite a few people mention that they've started to use the Serverless Stack framework with CDK.
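The replay script for that workflow is tiny; a sketch, where the handler path and event file are hypothetical:

```typescript
// rerun.ts: replay a captured invocation event against the handler locally,
// e.g. `npx ts-node rerun.ts` under the VS Code debugger so breakpoints hit.
import { handler } from './src/kycProcessor';

const capturedEvent = require('./captured-event.json'); // event saved from Lumigo

(async () => {
  const result = await handler(capturedEvent);
  console.log(JSON.stringify(result, null, 2));
})();
```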


Louis Quaintance: 21:25  

Oh, that's really good. Yeah, I'll definitely look into that. As you touched there on Lumigo, I think that's been an amazing addition for us as a platform, really. The Slack integration is great, but I was also really blown away, for those who haven't used it, by just how easy it is to set up. You create an account, and it just deploys a CloudFormation template and hooks into your Lambda functions, doesn't it? You don't actually have to change any of your code; it just deploys and immediately starts monitoring all your serverless applications. And that's been absolutely... we've detected so many little issues through that and fixed them super quick. And the fact that it pushes, as you said, the event object into Slack means you've not got to go somewhere else; you can have a quick view of what's going wrong, and that's been perfect. And we've actually hooked it into our own kind of logger. We've got something similar to, I think, what you might have put together at DAZN when you were there, like that sort of power tools thing?


Yan Cui: 22:36  

Oh, yeah, dazn-lambda-powertools. Yeah, that was me.


Louis Quaintance: 22:39  

We've got something really similar to that, actually, and we've hooked it into certain log levels, so that it publishes any error to Lumigo as well. We've sort of toned that down a little bit as we've gone live; as we started to see what really is an error and what's just noise, we've started to tone some stuff down. But Lumigo just works so well with Lambda and serverless for monitoring, so that's been really great. And I think I'm only just at the beginning of what you can do with it; there's a certain cost analysis that you can get out of it as well, so I'll be looking into that.


Yan Cui: 23:21  

Yeah, it looks at those transactions, sees what components are part of the transaction, and then kind of gives you an estimate of the cost for that particular transaction. One of the things that's probably missing right now is being able to produce some kind of aggregates. So for transactions that look like this, what's the total cost for my infrastructure? Then you can see: okay, I've got this new feature, maybe I can tag it somehow, so that if I see other similar transactions going through the same Lambda functions, the same event buses, the same events, then you can work out: okay, what's the cost for this particular feature? And you can start to look at, in terms of building features and products, are we spending more money on maintaining this feature than we're actually making from it? That's one of the things you can't quite do yet, but it's something I hope they push further on in terms of the cost analysis. It's not their primary focus, I do understand that.


Louis Quaintance: 24:24  

They've got some new funding, haven’t they? And so hopefully… 


Yan Cui: 24:26  

They did.


Louis Quaintance: 24:27  

They'll solve that.


Yan Cui: 24:29  

Yeah, I think one of the things that they are looking to improve is to actually add support for containers, because a lot of companies are using a mixture of containers and Lambda in their environment. So being able to use the same tool to monitor both kinds of systems is going to be really helpful. Speaking of which, you mentioned that your system is almost all serverless, but you do have some EC2 instances in there. Can you maybe explain the use case there, and what are the limitations that forced you back into managing EC2 machines?


Louis Quaintance: 25:03  

Yeah, sure. I mean, it was all kind of new to me, because I've not really worked on a trading system before. But one of the things that's common with trading currencies of any kind is that they typically use a messaging standard called FIX. I didn't really know anything about it before I got into this, but FIX is a standard, and there's a whole body around it defining the structure of certain messages, certain fields that have to be sent, the way that you keep the connection alive. So in order to do this kind of trading with third parties who buy and sell crypto for you, you essentially need to maintain a long-running connection to their system, and that's kept alive via heartbeat messages that happen every 30 seconds. It provides a really low-level, really fast mechanism for trading: to get a quote, and to then transact that quote. And this is something I couldn't run in a Lambda, because it would obviously shut down. That was something I looked at, but obviously realized pretty quickly it wasn't going to work. The other thing I looked into was using Fargate, just running a container, because I'd done that before in the previous company, and that worked okay. But I hit a brick wall with it. I don't know what it was; I could never get it to work. I found the AWS environment was always just killing my connections out to the third party. I'd spent ages on it and I just couldn't figure it out. There's probably something stupid I was doing, but it's one of those problems where you just give up and think: what can I do? I need to get this running. So I ended up just starting a new EC2 instance and running a Node service on there. Pretty quick to set up, and it connects fine.
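A rough sketch of what that keep-alive loop looks like in Node; the host, port, and session IDs are made up, and a real FIX session also requires BodyLength (tag 9) and CheckSum (tag 10) fields, a logon sequence, and sequence numbers, all omitted here to keep the heartbeat idea visible:

```typescript
import * as net from 'net';

// FIX messages are tag=value fields separated by SOH (0x01),
// often displayed with '|' in logs. Tag 35=0 is a Heartbeat.
const SOH = '\x01';

function fixMessage(fields: Array<[number, string]>): string {
  return fields.map(([tag, value]) => `${tag}=${value}`).join(SOH) + SOH;
}

// Long-lived TCP connection to the liquidity provider's FIX gateway:
// the kind of connection that rules out Lambda, which would shut down.
const socket = net.connect({ host: 'fix.example-provider.com', port: 9878 }, () => {
  // Send a Heartbeat (35=0) every 30 seconds to keep the session alive.
  setInterval(() => {
    socket.write(fixMessage([[8, 'FIX.4.4'], [35, '0'], [49, 'OURFIRM'], [56, 'PROVIDER']]));
  }, 30_000);
});

// Log inbound messages with SOH rendered as '|' for readability.
socket.on('data', (buf) => console.log(buf.toString().split(SOH).join('|')));
```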


Yan Cui: 27:17  

Yeah, I would have thought that Fargate makes it a lot easier nowadays. But I guess if you just need one machine, then it's probably good enough. With EC2, you might have to think about things like Multi-AZ, so that if EC2 is having some problems in your AZ, at least you don't have downtime. But yeah, I'm familiar with the FIX protocol. I used to work in investment banks many, many years ago, I think when they were first introducing the FIX protocol, and my wife is still working in finance, so she has had to deal with FIX in the past as well.


Louis Quaintance: 27:56  

Yeah, it's a bit of a pain. It's okay. But I mean, it's pretty quick; that's one of the positives.


Yan Cui: 28:03  

Yeah, yeah. I do remember the format. I think it's like pipe-delimited messaging, right?


Louis Quaintance: 28:08  

That's right. Yeah. 


Yan Cui: 28:09 

Yeah, yeah. And I think you mentioned that you're using mostly DynamoDB, but you're also using relational databases as well. Is it Aurora that's part of your stack?


Louis Quaintance: 28:20  

Yeah, yeah, that's right. It's a bit of a mix. I always really liked DynamoDB, mainly because it works great with Lambda functions, in the sense that you're communicating with the database via an API, essentially, aren't you, which is great. My eyes were really opened to it when I watched that video, which you've probably watched, I'm sure, by Rick Houlihan, on single-table design in DynamoDB and how they use it at Amazon. That was really eye-opening for me. To be honest, I did toy with using Dynamo for absolutely everything in the stack, and it might be that we swap out Aurora at some point, but it's good enough for now for the volumes. The main reasoning is that we need some level of reporting, and Aurora is kind of used for that at the moment. Which is strange: we're using a relational database, but we're not using it in a particularly relational way; we're using it because of the ability, at the size of our app, to report and so on. So yeah, we're using that mix, and Dynamo's worked well, really. I guess the other thing that I would say with Dynamo is that it's not very flexible, is it? If you mess up your table design at the beginning, it's not that forgiving, is it? I suppose.
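For anyone who hasn't seen the single-table pattern Louis mentions, here's a minimal sketch of the idea; the key shapes and entity names are illustrative:

```typescript
import { DynamoDB } from 'aws-sdk';

const db = new DynamoDB.DocumentClient();
const TABLE = process.env.TABLE_NAME!; // one table holding several entity types

// Single-table style: the entity type is encoded in the keys, so a user and
// their wallets share a partition key and come back in a single Query.
async function getUserWithWallets(userId: string) {
  const { Items } = await db.query({
    TableName: TABLE,
    KeyConditionExpression: 'pk = :pk',
    ExpressionAttributeValues: { ':pk': `USER#${userId}` },
    // Items come back like:
    //   { pk: 'USER#123', sk: 'PROFILE',     email: '...' }
    //   { pk: 'USER#123', sk: 'WALLET#BTC',  balance: 0.5 }
    //   { pk: 'USER#123', sk: 'WALLET#USDC', balance: 1000 }
  }).promise();
  return Items;
}
```

The flip side, as Louis says, is that the key design bakes in your access patterns, which is exactly what makes it unforgiving to change later.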


Yan Cui: 29:43  

Yeah, especially if you're using a single-table design, because one of the things that I guess Rick's, and maybe Alex DeBrie's, talks don't really cover is the fact that Rick's examples are all based on Amazon teams where the access patterns are known and have been established for many, many years. So when migrating those, you know ahead of time how you're going to access the data. But if you're still building a new thing, a new product, then your access patterns are most likely going to change at some point. And with single-table design, it's just not as flexible as if you were to use many tables; at least, that's been my experience. And the problem that single-table design solves, in terms of joining data more efficiently, you kind of don't have any more when it comes to AppSync and GraphQL, because the GraphQL schema stitches things together for you. If you have to join, say, a user to his wallet to his coins, AppSync kind of does that for you, if you just point AppSync at different resolvers. So a lot of the problems of joining data from different tables you also don't really need to worry about when building a GraphQL API with AppSync and DynamoDB. But maybe that's a good question: in terms of volume, how much data are you dealing with right now? It sounds like you're using Aurora as basically a reporting database, which I imagine is going to struggle at some point, once you have a significant amount of data to deal with. But I guess you're just starting out, so maybe you're not quite there yet.
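To illustrate the kind of resolver-level "join" being described, a hedged CDK sketch: the User.wallets field, table, and key names are assumptions, not anyone's actual schema. The nested field is resolved by querying a wallets table on the parent user's id, so the client gets user-and-wallets in one GraphQL request without a single-table design:

```typescript
import * as appsync from 'aws-cdk-lib/aws-appsync';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

// Resolve User.wallets from its own DynamoDB table, keyed by the parent
// user's id ($ctx.source.id), using a VTL mapping template.
export function wireWalletResolver(
  api: appsync.GraphqlApi,
  walletsTable: dynamodb.ITable,
): void {
  const ds = api.addDynamoDbDataSource('WalletsDS', walletsTable);
  ds.createResolver('UserWalletsResolver', {
    typeName: 'User',
    fieldName: 'wallets',
    requestMappingTemplate: appsync.MappingTemplate.fromString(`{
      "version": "2018-05-29",
      "operation": "Query",
      "query": {
        "expression": "userId = :uid",
        "expressionValues": { ":uid": $util.dynamodb.toDynamoDBJson($ctx.source.id) }
      }
    }`),
    responseMappingTemplate: appsync.MappingTemplate.fromString(
      '$util.toJson($ctx.result.items)',
    ),
  });
}
```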


Louis Quaintance: 31:22  

Yeah, we're not really there yet at the moment. I think for the level of users and the regularity we've got at the moment, it's fine. We're only in the tens of thousands of users at the moment; we went live about three or four weeks ago. We're going to be ramping up the marketing this year and looking to grow significantly, but I think at the moment it's working out fine, and we'll see. We're actually using Aurora as well for a lot of the transactions, so deposits that come in, and writing trade calls, that sort of thing. So yeah, it's a mix at the moment, but as we evolve, I'm sure we might look at other things: Redshift for reporting, that sort of thing, or Snowflake.


Yan Cui: 32:12  

Okay, yeah, I guess Snowflake is really popular for those kinds of reporting and analytics workloads. And it sounds like you've also got some kind of time series data for the transactions. Amazon's also got that quantum ledger database, QLDB, which gives you some of these benefits of, I guess, signed requests, so that you can cryptographically prove that the transactions are linked. It also gives you the history, in terms of, for a particular record, all the different changes it has gone through in the past. But I think the last time I looked at it, there were still some usability issues around QLDB. For one of the projects I worked on in the last couple of years, a healthcare app, we couldn't use it just because it was missing some stuff; it just wasn't quite ready yet. But I think since then, they've improved a lot of things on the QLDB side.


Louis Quaintance: 33:10  

Yeah, yeah. And they've also got the timescale. Is it timescale or is it time—?


Yan Cui: 33:15  

Time series database, yeah. Timestream, that's the one. Is it even public yet? I think it had been in beta for a while... Okay, looks like they must have gone live with it already. I remember it was announced a couple of years ago, and then it was in private beta for a number of years and nothing happened, though I do remember they announced something last year at some point. But that's another one that's maybe worth looking into in the future, once you've got a bit more data to worry about. You'd probably separate them out: the time series data goes into Timestream, and the analytics data goes into Snowflake or something like that.


Louis Quaintance: 34:00  

Yeah. I mean, do you have a view on what's the sort of maximum number of users for something like Aurora?


Yan Cui: 34:10  

That's difficult to say. I guess it's more about the volume of data you're dealing with. Many years ago, when I was working on social games, we were dealing with, I think, over a million daily active users and a lot of requests. For analytics events, I think we were getting something like 50 gigabytes of analytics data every day. At the time, we were trying to use Aurora as our reporting database, and within about a month we just couldn't do it anymore; it was taking too long to run any reports. So we ended up using BigQuery from Google Cloud, as Amazon at the time didn't have anything similar. Since then, they've got Athena, which is kind of their answer to Google BigQuery. That's still kind of my go-to solution for a lot of these analytics workflows: just use something like Athena, which can query data that you've got in S3. But I'm also seeing more and more people now using Snowflake, which seems to integrate really well with lots of other third-party tools just out of the box. You can run stuff in Snowflake and then just output the data to where you need it to be. So maybe that's another good option to look at.


Louis Quaintance: 35:28  

Yeah, yeah, yeah, definitely. Nice.


Yan Cui: 35:31  

Okay, I think that covers all the questions I had. Is there anything else that you'd like to talk about? Now that you guys have been acquired and it sounds like things are going well, are you looking to hire people to bring onto your team?


Louis Quaintance: 35:44  

Yeah, we're always looking for talented engineers, and we've recently hired a few more people, starting this week. If you are interested in joining the journey, it's pretty exciting so far; we've acquired a lot of assets, and there are a lot of technical challenges. Do get in touch on LinkedIn, and yeah, it'd be great to hear from you.


Yan Cui: 36:09  

Yeah, so if you've got a link to your open positions, I will share that in the show notes for this episode. And is there anything else that you guys are doing in terms of, I don't know, blog posts, any engineering blogs that you'd like to share, or talks that you've done?


Louis Quaintance: 36:28  

Yeah, I mean, probably the best thing at the moment is just to follow us on LinkedIn. We're regularly producing quite a lot of content on crypto and investing in crypto, so if you want to know more about that, it's a good resource at the moment. We've not got our own blog, but that's definitely something that we'll pick up in the new year.


Yan Cui: 36:50  

Okay, sounds good. I'll make sure that's included in the show notes for today. And again, thank you very much, Louis, for taking the time to talk to us, and all the best with your project.


Louis Quaintance: 37:00  

Thanks very much. It's been great to meet you.


Yan Cui: 37:03  

Ciao. 


Yan Cui: 37:16 

So that's it for another episode of Real World Serverless. To access the show notes, please go to realworldserverless.com. If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at productionreadyserverless.com. And I'll see you guys next time.