Real World Serverless with theburningmonk

#21: From K8 to Serverless at Wealth Wizards

July 22, 2020 Yan Cui Season 1 Episode 21

You can find Ionut Craciunescu on Twitter as @icraciunescu. Check out his Medium posts here. You can also find Wealth Wizards' engineering blog here, and their GitHub projects here.

This is the talk by Simon Wardley that Ionut mentioned in the episode:
Crossing the River by Feeling the Stones

And here is the earlier episode where I talked about FinDev and Wardley Mapping with Aleksandar Simovic and Slobodan Stojanovic:
#17: FinDev and Wardley mapping with Aleksandar Simovic and Slobodan Stojanovic

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.

Opening theme song:
Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0

Yan Cui: 00:15  

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real world practitioners and get their stories from the trenches. Today I'm joined by Ionut from Wealth Wizards. Hi, welcome to the show.


Ionut Craciunescu: 00:26  

Hi, thanks for having me.


Yan Cui: 00:29  

So before we start, can you maybe just give us a sense of what Wealth Wizards does and what your role is there?


Ionut Craciunescu: 00:37  

Yeah, sure. I'm a platform engineer, part of a fantastic team which designs, builds and maintains the infrastructure layer for the products that Wealth Wizards is offering. As for who we are, Wealth Wizards is a FinTech company based in the lovely town of Leamington Spa in the UK. I guess not many have heard of us yet, but who knows in the near future. Our purpose is financial wellbeing for everyone, and to make financial guidance and advice affordable and accessible to all. This is why we were created as a company, and we created the platform to achieve that. We showcase what we offer to the world through our own brand, MyEva, and this was never more relevant than today. Anyone can register at app.myeva.com, ask questions about their financial situation, and receive guidance for things like buying their first home, saving for rainy days or saving into a pension pot. MyEva is built on top of our platform: a digital part, which is about digital engagement and great customer experience, and Turo for Advisors, a back office application intended to be used by financial advisors to make the experience seamless, efficient, quick and compliant.


Yan Cui: 01:53  

So from a 30,000-foot view, what does your architecture look like today?


Ionut Craciunescu: 02:00  

I guess it's quite common these days to follow a microservices pattern, and we are no exception to that. The applications are web based, developed in Node.js. Most of them are running in containers managed by Kubernetes and make use of MongoDB for the storage layer. Some applications, though, are developed on top of serverless services like AWS Lambda, API Gateway, Step Functions, and a mix of DynamoDB and S3 for the storage part. It's fair to say that most of the application stack is still running on containers, but we're in the process of moving to a serverless-based architecture, and the whole application stack is fully deployed on top of AWS.


Yan Cui: 02:45  

So we discussed earlier your decision to switch from Kubernetes to serverless, which is quite an interesting decision that you took. What was the trigger for you guys to go from running everything in containers with Kubernetes to now going towards a serverless-first approach?


Ionut Craciunescu: 03:05  

Sure. I can't say there was a single trigger, but rather several factors that supported the decision to fully move to a serverless world. Personally, what helped me understand the value of serverless is, on one side, what I call the cup of tea Twitter thread by Simon Wardley, where he gives the example of a business selling cups of tea to its customers. He creates a map with the value chain making up the business, things like kettle, power, hot water, tea, cup of tea and so on, and shows which is custom built, which is rented and which is a commodity. For argument's sake, the kettle is custom built in one of the examples, but if the business is about selling cups of tea, then why should it custom build kettles? In the same way, if our company goal is to make financial advice accessible and affordable to everyone, why should we be concerned with, or spend time on, patching operating systems, for example? Another thing that I found extremely useful is again by Simon Wardley, and it is the talk he did at a GOTO conference, titled "Crossing the River by Feeling the Stones". He talks about mapping, a tool for strategic intent, and later on he talks about serverless and does a great job of explaining why it makes sense and why it is the next step in the evolution of IT, as we moved from servers to VMs to containers, with serverless being the next step. From a pure tech perspective, about a year ago we decided to replace an in-house authentication system with AWS Cognito. We made this choice as we realised developing an auth system was stealing time away from developing core product features. We did some quick research on what exists out there, checked Auth0, but in the end we decided to use Cognito, based on a more competitive price and better integration with AWS services. A couple of engineers did a proof of concept first to make sure that we could use the product, and then within about three weeks we moved from the in-house system to Cognito. According to the developers, it would have taken us about six months to come up with a DIY solution that would match the features we're using from Cognito. I was involved in the migration from an operations perspective, did a bit of automation for Cognito user pool management, and wrote some Lambda functions to be used as Cognito triggers. One of the functions I wrote is used where SAML authentication is configured. Its goal is to check the SAML assertion for a custom attribute, check the value, and based on it, add the Cognito user to some groups before the JWT token is generated. So in effect, it's mapping the user to a role based on information in the SAML assertion; it's part of a building block giving us SSO features. I was so happy to be able to do that, especially since many years ago I used to deal with servers in a data centre and I kept moving along as the stack evolved, but this time it felt like I was doing work at the application layer, quite close to the business logic, which gives value to the company. So instead of tinkering with, for example, some SSH stuff or patching operating systems, I was finally able to do something quite valuable supporting the business needs, even better. This move to Cognito was a tech proving ground for adopting serverless services.

I no longer had to be as concerned as before with things like, "Is it highly available?", "Is it redundant?", "Is it secure?", "How do I store my passwords?", "What is this big group thing?" It just worked, and these worries were almost taken away. So we're quite happy with the current path.
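(For readers who want to picture the kind of Cognito trigger described here, below is a minimal sketch of a pre-token-generation Lambda that reads a custom attribute mapped from the SAML assertion and overrides the groups baked into the JWT. The attribute name `custom:role`, the group names and the mapping are hypothetical; this is an illustration, not Wealth Wizards' actual code.)

```typescript
// Minimal sketch of a Cognito pre-token-generation trigger. It reads a custom
// attribute populated via the identity provider's SAML attribute mapping and
// overrides the groups that end up in the cognito:groups claim of the JWT.

interface PreTokenGenEvent {
  userName: string;
  request: {
    userAttributes: Record<string, string>;
    groupConfiguration: { groupsToOverride: string[] };
  };
  response: {
    claimsOverrideDetails?: {
      groupOverrideDetails?: { groupsToOverride: string[] };
    };
  };
}

// Hypothetical mapping from the SAML-derived attribute value to Cognito groups.
const GROUPS_BY_ROLE: Record<string, string[]> = {
  adviser: ['advisers'],
  admin: ['advisers', 'admins'],
};

export const handler = async (event: PreTokenGenEvent): Promise<PreTokenGenEvent> => {
  // 'custom:role' is an assumed attribute name, filled in by the SAML
  // attribute mapping configured on the user pool's identity provider.
  const role = event.request.userAttributes['custom:role'];
  const extraGroups = GROUPS_BY_ROLE[role] ?? [];

  // Merge any existing groups with the ones derived from the assertion, so the
  // issued JWT carries the right group claims for the application.
  const existing = event.request.groupConfiguration?.groupsToOverride ?? [];
  event.response.claimsOverrideDetails = {
    groupOverrideDetails: {
      groupsToOverride: [...new Set([...existing, ...extraGroups])],
    },
  };

  // Cognito triggers must return the (possibly modified) event object.
  return event;
};
```

If the group membership needs to persist rather than only appear in the token, the same trigger could instead call Cognito's AdminAddUserToGroup API, at the cost of an extra SDK call per sign-in.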


Yan Cui: 07:12  

That is great to hear, that you guys found a lot of benefits in terms of feature velocity, getting something done in a few weeks rather than a few months, and that it also allows you to focus more on creating business value by implementing features, rather than thinking about servers and tinkering with and managing that infrastructure. And I love the fact that you mentioned Simon Wardley as one of the inspirations for you guys moving in this direction, because I'm a big fan of Simon Wardley's work as well. I am myself, I guess, a self-proclaimed Simon Wardley disciple. And actually, recently I spoke with Aleksandar Simović and Slobodan Stojanović, who are also Serverless Heroes, about one of the ideas that Simon Wardley has talked about a lot, which is this idea of FinDev, where finance and development can intersect. So anyone who's interested in that side of software development, please go and check it out at realworldserverless.com. Now I want to turn the conversation around to something that I hear a lot from folks who are still very much in the containers and Kubernetes world: what about all the control and portability that you're giving up by moving to serverless? I mean, those are often the main arguments people have for going with Kubernetes, right?


Ionut Craciunescu: 08:31  

I guess in that case one could argue about losing control. For example, you no longer have as much control over the network layer. But then, that network layer is managed for you, and most of the time it's done by someone who is focused on doing just that, and for sure they will do a way better job. There also might be a feeling of serverless being like a black box, but once you adjust your thinking to this new world you realise that's perfectly fine, and the cloud provider is better placed to manage these layers of the IT stack. In terms of giving up things like portability, I think that's just a myth. If I take the example of Function-as-a-Service, the code will run just fine whether it's AWS, Azure, GCP, or the next cloud provider in line. I could argue you're actually gaining portability instead, because you can run it wherever you wish. The thing that will be different between these cloud providers is the automation workflow around the development and management of these functions, and it doesn't take much effort to get that right for the next provider in line, if you really need to. Then there are things like storage services, for example: there's S3 in AWS and Blob Storage in Azure, and they do more or less the same thing; the same goes for things like CDNs, API gateways, managed databases and so on. Obviously, there are differences between them, but in essence they have pretty much the same core feature set. And let's say, for argument's sake, that you do lose control. What do you gain instead? You get to spend more time at the application layer, instead of being focused on just operating systems. Don't get me wrong, I'm not trying to say that Kubernetes is not good. It's a great piece of tech and I love it. But given the chance, I would definitely choose something which is serverless based.


Yan Cui: 10:41  

That's great to hear. And all the arguments that you've made make perfect sense when you think about, again, that cup of tea example that Simon Wardley gave, right? You know, what is the value that your business provides your customers, and being able to focus on that rather than, you know, this undifferentiated heavy lifting. So as you transition from Kubernetes to a serverless-first approach, what have been some of the most challenging aspects so far? Was it a case of cultural challenges, or maybe technical challenges because of platform limitations and things like that?


Ionut Craciunescu: 11:19  

I could say there were no cultural challenges to overcome. I think at Wealth Wizards we are pretty good at embracing new tech if it's suitable for us and it's solving a problem that needs solving, and pretty much everyone in engineering is totally on board with adopting new tech. From a tech perspective, there can be a learning curve sometimes, depending on what one is doing. If I go back to the Cognito example, the most challenging thing around adopting it was actually the learning curve, and the AWS docs back then didn't seem to help as much as we wanted them to. And there wasn't that much knowledge out there about Cognito, maybe because it's not being used as much, or people don't write as much about it, but once I discovered the SDK docs it made things really easy: okay, this is how I can write my Lambdas as Cognito triggers, and how I make use of them. Then it's quite easy to get started with a RESTful API with a Lambda backend in serverless. And if you need to do more complex things, you do need to learn the serverless services that you are using to make great use of them. Don't get me wrong, the learning curve for serverless services, generally speaking, is so much lower than for most of the things out there, but since this is a new approach and best practices have not yet cemented across the industry, sometimes you have to do a bit of investigation and find out what works for yourself. For example, one of the coolest serverless services, S3, is quite easy to use, but there have actually been enough data leaks of things stored in S3 due to misconfiguration. In our case, before we started putting customer data in S3, we did some work to come up with an approach using encryption and access controls to restrict who, when and how data can be accessed, basically limiting it to the application only.
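(A rough sketch of the kind of S3 lockdown described here, assuming hypothetical bucket and role names: a bucket policy that denies access to every principal except the application role and a break-glass admin role, denies plain-HTTP traffic, and turns on default KMS encryption. This is illustrative only, not Wealth Wizards' actual configuration.)

```typescript
// Lock a bucket down to the application role and enable default encryption.
// Bucket name and role ARNs below are placeholders.
import {
  S3Client,
  PutBucketPolicyCommand,
  PutBucketEncryptionCommand,
} from '@aws-sdk/client-s3';

const s3 = new S3Client({});
const bucket = 'customer-data-example-bucket'; // hypothetical
const allowedPrincipals = [
  'arn:aws:iam::123456789012:role/customer-api-role', // hypothetical app role
  'arn:aws:iam::123456789012:role/break-glass-admin', // keep an admin path so you can't lock yourself out
];

const policy = {
  Version: '2012-10-17',
  Statement: [
    {
      // Deny everyone who is not in the allow-list access to the bucket and its objects.
      Sid: 'AppRoleOnly',
      Effect: 'Deny',
      Principal: '*',
      Action: 's3:*',
      Resource: [`arn:aws:s3:::${bucket}`, `arn:aws:s3:::${bucket}/*`],
      Condition: { StringNotEquals: { 'aws:PrincipalArn': allowedPrincipals } },
    },
    {
      // Deny any request that does not arrive over TLS.
      Sid: 'TlsOnly',
      Effect: 'Deny',
      Principal: '*',
      Action: 's3:*',
      Resource: [`arn:aws:s3:::${bucket}`, `arn:aws:s3:::${bucket}/*`],
      Condition: { Bool: { 'aws:SecureTransport': 'false' } },
    },
  ],
};

async function lockDownBucket(): Promise<void> {
  await s3.send(new PutBucketPolicyCommand({ Bucket: bucket, Policy: JSON.stringify(policy) }));
  // Encrypt new objects at rest with KMS by default.
  await s3.send(
    new PutBucketEncryptionCommand({
      Bucket: bucket,
      ServerSideEncryptionConfiguration: {
        Rules: [{ ApplyServerSideEncryptionByDefault: { SSEAlgorithm: 'aws:kms' } }],
      },
    }),
  );
}

lockDownBucket().catch(console.error);
```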


Yan Cui: 13:29  

Yes, I think that's a very valid point that people often overlook: learning two different things is not always the same effort. I think sometimes learning the whole container stack, with Kubernetes on top of that, feels like taking a university degree, taking years to master, whereas Lambda and a lot of these serverless components are actually much, much easier for you to just get started with, building something and shipping something very quickly. So another thing I want to touch on is, there are still many other people out there who are building stuff with Kubernetes and wondering whether serverless is a good fit for them. What would be your advice to those folks who are thinking about this path?


Ionut Craciunescu: 14:12  

I'd say it depends on the person. If you are mostly concerned with, let's say, architecture design and you're not that much of a hands-on person, I can't recommend highly enough Simon Wardley's talk that I mentioned earlier, "Crossing the River by Feeling the Stones". That's from a non-tech perspective, let's say. If it still doesn't click with you, just give it time, it will. You'll see. From a pure tech point of view, I guess you just have to get on with it, as most of the time when it comes to moving from one tech to another: find a low-risk project, experiment, learn from it, iterate and keep going. There are plenty of articles on the internet, and the work that AWS Serverless Heroes and other serverless advocates are doing is just fantastic. They do a lot to help others understand how to make the best use of this new set of practices.


Yan Cui: 15:02  

And in terms of the actual platform itself, are there any platform or tooling limitations that are making it difficult for you to adopt serverless more broadly and more quickly in your organisation? Because you said most of your applications are still running in containers. Are there some limitations that are stopping you from moving everything over to serverless?


Ionut Craciunescu: 15:25  

For our organisation, I can't think at the moment of any limitations on the compute part, so I'd say no, not really. We have great engineers, and the apps were decomposed well enough to make them easy to move to serverless. When it comes to the storage layer, most of our data at the moment sits in MongoDB, and we want to move to DynamoDB and S3. But we need to redesign the data model and possibly make some code changes. Here, we found Rick Houlihan's talks and sessions on DynamoDB quite helpful. The thing that we struggle with most is just limited resource, as in how much engineering time we have available. Product development still takes most of our time, and it comes first, before tech layer changes. Platform wise, I had a slight concern about the monitoring aspect, mostly due to the fact that tools like AWS X-Ray are not quite there, but this was solved by things like Epsagon or Thundra. From a security point of view, I guess it's not uncommon to capture network traffic and analyse it where VMs are still involved. In a serverless world you can't do that anymore, as far as I know, but then, do you still need to? Serverless services are more secure anyway than a traditional infrastructure, so security service providers will have to evolve their tools as well. For our own tooling, as in the CI/CD pipeline, we wanted to maintain the same level of abstraction and usability regardless of where an application runs, be it Kubernetes or serverless or whatever. We automatically deploy to dev and test environments, and we can deploy to production using Slack commands at any point in the day. We adopted the Serverless Framework for serverless services, and to make it fit well within our workflow we wrote a plugin so it pulls information from Consul and Vault, which is open source, by the way. It's not an obstacle in itself, just something to be aware of: a slightly different process needs automation. At the end of the day, that's what we're here for, among other things.
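(Their open-sourced plugin isn't reproduced here, but a bare-bones sketch of the general idea might look like the following: a Serverless Framework plugin that, before packaging, reads values from Consul's KV store and secrets from Vault and injects them as Lambda environment variables. The endpoints, key paths and variable names are placeholders, not their actual configuration.)

```typescript
// Sketch of a Serverless Framework plugin that injects config/secrets from
// Consul and Vault into the service's environment before packaging.
import fetch from 'node-fetch';

class ConsulVaultPlugin {
  hooks: Record<string, () => Promise<void>>;

  constructor(private serverless: any) {
    this.hooks = {
      'before:package:initialize': () => this.injectConfig(),
    };
  }

  private async consulGet(key: string): Promise<string> {
    // Consul's KV API returns an array of entries with a base64-encoded Value.
    const res = await fetch(`${process.env.CONSUL_ADDR}/v1/kv/${key}`);
    const [entry] = (await res.json()) as Array<{ Value: string }>;
    return Buffer.from(entry.Value, 'base64').toString('utf8');
  }

  private async vaultGet(path: string, field: string): Promise<string> {
    // Vault KV v2: the secret payload is nested under data.data.
    const res = await fetch(`${process.env.VAULT_ADDR}/v1/secret/data/${path}`, {
      headers: { 'X-Vault-Token': process.env.VAULT_TOKEN ?? '' },
    });
    const body = (await res.json()) as { data: { data: Record<string, string> } };
    return body.data.data[field];
  }

  private async injectConfig(): Promise<void> {
    const provider = this.serverless.service.provider;
    provider.environment = {
      ...provider.environment,
      // Hypothetical keys, just to show the shape of the injection.
      MONGO_HOST: await this.consulGet('config/mongo/host'),
      API_KEY: await this.vaultGet('myeva/api', 'key'),
    };
  }
}

// The framework loads plugins as CommonJS modules.
module.exports = ConsulVaultPlugin;
```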


Yan Cui: 17:44  

Okay, what about in terms of the challenges around local development? Because one of the things I also hear a lot from folks that are moving from containers to serverless is that, well, with containers I could just spin up everything locally on my laptop. But now, coming to serverless, I can't do the same thing anymore; I'm depending on all these external services that only exist in AWS. What are some of the tools that you guys are using to simulate that local development experience? Or are you doing what many people like me are saying: okay, just don't bother simulating everything locally, just talk to the AWS services?


Ionut Craciunescu: 18:22  

We spent just a bit of time investigating what would be best for us, and we came to the conclusion of not bothering with it: just talk to the real services, deploy your things out there and use the AWS services. Off the back of the investigation that was done, we had an amazing engineer who looked at using tools that mock, for example, DynamoDB or other services, but he quickly found out they're not similar to the real service, so let's not bother with it. It's just not a good use of effort for us in terms of testing. We still do some unit tests for the functions, but then we do application testing and end-to-end testing, and we have a testing framework that works regardless of where the app is deployed. It has no knowledge of the infrastructure itself, it doesn't really care.
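(A tiny sketch of what such an infrastructure-agnostic end-to-end test can look like: it only needs a base URL injected from the outside, so the same test runs whether the app sits behind Kubernetes or behind API Gateway and Lambda. The endpoint path and response shape are made up for illustration, and this is not their actual test suite.)

```typescript
// Jest-style end-to-end test that only knows about a URL, not the infrastructure.
import fetch from 'node-fetch';

// The deployed environment under test is injected from the outside.
const baseUrl = process.env.API_BASE_URL ?? 'https://dev.example.com';

describe('health endpoint', () => {
  it('responds regardless of where the app is deployed', async () => {
    const res = await fetch(`${baseUrl}/health`);
    expect(res.status).toBe(200);

    const body = (await res.json()) as { status: string };
    expect(body.status).toBe('ok');
  });
});
```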


Yan Cui: 19:18  

Okay, that's all great to hear. So one last thing I want to ask is: what are your top three AWS wishlist items, things that you wish AWS would change or improve or fix?


Ionut Craciunescu: 19:28  

There is one thing that would help us adopt serverless faster, and that would be API Gateway HTTP header routing, as in being able to call different backend integrations depending on the header being sent. Another one we found out about when we started building a data lake: some quirks with Athena, which requires all field names to be lowercase, with the only symbol allowed being an underscore. Things like that would help us get up to speed faster, or adopt the services faster, so we don't have to spend time troubleshooting smaller issues.


Yan Cui: 20:19  

Actually, on the API Gateway one, you can also use ALB, which does support routing based on headers, I think. And ALB supports Lambda as a target. I guess the pricing model is different, and it is maybe not as easy to set up compared to API Gateway, but it's at least potentially an option for you to consider.


Ionut Craciunescu: 20:49  

It was an option, indeed, and we did investigate that route. As far as I know there's a limitation on the number of rules a load balancer supports, and that's 100. And I think it's a fixed limit, I might be wrong though. And then there are features in API Gateway that the load balancer doesn't have, like throttling and different keys for the different people consuming my APIs, and some other more advanced security features. So that's why we said we're going to stick with API Gateway.
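(For anyone curious about the ALB route Yan mentioned, a header-based listener rule forwarding to a Lambda target group might look something like the sketch below. The listener and target group ARNs and the header name are placeholders, not anything from this episode.)

```typescript
// Add an ALB listener rule that routes on an HTTP header to a Lambda target group.
import {
  ElasticLoadBalancingV2Client,
  CreateRuleCommand,
} from '@aws-sdk/client-elastic-load-balancing-v2';

const elbv2 = new ElasticLoadBalancingV2Client({});

async function addHeaderRoute(): Promise<void> {
  await elbv2.send(
    new CreateRuleCommand({
      // Placeholder ARNs; the target group is of type 'lambda' with the function already registered.
      ListenerArn:
        'arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/example-alb/abcdef1234567890/abcdef1234567890',
      Priority: 10,
      Conditions: [
        {
          // Route when the custom header matches; ALB evaluates rules in priority order.
          Field: 'http-header',
          HttpHeaderConfig: { HttpHeaderName: 'x-api-version', Values: ['v2'] },
        },
      ],
      Actions: [
        {
          Type: 'forward',
          TargetGroupArn:
            'arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/v2-lambda/abcdef1234567890',
        },
      ],
    }),
  );
}

addHeaderRoute().catch(console.error);
```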


Yan Cui: 21:16  

Okay, that makes perfect sense. So I think that's it, that's all the questions that I've got. Is there anything else that you'd like to tell the listeners before we go? Maybe any personal projects you want to share? Or maybe, is Wealth Wizards hiring at the moment?


Ionut Craciunescu: 21:33  

We try to publish articles on our medium.com publication called Wealth Wizards Engineering. We have a GitHub account as well, and we try to open source tooling or things that we think might be useful for others, even though they were written for our own use case. In terms of hiring, unfortunately we're not hiring at the moment, but we do hope we will be able to hire in the near future. So keep in touch, you never know.


Yan Cui: 22:05  

Sounds good, and I will make sure to include the link to your Medium blog in the show notes for anyone who wants to read more about that, and I'm also going to add a link to the talk by Simon Wardley that you mentioned. And how do people find you on the internet? Are you on Twitter or LinkedIn?


Ionut Craciunescu: 22:25  

Yeah, I'm on Twitter. My Twitter handle is @icraciunescu, I guess spelling it out would be easier. And I have a Medium blog at @icraciunescu as well.


Yan Cui: 22:38  

Okay, I'll make sure to include those in the show notes so that people can find you easily from there. And thank you so much for joining me today and sharing your experience of moving from Kubernetes to serverless. I hope you guys keep going with the journey and get to a world where you're fully serverless.


Ionut Craciunescu: 22:58  

Thank you. We can't wait for that and thanks for having me.


Yan Cui: 23:02  

Take care, bye bye. 


Yan Cui: 23:16  

That's it for another episode of Real-World Serverless. To access the show notes and the transcript, please go to realworldserverless.com. And I'll see you guys next time.