Real World Serverless with theburningmonk

#30: Serverless at TIKD with Marcelo Olivas

September 23, 2020 Yan Cui Season 1 Episode 30

You can find Marcelo on Twitter as @mfolivas and on Linkedin here.

To learn how to build production-ready Serverless applications, go to productionreadyserverless.com.

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.


Opening theme song:
Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0


Yan Cui: 00:13  

Hi, welcome back to another episode of Real World Serverless, where I speak with real-world practitioners and get their stories from the trenches. Today I'm joined by Marcelo Olivas, who's the CTO at TIKD. Hey, welcome to the show.

 

Marcelo Olivas: 00:27  

Thank you for having me Yan. Appreciate you doing this.

 

Yan Cui: 00:31  

And, yeah, thank you for taking the time to talk to us. So to get us started, can you tell us a bit about your journey to serverless and who is TIKD, what do you guys do there?

 

Marcelo Olivas: 00:42  

Yeah, absolutely. So my journey to serverless started about three, maybe four years ago. I was working at a company that got acquired by a big company, and we were doing a lot of Kubernetes stuff. When we got absorbed by the big company, compliance didn't allow any type of Docker or Kubernetes, but they did allow anything that was related to CloudFormation. So it was a big change for us. One of the things we wanted to do was see if we could keep leveraging a lot of the stuff we'd been doing, all the services, but we didn't want to deal with the whole architecture. That's when we stumbled upon serverless and the Serverless Framework. The Serverless Framework was great because it allowed me to create all these applications using CloudFormation, knowing the resources it has available. So that's how we started, but initially it was just because we needed something in CloudFormation.
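[Editor's note: the compliance angle Marcelo describes works because the Serverless Framework compiles its configuration into a plain CloudFormation stack. A minimal, hypothetical serverless.yml, not TIKD's actual configuration, looks roughly like this:]

```yaml
# Hypothetical minimal serverless.yml -- not TIKD's actual config.
# Running `serverless deploy` compiles this into a CloudFormation
# stack, so a compliance team that only permits CloudFormation
# still sees a standard CloudFormation template under the hood.
service: citations-demo

provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
```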

 

Yan Cui: 01:59  

Tell us a bit about TIKD. What is TIKD? What do you guys do there?

 

Marcelo Olivas: 02:04  

So TIKD is a software-as-a-service for citations. We cover different types of citations, parking, camera and toll citations, and we have different integrations all over the United States. Currently we have two products. One of them is kind of like an on-demand product, and the other one is a subscription. The subscription works great for big fleets. Just imagine something like DHL or Amazon Prime, where they have a big fleet and they just need to find out if there are citations on those vehicles, so that they avoid late fees or even getting those vehicles towed. The other one, like I say, is on demand, and this is specifically great for companies that are doing any type of rental. So, for example, if you end up renting a vehicle and you reserve it for about a week, during that interval we make sure that we scan to see if there are any citations, and if there are, they will contact you to see if you can pay the citation. That's what we do.

 

Yan Cui: 03:14  

So was the TIKD platform built with serverless from the ground up?

 

Marcelo Olivas: 03:18  

Absolutely, yeah. TIKD is 100% serverless. We don't have any servers running at all. Everything is serverless, and we did it with the purpose of making sure that we move extremely quickly and, at the same time, reduce the amount of maintenance that we end up doing on the platform. So yeah, it's 100% serverless.

 

Yan Cui: 03:40  

So that's great. And with serverless being a really big departure from how we are used to building software, what has been the most challenging adaptation for you so far?

 

Marcelo Olivas: 03:52  

I think the biggest adaptation, or maybe learning curve, that we had is the fact that you automatically have to start thinking more asynchronous and more reactive. And because of that, it's a different paradigm, right. Serverless automatically puts you on the path of thinking about microservices, thinking about short-lived functions and, at the same time, reactive functions. So I think that has been the biggest learning curve, especially for junior and mid-level developers who don't have a good grasp of those subjects.

 

Yan Cui: 04:39  

So, as you built this brand new platform with this brand new technology forcing you to think about microservices and event-driven architectures, has there been anything that hasn't quite worked out, where you thought, okay, this should be a lot simpler, why is this so difficult?

 

Marcelo Olivas: 04:55  

Yeah, I think the biggest challenge that we're having right now is integration tests. Just like you mentioned, especially now that we have an event store, getting these integration tests and end-to-end tests right is very difficult when you have an event-driven architecture. I've seen it time and time again, the guys are telling me, well, our tests are failing because we haven't got results in the event store yet. So those are the things that are a pain right now, but I think later on it's going to get better, especially on the testing side as well as the event-driven side.
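[Editor's note: the failure mode Marcelo describes, a test asserting before an event has landed in the event store, is usually worked around by polling with a timeout instead of asserting immediately. The helper below is purely an illustrative sketch, not TIKD's code; the `fetch_events` callable and the event shape are hypothetical.]

```python
import time

def wait_for_event(fetch_events, predicate, timeout=10.0, interval=0.5):
    """Poll fetch_events() until predicate matches an event or timeout expires.

    fetch_events: zero-arg callable returning the current list of events
                  (e.g. a query against the event store).
    predicate:    callable deciding whether an event is the one we expect.
    Returns the matching event, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    while True:
        for event in fetch_events():
            if predicate(event):
                return event
        if time.monotonic() >= deadline:
            raise TimeoutError("expected event did not appear in the event store")
        time.sleep(interval)
```

In an end-to-end test you would trigger the workflow, then call `wait_for_event` with a predicate on the expected event type, rather than reading the store once and failing on timing.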

 

Yan Cui: 05:39  

It's funny that you mentioned the testing side of things, because that's something I've heard quite a lot of people describe as a struggle. So with that in mind, do you design your applications differently to make them more testable? Is there anything that you're doing differently from how you used to do things, just so that you can test your systems better in serverless?

 

Marcelo Olivas: 06:03  

Yeah, for sure. One of the things that I really like about your courses is that you introduce the return on investment that you get from end-to-end tests, integration tests and unit tests. The higher the level you go, the more return on investment, so an end-to-end test gives you a higher return on investment than, say, a unit test. So one of the things that I always tell my developers is, let's focus first on the end-to-end tests. Back in the day, a lot of developers just thought about their unit tests. I'm not saying unit tests are not important, but I try to emphasise that if we have a better grasp of the end-to-end tests, the quality of the product will be a lot higher. So that's something I emphasise not only to my developers, but also in code reviews.

 

Yan Cui: 07:07  

Okay, so can you tell us about a specific feature that you've built with serverless that makes you think, oh wow, I would never have done this so quickly if I was building it with EC2 instances, or with containers?

 

Marcelo Olivas: 07:21  

Yeah, I mean, there are so many things I could mention. For starters, I remember going to a meeting when we first started TIKD. My co-founder and I went to a meeting in LA, and we didn't have a product at that moment in time. But one of the customers said, if you can find a solution to do this and present it to me, I can totally give you my entire fleet. We're based in Miami, so on that flight from LA to Miami I was able to connect to the internet, and I got a prototype working, all in serverless, within a couple of hours. By the time I landed in Miami, I had developed a working prototype for a customer. That's when I knew that we would never go back to anything else.

 

Yan Cui: 08:23  

Yeah, I always love to hear these kinds of stories of how it really helps people accelerate and get things going. So, do you have any advice for people who are building a startup and thinking about serverless, or maybe have already started? What advice would you have for someone, maybe even a younger version of yourself, that you know today and wish you knew three years ago?

 

Marcelo Olivas: 08:46  

Yeah, the one thing that I always tell people about is the maintenance part. A lot of people get complacent with the fact that, you know, I already know Docker, I invested so much time learning Docker, I also invested a lot of time learning this other tool called Kubernetes, or I put a lot of effort and time into Istio and Kubernetes. One of the things that I keep telling people is that they always tend to forget the cost around maintenance. Maintenance is not just the cost of the resources, it's maintaining and caring for that particular infrastructure, and that's something a lot of people don't put much emphasis on. So one of the things that I always tell people is, just give it a shot, especially for any greenfield project. Try to see if you can do something with serverless, build a small prototype over maybe a month, and see how you like it. Most of the time, people who do that really, really like it. The other thing that I always tell people is that serverless is clearly not a silver bullet, but it is a good fit for a lot of use cases. And that's something that I mention to everyone.

 

Yan Cui: 10:10  

I think that's a good point about the sunk cost fallacy, that we've invested so much time into learning containers and all of that, and now we have to learn another thing. And I mean, serverless is great, but there are certain things you could do better yourself with containers, with a lot more effort, whereas it's much more difficult to do with a black box like Lambda. But at the same time, for the vast majority of people, you probably just don't need some of the extra bells and whistles that come with the stuff that you have to build, maintain and run yourself. There is a massive cost overhead there. And I think another common mistake I see is that when people consider the cost of their application, they only look at the AWS bill, and they see, okay, with Lambda, at some hundreds of requests per second, it's actually more expensive to run on Lambda than to just put a container out there and run your application on that. But then you're not thinking about the fact that, if you've got to run this scalable containerised application, you need to have people with the experience to run it and do it well, and to be able to deal with issues. And if you want to run multi-AZ, you're not running one container, you're running at least three, and if you're doing multi-region, then you're running 12 containers if you're in four regions, right? So all of a sudden your costs start to ramp up pretty quickly.
And when you consider that you need to have expertise, or to hire people, to look after your containerised environments, and to build your Kubernetes cluster and maintain it, that is way more expensive compared to just using Lambda and paying, I don't know, $500 per month extra compared to what you'd pay for containers. You're going to save so much more by not having to hire those specialist skills into your team, until such a point that you need someone more specialised to take you to the next level, especially for startups. So that cost trade-off between infrastructure versus engineering is huge.

 

Marcelo Olivas: 12:14  

Yeah, absolutely. I think the cost is incredibly big and a lot of people don't think about that. Like I said, I feel like a lot of people end up becoming enamoured just because they already know their tool, right, or there's a big community around it, or there's a big hype about it. The other thing that I will say is that, even though we have really large customers and a large fleet, we're monitoring over 15,000 vehicles right now on an everyday basis, our whole team is composed of five people. That is extremely lean, considering we're doing all types of integrations, not only with our customers, but also with all the different providers that we have. One thing I really enjoy, and I know this is something a lot of people talk to me about, is the fact that they feel that I am locked in, locked in with my provider. And I always tell people that that is not a bad thing. Just to give you an idea, really quick: the other day, one of our biggest customers ended up getting a huge number of citations on paper. So one of the things we ended up providing was one of those... I don't know, have you used the SFTP transfer service from AWS?

 

Yan Cui: 13:46  

Is that the one with S3? 

 

Marcelo Olivas: 13:48  

The SFTP transfer service that AWS has, AWS Transfer for SFTP.

 

Yan Cui: 13:55  

Okay, no I haven't used it.

 

Marcelo Olivas: 13:56  

Well, just to give you an idea, we created an SFTP service using the one provided by AWS. The customer is able to use SFTP to upload photos or scanned images of citations. Then we use Textract to basically get all the text from the citations, put it into our database, and associate, you know, the partners as well as the issue date, the cause, and even find out if that citation belongs to a particular renter. And we did that in less than a day, just because we tried to leverage the provider's services. The only thing that we do with serverless is basically act as the glue for the particular business flow, leveraging the services that they have. So we try to avoid reinventing the wheel. So I don't think lock-in is a bad thing.
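[Editor's note: the "glue" step Marcelo describes, pulling text out of a scanned citation, can be sketched roughly as below. This is a hypothetical illustration, not TIKD's code; it assumes the response shape of Textract's real `detect_document_text` API, which returns a list of `Block` objects.]

```python
def extract_lines(textract_response):
    """Collect the text of every LINE block from a Textract
    detect_document_text response, in reading order.

    textract_response: dict as returned by
        boto3.client("textract").detect_document_text(
            Document={"S3Object": {"Bucket": bucket, "Name": key}})
    PAGE and WORD blocks are skipped; LINE blocks carry the text
    a downstream parser would match fields like the issue date against.
    """
    return [
        block["Text"]
        for block in textract_response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    ]
```

In a Lambda triggered by the S3 bucket behind the SFTP endpoint, you would call Textract on the uploaded object, then run a parser over the extracted lines to pull out the citation fields before writing them to the database.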

 

Yan Cui: 15:06  

Yeah, the other side of the coin for this lock-in... I hate the term lock-in, because you're never really locked in anyway, there's always a way out. It's more a question of the amount of coupling you incur, and therefore the cost of moving. But the other side of that coin is that the more coupled you are, the more value you're able to extract from the provider, because they're going to do way more for you. And any technology choice that you make, you're essentially creating coupling, you know, to Node.js, to the Express framework. I've seen companies spend two years trying to get out of a web framework, because all their code was so interweaved into the framework that they couldn't extract any of the business logic out of it. And you're also getting locked in, if you like, to the expertise you have in the team as well, right? If you've got a containerised environment, you're coupled to having those skills, and if you're using a private cloud provider, you're locked into the skills that they offer you as well. So this whole lock-in argument just drives me nuts.

 

Marcelo Olivas: 16:16  

I know. I know.

 

Yan Cui: 16:18  

And so, I mean, all these things that AWS and serverless give us are great, but are there any platform limitations or tooling challenges that you find you're constantly bumping into on a day-to-day basis?

 

Marcelo Olivas: 16:31

You know, I wish I could say there are, but the one thing that I always tell people is that whenever you end up using any type of open source, I always look into the community. And that's something that I really like about the serverless community. I have run into some challenges, but nothing that has completely blocked us. In other words, the community usually helps me out, whether it's finding a workaround or finding a solution. So, nothing major. We do stumble on some problems, but nothing big. Like I say, one of my biggest pet peeves right now is testing, and that's the only thing that we've been struggling with, but nothing major.

 

Yan Cui: 17:22  

Okay. Sounds like you're a very happy AWS customer, which is good to hear. But, with that said, do you have any AWS wish-list items, maybe new features or new services that you wish they would build?

 

Marcelo Olivas: 17:41  

You know, nothing that I can think of right at the moment. I wish I could say there's one thing, but I believe they're getting there, and it's definitely getting easier. For us, like I say, most of the stuff that we end up doing is software as a service, where we leverage a lot of integrations, and that has been very well done. The one thing that I'd probably ask for is, again, around testing, testing locally. That is something that we would love to have, or being able to do end-to-end testing a little bit better. I think that would be really good.

 

 

Yan Cui: 18:28  

Okay, that's great. Hope someone from AWS is listening and taking notes. Well, before we go, is there anything else that you would like to tell the listeners, maybe about yourself, anything you're working on, or maybe TIKD is hiring?

 

Marcelo Olivas: 18:42  

Yeah, right now, because of the whole pandemic, obviously things are challenging and there are a lot of unknowns. At the very beginning we did have a couple of positions available; right now we've basically hit the pause button on those, but once the pandemic winds down I think we're probably going to end up rehiring. And like I say, we manage a large number of customers, big customers, and we're growing tremendously. One of the things that we're trying to do is expand internationally. My customers want us to go to Canada, also the UK, I know that's where you are, Yan, and also France. So those are other markets that we're looking into very carefully.

 

Yan Cui: 19:37  

Okay, and how can people find you on the Internet, on Twitter, on social media? Or maybe do you have a blog?

 

Marcelo Olivas: 19:45  

Yeah, you can always find me on LinkedIn, Marcelo Olivas, or you can reach me at [email protected]. I'm also on Twitter, @mfolivas, that's my Twitter handle.

 

Yan Cui: 20:06  

Okay, great. I will leave that information in the show notes. And once again, thank you so much for taking the time to talk to us today. Stay safe, stay home. Take care.

 

Marcelo Olivas: 20:17  

You too, Yan. Thank you so much for doing this. 

 

Yan Cui: 20:20  

Okay, bye bye.

 

Marcelo Olivas: 20:21  

Bye. 

 

Yan Cui: 20:33  

So that's it for another episode of Real World Serverless. To access the show notes, please go to realworldserverless.com. If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at productionreadyserverless.com. And I'll see you guys next time.