Real World Serverless with theburningmonk

#38: Serverless in the Enterprise with Mike Roberts

November 18, 2020 Yan Cui Season 1 Episode 38

You can find Mike on Twitter as @mikebroberts and on LinkedIn here.

Here are links to what we discussed in the show:

To learn how to build production-ready Serverless applications, go to

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.

Opening theme song:
Cheery Monday by Kevin MacLeod

Yan Cui: 00:12  

So I guess, in your work with John over the last couple of years, have you seen any trends in terms of enterprise adoption of serverless, any sort of common patterns, or common problems or challenges that people run into in that space?

Mike Roberts: 00:27  

Yeah, and I think it is increasing. I think some of the challenges are in this architecture world, and I really want to be thinking about this a little bit more over the next six months or so. It's a little bit like the microservices problem, but amplified even more. A lot of people, when they start reading about microservices, think that they need microservices for everything, and they end up with 150 different microservices when, you know, 10 might have been fine. And having this vast Cambrian explosion of microservices causes some real headaches: it makes it hard to find bugs, it makes development slower, all those kinds of things. I think a lot of bigger enterprises may think about building serverless applications in that same way, and they end up with this whole Lambda pinball thing that I was talking about earlier, where they have all these tiny, tiny services all trying to communicate with each other. And I think that causes a lot of problems, and a lot of pushback. And that's fair enough, because if I had to work in a world where 1,000 different Lambda functions all need to know about each other, I think I would be pushing back as well. So I definitely see those problems, and I see some of these engineering practice problems I've been talking about. One of the things that I try to express to folks, and I've done this for a number of years, is that the Lambda programming model is very simple. At the coding level, when we're writing unit tests, it's very much like the same code that I've written for 20-plus years. But serverless applications, from an architectural point of view, are very, very different. And I think that there needs to be better training in this area.
And I think that companies need to be open to the idea that if they want their teams to be using these techniques and these services, they need to put explicit effort into education and training, because just expecting that people are going to pick up Lambda and start building great applications by figuring it out by themselves has proven to be not true. And so, as an industry, we need to figure out how to solve that better. And one thing to be said: we've been talking a little bit about lifting and shifting in terms of the Lambda function size, and I think there is a really good case to be made that you can take a microservice architecture and pretty much lift and shift that into a serverless architecture, and it will work pretty well. It's not necessarily optimal from an architecture point of view, but I think it would work pretty well. And that's certainly going to be better than a lot of the problems that companies are finding themselves in. So if I talk to companies now, it's like, okay, microservices and serverless can gel as a common way of thinking. So start there, and then try and optimise your thinking over time after that.

Yan Cui: 03:51  

I think that echoes a lot of similar challenges that I've seen with my clientele as well, where a lot of clients think this Lambda thing is easy, "we've got this", and then they go ahead and build some Frankenstein application, and then they realise this doesn't quite seem right. And then they start to ask for help, and at that point of course it's a lot harder for them to steer themselves back onto the right track. And I certainly do agree that training has been something of an afterthought sometimes, because, again, serverless just seems so deceptively simple. But when you are not used to thinking about that world, especially when you come straight from a monolith into serverless, a lot of the practices, how you set up your environments, how you approach testing and all of that, can be quite a big challenge and quite a big change to how people are used to doing things. Besides, I guess, a sometimes lack of willingness from these companies to invest in training and learning, is there anything else that maybe we could have done better, as a community or as consultants, to encourage people to take more of a proactive step into explicitly learning how serverless architecture works, and what a good-looking serverless architecture is versus, like you said, 1,000 Lambdas all having to know about each other and doing some sort of pinball thing?

Mike Roberts: 05:27  

Yeah, I was actually thinking, as you were talking there, that there's absolutely something that we can do better. One of the things that those of us who have been very enthusiastic about this over the years say is that this is a simpler way of building applications, that you can go from idea to production in less than a day. And that is true, I absolutely stand by that. But that is in the context of knowing how to use these things. And I think that we need to do a better job of saying, all of those time-to-market advantages that serverless offers are true. However, just because you can do things quickly doesn't necessarily mean it's simple, or that it's easy to jump in and start doing the right thing immediately. There is a point at which you need to stop: you can jump in and prototype something quickly, but you're going to need to think about what it means to build a production application there. And for those of us that have been building these systems now for multiple years, that transition is going to be easy, but I think we need to help people understand that there is a transition there, there is going to be extra work. One of my old friends and colleagues, Daniel Terhorst-North, has this idea that he tries to promote with people when they're building new applications of any type, that there are two phases to building an application, called spike and stabilise. The spike is when you try and figure out whether what you're doing is at all the right thing to do. And then stabilise is a completely different mode of thinking: how do I build an application that is going to last years, assuming that you're building something that's going to last years. And this is one of the crucial things he talks about there.
It might be that in that second phase you throw away everything technologically that you built in the first phase and rebuild it, but it's probably going to take you 10% of the time, because you've already learned so much in that first phase. So I think that one of the things that we need to do better is say yes, it is much faster, from a time-to-market point of view, to build with serverless techniques. However, in order to build long-term sustainable serverless applications, there needs to be some training and education, and learning how not to cause yourself massive problems.

Yan Cui: 08:00  

Absolutely. I often tell my clients how quickly I can get stuff done. I guess maybe one thing I should also mention is that it's taken me 15 years to get to the point where I can get things done that quickly.

Mike Roberts: 08:11  

Yeah, and I will fully admit, you know, John and I may sometimes make this mistake, where over the last few months we've been building a lot of AWS Glue based serverless data pipelines. And, especially because it's two of us working together, and we both understand each other very well, it's very easy for us to build something and figure it's easy, because we just explained it to the other person in 10 seconds. But John and I have this context of decades of shared experience that not everyone has; in fact, almost no one has the exact type of experience that John and I have. And so yes, we all need to do a better job of saying, "Okay, this is how we got to this point". And that's tricky, because sometimes people really want just to know the end result. But what I've been trying to do more, and this is where the YouTube series comes in again, is really trying to more explicitly explain my thinking about how I get to a point. And some people are really going to appreciate that, they're going to appreciate the learning, and some people are still going to just want the instant gratification, and that's fine. I don't need to please all the people all the time.

Yan Cui: 09:29  

Yeah, I see the same thing as well, in that most of the time clients just ask me, what's the best practice for X, and you can substitute X for pretty much anything. And most of the time I have to tell them, "Oh well, it really depends", and then have a really long discussion about what it is they actually want to do and the constraints that they're working with. It's not just a case of, hey, here's an answer that's gonna work for everybody. So I think sometimes that is a bit hard to digest if you are a client and you're paying someone a lot of money for advice: you're just hoping that it's going to be some magical answer that's going to help you with everything, so that you don't have to do any work yourself. So I'm gonna link the YouTube videos in the show notes as well. Is there anything else that you're working on right now, any, I guess, pet projects?

Mike Roberts: 10:20  

Ah, so I guess the YouTube videos have become my pet project, I'm not gonna lie. And, you know, I mentioned before that I work with Jez Humble, who wrote the Continuous Delivery book and a DevOps book since then. I was doing this kind of what we now think of as continuous deployment a long time ago, so I'm looking forward to bringing my thoughts on that into the YouTube series. There's definitely going to be a lot of DevOpsy stuff in my YouTube series, because that's definitely one of my favourite things to chat about. The other thing that I'd really love to do more public stuff on is this thing that I was just mentioning before, which is this concept of serverless data pipelines. We've had three different client projects this year that have all been based upon this idea. This is about not just using S3 as your data lake, although that's part of it; it's talking about how you use other tools like Glue and Athena, and even Redshift now, which has serverless elements to it, to build a bigger data pipeline and data platform, but using these serverless ideas that we've been building over the last few years. We've done this three times now with three different clients, and so there's definitely some lessons that we've learned there. It'd be great to write that up, but it's one of those things where it's going to be a while, because there's a lot to it, and just picking out one thing kind of doesn't make sense; there needs to be a sort of bigger story there. So I'd love to be able to dig in more on that. John's been doing some research about how we can have some examples that we can show the public, but I think it's gonna be a while. It would be great to do more on that.

Yan Cui: 12:09  

Okay, so I'm actually quite curious about Glue, because I've only ever used it in the context of using Glue with Athena. Is that what you were doing as well, or were you using Glue as the engine for running ETL jobs as well?

Mike Roberts: 12:23  

Yeah, kind of both. So, Glue is actually a sort of big suite of services that really breaks down into two. One is this concept of the Glue catalogue, which is basically metadata about various data sources. Some of those data sources are in S3, but some of those can be in other places as well. Most people assume that you have to populate that data catalogue using these things called Glue crawlers, which is very common, but you don't have to. You can also populate that data catalogue manually yourself if it makes more sense. You could even do that using CloudFormation resources to populate parts of that Glue catalogue. So that's one half of Glue, this metadata repository: Glue catalogue, Glue tables, Glue crawlers are all in that realm. The other half of Glue is this thing called Glue ETL, which is basically serverless Spark as a service; it's running Spark under the covers. What Glue ETL lets you do is write ETL jobs. The whole point of ETL, extract, transform and load, is the idea where you take data from one place, or maybe multiple places, do some processing on that data, and then save it out to one or multiple other places. Very typically you might read it out of S3 and load it into Redshift; that's a very common pattern. But you can do many, many other things as well. We've been doing all kinds of really interesting things with that. And the nice thing about Glue ETL is that it has the power of Spark, which is extremely complicated, and for many years people have been having to run their own EMR clusters to do this work, but Glue really abstracts a lot of that. And what happened this year, in the summer, was Amazon introduced this thing called Glue ETL version 2. And what that does, and I know this is gonna sound small and trivial, is it makes the minimum job size price one minute. It used to be that you had to pay for at least 10 minutes of a job.
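[Editor's note] As a concrete illustration of what Mike describes, populating the Glue catalogue from CloudFormation rather than a crawler, here's a minimal sketch. The database, table, column names and S3 location are all hypothetical, and a real template would need the appropriate IAM and format settings for the data involved:

```yaml
# Hypothetical sketch: registering a Glue database and a Parquet-backed
# table directly in CloudFormation, instead of running a Glue crawler.
Resources:
  ExampleDatabase:
    Type: AWS::Glue::Database
    Properties:
      CatalogId: !Ref AWS::AccountId
      DatabaseInput:
        Name: example_db

  ExampleOrdersTable:
    Type: AWS::Glue::Table
    Properties:
      CatalogId: !Ref AWS::AccountId
      DatabaseName: !Ref ExampleDatabase
      TableInput:
        Name: orders
        TableType: EXTERNAL_TABLE
        StorageDescriptor:
          Columns:
            - Name: order_id
              Type: string
            - Name: amount
              Type: double
          Location: s3://example-bucket/orders/
          InputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
          OutputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
          SerdeInfo:
            SerializationLibrary: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
```

Once deployed, the table shows up in the same catalogue that Athena and Glue ETL read from, just as if a crawler had created it.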
Now the minimum you pay for is one minute. And that really opens up Glue ETL to being much more useful for smaller data size problems. It really used to be that unless you were dealing with, you know, gigabytes of data, it didn't make sense to use Glue ETL from a cost point of view, but now you can think about using Glue ETL for much smaller data sets and for it to make sense from a cost point of view. So yeah, that's what Glue ETL is. It uses the Glue catalogue stuff, but it's really a very different service.
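[Editor's note] A back-of-the-envelope calculation shows why the one-minute minimum matters so much for the small jobs Mike mentions. This assumes the US Glue rate around the time of recording ($0.44 per DPU-hour) and the two-DPU minimum for a Spark job; the exact figures are illustrative:

```python
# Compare the old 10-minute billing minimum with the Glue 2.0 one-minute
# minimum for a fleet of short jobs. Rate and DPU count are assumptions
# based on Glue's published US pricing at the time.

DPU_HOUR_RATE = 0.44  # USD per DPU-hour (assumed)
DPUS = 2              # minimum DPUs for a Glue Spark job

def job_cost(runtime_seconds: float, minimum_seconds: float) -> float:
    """Cost of one run: billed time is the runtime, rounded up to the minimum."""
    billed = max(runtime_seconds, minimum_seconds)
    return DPUS * (billed / 3600) * DPU_HOUR_RATE

# 100 runs per day of a 30-second job:
old = 100 * job_cost(30, minimum_seconds=600)  # old 10-minute minimum
new = 100 * job_cost(30, minimum_seconds=60)   # Glue 2.0 one-minute minimum

print(f"old: ${old:.2f}/day, new: ${new:.2f}/day")
```

Under those assumptions the same workload drops by 10x, from roughly $14.67 a day to $1.47 a day, which is exactly the shift that makes "medium data" jobs viable.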

Yan Cui: 15:12  

Okay. Gotcha. Yeah, so I guess that explains why people used to tell me that Glue is really expensive; I guess that's because of that 10-minute minimum charge, even if you just want to run something for like 30 seconds. So, I guess, in this case, if you want to run something for 30 seconds you are still gonna pay for the whole one minute, right? It's gonna round up.

Mike Roberts: 15:35  

Yeah, absolutely. And, you know, Glue ETL jobs used to take a couple of minutes to start up. Now they take about 10 seconds to start up. So these are not things that you're going to do in terms of reacting to user events immediately; this is very much offline, asynchronous processing. But yeah, you can now run, say, 20 of those 30-second jobs a day, or 100 of those 30-second jobs a day, and have it make some amount of cost sense, whereas before you'd have had to start thinking, does this really make sense from a cost point of view? But now those kinds of things really do make sense. And one of the things that's super useful is that writing Glue ETL jobs really can save you a lot of coding effort. One of the things that we're doing right now with one of our clients is that the source for one of our ETL jobs is an upstream SQL Server database. And I don't want to write custom SQL that goes and calls a SQL Server database, because that's not something I've had to think about for a while. One of the things that Glue in general lets you do is set up that Microsoft SQL Server source as a JDBC source, and then Glue ETL will just do all of that JDBC stuff for you. You just say, "Oh, I want to read from that location", and Glue ETL does all the work; you just declare it as a data source. So that can really save a lot of coding work. So now, for these medium data questions, because it used to be really just a big data platform, I think that Glue ETL as of this summer is a much more interesting tool to use.
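[Editor's note] To give a feel for "you just declare it as a data source", here is a rough sketch of a Glue ETL (PySpark) job reading from a catalogue table whose underlying location is a JDBC connection such as SQL Server. It only runs inside the AWS Glue runtime, and the database, table, field and bucket names are all hypothetical:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: pick up the job name Glue passes in.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# "orders" is a hypothetical catalogue table backed by a JDBC connection;
# Glue resolves the connection and does the JDBC plumbing for you.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db",
    table_name="orders",
)

# A trivial transform: drop a field, then write the result to S3 as Parquet.
transformed = source.drop_fields(["internal_notes"])
glue_context.write_dynamic_frame.from_options(
    frame=transformed,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/orders-clean/"},
    format="parquet",
)

job.commit()
```

Note there's no JDBC URL, driver, or credential handling in the script itself; that lives in the Glue connection and catalogue definitions, which is the coding effort Mike is saying you get to skip.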

Yan Cui: 17:25  

Okay, I'm gonna have to look into that. That's something that I've had to do a few times myself, but it sounds like I could have made my life a lot easier if I had just looked at using Glue ETL instead.

Mike Roberts: 17:38  

I would say the documentation for Glue ETL is not always the best, so there's definitely some pain there. But yeah, from a technology point of view it's pretty cool.

Yan Cui: 17:50  

Oh, I'm familiar with the pain of AWS documentation. Cognito has been my number one pet hate when it comes to documentation fails. I guess...

Mike Roberts: 18:02  

There are many things I could tell you about Cognito; that would be an entire separate podcast.

Yan Cui: 18:09  

Okay, that's great. Thank you so much, Mike. I certainly really enjoyed learning about Glue just now, something that I've been meaning to learn about for a while. Is there anything else that, I guess, you want to tell the listeners before we go?

Mike Roberts: 18:27  

Yeah, go buy my book. Obviously that would be great. One of the things that's great is we did this book with O'Reilly, and it was really fun to write a book with O'Reilly, because the very first Java book that I bought was an O'Reilly book, 23 years ago. One of the things that O'Reilly have now is the O'Reilly learning platform, and a lot of companies just give their employees access to it. So if you have access, you can just go and read our book; you don't actually have to go and buy it. Obviously, we'd like it if you go and buy it, but if you have the learning platform, then go and take a look. One of the other things that we did with the book is that, not surprisingly, all the source code is on our GitHub repo. So if you're interested in some of these things that I've been chatting about, around separating out different Lambda functions with all of their different dependencies and libraries, that is all embraced in the source code for the book, which you can find in our public GitHub.

Yan Cui: 19:26  

Okay, I will make sure that I put the link to the book, as well as the GitHub repo, in the show notes. So how can people find you on the internet?

Mike Roberts: 19:34  

Two ways: s-y-m-p-h-o-n-i-a, although I'm sure that Yan will link this. And then on Twitter, I am @mikebroberts. And I do tweet. So that's the best place to find me personally.

Yan Cui: 19:52  

Yep, I will include those in the show notes as well. So with that, thank you so much, Mike, for spending your time to talk to us today, and I hope you stay safe. And, you know, see you in person sometime soon.

Mike Roberts:  20:06  

Yeah, thanks. Thanks, Yan. Thanks for inviting me, and everyone, please stay safe and sane; it's a tough year.

Yan Cui: 20:13  

Take it easy. Bye. Bye. 

Yan Cui: 20:29 

So that's it for another episode of Real World Serverless. To access the show notes, please go to If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at And I'll see you guys next time.