Real World Serverless with theburningmonk

#37: Programming Lambda in Java with Mike Roberts

November 11, 2020 Yan Cui Season 1 Episode 37

You can find Mike on Twitter as @mikebroberts and on LinkedIn.

Here are links to what we discussed in the show:

To learn how to build production-ready Serverless applications, go to productionreadyserverless.com.

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.


Opening theme song:
Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0

Yan Cui: 00:12  

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real world practitioners and get their stories from the trenches. Today, I'm joined by Mike Roberts. Hey, Mike.


Mike Roberts: 00:24  

Hi, Yan, thank you very much for inviting me.


Yan Cui: 00:27  

So we've known each other for a little while now. And you've just had a new book come out on writing Lambda functions with Java. Before we get into that, can you maybe tell us about yourself and your journey into serverless, I guess?


Mike Roberts: 00:40  

Yeah, absolutely. So I've been working in the industry for 20 years now, so a long time. My formative professional years were in the early 2000s, when I worked for a company called ThoughtWorks, who were very much into the early days of Agile software development. And I had a great opportunity to work with a whole bunch of people then that were forming those ideas at the time. So I worked with Jez Humble, who's now very famous for the continuous delivery and DevOps books. I worked with Sam Newman, who's now obviously famous for microservices, and a whole bunch of other people besides. And that was in the UK, so I grew up in the UK. I moved to New York in 2006, and had various jobs where I used to flip between management and technology roles. And the last full-time job I had before doing what I do now was at an Ad tech company, where for a year I was the interim CTO, and then I was a VP of engineering. But one of the things that we did there, and this was about five years ago, was we replaced an old data pipeline system, tracking all of the events that came into our app platform. And we replaced that with a system using Kinesis and Lambda. And at the time, this was very new; Java Lambda support had only just come out. And the guy that was the technical lead, who ran the data engineering team, was John Chapin. Anyway, I left that company in early 2016. And then John and I got together again later that year, and started Symphonia, which is this company that we have now, four years old, I can't believe it. And really the premise of the company was, we want to do more work of the type that we'd been doing at that Ad tech company. So serverless architectures on AWS. And at that time very, very few people were doing it. And so it was a little bit of a leap of faith.
But that's what we've been doing over the last few years: helping people out with AWS architecture, operations and that kind of thing. And one of the things that we hoped would happen is that serverless would come to mean a lot more than just Lambda. That's something we always felt was important, and it's actually come to pass. So that's one of the reasons that we've been able to keep our company going for four years. So yeah, and then we wrote a book, but I'm sure we'll get into that.


Yan Cui: 03:26  

Yeah, let's get on to the book in a minute. I guess before we get into that, how would you define serverless in that case?


Mike Roberts: 03:35  

I always try to summarise this as briefly as I can, and then fail. To me, serverless computing is where you build an application relying on services that have a number of common traits. And those traits are things like: you are not thinking in terms of servers and hosts, you are thinking in terms of capability. Things like: you are not paying for servers and hosts, you're paying based upon how much that service is used. Things like not having to worry about high availability, because you know that the service is going to provide that for you. And it turns out that one of the most popular kinds of service that satisfies those traits is this subset of serverless called Functions as a Service. So this is the idea of running generic software on a platform that has all of those traits, and obviously when a lot of people think about serverless, they think of just Functions as a Service. But serverless is a lot more than that. It's any service that satisfies those constraints. And one of the things I always like to say is that the oldest serverless service on AWS is S3, because S3 satisfies all of those traits. So anyone that is using S3 on AWS is doing serverless computing. It's just a matter of how much they are bringing those ideas into their entire system.


Yan Cui: 05:04  

Yeah, absolutely, I totally agree. And that's certainly how I see serverless as well. A lot of the people in the serverless community also think of serverless as, as you said, any technology that satisfies some of these criteria, where we don't have to worry about managing and provisioning servers. So in your new book, you are focusing on writing Lambda functions with Java, which is, I guess, a different angle on Lambda, because most people are writing stuff in JavaScript and Python; certainly, I don't see as many people writing stuff with Java. But I certainly see a need from enterprises that are already running on Java and wanting to move some of that infrastructure into the serverless world so that they can get things done faster. And I think what you're discussing in your book is certainly helping a lot of that community from the enterprise side of things get their foot into Lambda. So can you tell us a bit about your motivation for writing the book, and some of the things that you really wanted to teach the audience?


Mike Roberts: 06:11  

Yeah, I think there were two parts to it. One was that John and I got our start in Lambda with the Java runtime. So when we started building that data pipeline replacement, JVM support for Lambda had just been released; we're talking here about the end of 2015. John and I have been Java developers for a long time. I mean, I started programming Java in 1997, when I was at college. So we have a lot of Java experience, and JVM experience in general; we've written using other languages that run on the JVM. And so for us to be able to use Lambda and Java back in 2015 was great, because we didn't need to learn a new language just to use a different set of services. And so we got very used to writing Java Lambdas there. And for us, in that particular use case, Java worked out fantastically well; we were processing millions of events a day. And it was an asynchronous message processing system, so the concerns that people sometimes have about Java and Lambda really weren't a concern for us. It was a perfect fit even then in its limited form. So one of the reasons for us writing the book about Java and Lambda was that it's still probably the language I'm most comfortable talking about when we talk about the big languages on Lambda. But also, there is no book apart from ours for Java developers using Lambda. And I found out the statistics this week: there's something like 10 million Java developers in the world, which, I mean, I've been programming Java a long time, but I still find that huge number surprising. So we did want to provide a book that helped those people that have all that Java knowledge start building systems in a new way. But the other thing that we were very aware of was that we didn't want to make it exclusively for Java developers. The Java in it is purely there as examples.
And I had a good chat with Jeremy Daly a few weeks ago, who hosts another AWS podcast, for those that don't know. And he read through the book, and he's not really a Java developer. And he was like, this is a great book, even for me, not being a Java developer. And that was important to us, in that what we really wanted to do with the book was bring what we felt were important basic practices for thinking about Lambda applications. And we didn't want the Java stuff to overpower that. So we wanted it to be accessible to all developers, with this sort of extra facet for Java developers.


Yan Cui: 08:51  

Certainly, a lot of the practices that I use with Lambda are, I guess, transferable across multiple languages. And certainly when you mentioned the 10 million Java developers in the world, well, that's a huge number. And if we want serverless to ever go mainstream, we can't just leave those guys behind. We need to get everyone included in this massive, this amazing journey that many of us have been on for quite a few years now. And yeah, personally, I'm a big fan of Jeremy's podcast as well. And your episode will be linked in the show notes for anyone who wants to listen to Jeremy and your conversation. So in your book, you also talked about some of the techniques that one can use to address some of the, I guess, common concerns people have around Java and Lambda, which is around cold starts. Can you talk about some of the techniques that people should think about when they are working with Java and working on maybe user-facing APIs where you are concerned about those cold starts?


Mike Roberts: 09:52  

Yeah, absolutely. And I think that, you know, I know you've just published a great article on cold starts, Yan. I think there are two things to start off being aware of when you're thinking about cold starts and Java Lambdas. One is: what is the overhead that comes from the platform? And what is the overhead that comes from the application that we are building as Java developers? So one of the things I actually did a couple of months ago, and I wrote a blog article about this, was I wanted to look at what's the platform impact on Java cold starts. And basically, it's less than a second, but it's definitely more than Python. So Python is like a 100 or 150 millisecond system cold start. And for Java it's more in the upper hundreds of milliseconds. Now, there is a lot of variability there, depending on a number of factors. Things like what you've got your memory size set to, and all that kind of stuff, and how big your artefact is. But I'll get into that in a moment. But, you know, it's not what people think: oh, Java plus Lambda, we're talking about a 10 second cold start. That's really not the case anymore. You can have a Java app that's doing real work in Lambda, and it can have a cold start of, you know, less than a second. Would I use it in certain very low latency scenarios? Maybe not, certainly not if they were low throughput. But that's the first thing to say: the system cold start for Java is more than half a second, but it's less than a second. And that's the platform side, so there's not much we can do about that. But what we can do is tackle the stuff that we have control of. And in Java that's really about how big the function artefact is that we are uploading to the Lambda platform, which contains our code and all of our libraries.
And this isn't particularly different to all of the other languages out there; it's a little bit more visible in Java, because of the way that the JVM works, in that it loads up the entire binary code base before it does any work. And so we feel it a little bit more. But, you know, the overall techniques are similar. Typically, what I reach for is trying to minimise the size of that function artefact, and that means minimising the actual code that I have in there and minimising the amount of libraries I have in there. One of the techniques that John and I talk about in the book, which actually is a lot easier using Maven, the main Java packaging tool, than in some other languages, is this technique of having each different Lambda function get its own artefact. And one of the benefits of that is that you can really say, well, each Lambda artefact only gets the libraries that it needs to do its job. And if we dig into that, then it normally means we can get the artefact size down a lot. So that's the main technique that we use. I think there is a good point to be made on memory size with Java apps. So the default memory size on Lambda. What is it these days, Yan? Is it 128 MB?
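The per-function artefact idea Mike describes can be sketched as a multi-module Maven layout, where each Lambda function is its own module declaring only what it needs. This is a hypothetical sketch, not from the book; the module names and versions are made up:

```xml
<!-- parent pom.xml: one module per Lambda function -->
<modules>
  <module>ingest-function</module>
  <module>query-function</module>
</modules>

<!-- ingest-function/pom.xml: only the libraries this function needs -->
<dependencies>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.2.1</version>
  </dependency>
  <!-- no web frameworks, no SDK clients this function never calls -->
</dependencies>
```

Each module then produces its own deployment artefact, so the `query-function` zip never carries the `ingest-function` module's dependencies.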


Yan Cui: 13:15  

I think it depends on what tools you use. If you create a function using the serverless framework, then I think the default it uses is 1 GB, I think SAM probably defaults to a smaller memory size. Certainly if you create one in the console, I think it just slaps on 128 MB.


Mike Roberts: 13:32  

Right. So you absolutely don't want to be using 128 MB with a JVM app, because it's just not enough memory, and so it's really going to be thrashing in doing all of its internal processing. So I would recommend that for Java Lambdas, you want really at least 512 MB. But, you know, because of the nature of the JVM and garbage collection, you do get a boost from those higher memory sizes in terms of the extra CPU power that you get. So there is definitely something to be said for increasing the memory size. And then the other thing, especially because Java Lambdas are, you know, more often used in the enterprise world: there is this technique now, Provisioned Concurrency. I'm not a huge fan of it because it has some limitations at the moment, but you can pretty much pay your way out of this problem in a lot of places by using Provisioned Concurrency. To me, though, that's a tool of last resort. And really the basics of minimising your package size, and also focusing on applications where Java and Lambda is a really good sweet spot, that's going to really solve a lot of problems.


Yan Cui: 14:49  

Yeah, I guess if you use Java with Lambda in, mostly, I guess, data processing pipelines, where your actual latency from cold starts is not user facing, then it doesn't really matter. I actually worked on some projects using Scala and Lambda. And the cold start was pretty horrific when I looked at it. But since it's all happening in the background, no one ever sees that, you know, couple of seconds of cold start time. So it didn't really matter there. One of the things that I remember doing at the time was, with Maven, you said you can be quite selective in terms of what dependencies you bring in. But I think you can also exclude, I guess, some of the transitive dependencies from some packages as well. Is that something that you also recommend people doing, just, you know, trying to figure out: you bring in the AWS SDK, but then maybe there are some internal dependencies that you don't really need? I also read that you can save yourself some cold start by using the native HTTP client instead of the one that's baked into the AWS SDK, which is, I think, Apache HTTP. Have you ever seen some examples of people using that to reduce some of the cold start?


Mike Roberts: 15:58  

So the first of those questions is about getting more selective about the libraries that you bring in. And you also mentioned Scala, and the funny thing is that the Scala build tools actually have some really good tooling to enable you to see what libraries your application is using and to actually trim that down. So it's relatively easy to figure out what actual libraries are being used by an app, but you can go into more detail than that, and say which parts of a library are being used by an app, and only bring those in. Now, that is definitely a technique that I've seen used. It's a little bit dangerous, because sometimes it might not be clear what parts of a library are being used by the application, apart from in some edge case scenarios. And so you might hit that edge case scenario, you know, once every week, and if you haven't got the needed part of the library there, then the application might fail. So doing this, what's called tree shaking, down below the individual library level is possible, but it definitely comes with big warnings over it. The other thing you mentioned was using different HTTP libraries. I haven't dug into that yet. One of the nice things about the Java ecosystem is that it is very pluggable, in that, for nearly two decades now, there have been certain places in the Java world where there is an interface defined, and then you can have multiple implementations of that. I haven't tried out different implementations of the HTTP library. I saw that Amazon just brought out their own Java HTTP library a few weeks ago, so it'd be interesting to try that out. But I haven't personally dug into that one. But, for example, there are XML libraries in Java that satisfy a common interface. And if I was thinking about, okay, which of these do I want to be using?
And I really care about cold start times? Then yes, I would think about that a little bit: if one of those libraries is 20 times bigger than the other, I'm probably going to want to pick the smaller one.
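As one concrete illustration of the transitive-dependency trimming Yan asked about: with version 2 of the AWS SDK for Java, a common approach is to exclude the heavier default HTTP clients from an SDK client dependency and pull in the lightweight `url-connection-client` instead. A sketch (version numbers here are illustrative, not a recommendation):

```xml
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>dynamodb</artifactId>
  <version>2.17.0</version>
  <exclusions>
    <exclusion>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>netty-nio-client</artifactId>
    </exclusion>
    <exclusion>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>apache-client</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>url-connection-client</artifactId>
  <version>2.17.0</version>
</dependency>
```

In code you would then build the client with the JDK-based HTTP client, for example `DynamoDbClient.builder().httpClient(UrlConnectionHttpClient.create())`, trading some features of the richer clients for a smaller artefact and faster startup.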


Yan Cui: 18:17  

Okay, so I guess along that line, then, are there any other libraries or frameworks that you would strongly recommend people not use? I've seen a lot of people use things like Spring Boot, and then they suddenly realise, wait, why is my function cold starting in two seconds? Or something similar?


Mike Roberts: 18:33  

Yeah, this is where John and I go against some of the prevailing trends of the Java world, even the Java and Lambda world. So yeah, there are these large application frameworks for Java, which help a lot when you're building traditional applications. They don't really help very much when you're building Lambda applications, frankly. And if you go into building a Java Lambda app in the same way that you'd build a more traditional app, you may find that you've got like 5 or 10 second startup times because of that framework, or because of the way that you're using that framework, I should say. And so John's and my rule of thumb when building Java Lambda apps is not to use any of those frameworks. I know that Amazon actually has some open source libraries that help people use those frameworks with Lambda. And I think that's okay if you want to port over an existing application; you know, anything to try and help people into the Lambda world, I give a solid two thumbs up to. But once you're starting to think about, well, what do we actually want this to be? What is an optimal setup for building Lambda apps? Then typically you can get rid of those frameworks and just focus on the bare libraries that you need. Especially if you're leaning on the rest of the serverless ecosystem to do work. So, you know, one of the things that, Yan, you and I know people like Ben Kehoe are very enthusiastic about saying is: if you're using an API gateway and Lambda, then define all your different routes in the API gateway configuration, and have a different Lambda function for each of them; don't have the request routing inside of the Lambda function. And I would absolutely strongly agree with that position. Which is, you know, sure, if you're getting started, and you're porting an app over, and it's already got routing in it, fine, leave it in there.
But as you're starting to try and embrace the entire AWS serverless platform more, then look at separating that Lambda function out into multiple functions for different routing endpoints. And at that point, you start needing those bigger frameworks less and less. This is the same in the JavaScript world with things like Express.
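The one-function-per-route style Mike describes can be sketched in plain Java. A real handler would implement `RequestHandler` from `aws-lambda-java-core` and use the API Gateway event types; this hypothetical sketch uses a plain `Map` so it stands alone:

```java
import java.util.Map;

// A single-purpose handler for one API Gateway route, e.g. "GET /products/{id}".
// There is no request routing inside the function: the route is defined in the
// API Gateway configuration, and this class does exactly one job.
public class GetProductHandler {
    public Map<String, Object> handleRequest(Map<String, Object> event) {
        Object id = event.get("id"); // path parameter extracted by API Gateway
        if (id == null) {
            return Map.of("statusCode", 400, "body", "missing id");
        }
        return Map.of("statusCode", 200, "body", "product " + id);
    }
}
```

A "PUT /products/{id}" route would get its own, equally small class, so each function's artefact and permissions stay minimal.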


Yan Cui: 21:02  

Yeah, totally. And I think it's great that some of that tooling exists to help you use existing frameworks and keep doing things the same way while moving things into Lambda, as, I guess, a lift and shift. But you shouldn't just stop there; to get the full benefit that serverless and Lambda are going to give you, you still want to design your application to be more, I guess, native to how these services are intended to work. But yeah, I think having those tools available to use with Lambda, like Express.js within Lambda, or Spring Boot within Lambda, is a nice way to get people going, and maybe hopefully get some benefit from Lambda first before they think about how to optimise for performance and security in the serverless world. Another thing, I guess, now that we've talked a little bit about all these different practices: I also see that you've started doing some videos on YouTube that specifically talk about some of these practices. Is there something that maybe you can talk about here, in terms of some of the common practices that you think are good ideas for people to adopt when they're working with serverless?


Mike Roberts: 22:17  

Yeah, absolutely. And this sort of comes under a heading I've been thinking about a little bit this week, actually, for multiple reasons. I think that one of the biggest challenges that serverless faces now, in 2020, is not about technology; it's about education. It's about helping people understand how to use these things correctly. On one hand, there's architecture. We were just talking about that a little bit; everything that we just said right there can be summed up as: don't build mono-Lambdas, apart from when you're lifting and shifting. This idea of not building one Lambda function to rule them all, and trying to separate them out instead. I was chatting to somebody else this week, and I came across this phrase that I hadn't heard before, called Lambda pinball. It's this sort of problem where people are coming into building Lambda apps, and they might build an application that has a hundred different Lambda functions, and they're all communicating with each other, and it's really hard to tell what's going on. And that, of course, is not great, and I wouldn't build an application like that. But both of those things point to this topic of needing to get better at architecture education. But on the other end of the spectrum, I really think that we need to help people out with understanding, you know, what is the day-to-day experience of software engineering in a serverless world. So I've seen a number of places where, you know, people are all using the AWS web console to incrementally develop their application. And I know some of us will be like, that's absurd. Why would you do that?
But, you know, if people's first experience with building Lambda is a Hello World where they are copying and pasting some code into the AWS web console, then maybe they think that that's a reasonable path forward, where obviously it's not. Another problem I've seen is people thinking that, even if they're writing the code locally, the only debugging that they can do is in the cloud. And that's obviously a problem as well, because that can be really slow. So one of the things I've been trying to start doing recently is talk about fundamental software engineering practices in a serverless world. You know, I've been programming professionally for 20 years now, and longer than that otherwise, but what are the techniques that I think are really important for software engineers? So I've started this YouTube series; I'm six episodes in now. And four of the first six episodes are about testing. Because I think that focusing on testing is really valuable, not just because it helps you prove whether your code is working or not, but because getting testing right enables you to move much more quickly as a developer. And so I talk a lot about how I think about testing serverless and Lambda applications, certainly, but also thinking about continuous integration practices, and that kind of thing. So I've got a few more episodes here where I'm not even going to be writing any more Lambda code or introducing any more serverless services, because I think it's really easy to get, you know, distracted by all of the shiny services that Amazon have to help us build applications. But sometimes we need to get the fundamentals right first, around how do we set up an engineering environment so that it's not taking us 10 times as long to actually build something, because we haven't got an effective engineering environment.


Yan Cui: 26:03  

Okay, so how do you approach testing in your projects? Do you still write unit tests? And how do you go about running the code and testing it, I guess, before you deploy to an AWS environment? And I guess, do you still do end-to-end tests once it has been deployed?


Mike Roberts: 26:22  

Yeah. So I wrote an article about this relatively recently, sort of summarising. Again, we talk about this in the book quite a lot, because we think it's a super important area. And also, it's one of these other areas where we go a little bit against some of the trends that we see in industry. So to quickly summarise the way that I approach testing, and I'll say non-production testing, because of the whole concept of production testing, which we can get onto, but it's a different area. For non-production testing, we focus on unit tests that run locally, that just run the code, and I'll get into what that means in a moment. And then we have end-to-end tests, or integration tests, that test our application actually deployed and running in the cloud. The thing that we don't do is that we don't tend to use local simulators for our tests. So we're not using LocalStack, and we're not using AWS SAM local for our unit, functional or integration testing. When we run our unit tests, we treat the Lambda functions as just any other code. That's one of the lovely things about Lambda: there is no heavy framework, they are just functions that take JSON, and may return JSON. And so they're very, very easy to unit test in a traditional way. So we really doubled down on that. The other thing that I really like to try and get folks to think about is separating out this idea of testing from experimentation. And this is where I think local simulators are really useful: if we are trying to figure out how all of our application comes together, like how is our browser communicating with an API gateway, what are the kinds of messages that are going between those two things, like trying to understand that system, that's the bucket that I call experimentation. And I think that is a very different activity to test driven development.
Test driven development works really well once you know what your application has got to do. So that's why you want to write tests that are going to last, you know, over months or years. And so that's where we write our unit tests and our integration tests. Unit tests are testing that the code does what it should; integration tests are testing our assumptions about the environment. But experimentation is a different activity. And what I see a lot of people try to do is merge those two activities. So they'll write a lot of their tests in a way where they're still trying to figure out what the system is doing, what the bigger platform is doing. And I think it's useful to separate out those two modes of thinking.
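The "Lambda functions are just code" point can be shown with a minimal sketch. This is a hypothetical example, not from the book: the handler's business logic is a plain static method, so it can be unit tested locally with no simulator, no LocalStack, and no deployment:

```java
// A Lambda handler whose logic is just plain Java, directly unit-testable.
public class DiscountHandler {
    // Pure business logic: no AWS types, no framework, trivially testable.
    static long applyDiscount(long priceCents, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range");
        }
        return priceCents - (priceCents * percent) / 100;
    }

    // The entry point Lambda would invoke; it only wires input to logic.
    public long handleRequest(long priceCents) {
        return applyDiscount(priceCents, 10); // a flat 10% off, as an example
    }
}
```

A unit test calls `applyDiscount` directly; only the integration tests, run against the deployed stack, exercise the real event plumbing around it.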


Yan Cui: 29:21  

That's music to my ears, because I see people making the same mistakes a lot as well. And certainly I don't use local simulators for my testing either. Sounds like you guys have taken a very similar approach to testing to the one I have, even though we are probably using very different languages for Lambda. I guess another thing I want to maybe drill into a little bit is: when you are doing your integration testing, in that case, if you're not using local simulators like local DynamoDB, do you then have your code talking to the real AWS services, and then verify that when you, say, make a query against DynamoDB, your query syntax is correct? And how about testing IAM configurations? Do you do something around the roles that you invoke your functions with, when you invoke them locally, to make sure that the IAM configuration is correct?


Mike Roberts: 30:17  

For our integration testing (I use the phrases "integration testing" and "end-to-end testing" sort of synonymously, or at least they're very similar types of testing), I do that in the cloud. So I deploy my entire application to the cloud. Say I have a simple web service type app where I have an API gateway, a Lambda function and a DynamoDB table, all as one service: I will deploy that entire service to the cloud, and then I will test it by making calls against the API interface. Now in that scenario, the Lambda function is going to be calling the real DynamoDB table, and so we are effectively testing all of our IAM role stuff and permissions there. So that's the way I think about it. And if we want to test our application talking to another service, I kind of like the Amazon way of thinking there, where we're deploying our real test application, and then trying to use a real version of an external downstream service. But again, the integration or end-to-end tests are, as much as possible, using the complete deployment of our application. On the other extreme, the unit tests that are run locally don't test any of that. And one of the things that John does very well in the book is, for locally running tests, he separates out this idea of unit and functional tests. Unit tests are literally testing a method at a time, whereas a functional test is testing the entire component, so perhaps an entire Lambda function. And in that case, we may stub out something like DynamoDB locally, just in terms of a function interface. But we're not going to be testing things like IAM roles in the unit and functional test realm.


Yan Cui: 32:33  

So that's it for part one of my conversation with Mike Roberts. We will return next week and talk about serverless adoption in the enterprise. In the meantime, please check out the show notes for links to Mike's new book. And if you want to learn how to build production-ready serverless applications, please check out my upcoming workshop at productionreadyserverless.com. I will see you guys next time.