Real World Serverless with theburningmonk

#22: Real-World Serverless with Gojko Adzic

July 29, 2020 Yan Cui Season 1 Episode 22

You can find Gojko Adzic on Twitter as @gojkoadzic and his books:

Check out his latest projects:

In the show, we also talked about the Ports and Adapters (or Hexagonal Architecture) pattern, which you can read more about here. This topic was also explored at length during Episode 18 with Aleksandar Simovic and Slobodan Stojanovic.

If you want to have an easier time debugging serverless applications, then also check out Lumigo, and you can get 15% off your monthly bill with the code "Yan15"

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.

Opening theme song:
Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0

Yan Cui: 00:12  

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real world practitioners and get their stories from the trenches. Today, I'm joined by a fellow Serverless Hero Gojko. Hi, Gojko, welcome to the show.


Gojko Adzic: 00:26  

Hello. Thanks for having me.


Yan Cui: 00:28  

So you've been quite an inspiration for me personally, and I've loved some of your books in the past as well. For audience members who maybe haven't heard of you before, can you tell us about yourself and your experience with serverless, building products like MindMup and Video Puppet?


Gojko Adzic: 00:43  

Sure, I’m Gojko. I've been developing software professionally for the last twenty-something years. For the last seven or eight years I've been building, with a colleague, a tool called MindMup, which is an online mind mapping app. In 2016, we started moving it over to AWS Lambda from Heroku, and we were live 100% on Lambda in early 2017. So that kind of makes us one of the early adopters that went fully serverless in production. Apart from that, I wrote a bunch of books on software testing and software development, and I was very active in the Agile development community in the early 2000s, when that was hot. And lately, I've been building a tool called Video Puppet that helps people create videos very, very easily from PowerPoint presentations, images and slides. That's also fully serverless. So yeah, that's a bit of background, I guess.


Yan Cui: 02:01  

Yeah, I will make sure I include your books, both Specification by Example and Impact Mapping, in the show notes, so anyone who wants to check them out afterwards can go to the show notes and find them. So, from a very high level, what does the architecture for MindMup and Video Puppet look like?


Gojko Adzic: 02:18  

So MindMup is a collaborative document editing app where multiple people can edit the same document at the same time. The front end is pretty much a standard JavaScript app that's talking to a REST API on the backend. The collaboration is mostly achieved through S3. We've done that slightly before AWS published the whole AppSync thing, and we never really migrated to AppSync. If I was building it now, AppSync would probably be a good choice, but it wasn't available back then. Basically, what happens is users submit changes directly from their browsers to S3; we give users a pre-signed URL to be able to post changes. And then we have a back end process involving Kinesis and Lambdas that will take those changes, aggregate them, and distribute them back to the other users, again using S3. Other people viewing the same document are basically polling S3 to look at the changes, and the changes are being appended. And it just works; performance wise, it's amazing. S3 is, I think, one of the first services that Amazon launched, and it's incredibly robust, incredibly scalable. The fact that we can very easily plug in event handlers with Lambda functions when certain things get uploaded to S3 is amazing. And I think that whole architecture gave us almost infinite scalability at ridiculously low cost. The rest of the app is more or less a standard account management system. So there are accounts in Dynamo, and there are web pages to access that stuff. But the really interesting part, I think, is this whole chain of browser directly to S3, then Lambdas firing, aggregating, saving back to S3, and clients polling it back.
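The aggregation step described here, Lambda functions folding user-submitted changes into a shared document before writing it back to S3, might look something like this minimal sketch. The event shape and function name are hypothetical; MindMup's actual change format is not public:

```javascript
// Fold a batch of change events into the current document state.
// Each event carries a node ID and the new properties for that node;
// events are applied in order, so later changes win.
function aggregateChanges(document, changeEvents) {
  const nodes = { ...document.nodes };
  changeEvents.forEach(event => {
    // Merge the incoming properties over whatever the node had before.
    nodes[event.nodeId] = { ...nodes[event.nodeId], ...event.properties };
  });
  // Return a new document (no mutation) with a bumped version counter.
  return { ...document, nodes, version: document.version + 1 };
}
```

The aggregated result would then be written back to S3, where the other clients polling the document pick it up.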
Video Puppet is a video building system, effectively, where people can submit files either as markdown with assets or even as a PowerPoint presentation where they put text into speaker notes, and Video Puppet will convert that into video. Because video building is generally something that takes a bit longer than 15 minutes, sometimes even a few hours, it can't really run in Lambda. But I've designed it to obey all the serverless principles using AWS Fargate, a container management system that can also execute on demand and is paid by usage. So most of the initial preparation is done, again, as a single page app communicating with S3 and other Amazon services. The actual video build task itself is a Fargate container that fires when people start a build. It's using S3 to communicate the assets that the clients are uploading. So again, a very similar architecture, where a client browser will get pre-signed URLs to upload the assets to S3. The client then invokes a Lambda function directly from the browser using the AWS SDK. Video Puppet uses Cognito for authentication, so I can use the AWS SDK in the browser and invoke Lambda functions. This Lambda function does some pre-validation and fires off a Fargate task that runs for however long it needs to run. The Fargate task is meanwhile saving status updates to S3, to a well-known URL for each task, that the client will poll periodically. And it's again making use of pre-signed URLs for S3, which is wonderful. The client will display thumbnails, it will display progress during the video build. And at the end, the Fargate task just uploads the video to S3. So it's a very similar architecture, where most of the file communication is going directly from the browser to S3 and other resources in AWS.
And the communication back from AWS resources to the client is going through S3, again making use of this incredibly scalable, stable and wonderfully cheap service that is S3.
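The client-side polling of that well-known status URL could be sketched roughly like this. The `fetchStatus` function (which in practice would GET the task's status object from S3) is injected, and all names are hypothetical:

```javascript
// Poll a task's status until it finishes, fails, or we give up.
// fetchStatus is any async function returning the latest status object,
// e.g. one that GETs the task's well-known S3 URL.
async function pollTaskStatus(fetchStatus, { maxAttempts = 60, delayMs = 2000 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status.state === 'finished' || status.state === 'failed') {
      return status;
    }
    // Wait before polling again so we don't hammer the status object.
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  return { state: 'timed-out' };
}
```

In the real app, each intermediate status object would also carry the progress and thumbnail information the client displays during the build.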


Yan Cui: 07:23  

There's quite a lot to unpack there. One of the things you mentioned, which I think a lot of people would maybe be worried about, is invoking a Lambda function from the browser. I'm sure a lot of people hearing that would think, “Oh, what about security? You can't do that.” What's your argument there? What would you say to that kind of objection?


Gojko Adzic: 07:43  

So that's a really interesting question, because one of the really big learnings for me since we started migrating to Lambda in 2016 is that my thinking about front end and back end resources, and what's available where, was really constrained by how I was designing traditional three tier systems, where you have your, sort of, background jobs and functions that generally users shouldn't be allowed to talk to, then a load balancer with a web server or some kind of API in front of it that users are allowed to talk to, and then a client application. And generally, that made a lot of sense the way we were designing these things. But the key constraint there was actually utilising reserved capacity. Usually there is an application server or an API server or something like that, and that was running on a specific reserved set of instances or machines or virtual machines, and that usually had full security access to everything in the backend. So generally, my applications would not trust user requests coming to the application server, but then the application server gets fully trusted to access the database, or fully trusted to access the storage. After we started moving to Lambda and things like that, I realised that, well, S3 isn't really backend storage in a traditional sense, and DynamoDB isn't really backend storage in a traditional sense. They're available over HTTPS. And Lambda functions aren't really functions inside my highly secure application server; they're available over HTTPS. At the same time, because they're all running on a multi-tenant architecture that is AWS Lambda, my functions are there, your functions are there, everybody else's functions are there. Amazon doesn't trust my code executing in my Lambda function any more than it would trust your code executing in your Lambda function. So each request has to be authorised individually.
And if each request has to be authorised individually, then this whole middle gateway role of a boundary of trust really does not need to be there. And if we remove that, we open up lots of interesting opportunities. Like I said, I can get Lambda functions invoked from the client directly. So what we use there is AWS Cognito. Cognito is a managed username and password database, but it also allows you to invoke AWS resources, given a set of keys that Amazon will generate for each Cognito session. And you can apply templated policies to that. So I can say, well, users that are authenticated can only call this function under these conditions, or they can only access this particular file on this storage. For example, I use that a lot for allowing people to upload assets to S3: I can say users can only upload to the project's bucket, but only with their own prefix. And you can set up templated policies in IAM to do that. So with that, I can effectively set up a client app where, when I'm logged in, I have access to my files, and when you are logged in, you have access to your files, but I can't see your files because it's a different prefix. And if I can do that directly from the front end using IAM policies, inserting another application component in between just creates unnecessary latency and unnecessary cost. I've learned that one of the best ways of actually saving money, reducing latency and creating more robust applications is to avoid having these middle tier services where they're really performing platform functions. I think Alex Casalboni, who works for Amazon, was previously a big community organiser for serverless events in Italy, and is one of the originators of ServerlessDays, has this wonderful sentence where he says: transform, don't transport, in Lambda functions.
If your Lambda functions are only there to transport data from A to B, you're probably doing it wrong; you should probably connect these two things directly. Whereas if there's some kind of transformation going on, if there's some business logic going on between the two things, then there's a justification to put the Lambda function there. So a lot of these services, like S3, like Dynamo, Kinesis, and things like that, are reasonably well protected with IAM, so that if you can give the client an IAM policy to use, and if you can restrict access strictly using an IAM policy, then the platform already performs the security function; doing it twice inside the Lambda function is just wasteful. And it works the same whether you're invoking a Lambda function, or saving data to S3, or, I don't know, sending an email or something like that. What Amazon has done really well from 2016 onwards is decouple these things, to make them integrate better into serverless workflows, where if you can give the client an IAM policy, then the clients can do a lot directly with the backend, unless there's some specific business function they need. So my suggestion for people who've heard me say that the front end connects directly to the back end resources is to think about whether they're just used to doing it the other way because of very valid constraints of previous architectures, which are no longer present in a serverless architecture.
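The templated policy Gojko describes, where each authenticated user is confined to their own prefix, relies on IAM policy variables. A sketch of the relevant statement (the bucket name is hypothetical; the `${cognito-identity.amazonaws.com:sub}` variable is genuine IAM syntax, substituted per Cognito session):

```javascript
// IAM policy statement attached to the Cognito authenticated role.
// The policy variable is replaced at evaluation time with the caller's
// Cognito identity ID, so each user can only write under their own prefix.
const uploadPolicyStatement = {
  Effect: 'Allow',
  Action: ['s3:PutObject'],
  // Hypothetical bucket name; single quotes keep the variable literal.
  Resource: ['arn:aws:s3:::project-assets/${cognito-identity.amazonaws.com:sub}/*']
};
```

With a statement like this in place, the browser can upload directly to S3 with the session credentials, and no middle-tier service needs to re-check ownership.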


Yan Cui: 14:41  

So that's definitely something that's been growing on me as well over time, since we first spoke about this, I think, a couple of years ago. When you mentioned going directly from the browser to a DynamoDB table, at the time I was thinking, “Wow, what the hell? That sounds crazy.” But over time, I'm beginning to see your point: if all your function is doing is just providing authentication, then DynamoDB already has that, and Cognito...


Gojko Adzic: 15:04  

IAM already provides that, and you know you are going to use IAM again to authenticate from Lambda to Dynamo. So your Lambda is not doing much in that case.


Yan Cui: 15:13  

Yep, yep. Agreed. Another thing that I think is quite interesting about what you said earlier is that you are doing a lot of things directly against S3. Have you run into any consistency-related issues with S3? Because S3 is eventually consistent by default, and you just don't have any way around that.


Gojko Adzic: 15:30  

Yeah, well, we did run into that with aggregating data. So S3 is consistent immediately for writing new objects, but for updates it is eventually consistent, and we had to work around that a bit. S3 does support object versioning as well, and it is consistent in terms of versions. So if you're just asking for the latest version of something, you might run into consistency issues. But all the events triggered from S3, if you turn on versioning, will also include the version ID, and I strongly recommend that if people need to update the same thing over and over again. For example, with Video Puppet, I am uploading the status of a task to S3 under a task ID, so the client will continuously poll the same key and the updates will overwrite that key. In cases like that, you do need to be a bit careful and make sure that when processing something you're reading the correct version ID. For MindMup, we had to develop our own kind of consistency check that will, after a file is aggregated, check a few seconds later whether an older version was restored or not, and if so, reprocess the events a bit. It's not ideal, but I'm hoping that at some point S3 will introduce atomic updates. I think one of the biggest missing functions of S3 for me is being able to submit an append operation, effectively, like the file system has, where you don't care what's already in S3, you're just giving it a bit more data to append to the end of the file. That would make S3 slightly more powerful than it is now. We looked at some other options for doing that and concluded that S3 still has a million other benefits that we want to use, and all the other solutions that we looked at didn't have those benefits. So developing this check a few seconds later, to know that the latest version is what you expected it to be, works perfectly fine for us.
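That check-a-few-seconds-later safeguard might look roughly like this sketch. The S3 client is injected so the logic stays testable with a fake, and all names are hypothetical (a real client call would go through the AWS SDK):

```javascript
// After aggregating to S3, wait a bit and confirm that the version we
// wrote is still the latest one on a versioned bucket. If an older
// version resurfaced, the caller should reprocess the change events.
async function aggregationStuck(s3, bucket, key, writtenVersionId, delayMs = 5000) {
  await new Promise(resolve => setTimeout(resolve, delayMs));
  // On a versioned bucket, a head request reports the latest version's ID.
  const head = await s3.headObject({ Bucket: bucket, Key: key });
  return head.VersionId !== writtenVersionId;
}
```

The version ID to compare against comes from the S3 event notification, which includes it when versioning is turned on.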


Yan Cui: 18:03  

Okay, that's great. I think I've also missed the ability to append to S3 files in the past, where you end up having to do all these reads and then updates, and then dealing with some of the consistency issues that come with the fact that updates are eventually consistent. I do hope they give you something in the end that's going to make your life a lot easier in this particular case. What about the economic benefits of serverless? Because that's something that you have also talked about quite a bit in the past. Can you maybe share with us some of your experience running and scaling MindMup? Did you guys save much money when you moved from EC2 to Lambda with MindMup?


Gojko Adzic: 18:44  

So we moved from Heroku to Lambda. I guess Heroku does run on EC2, so we were on EC2 by implication. But we saved a lot of money, and we saved money in two different ways. And although one of those ways is a lot more quantifiable, and I wrote about it a lot earlier, I don't think it is actually the more important one. So the first way we saved money was just blanket operational costs moving from Heroku to Lambda. Because we were already using S3 and other things like that at the time, we have some hard data about that, which we collected from February 2016, before we introduced the first Lambda function, to February 2017, when we completed the migration. The number of active users we had in the system grew about 50%, and our operational costs dropped about 50%. All in all, comparing that gives me a rough estimate that we saved about two thirds on operational costs, which is nice. But ultimately, what's really important is that by using Lambda, we were able to get things that are included in the price of Lambda that are not included in the price of other hosting: getting Amazon's operations support and expertise, getting all this wonderful infrastructure stuff like alerting through CloudWatch, automatic retries, and this whole IAM integration. And I think we dropped a lot of infrastructure code. Back then, we had services that were just waiting on SQS, polling SQS, then invoking something else and retrying it if it fails. A lot of that stuff just went away when we migrated to Lambda. And Lambda kept maturing. I remember when I met you the first time, you had this incredibly complicated way of getting messages from SQS into Lambda, with Lambdas that, before they die, recursively call themselves, and things like that. It was one of the most complicated diagrams I've seen for an architecture.
But you know what, a year and a half ago they introduced the direct connector from SQS to Lambda. So the platform is evolving quite a lot, and I think we've benefited significantly from these things being improved over time and becoming easier and easier to do. And where the penny really dropped for me, in terms of the indirect operational cost savings, was, I think, in 2018, during the summer. There was this Intel processor vulnerability, Spectre, and I forget what the other one was called. The first one made all the news, and everybody was writing about it, and it was really quite shocking. Then, about a month later, there was a second vulnerability discovered, and we got an email from a concerned university admin, whose job was obviously being shaken by all these cloud services and vulnerabilities. The email said something like, you know, what are you doing to mitigate this CVE vulnerability, we need to know now. I had just woken up. Most of our clients are in the US; we work on Central European Time and British time, and I just woke up and got this thing, still drinking coffee, with no idea what the CVE number referred to. I typed it into Google, and the first result coming out of Google was that AWS Lambda was already patched. So from that perspective, being able to rely on Amazon to do the whole infrastructure piece saves a lot of time for me, because I can focus on product development, and I can focus on doing business-unique features, rather than dealing with all this other stuff that needs to happen when you have a product. MindMup is a product that a few million teachers and students use per year, and it's being developed, supported and operated by a team of two people. In addition to that, I do a lot of other stuff, and I'm building this new product as well that is becoming popular. I can't do that if I'm on pager duty all the time.
And the fact that basically the whole pager duty thing, infrastructural things, operational things, security things, I can rent from Amazon for the price of running it inside Lambda is wonderful. That's a massively liberating thing. And I think it can significantly reduce the bar you need to clear to operate a production system, where really you can build the business-unique stuff of your product and focus on that, and not have to spend time doing commodity stuff that Amazon is going to be a lot better at than you anyway. So for me, I think the biggest economic benefit of doing serverless is faster time to market and wasting less time on things that are not my unique business proposition. Yes, you can reduce your operational costs compared to EC2 or compared to other things, but that's not really the key thing. I've mentioned Video Puppet. At the moment, I'm running Video Puppet on Fargate, and Video Puppet as it is has probably two orders of magnitude fewer users than MindMup, maybe even more. But at the moment, it's costing significantly more to run than MindMup, an order of magnitude more. Operationally, yes, I could save a few hundred dollars a month by doing my own infrastructure in EC2, or doing my own scaling and stuff like that. But if I look at the developer time, if I save two hours of my time a month, I've already made a profit on that. So although, yes, Lambda can probably be cheaper than EC2 if done right, and it can definitely be cheaper than things that are reselling EC2, like Heroku, I think the biggest economic benefit is being able to focus really valuable developer time on really valuable development outcomes.


Yan Cui: 26:11  

Yeah, I couldn't agree with that more. And I certainly remember when Spectre and Meltdown happened, I was working in a team and we had all these containers, so we spent a couple of days just busy patching all of our images and updating all of our clusters. And then I remember, at that moment in time, I saw a tweet from Chris Munns saying that all infrastructure running Lambda and Fargate had been patched for Meltdown and Spectre, and it occurred to me that we weren't even thinking about our Lambda functions, because it's just not a concern for us. Someone, Amazon in this case, is doing a much better job at securing and patching that infrastructure than we can. And recently, I've been doing some work on a client project where I built a back end for a social network in just a couple of weeks on my own. That sort of delivery speed would just not be possible if I had to constantly worry about setting up networking, or EC2 servers, or building Docker images and things like that, rather than building just the things that the business needs to ship a feature or product. So, you're also a world-renowned expert when it comes to testing. I mean, your book Specification by Example was a favourite of mine. Incidentally, that's also a topic that many people are still confused about. How do you personally approach testing for MindMup, and when it comes to Lambda and serverless?


Gojko Adzic: 27:35  

Well, technically, Lambda is just a highly distributed execution system, and it's no different than any other highly distributed execution system. I guess the big problem here is that very few people have actually worked on proper high throughput transaction processing systems, like in banks or in trading and things like that. I had the luck of working on lots of systems like that in the early 2000s, and the approaches we were using for testing those things back then I use for Lambda today with great success. I think designing things where you have a clear delineation between the business core and outside infrastructure, using stuff like Alistair Cockburn's Hexagonal Architecture, or Ports and Adapters, becomes absolutely crucial. Isolating all infrastructure connectors behind adapters that can then be integration tested separately becomes critically important, and having good design practices, creating small modules, creating low coupling between modules, becomes really, really important. So in practice, a typical Lambda function entry point, the lambda.js that has the handler, is very, very small for me. What it needs to do is just the wiring. It is only responsible for ensuring that it's reading the input parameters from the right position, parsing the event in the right way, and passing it to some kind of business handler, along with all the other infrastructural connectors that the business logic can use. So for example, a Lambda function that receives events from Kinesis and sends them to S3 could have just a Kinesis event parser and an S3 event repository, and a request processor initialised with these two objects. What that allows me to do is have proper unit tests for the request processor that use in-memory adapters for the event storage and in-memory adapters for the event parser, testing all the business logic there.
I can then have integration tests for the S3 event repository and properly test that. I can then have an end to end operational test that submits a test event and checks whether it is handled correctly. And that allows us to build the whole pyramid of tests. If people are struggling with how to test Lambda functions, I would strongly recommend investigating Hexagonal Architecture, because that's one of the best ways you can isolate these concerns and then apply different levels of testing, and different ways of testing, at different levels. For more complicated stuff, we tend to use a lot of automation, of course, at different levels. So for example, for Video Puppet, I've built some tests around the core business logic of building videos that will render the sample frames, render the audio as a kind of frequency-graph image, combine the two, and then compare the expected and the actual outcome, so I can review lots of these things quickly. But in terms of Lambda itself, I think reviewing or looking at books around designing high throughput transaction processing systems and messaging systems, and investigating how to test those things, is really, really important. The nice thing about Lambda in architectures like that is, as I mentioned, you can benefit from all this operational experience of Amazon running these high throughput transaction processing systems. You get a lot of benefit, lots of scalability, lots of opportunities to just connect to different services. But the downside is that even a Hello World is a high throughput transaction processing system, effectively.
So people need to learn how to design these systems. I mean, you can just jump into it, and you will get somewhere, but to avoid all the pitfalls and all the problems, people have to investigate a bit how to design these kinds of robust systems.
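The wiring pattern Gojko describes, with the handler as a thin composition root and the processor tested against in-memory adapters, could be sketched like this (all names hypothetical; the real adapters would wrap Kinesis and S3):

```javascript
// Business core: knows nothing about Kinesis or S3, only about the
// two ports (parser and repository) it is initialised with.
function createRequestProcessor(eventParser, eventRepository) {
  return {
    async process(rawEvent) {
      const changes = eventParser.parse(rawEvent);
      // Example business rule: skip empty batches instead of writing
      // empty files to the repository.
      if (changes.length === 0) return { stored: 0 };
      await eventRepository.save(changes);
      return { stored: changes.length };
    }
  };
}

// The Lambda entry point would then only do wiring, e.g. (sketch):
// exports.handler = event =>
//   createRequestProcessor(kinesisParser, s3Repository).process(event);
```

A unit test can then pass in an in-memory parser and repository and exercise the business logic without touching any AWS service; the real adapters get their own integration tests.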


Yan Cui: 32:52  

Yeah, I think that is definitely a big jump, especially for companies who are not used to building these types of systems. From my experience working with lots of different companies, I find that if a company has already been doing microservices, they're used to that mindset of building distributed transaction systems, and all the testing and monitoring challenges that come with that. They adapt to serverless very, very easily and don't tend to have too many problems. Whereas teams that are still building things monolithically, and have never had to think about what happens when you have a distributed system, how you monitor things, how you alert on things, those are the teams that tend to struggle a bit when it comes to adopting serverless. Because it is quite a big jump, both in terms of the architecture style and the new challenges that you now face, but also in terms of the technology underneath, in terms of Lambda functions, in terms of how you test things and all that. But I can definitely add a link to Alistair Cockburn's writing on Ports and Adapters, so that you guys can check it out.


Gojko Adzic: 33:57  

Yes, Slobodan Stojanovic did a few really nice posts about Hexagonal Architecture and things like that in the context of Lambda functions. In my book, Running Serverless, I have a whole chapter with code demos on how to do this exact thing I mentioned, where you use Lambda just for wiring. So the entry point Lambda script, or the entry point Lambda Java class, is very, very small; it's just responsible for wiring, and then you can separate concerns into other classes and test them properly in isolation. So those would be some good resources to also look at.


Yan Cui: 34:42  

Cool, I will include the link to your book in the show notes as well. So obviously, serverless worked out quite well for you, and you are clearly a big fan. But is there anything that didn't work so well, any pain points or platform limitations that made life difficult for you, that you wish weren't there or that AWS would just fix?


Gojko Adzic: 35:02  

Well, the biggest issue we've experienced so far is this S3 eventual consistency. The thing that surprised me is that if you really have a hot object that's being written a lot, we've sometimes experienced latencies of up to 15, 16 seconds until it gets consistent. And that was surprising; I expected it to become consistent in less than a second, or a few seconds. But having a delay on the order of 10 seconds really caught me off guard. Then again, it's something you have to test and look at. In terms of missing things, the platform is moving fast, and lots of things are changing all the time. Being able to do a larger Lambda function, or being able to put more things into the package, would certainly be beneficial. For both Video Puppet and MindMup we sometimes have to do very graphics-heavy processing, and in order to do that, you need a nice variety of fonts. That sounds like a ridiculous concern in 2020, but consider that a full Unicode font, just one variant of it, might be something like 80 MB. If you add italics and bold and bold italics, you get to something like 200 MB just to support one font, and with Lambda packages limited to a total size of 250 MB, that doesn't really give you a lot to play with. So at the moment, we have lots of weird workarounds, storing things in S3 and downloading them to Lambda containers when they're needed, and things like that. And I know Amazon has been promising to increase that limit for a long time. But I think being able to do larger functions, even if they start slower than they do now, would be wonderful. I think the observability of Lambda functions is amazing for asynchronous invocations: you have lots of things around retries and sending dead-letter queue events and things like that.
And I'm really sad that that kind of functionality is not available for synchronous functions as well. I've caught a tonne of unexpected bugs just by inspecting events that failed. As a ridiculously simple example, when I launched Video Puppet and allowed people to upload PowerPoint files so they can convert them, there was one lady who uploaded a PowerPoint file where she used the vertical tabulation character in Unicode. I had no idea that even existed, and the whole thing exploded trying to read the vertical tab. But I caught that in an event log, figured out what it was and fixed it. Whereas for request-reply Lambda functions with synchronous invocations, there is no such possibility, and I think that's a really missed opportunity. Because in a highly distributed system, where users keep surprising you, we can either spend years and years doing perfect analysis and user research, or we can move fast, do gradual releases, look at what people are doing that surprises us, and fix it while it's still a small problem. So I really miss having proper error logs and captured events for synchronous Lambda invocations. I know I can log errors manually, but you can't properly log an event that caused the Lambda function to time out. You can't properly log an event that caused the Lambda function to explode in an unexpected way. And you can't forward errors from a synchronous invocation somewhere else, where for async invocations that's possible. So I think for me, those two things would be the biggest improvements to the Lambda platform that Amazon could make. And of course, an atomic append to S3 would be absolutely amazing.
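The package-size workaround Gojko mentions, shipping large assets like fonts via S3 and pulling them into the function's container, is commonly memoised per container so the download happens once per cold start. A sketch, with the download function injected and all names hypothetical:

```javascript
// Download large assets (e.g. fonts) into /tmp once per container.
// The promise is memoised at module scope, so concurrent invocations
// in the same container share a single download.
let fontsReady = null;

function ensureFonts(downloadFonts) {
  if (!fontsReady) {
    // In production, downloadFonts would fetch the font bundle from S3.
    fontsReady = downloadFonts('/tmp/fonts');
  }
  return fontsReady;
}
```

Each handler invocation would `await ensureFonts(...)` before rendering; warm invocations resolve immediately because the memoised promise is already settled.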


Yan Cui: 39:46  

On the observability piece, we should have a chat afterwards about getting you hooked up with Lumigo. I've been using Lumigo and I also do a lot of work with Lumigo myself. One of the great things about Lumigo is that they record all of these things for your invocation: the invocation event as well as the result, the return value. So when there's a problem that surprises you, including when your function times out, you still get that information. They even record the environment variables used for each invocation, so if you ever run into a bug, like I did recently with a client project, you can go back in time and see what the environment variables were at that moment. And being able to see the events that cause a function to error is super useful. I can just grab the event from the Lumigo console, replay the function locally on my laptop with things like “serverless invoke local” or “sam local invoke”, and attach a debugger to step through the code and see what actually happens. Another great thing is that it reports all the requests and responses from every IO operation you make. So when you're getting an error that you know has something to do with how you made a request to DynamoDB, you can go back in time and check, “What did I send to DynamoDB?” and work out what the actual problem is. So let's have a chat about this afterwards, and I can also get you a discount if you want to try out Lumigo.


Gojko Adzic: 41:14  

That’s fair enough, but I'm paranoid and don't want to trust a third party with my data. If I'm already trusting Amazon, I want to trust Amazon. And as I said, the fact that Amazon is offering this for asynchronous invocations but not for synchronous invocations is, I think, a missed opportunity.
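For reference, the asynchronous-only failure handling Gojko is contrasting with is configured per function. In a SAM template it looks roughly like this (a sketch with hypothetical resource names; none of these settings apply to synchronous, request-reply invocations):

```yaml
Resources:
  ProcessUpload:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      CodeUri: ./src
      # Only honoured for asynchronous invocations: after retries are
      # exhausted, the failed event, with its full payload, is forwarded
      # to the OnFailure destination for later inspection or replay.
      EventInvokeConfig:
        MaximumRetryAttempts: 2
        DestinationConfig:
          OnFailure:
            Type: SQS
            Destination: !GetAtt FailedEventsQueue.Arn
```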


Yan Cui: 41:32  

Yeah, so we have been asking Amazon, well, the Lambda team, to extend Lambda destinations to support synchronous invocations as well. That'd be really useful, especially for some of the cases we just discussed. Now, you are also one of the co-authors of Claudia.js. As a potential user, can you tell me why I should choose Claudia over something like AWS SAM, the Serverless Framework, or AWS CDK? What's the special thing about Claudia?


Gojko Adzic: 42:08  

So, Claudia was designed with one specific purpose in mind: to make the easy use cases of Lambda really, really easy. If you want to set up just a webhook, or a very simple web API that parses request parameters and responds with something, I think Claudia is still the simplest solution for that. When we released Claudia, and again, we're talking about 2016 here, the other tooling was lagging behind or was significantly more complicated. Amazon has caught up quite significantly with AWS SAM, and I think SAM is going in a great direction. I wrote about SAM in my book, Running Serverless; it's quite a nice initiative. I think SAM has made web APIs running on Lambda functions significantly easier to do than before, but if you're a JavaScript shop, SAM still has some really weird limitations that they're refusing to fix. We even took the part of Claudia that I think is the most interesting for that and contributed it to SAM; Slobodan had a PR open for 11 months that they didn't want to merge. That's around packaging local dependencies. So what Claudia can do, and SAM cannot do now, is properly package dependencies from a monorepo that are referenced as relative paths on a local drive. If you're not a JavaScript shop, or if you're not that worried about local dependencies, I think SAM is probably what I would use today instead of Claudia, because it's supported by Amazon and under more active development. For MindMup we're still very actively using Claudia and we're building all our Lambda functions with it. For Video Puppet I've actually just used the packaging aspect of Claudia. We extracted that so I can just package functions like that, and then I've used some SAM, and I've used CloudFormation directly to create functions. So I think Claudia had its moment in 2016 and 2017, when the other tools were really difficult to use.
Since then, things have really caught up, so I think CDK or SAM is where the future is. And at the moment, even for people who are suffering from not being able to use local dependencies with SAM, what I would recommend is to use Claudia for packaging and then use SAM for deployment.
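The hybrid workflow suggested here, Claudia for packaging and SAM for deployment, might look roughly like this (a sketch; the output path, stack name, and bucket name are hypothetical, and this assumes Claudia's packaging-only `pack` command):

```shell
# Package with Claudia so relative/monorepo dependencies resolve into a
# single zip; `claudia pack` builds the artifact without deploying it.
claudia pack --output dist/function.zip

# ...then hand the artifact over to SAM for the actual deployment,
# with the template's CodeUri pointing at dist/function.zip.
sam deploy \
  --template-file template.yaml \
  --stack-name my-app \
  --s3-bucket my-deployment-bucket \
  --capabilities CAPABILITY_IAM
```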


Yan Cui: 44:56  

Okay, so before we go, are there any other upcoming projects or books that you'd like to tell us about beyond the ones that we already discussed?


Gojko Adzic: 45:04  

No, at the moment I'm pretty much maxed out in terms of capacity to do things. I am really enthusiastic about Video Puppet because, first of all, it's a lot of fun to build, and second, I think it's going to help a lot of people make things like video courses, promo videos, and tutorial videos much more easily. For the people listening to this podcast, I guess most of them will be developers: Video Puppet came out of my need to have version-controlled videos, where I can have source code for a video, build it through GitHub, and automate updating the videos when my application changes. I've not seen anything else do video production like that. We often have to do short tutorial videos. Originally we wanted to do it for Claudia, but then I started using it for MindMup as well, and a bunch of other things, and usually it would take me something like one and a half or two hours to produce just five minutes of video. Most of that had nothing to do with the actual content; most of it was synchronising audio and video, figuring out the narration, and re-recording it because I'd said something stupid. Video Puppet automates all of that away. So I'm trying to spend as much time as possible developing it, because it's still early days and it needs a lot of love. I'm probably not going to be writing anything else or starting any new projects soon.


Yan Cui: 46:46  

Yeah, when I looked at it, it did look pretty unique in this particular space, and it's something I wish I could use. One of the things stopping me right now from using Video Puppet is that I still want to use my own voice, because that's quite important for my identity and for building that connection with the students. Is that something that you guys are thinking about, in terms of how to not just...


Gojko Adzic: 47:10  

You can record your own audio and then upload it; that's of course been there. I think what Video Puppet can help you do there is use the automated narration to just iterate on the content, and then, once you're finally happy with the flow and everything, record your own audio and Video Puppet will synchronise everything. There are services emerging, like Microsoft has this beta-level service that allows you to record a couple of hours of your voice and then synthesise audio from text using your voice. I don't know how good it is yet; I've not really tried it, but I think in a few years that's going to be possible as well. For now, we have a couple of users who want to personalise video, and what they're doing is using the text-to-speech to experiment with the content, move things around, cut things up, paste things in different places. And once they're finally happy with the script, they record their own audio.


Yan Cui: 48:12  

Yeah, that voice synthesiser, that's the thing I'm looking for, because if I had to record lots of small clips of my audio, that to me is even harder than recording one really long take of something. But once that synthesiser becomes good enough, that makes the whole problem go away, because really I just want to write a script and then have something like Video Puppet turn that into proper audio and merge it with the slides and everything. So I'm looking forward to when that becomes properly good.


Gojko Adzic: 48:46  

The reason why I'm so excited about this is that you can see there's a lot of innovation in this space generally now, and there's a lot of research going on. So I think Video Puppet is a product that might not be for everybody now, but in a couple of years, when all these new technologies that are being built now mature, I think lots of interesting possibilities open up. And that's why it's fun to build it now.


Yan Cui: 49:08  

Yeah, for sure. That's quite an exciting thing to look forward to. So Gojko, thank you so much for taking the time to talk to me today. I hope you stay safe, and I hope to see you in person soon.


Gojko Adzic: 49:19  

Yeah, thank you very much. Likewise, I hope to see you as well. Thank you. 


Yan Cui: 49:23  

Take care. Bye bye. 


Yan Cui: 49:36  

That's it for another episode of Real-World Serverless. To access the show notes and the transcript, please go to realworldserverless.com. And I'll see you guys next time.