Real World Serverless with theburningmonk

#39: Fighting COVID with serverless with Denis Bauer

December 02, 2020 Yan Cui Season 1 Episode 39

You can find Denis on Twitter as @allPowerde and on LinkedIn here.

Find out more about the work Denis and CSIRO are doing at

To learn how to build production-ready Serverless applications, go to

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.

Opening theme song:
Cheery Monday by Kevin MacLeod

Yan Cui: 00:12  

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real world practitioners and get their stories from the trenches. Today, I'm joined by Denis Bauer. Hi, Denis, welcome to the show.

Denis Bauer: 00:25  

Hi, thank you for having me. It's fantastic to be here.

Yan Cui: 00:29  

Yeah, I saw your talk at the ServerlessDays Virtual event recently, and it's very much on topic: you were talking about how you're using serverless to tackle COVID-19. Can you tell us a bit about yourself and some of the work that you've been doing at CSIRO?

Denis Bauer: 00:45  

Yes. So I work for CSIRO, which is Australia's government research agency. And CSIRO is really passionate about translating research into products that people can use in their everyday life. Specifically, I work for the e-Health Research Centre, which is the largest digital health initiative in Australia, and, worldwide, quite unique in covering the full value chain, from basic science all the way up to delivering health services into the clinic.

Yan Cui: 01:13  

And I guess, what sort of role has your team played in the Australian government's response to COVID-19? And I'm also quite curious how serverless technologies have played into the work you're doing in terms of the fight against COVID?

Denis Bauer: 01:29  

Yeah, with COVID, the key element to it was that we had to respond quite quickly. Therefore, one of the first questions that we needed to answer was, well, what is this virus? What can it do? What is its building plan? And how can we create a vaccine and treatment for it? So luckily, with COVID-19, or SARS-CoV-2, which is the underlying virus, the genomic sequence was generated quite quickly. So we did know about its building plan quite early on. And this really gave us a head start in understanding how its inner functions work. But given that RNA viruses, which this one is, are mutating, it was quite important to compare one virus that was, you know, collected from one individual to another one that was found in another person. And in order to do that, we were basically taking a fingerprint of the virus and its genome, and comparing that. We use machine learning in order to really understand the evolutionary trajectory it was on. Now, doing that was quite difficult, because the information luckily grew quite quickly, in that the international community shared the data quite freely. But that also meant that very quickly, we accumulated huge amounts of data. So the virus itself is 30,000 letters long. And at the moment, we have 200,000 versions of that virus sequence that we need to analyse. So clearly, 200,000 times 30,000 is a huge number to be dealing with. And serverless really helped us with that. With serverless, we can split the individual tasks of analysing that genome into smaller chunks, and those chunks can be processed on individual Lambda functions. And then we can collect them back together in order to have, you know, the full spectrum, which is very much aligned to, you know, embarrassingly parallel processing on a high performance computing machine.
But here, we have the commodity hardware of the cloud, rather than a really expensive high performance computing system where we would have to queue up in order to get access to it. So all of this really has helped us in getting a head start in understanding the virus better.
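The embarrassingly parallel map-and-merge idea Denis describes can be sketched in a few lines of Python. This is purely an illustrative simulation, not CSIRO's actual code: in the real system each chunk would be handed to its own Lambda invocation, and the per-chunk analysis here is a trivial stand-in (a GC-base count).

```python
def split_into_chunks(sequence, chunk_size):
    """Split a genome sequence string into contiguous chunks."""
    return [sequence[i:i + chunk_size]
            for i in range(0, len(sequence), chunk_size)]

def analyse_chunk(chunk):
    """Stand-in for per-chunk analysis: count G/C bases."""
    return sum(1 for base in chunk if base in "GC")

def analyse_genome(sequence, chunk_size=10_000):
    """Map each chunk independently, then reduce the results.

    In the serverless version, each analyse_chunk call would run
    in its own Lambda function and the sum would be the fan-in.
    """
    chunks = split_into_chunks(sequence, chunk_size)
    return sum(analyse_chunk(c) for c in chunks)
```

Because the per-chunk results are independent, the map step can run on as many Lambda functions as there are chunks, which is the "cheap supercomputer" effect discussed next.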

Yan Cui: 04:03  

Okay, that's interesting, because I've heard quite a few people talk about similar use cases where they're using Lambda as, like, a really cheap supercomputer that you can spin up, I don't know, 3000 concurrent executions of with no warning, and that gives you a lot of parallel computing power. And without having to, like you said, rent a supercomputer where you have to go through approval and queue up before you can get access to it. So in this case, what does your architecture look like from 30,000 feet?

Denis Bauer: 04:34  

Yes. So it's interesting in that we had multiple components to it. There is the interaction with the user, and the other one was the hardcore compute element. While the interaction-with-the-user part was basic, you know, client-server, in high performance compute the scheduler is sort of the core, where the smarts really are. And in the serverless world, we don't really have an equivalent element to that. There are Step Functions, which can fan out and collect things back together. But the problem is that, you know, when you're talking about the massive scale that we're talking about, you know, 3000 Lambdas, a million Lambdas, this gets quite expensive very quickly. Therefore, we had to go back to basics and use an SNS topic to handle the transactions between the individual components, and then, at an even lower level, we are writing things to S3, in order to have temporary files that we can then collect and record, to keep track of where in the fanning-in and fanning-out process we are. So for our 30,000-feet overview: we do have API Gateway interacting with the user, we do have the initial Lambda function that triggers everything, we do have a DynamoDB table that, to some extent, collects and keeps track of what is happening. And then most of it goes to S3, and some of the communication happens on S3. And the SNS topic oversees this, and ultimately, it goes back to a final Lambda function that collects all the data back together and sends it back to API Gateway.
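As a rough sketch of the fan-out step described above, the initial Lambda could derive one task record per chunk, write each as a temporary file under a "pending" S3 key, and announce the job on the SNS topic. The function below only builds the task records; the bucket layout, key scheme and field names are all illustrative assumptions, not the actual system's.

```python
def build_chunk_tasks(job_id, sequence_length, chunk_size):
    """One task record per chunk of a genome region.

    In the real architecture each record would be written to S3
    (as a temporary file), the job announced on an SNS topic, and
    a final Lambda would later collect the results back together.
    """
    tasks = []
    for start in range(0, sequence_length, chunk_size):
        end = min(start + chunk_size, sequence_length)
        tasks.append({
            "job_id": job_id,
            "key": f"jobs/{job_id}/pending/{start}-{end}.json",
            "start": start,
            "end": end,
        })
    return tasks
```

Keeping the task list derivable from just the region length means any worker can recompute the fan-out plan without a central scheduler.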

Yan Cui: 06:18  

Okay, so in this case, the request to API Gateway, is that from someone on the website trying to look at the visualisation, or is it someone posting some data to you, because they have, I guess, identified a new variant of the virus and then published some data?

Denis Bauer: 06:36  

Yeah, so we do have multiple use cases that, to some extent, have a similar underlying pattern. Our very first one was GT-Scan, which is basically the search engine for the genome, where it is supporting genome engineering approaches. The next one was more around diagnostics, or variant prioritisation. So in the, you know, 2 million differences between you and the next person, the question is, which of those differences might be relevant or might be critical in determining your ongoing treatment, for example, or your sensitivities to certain drugs. Therefore, we do need to identify which of those 2 million differences are crucial. And there is another serverless component to that, where we can annotate all of this with the scientific and medical information that is out there. So in both cases, it is the user wanting to have information, and then the inner workings of the system bring in additional information, be that, you know, a genome or be that medical information, in order to synthesise that knowledge, condense it down and serve it back.

Yan Cui: 08:06  

Okay, so let me try to follow the flow of information here. So you've got some, I guess, crawler or something, or some scanner, I think you mentioned, that collects data and publishes to your API Gateway endpoint, which triggers a Lambda function that then kicks off all this parallel processing in many, many Lambda functions running concurrently. And you're saving the data into S3 temporarily so that those Lambda functions can get it from S3 and then start to process it. And then you distill the information down into small chunks so that when someone goes to your website to see... I think I've seen those, your visualisations of the different, I guess, lineages of those viruses and mutations. And then, I guess, at that point, you've first computed a lot of data and then distilled it down to whatever is necessary to support those visualisations. Is that what's going on here? Is that how the data is flowing through the system?

Denis Bauer: 09:08  

Yeah, that's right. And then the web page is basically just a static web page that takes the updated information and visualises that for, for the user.

Yan Cui: 09:17  

Okay, gotcha. So in this case, when you need to kick off that parallel processing, you said that you were using SNS. So do you fire one message and trigger multiple Lambda functions to each pick up a different chunk? Or are you firing one message for every chunk?

Denis Bauer: 09:36  

It depends on the use case, right? So typically we do have one task that says, give me the results back for a certain region in the genome. And that region might be 10,000 letters long. So we split it down into smaller chunks, and then, depending on whether a smaller chunk can be processed in time, it gets processed and then sent back. If not, then it gets split further, in order to really fit within Lambda and its memory and runtime requirements, and to keep the runtime to a reasonable length for the user.
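The split-until-it-fits behaviour Denis describes could look something like this recursive sketch, where `fits` stands in for the check of whether a chunk can be processed within Lambda's memory and runtime budget. All names here are hypothetical, and the halving strategy is just one reasonable choice.

```python
def process_region(start, end, fits, analyse):
    """Analyse the half-open genome region [start, end).

    If the chunk fits the (hypothetical) Lambda memory/runtime
    budget, analyse it directly; otherwise split it in half,
    recurse on each half, and concatenate the partial results.
    """
    if fits(start, end):
        return [analyse(start, end)]
    mid = (start + end) // 2
    return (process_region(start, mid, fits, analyse) +
            process_region(mid, end, fits, analyse))
```

In the serverless version each recursive call would be a fresh Lambda invocation rather than an in-process call, which is why the later discussion of termination conditions matters so much.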

Yan Cui: 10:16  

Okay, okay. I'm asking because of the sort of MapReduce type of task you described earlier, where you were using many concurrent Lambda invocations to process something in small chunks. I don't usually see that with SNS, because with SNS, typically, you want a message to be fanned out to multiple listeners that are all interested in the same message. Whereas in a MapReduce task, you don't want the same message going to everybody; you want everyone to get a different message that points to a different chunk of the task. I'm trying to understand, what's the role of SNS here?

Denis Bauer: 10:50  

Yes. So I think we started off with everyone getting their respective chunks, like, you work from 1 to 100, you work from 100 to 200, and so on. And I think in the end, we went more with: we give everyone the same information, and then you go to S3 and look at what kind of temporary files need processing. And then you pick one of those, process it, and deposit the results back. In the end, especially with the, you know, recursion-based approach of splitting further and further and further, that was an easier way to handle it, rather than keeping track of all of the SNS messages that were sent and received, and so on.
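The "look at S3 and pick a pending file" coordination Denis describes can be simulated with plain dictionaries standing in for the pending and done S3 prefixes; in the real system the worker would use ListObjects, GetObject and PutObject calls instead. This is a sketch of the pattern, not their implementation.

```python
def claim_and_process(pending, done, analyse):
    """Claim one pending chunk, process it, deposit the result.

    'pending' and 'done' are dicts standing in for S3 prefixes of
    temporary files. Returns False once no work is left, which is
    the worker's termination condition.
    """
    if not pending:
        return False
    key, chunk = pending.popitem()
    done[key] = analyse(chunk)
    return True
```

A fleet of workers all receiving the same SNS message would race to claim chunks until the pending set is empty. With real S3 listings two workers can occasionally claim the same chunk, which is harmless as long as reprocessing a chunk is idempotent.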

Yan Cui: 11:38  

Okay, so when you say recursion, does that mean that you're writing your functions in a recursive way, where they process as much as they can and then call themselves, so that it recurses, and you pass along a pointer to how far along you are in the file?

Denis Bauer: 11:54  

Yes, except that this is handled through what is written to S3, as the temporary files that are set out on S3. It means there's still a termination element to it. But in terms of identifying at which point in the inception this particular Lambda function is to operate, it's not the message that it gets from the previous recursive call, but the information that is on S3. And all it knows is that it needs to split itself.

Yan Cui: 12:27  

Okay, I see. I see. So you're not really writing a recursive function per se. Okay. Yeah, I've actually written quite a few recursive functions in the past, when I needed to process a really large S3 file that I couldn't finish within one 15-minute invocation. So I process it in smaller chunks that I know will only take, what, 10 seconds, and at the end of each one I check: do I have enough time left in the invocation? If not, then I recurse, call my function again, and pass along a pointer. So that I know, okay, I've processed the first 30,000 rows, so when I invoke myself again, I should start from the 30,001st row. I guess it's more the classic recursion technique they use in functional programming. But I think a lot of people see recursive functions as an anti-pattern. Certainly a few people in AWS have told me, yeah, you shouldn't use recursive functions, because if you're not careful, then you kind of miss the terminal condition, and then you just have an infinite recursion.
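The pattern Yan describes, process in small batches, check the remaining invocation time, and re-invoke yourself with a pointer, could be sketched like this. The Lambda context object really does expose `get_remaining_time_in_millis()`; the batch size, the 20-second safety margin and the `invoke_self` hook (which in real code would be an asynchronous Invoke call on the same function) are illustrative assumptions.

```python
BATCH = 30_000             # rows per batch, illustrative
SAFETY_MARGIN_MS = 20_000  # stop recursing when less time remains

def process_batch(row):
    """Stand-in for ~10 seconds of real work; returns the new pointer."""
    return row + BATCH

def handler(event, context, invoke_self):
    """Self-recursing Lambda sketch.

    'context' is the Lambda context object; 'invoke_self' stands in
    for asynchronously invoking this same function with a payload.
    """
    row = event.get("next_row", 0)
    total_rows = event["total_rows"]
    while row < total_rows:
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            # Out of headroom: recurse, passing the pointer along.
            invoke_self({"total_rows": total_rows, "next_row": row})
            return {"status": "continued", "next_row": row}
        row = process_batch(row)
    return {"status": "done", "next_row": row}
```

The terminal condition is `row >= total_rows`; losing it, as discussed next, is exactly how this pattern turns into infinite recursion.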

Denis Bauer: 13:40  

Yeah, I think this was one of the reasons that we went for this, you know, half-hearted, if you want, recursion, having the fallback option with the temporary files on S3. Because you can spend a lot of money if you don't terminate your thing, and it just keeps on running forever and spinning up 10,000 Lambda functions.

Yan Cui: 14:05  

Yeah, yeah. And with S3, I've also heard of people running into infinite recursion in the past, where you use S3 to trigger a Lambda function, which then writes a file back into S3, but into the same bucket, because I guess you didn't think it through. So that triggers the function again, which writes files into S3, and so on. That happens as well. I think I read a blog post a while back where someone did exactly that, and they got a reasonably sized bill at the end of it.
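A common guard against the S3-triggered loop Yan mentions is to write results under a prefix that the event notification excludes, and to double-check the key in code as well. The prefix name below is just an example.

```python
OUTPUT_PREFIX = "results/"  # illustrative; real deployments would also
                            # scope the S3 event notification filter

def should_process(key):
    """Ignore objects this function wrote itself, breaking the loop."""
    return not key.startswith(OUTPUT_PREFIX)

def output_key_for(key):
    """Deposit results under the prefix the trigger filter excludes."""
    return OUTPUT_PREFIX + key
```

Belt-and-braces: even if the event filter is misconfigured, the in-code check stops the function from reprocessing its own output.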

Denis Bauer: 14:37  

Yeah, it's true. I mean, in the past with Haskell, you were running it locally, and nothing bad could ever happen from it. Yes, yes. You can run up quite a big bill.

Yan Cui: 14:47  

Yeah, exactly. I mean, I come from a functional programming background as well. I did a lot of F# for the first however many years of my career, and I guess recursion comes naturally to me. I remember when people told me, you are going to miss the terminal condition and end up with infinite recursion, I'm like, that's never happened to me. But I totally see how it is an easy mistake to make. Even with things like S3 triggering your function, that can still be bad. But I guess you're not doing that; you are writing to S3, but then you are kicking off the invocation yourself. You're not using S3 to trigger the function, right?

Denis Bauer: 15:23  

That's right. That's right. And, you know, some of the Lambda functions can get lost as well. Therefore, even if you have the perfect failsafe recursion, if the system fails you, then we still run into the issue that it might keep on going forever.

Yan Cui: 15:40  

Right, right, gotcha, gotcha. So have you run into any issues with using serverless for this line of work? Was it something that was easy to introduce to your team? I guess a lot of the data scientists I've worked with aren't really, I guess, programmers by trade. They know how to write Python, they know how to write algorithms, but maybe they're not quite familiar with some of the practices that we use, things like CI/CD, things like building in observability and logging and all of that. Have you run into issues in terms of the technology itself, or in terms of the practices?

Denis Bauer: 16:20  

I think we're a bit different yet again to the traditional data scientists, I would say. As researchers, I would say it becomes second nature to explore things, to take up new things and leave your training behind; that is something that we need to do basically over and over again, like, you never stop learning. And I think from that perspective, there wasn't the problem of, we've never done it this way, so why would we start doing it this way? That was never an issue. But given that we were never really trained either way, we do have to discover everything from scratch and decide if it's good or bad. You don't have the baggage, but at the same time, you don't have any guidance either. So the main issue with serverless was not necessarily around the logging and the inconveniences, because we didn't know better. For us, the problem was the limitations of the technology itself. Like, at the very beginning, Lambda was so tiny that it was incredibly, crushingly painful to do anything with it. Therefore, the parallelisation had to be extremely hardcore in order to get anything done. With the increased limits, that is a bit better. But I think we're still not quite there in terms of the orchestration of the Lambda functions themselves. As I was saying, there are Step Functions, but it's not a job scheduler in the high performance compute sense. Therefore, if you're using Lambda in that sense, it gets tricky, and you have to write your own scheduler. And I think similarly with, for example, Athena, it wasn't quite up to scratch for the application case that we had, of writing large volumes of data very quickly and reading it and writing it again. So in most cases, we had to go back to the basic building blocks and build our own.

Yan Cui: 18:33  

Okay, so I guess when you say Lambda was so small and then later increased, are you referring to the maximum runtime of five minutes, which is now 15 minutes?

Denis Bauer: 18:44  

Memory, more so than runtime. Because, you see, the genome is quite big. And while you can split it up into chromosomes, they are still enormous. Splitting it up into further chunks is possible, but it requires some additional cleverness in how to handle that, because one region of the genome might influence another region of the genome, and you lose that information when you chop it up into buckets.

Yan Cui: 19:15  

Gotcha. Gotcha. So in this case, I guess, you're running functions that are maxed out at 3GB of memory, so that you can keep as much as possible inside one invocation and do all your number crunching?

Denis Bauer: 19:28  

That's exactly right. Yeah.

Yan Cui: 19:30  

Okay, got you. Okay, now I understand. So in this case, I guess, how did you guys decide to use Lambda to begin with? You said the fact that you can get a lot of compute power very quickly, very cheaply, is a good reason why you wanted to do it. But then, like you said, there are some limitations around memory, some limitations around runtime. Did you guys look at containers as a potential alternative, at least for some part of your workload?

Denis Bauer: 20:03  

Yeah, I think when we started with serverless, I remember it was one of the first AWS Summits in Sydney, where they introduced Lambda as this cool new thing for, you know, Alexa skills. And we figured, if they can do it for that, maybe we could look into it as well, because one of the key elements to it was the very spiky traffic, and being able to handle that, and this was exactly what we had. Not because back then we would get hundreds of thousands of users, but because we would have days where there was no activity on the page at all, and we didn't want to spend money on that. So given this combination, that we had to process huge amounts of data, we would have had to spin up very beefy instances that would either sit idle for days, because there was no one on the page, or the user would have to be patient and wait for those beefy instances to be spun up. And both were not acceptable, because clearly, we didn't have the money to keep a beefy instance running. But the patience a user, a researcher, has to wait for a web service to spin up is basically non-existent. Historically, someone would submit a job and then it would come back to them as an email or [inaudible]. But I think the acceptance around new technology, new analyses, delivered that way is quite low. For our new approach, you know, analysing the genome, we had to do it fast. Therefore, serverless fit the bill of what we wanted to achieve: an economical way, with a fast turnaround time, catering to researchers.

Yan Cui: 21:52  

Okay, okay, gotcha. And in your talk at the ServerlessDays Virtual, you also mentioned that a lot of the typical architectural patterns don't work for research. So you guys had to sort of come up with some of your own patterns. Can you go into more details on that? I'm curious about some of the unique challenges that you guys face in the research space.

Denis Bauer: 22:14  

Yeah, so the communication with the user, the client-server part, would be a standard pattern, and we reuse the standard approaches and tweak them if we have to; the rest doesn't quite fit in. Now, with other people repurposing Lambda as a high performance computer, maybe there are some patterns emerging around that. But the parallelisation, and using Lambda through a scheduler, isn't a standard pattern out there. So we did need to create our own communication system there. But I think, more so, in research the product that you develop is not necessarily the way that a user interacts with the webpage; it is more around the analysis result that you return. That's the value proposition. And therefore, I think, in research we are potentially a bit more flexible in standing up and tearing down solutions than in the commercial world, where they really have to put up rock solid, redundant systems with uptime guarantees. Researchers are a little bit more forgiving in that respect when it comes to the architecture, and less so in terms of the actual analytical methods underlying it. Therefore, the focus is more on the analytical methods, and with the architecture, we can be more flexible. But that also means that, with the underlying analytics being the focus, there is a lot of change in there. Therefore, our architecture needs to be hugely modular, because from one day to the next, the analysis engine could be ripped out and replaced with another one. Again, in terms of Lambda, this suits us perfectly, because we can replace one Lambda function with another Lambda function that has a different analysis engine in it. So I think, from my perspective, the emphasis is more on agile, fast, not necessarily robust, architecture when it comes to delivering the analysis.

Yan Cui: 24:31  

Okay, gotcha. Gotcha. And I think that's actually quite a nice position to be in. Because one of the hardest things is that when you are still developing, you can do what you guys do: when you need to make a big change, you just go ahead and do it and replace the whole stack. But once you go live, that's it. You know, a lot of the bad decisions that you've made are fixed in time. You just have to live with them to some extent and gradually, maybe, move away from them, but it's a slow and painful process sometimes. Being able to just throw a lot of those mistakes away once you know better, I think, is quite a nice position to be in.

Denis Bauer: 25:06  

It is a double-edged sword, though. Like, on one hand, I'm perfectly aware that we don't have to put in the due diligence that other people have to, so we can move faster. Whenever people come to me and say, “Oh, this is revolutionising what you've done!”, it's like, well, that's what we are expected to do, rather than producing these really well-architected, robust, client-ready things. But on the other hand, it also means that we are expected to move fast and come up with new solutions, and never stand still. Therefore, we can never have this perfect architecture that we can polish and be done with one day. Do you know what I mean? Whatever we put together can never be beautiful. It will always have to serve the purpose and be torn down the next day.

Yan Cui: 25:59  

I guess that can be difficult, because, you know, you're putting all this time into this work, and then the next day it is torn away, because the project's over and, you know, you have to do something else, maybe move on to another research area, another project.

Denis Bauer: 26:14  


Yan Cui: 26:16  

Okay, so thank you so much for that. That gave me a lot of insight into how the expectations, but also the requirements, you guys have are maybe slightly different from more traditional application development. I've been talking to quite a few people in, I guess, Australia, and there seems to be quite a lot of interest in serverless. Certainly, I'm seeing a lot of uptick in serverless in the last 12 months. I recently launched a new video course on AppSync, and it was a big surprise to me when I found out that Australia is the third most popular country among my students. So have you also seen something similar happening in Australia, where there's a growing interest in serverless technologies?

Denis Bauer: 27:00  

Well, once you go serverless, you never go back. And I think, A Cloud Guru, for example, is an Australian company, right? And I'm sure you're familiar with it; their video system is running serverless, right? I think that was one of the first really big Australian companies that did that. Similarly with the Bureau of Statistics, the census webpage is running on a serverless architecture. So I think, from our perspective as Australians, we are quite eager to explore new things. And serverless is this fun, lightweight, modular technology where we can just have a go and experiment. And I think people really resonate with that and like it; it certainly resonates with us.

Yan Cui: 27:49  

Okay, yeah, gotcha. Yeah, I do know ACG very well. And I had Dale Salter from A Cloud Guru on this show a little while back. And one of the founders, Ant Stanley, is a good friend of mine. He's also part of the community that runs the ServerlessDays conferences around the world, and me and him, we have done quite a few things together already. But I guess, like you said, having that one really prominent company going fully serverless, I think that has a nice network effect on the companies around the area. So with re:Invent coming around the corner, do you have any wishlist items that you hope they will address this year?

Denis Bauer: 28:37  

So re:Invent, I always look forward to that one. Similarly with the AWS Summits; it's like, you know, my Christmas. But this year, I noticed that in terms of science and academia, there's not that much there. And I wonder why that's really the case. As an AWS Hero with science and academia as, you know, my core business, I think I need to work a little bit harder to bring that community in. So one of my big wishlist items is to really have more content that is catered to research, innovation, potentially academia, you know, to bring all that innovation potential that I know is in the academic and research space into the real world, like, translate it into things that really could accelerate companies and the economy in general. That is at, you know, a very inspirational level. In terms of technical things, something that is very niche and specific to, I think, the research community: we definitely need to have more reference data. Like, there are enormous datasets currently on-premise somewhere, and you can apply for access, and then you typically are expected to copy that data over. Like, how last-century is that practice? We do need to have more of those high-value reference datasets on the cloud, where the updates are streamlined, validation happens automatically, and we as scientists can just consume that data to do our research. Similarly, in terms of compliance, specifically in the health space, there is machine learning as a medical device, software as a medical device in general. All of this is coming in order to improve patient care, but the compliance process around it means you have only one version, and that version is accredited, and you have to stick with that one version until you get the next one. That is not continuous integration; that is not something that the software industry is advocating or has evolved to.
Therefore, I think the medical space needs to evolve with it and have compliance as a service, where the software is accredited on the fly on, say, a standardised dataset, and therefore gets the accreditation as part of the continuous delivery process. And I think the economies of scale of the cloud providers can really make that something that is realistically achievable within the next two years. And lastly, serverless on the Marketplace. So there is a digital Marketplace, but it is largely focused on commercialising virtual machines. They did tweak it a bit to now also encapsulate MapReduce clusters, EMR clusters. But that's still a far cry from an actual serverless architecture being monetised through the Marketplace. That is something that I would like to see.

Yan Cui: 31:51  

Okay, that's a really interesting idea. And I think it's something that I remember Simon Wardley spoke about a long time ago: now that you've got Lambda functions as your low-level abstraction for a lot of your business logic, there's nothing stopping you from taking some of the analytics work you guys have done, some of the research work, and packaging it up as a function that somebody else can just deploy into their environment, or maybe some trading algorithm, and then they pay, I guess, a fraction of the Lambda costs that they incur as money going back to the people who published those packaged functions in the Marketplace. Is that along the lines of what you're thinking, that you can package your functions, your logic, and then just make it available for other people to run in their environment?

Denis Bauer: 32:39  

Exactly. So I feel like we're 90% there already, with the serverless libraries that are out there, where you can go in and find serverless patterns, or even the functions themselves. Now, the only thing that is missing is the monetisation element around that. And I think this really would support open source development, where you can have an open source core that is published, that is potentially peer reviewed, that is rock solid because a lot of people have looked over it, and then you sell the convenience of having it packaged up in an architecture that is efficient and links together modularly with a lot of other interaction points. I think this is definitely something that we would like to see going forward. It would, you know, open up the whole building-block idea: you can just take a building block off the shelf and pay for that element as part of your runtime, just as you would pay AWS for the architecture. But here, you pay the actual developer to continue their open source development further.

Yan Cui: 33:50  

Okay. And along that same line, they announced Lambda code signing recently, I think just yesterday. I guess that also maybe helps in terms of getting something onto the Marketplace: if the Lambda function itself is signed, that means, you know, the code hasn't been tampered with. But is there any sort of concern, at least as a researcher, that, okay, if someone gets my function and runs it from the Marketplace, there's nothing stopping them from just copying all of the hard work that we've done? And then at some point, maybe they just stop using the version of the function they got from the Marketplace, and just do a copy and paste, so they don't have to pay us anymore. Is there any concern from that side of things, in terms of protecting intellectual property?

Denis Bauer: 34:45  

Yes and no. So I think it comes back to the Open Source 2.0 element, where the core, again, is open source, and it's the convenience wrapper around it that can be monetised. We need to be clever in coming up with that convenient wrapper around the actual Lambda function itself, because that is the part that can be monetised. But ultimately, I'm not too worried about people copying elements. Someone might be able to copy a specific version of it, but subscribers get the continuous updates to that Lambda function. It is the core business of researchers and developers to keep pushing their product, their software, further, and only you and the community that really invest in it can do that, not someone who just blindly copies the code itself.

Yan Cui: 35:47  

Okay, okay, gotcha. We definitely need to think about this some more. Certainly, with more traditional software licencing, there's versioning: version 1, 2, 3, 4 and so on. If you copy one version, you potentially have it for a certain amount of time while it's still valid and up to date, then a new version comes out, and maybe you do the same thing again. More thought needs to go into that, and potentially some kind of support from AWS and the platform to better protect that intellectual property. But I definitely think it's something worth exploring, like you said, as a way to encourage more people to do the work and also share it with the community, so that people don't have to keep reinventing basic building blocks, patterns and things like that. I think the Serverless Application Repository was an attempt at that, but it's just not that easy to use. And like you said, Serverless Framework Components look a lot better in terms of how easy they are to integrate with an existing solution.

Denis Bauer: 36:54  

Yes, that's exactly right.

Yan Cui: 36:55  

Yeah. So Denis, thank you so much for spending the time to talk to us today. Before we go, is there anything else that you'd like to share? Maybe any upcoming projects or speaking engagements?

Denis Bauer: 37:09  

I think, in terms of upcoming projects, we definitely need to think about how to bring the innovation world and the commercial world closer together. I think it's absolutely crucial, especially in a world where we want to build a digital economy, that the innovation potential is actually harnessed in digital commercialisation approaches, right? And I feel it is still very much siloed, in that you have the developer community of commercial companies, and you have the innovation and research community that is basically minding its own business, potentially publishing papers and being done with it. We have a very interesting approach to see how the two areas can be fused together. That is certainly one of the things I would like to work on. We might not achieve it soon, but it definitely needs support from both communities at large.

Yan Cui: 38:12  

Okay, so how can people find you on the internet and keep up to date with what you are working on?

Denis Bauer: 38:17  

Yeah, people can definitely find us on the or generally on Twitter.

Yan Cui: 38:28  

Okay, sure. I will make sure those are included in the show notes so that people can find them easily. And thank you again so much, Denis, for joining us today. Stay safe, and hopefully see you in person soon.

Denis Bauer: 38:41  

Thanks for having me. This was fun. And as always looking forward to in-person meetings. And [inaudible].

Yan Cui: 38:48  

Ok. Take care. Bye bye. 

Yan Cui: 39:00   

So that's it for another episode of Real World Serverless. To access the show notes, please go to If you want to learn how to build production-ready serverless applications, please check out my upcoming courses at And I'll see you guys next time.