You can find Gareth on Twitter as @garethmcc.
Install the Serverless framework here, and don't forget to check out Gareth's full-stack Serverless course here.
For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.
Opening theme song:
Cheery Monday by Kevin MacLeod
Hi. Welcome back to another episode of Real-World Serverless, a podcast where I speak with real-world practitioners and get their stories from the trenches. Today I'm joined by Gareth McCumskey from Serverless Inc. Welcome to the show.
All right, thanks so much for having me.
So we were having some interesting chats before the show, and you told me about some of your history of building things with serverless. Maybe you can tell your story to the audience: how you got involved with serverless, and how you came to work for Serverless Inc nowadays?
That's quite a common story. I've heard of teams at all these companies that start with one project to test the water with serverless, and then realize how much faster things can be and how much more scalable and easier things become. Then it spreads like wildfire, and you get this massive adoption throughout a company where everything turns serverless. So you go from zero serverless projects to one, and then very quickly to tens and twenties of projects running on serverless. Because once you realize that benefit, that you get all this scalability, it's cheaper, more resilient, and you have to do less work, I mean, why wouldn't you do that, right?
That's exactly the case. And in fact, there was one very specific project we worked on that really solidified that focus, where the head of the team was trying to find a way to load test the application. Because if you're running on servers, you need to be able to test the load on your systems, because come your annual sales time, like Black Friday, things could fall over. We know this, and he was trying to build a load testing application to help the team test the infrastructure in a staging environment before Black Friday comes around and things fall over for real. One of the interesting sides of this is that he went through a lot of effort trying to build a load testing, simulation-type system using multiple EC2 instances. In fact, I think he was using at the time one of the largest EC2 instances you could get, which was costing an enormous amount of money to run, and he was still having problems coordinating a large quantity of simulated users running through, essentially, an end-to-end test, so you could simulate users clicking through to purchase products on the staging environment. Ultimately we got together, and I realized that what he was trying to do was have a lot of parallel processing happening, and Lambda was perfectly positioned for this kind of situation. So we took the existing end-to-end test, the integration testing platform that would click through a process, and just spun that up in multiple Lambda functions, and we got to the point where we could simulate 15,000 users opening up the site and buying a product within five minutes, which was something he couldn't get close to doing with a more traditional server-based architecture. I think the team working together on building this one specific tool helped everybody realize how powerful building with serverless could be.
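As a rough illustration of the pattern Gareth describes, the load-test worker can be defined as an ordinary Serverless Framework service: one function that runs a single simulated user journey, invoked many times in parallel by an orchestrator. This is a hypothetical sketch, not the team's actual configuration; the service name, handler path, and settings are made up for illustration:

```yaml
# serverless.yml - hypothetical sketch of a Lambda-based load-test worker
service: load-test-worker

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1

functions:
  runScenario:
    # Runs one simulated user journey (browse, add to cart, checkout)
    # against the staging environment. An orchestrator (another function
    # or a local script) invokes this concurrently, e.g. 15,000 times,
    # to generate load that a single large EC2 instance struggled with.
    handler: handler.runScenario
    timeout: 300
    memorySize: 512
```

After `serverless deploy`, an orchestrator could fan out invocations with the AWS SDK's `Invoke` call using `InvocationType: 'Event'`, so each simulated user runs asynchronously in its own Lambda execution.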
So nowadays you're working with Serverless Inc. What are you doing there as a customer success engineer?
Well, I saw that Serverless had a position open for a customer success engineer, and I applied for the role, had a few interviews, spoke to the team, and was very excited at the prospect of joining, because serverless had become something I'd really grown to love over the years. It's gotten to the point now that if I get any kind of project, I just cannot see myself going back to building things in a more traditional way. I have a few projects I started years back that I still maintain now, and I just wish I could find the time to go back and re-architect them serverlessly. But that being the case, I went through all these interviews, and the team eventually said that they wanted me to join, and that they wanted me to be part of the growth team, which is where I was hoping to end up, because I love the idea of informing and educating more developers about the power of serverless technologies. And that's one of my primary roles now at Serverless Inc: I'm part of the growth team, where I'm involved with developer advocacy and education. My role tends to focus a lot more on the support side. We'll probably go into more detail later, but we do have a paid-for product that we offer, and there's a support offering that goes with it, and our team focuses a lot on that: helping make sure that anybody using our tools knows how to use them and gets the education they need. We answer questions, take feedback and any bug reports and so on, and I help pass them on to the engineering team so that we can make sure users get what they want. But it also involves a lot on the educational and content side. So I find myself writing blog posts a lot of the time, and I'm currently in the middle of producing, essentially, a free course that anybody can go through to help get started and learn how to build serverless applications.
Okay, that sounds great, and I'll make sure the link to the course is included in the show notes for anyone who's interested in taking it. So I guess you're working very closely with customers at Serverless Inc. Are there any common trends that you see amongst your customers, maybe common problems that people run into?
Yeah, and this again goes to why Serverless Inc decided to have some kind of product that we offer to users, because serverless is a great tool for getting infrastructure that you would normally spend a lot of time setting up and managing. But observability on server-based systems is a solved problem: there are the New Relics and the Datadogs out there that let anybody see what's happening in their server-based architectures quite easily, mostly because all of the work is being done on the server, so it becomes an easy thing to monitor and manage. The problem with cloud-native and cloud-first applications, like serverless applications, is that your architecture is a bit spread out across the cloud vendor, which can make monitoring and watching what's happening in your application a little more difficult to do. That's one of the real problems we've seen customers ask questions about: how do I know what's going on in my application? And I remember feeling this myself. One of the first real tests of serverless development for me was when we had our first big annual sale coming on our new JAMstack-style platform, and we were about to push thousands of customers at it, and something fell over. When the initial sale launched, for the first half an hour nobody could purchase anything, because something had gone wrong and there was no way to easily see what was causing the backlog, what part of the infrastructure was causing the problem. So observability is one of those things a lot of folks have trouble with, and today more and more tools are coming about to help solve those problems, including from Serverless Inc. One other problem related to this is the effect of deployments, and teams being concerned about having control over what developers are deploying into the company's AWS account.
For example, you have a DevOps team, which normally has a very strong hold over what is deployed and created, and we're saying to these organizations that if you want to go serverless, you need to let developers have more say and more freedom to deploy what they need into your AWS account. But you still want some way to help control that, for security reasons and even for financial reasons, just to make sure that you know what's going on. So these are a couple of the problems that we've seen over time.
So what about CI/CD? Are the pipelines also part of your services as well?
Yes, CI/CD is one of those interesting things, because with the Serverless Framework itself, doing a deployment from your local machine ends up being a relatively simple thing to do. You can just run `serverless deploy` and it'll deploy, based on your AWS credentials, into an AWS account, and that sounds like a really simple way to manage deployments. But one of the problems, and this goes to the whole side of control, is we found customers want some way to manage that deployment process a bit better, especially when you come to a larger organization with a team of more than a couple of developers. So getting a proper CI/CD process set up can be a little bit daunting in a lot of cases. One of the solutions that we came up with at Serverless was to add a CI/CD solution into the Serverless Framework Pro dashboard, as we call it. The CI/CD solution is built around the assumption that you're building a Serverless Framework application, so we would be building tooling on top of our open-source framework to help make sure that developers can get the best out of the framework itself. So you have the great open-source base, but you have all these additional tools that are very difficult to provide in an open-source base and need to be provided as a service in some way, like a CI/CD platform that has the containers in the back that can build the deployment packages, and that can aggregate your monitoring data and so on. With Serverless CI/CD, you can now integrate all of these things together using just your Git repository. By the time this episode comes out, we may even have our support for Bitbucket out entirely as well. So you can use your existing Git repo and, using pull requests and merges into branches, completely control the flow of deployment into production, into development, into whatever other environments you need as a team.
What sort of additional tools are you talking about here? Because one of the things that I get asked a lot about with CI/CD pipelines is focused around the security aspects: how do we make sure that we are assigning the right permissions to our functions, and that we're not giving them too much permission? That's some of what I see from other serverless vendors, like PureSec, now part of Palo Alto Networks, and Protego, acquired by Check Point. Do you guys also venture into some of that automated security, checking permissions and things like that?
So that's one of the really cool things. Because we've been developing the open-source Serverless Framework for a number of years now, we've heard these concerns from customers, and we decided we'd help solve them. We built a platform called Serverless Framework Pro, and this includes the CI/CD solution I was talking about. But on top of that, we initially focused on helping teams add the ability to deploy to specific AWS accounts through a feature we call deployment profiles. What happens is, in the platform itself, you can connect your Serverless Framework Pro account to your AWS account, or multiple AWS accounts, and the way to do this is using a deployment profile. So you can say: I want to create a new deployment profile, I'm going to call it my staging deployment profile, and I'm going to connect it to my staging AWS account. One of the best practices we've seen over the years is that multiple environments are often broken up into separate AWS accounts, just to protect you from accidentally deploying staging-grade code into production; if you break things into separate AWS accounts, that sort of limits the possibility of it. With the deployment profile, you can say I want a staging profile that points at my staging AWS account. But on top of that, with these multiple environments, I can put safeguards in place, something that will help me make sure that when a developer deploys something into an environment, it's of a quality that makes sense for that environment. So as a second step, if you decide you want a production profile, you create one and point it at your production AWS account. One thing you obviously want to make sure you don't do when you deploy something into production in the serverless world is having an IAM policy with wildcards in it.
You don't want to give a Lambda function full access to an entire DynamoDB table, or even worse, full access to all DynamoDB tables, which is so easy to do with just `dynamodb:*`, for example. So one of the features that falls out of having a deployment profile that ties into the deployment process of the Serverless Framework is that you can add a safeguard that just says: block all deployments that have a wildcard in their IAM policies. What this means is that anybody who is an administrator of the account can set that up, so that any developer who accidentally tries such a deploy will just get blocked from doing it. And that's just one example. We have a whole suite of these safeguards, as we call them, that you can optionally activate on your deployment profile to help control these effects. You can even go so far as to write your own safeguards if there isn't a pre-built one that meets your needs. So it's a great way, especially for the DevOps teams who want to maintain some kind of control over what gets deployed into the environments, to set those up. And along with this, because we are now injecting these kinds of rules into the deployment process, we can also help you manage parameters that often need to differ based on the environment you're deploying to. As I mentioned, you can have a staging environment and a production environment, and in a lot of cases you want your development team to be able to test something like, for example, access to the Stripe API. As part of that, Stripe will have a sandbox environment that your developers can test with, and you'll have the API key for that. So you'll set that up as a parameter in your serverless.yml file, so that in the staging environment the Stripe API key parameter has the sandbox API key as its value. But you can use the same parameter, with the same name, in your production deployment profile, so when you deploy to production it'll actually have the production key. In this way you're also preserving the fact that you don't need to start sharing these keys around between the team of developers; you can actually maintain security on these keys. And it also means that, with the control over which AWS accounts to deploy to, you can maintain strict control over spreading access keys for your AWS accounts into third-party environments, because of the connection between the Serverless Framework and your AWS accounts.
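To make the two ideas above concrete, here is a minimal sketch of a serverless.yml that avoids IAM wildcards and pulls the Stripe key from a dashboard parameter. The service name, table ARN, account ID, and org/app names are made up for illustration; the `${param:...}` syntax resolves against the deployment profile of the stage being deployed, so staging gets the sandbox key and production gets the live key without either appearing in the repo:

```yaml
# serverless.yml - hypothetical checkout service
org: my-org    # placeholder dashboard org
app: shop      # placeholder dashboard app
service: checkout

provider:
  name: aws
  runtime: nodejs12.x
  # Scoped IAM statement: one action on one table ARN, not dynamodb:*
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:PutItem
      Resource: arn:aws:dynamodb:eu-west-1:123456789012:table/orders
  environment:
    # Resolved per stage from the deployment profile's parameters
    STRIPE_API_KEY: ${param:stripeApiKey}
```

With the wildcard safeguard enabled, changing that `Resource` line to `"*"` would cause the deploy itself to be blocked, rather than being caught in a later security review.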
Okay, that's pretty cool. I've tried the guardrails stuff before; it's quite nice, the additional things you can do. What about in terms of the Serverless Framework as a whole? It is a really key part of the ecosystem for people doing stuff with serverless technologies today. What is the business model for Serverless Inc? Is it focusing on these value-add services?
Well, like any open-source organization, finding a way to monetize is always the tricky bit, and many organizations have tried many different ways to accomplish it, some better than others. What we've done is listen to customers' concerns about what they feel is missing when they're building serverless applications, and try to help solve those problems. In a lot of cases those problems end up requiring a managed solution. So we often look at a problem and ask: is there a way to solve this within the Serverless Framework itself, in the open-source framework? That's where we start adding features that let you do things like add capabilities into AWS itself, to maintain control of things and help solve those problems. But in a lot of cases you need some kind of managed solution that can help you do things like aggregate data, manage deployment profiles centrally, or manage CI/CD processes, which are very tricky to do in a single open-source framework, and that's really where our focus has been. So over the years we have offered support and training to organizations, and we still do; if organizations contact us to discuss partnerships or training or workshops and so on, we still have those discussions with them. Our primary focus now, though, is on helping promote our SaaS product that, as I say, helps solve these additional problems that developers have. Anybody using our platform right now might have noticed a little tab appearing at the top of the screen, and by the time this episode comes out, it will probably be a fully released product.
We've recently released another part of the solution, called Studio, which helps developers link the service they're building on their local machine to a deployed stack inside AWS itself, and gives you immediate feedback. So you could do something like develop a REST API on your local machine, enter `sls dev` in your CLI for that service, and it'll automatically deploy that stack into AWS for you. Then, in the browser, you can view the logs and actually run tests against the API endpoints, somewhat similar to a Postman-style interface. But this is actually running against the live infrastructure of your serverless service sitting in AWS. So it's a great way to do the sort of integration testing that most developers need to do anyway, but in a much faster and more responsive environment: you can click the button to run that GET request, or add the body data to run the POST request, and immediately see the console logs from CloudWatch appear on the screen, so you don't have that constant back and forth. And the best part is that it integrates into the local development environment, because if I go and edit my handler and notice I've got a syntax error, for example, when I run that POST request, I can make the edit in my Lambda function, save the file, and immediately, through Studio, `sls dev` in the background is redeploying that function within seconds, so I can run my test again to see whether I fixed the problem or not. It's a great way to do the relatively local-style development that we all want, but we're actually testing in the cloud, which is far more accurate than what we can do locally.
How does this compare to what is popping up on the radar around Serverless Inc? Because one of the things that leaves me slightly uncertain is that all of these are different verticals, and you have people that are more specialized and more focused on each piece of this tooling, especially as the serverless space gets bigger. If you just look at the monitoring side of things alone, the whole observability side of things for serverless, you've got the more focused, specialized players like Thundra, Lumigo, and Epsagon, but you also have the more traditional vendors now jumping into the space, with New Relic buying IOpipe and all that. As a potential buyer, what would be my trigger for going with your serverless solution versus using something from a vendor that I know is more focused and more specialized in this particular space?
So one of the interesting things is that, because we have been developing the open-source Serverless Framework for a few years now, one of the goals we had with building our own tooling is that we wanted it to integrate as seamlessly as possible with the framework that a lot of people are using. Other vendors are focused on serverless applications in a more general sense, whereas our focus tends to be on Serverless Framework developers, and the initial goal is to be appealing to those developers because our tools integrate very, very seamlessly. To put that into perspective, if no one's used it, it really is as simple as creating an account. If you go to dashboard.serverless.com you'll be able to create yourself an account, and once you go through the onboarding process you'll have created an org, which, if you're just using it yourself, is your first org; you can create others for any organization you build serverless applications for. You also create another entity called an app, so that you can have all your serverless services contained in an app. And if you want to connect an existing service of your own to your Serverless Framework account, you go to your serverless.yml file, you add `org` and `app` as properties, you run the login command in the CLI, and the next time you deploy it'll automatically be added. All the instrumentation that you would want for monitoring is automatically added for you. So there's no need to insert any libraries; there are no API calls that need to be made inside your Lambda function to manage that monitoring information. The Serverless Framework automatically integrates into your Lambda function and helps capture the monitoring and observability information that you would need to manage the Serverless Framework services you're deploying. Along with that comes the advantage that, as the Serverless Framework team, we've been exposed to a lot of the best practices that have come about in serverless development over the years, and that's helped guide us in how we build our monitoring solution to provide the information you need as a serverless developer. A lot of more generalized monitoring tools will throw all the data they have at you, because it's not necessarily clear what is important for a serverless application, whereas we understand that showing cold start information is something to surface relatively up front. We let you dig straight into the stack traces of an error in a Lambda function in CloudWatch, because that's what's important when you're trying to debug problems in your Lambda functions, and so on. Because we know those best practices, we know what to expose to help you get the information you need as quickly as possible.
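The onboarding Gareth walks through amounts to a two-line change plus two CLI commands. A minimal sketch, with placeholder org, app, and service names:

```yaml
# serverless.yml - connecting an existing service to the dashboard
org: my-org    # the org created during dashboard onboarding (placeholder)
app: my-app    # the app that groups related services (placeholder)
service: users-api

provider:
  name: aws
  runtime: nodejs12.x

functions:
  hello:
    handler: handler.hello
```

Then run `serverless login` once to link the CLI to the dashboard account, followed by `serverless deploy`; the monitoring instrumentation is wired in during the deploy, with no libraries or API calls added to the function code itself.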
I guess a lot of the other vendors who are specializing in this serverless space also understand the problem; they also surface a lot of the same information you guys do, in terms of cold start information. Certainly I know for sure that Lumigo and Thundra both do the same thing. But yeah, that is, I guess, the question for the more traditional vendors who have been focused on the container, VM, and EC2 space. What's interesting from what you said is that while most of the other vendors are focusing on the horizontal slice of anyone who's doing serverless development today and trying to solve a particular problem, you guys are focusing on the vertical slice of everyone doing serverless development with the Serverless Framework, and leveraging the fact that they already use the framework, so that you can give them CI/CD, give them monitoring, all these things on top, with a minimal amount of integration effort required. That does sound interesting, and slightly different from most of the other vendors, who focus on everyone who's doing serverless and having problems with CI/CD, as opposed to just people who are using the Serverless Framework today.
So that's one of the interesting things as well. Because we've got that focus on the Serverless Framework itself, when we build a CI/CD tool, we're not building it for all the possible serverless application use cases out there. We understand how a Serverless Framework deployment happens, so we can build a CI/CD system that matches that use case very cleanly and very minimally. We don't want to hit you with a million options when it comes to configuring how you want your deployment to happen, because we understand how Serverless Framework deployments tend to happen. So we can give you the 99% use case for that, while still trying to provide some way to configure for the edge cases. Those don't often match what's needed, but we don't have to bring them right up front; we can manage them more cleanly and minimally, which is what we aim to do.
Yeah, having built quite a few different open-source tools and things like that myself, being able to make these kinds of assumptions about what and how people use the tool does make life a whole lot easier. You can just look at the serverless.yml and figure out everything that someone is trying to deploy as part of a stack, for instance. So, is there anything else that you'd like to tell the listeners before we go? Any news of upcoming announcements or changes for the Serverless Framework?
Yeah, well, we're always busy doing a lot of things in the background. One of the things that I'm sure folks may have noticed in 2019 was a project that we released in beta called Components. Honestly, by about the time this podcast comes out, there will probably be a more official announcement. But for anybody who isn't aware, this is a very exciting project that we've been working on for quite a while, and it's something that we're building as an addition to the existing Serverless Framework that we hope people will like. It's a way to build serverless applications that is quite different from what we might be used to now. And it's very difficult for me to go into detail, because we're still nailing down all the bits and pieces to absolute completion at the moment. We're expecting that to happen in the next week or two, maybe three; it's just about there, it's just very tricky to nail all the bits and pieces down. But it's a very exciting step forward, we feel, for serverless application development. It's essentially our attempt to take serverless application development to a cleaner way to build that feels more local, that doesn't feel like the way we had to build serverless applications in the past, and that sort of brings things back to the way we used to develop applications when we weren't as dependent on cloud infrastructure for testing and so on. It sounds really vague, what I'm talking about now, but if you've looked at the beta version of Components in the past, you may see what that means and where it's leading. It's a very exciting future that's coming up, apart from all the usual maintenance work that goes into the existing open-source Serverless Framework as well, constantly adding any new features that come from AWS and other cloud providers. But Components is one of the really big ones to keep an eye out for.
Yeah, I played around with the beta version of Components. It does look much nicer and easier to work with compared to the likes of SAM that you get from AWS, which I've done quite a bit of work with as well; it's just not, I guess, as smoothly integrated into your tool chain as you would like. So, Gareth, thank you very much for taking the time to talk to me today. How can people find you on the Internet? What about any of your blogs, personal things, or projects you're working on?
For me, the easiest way to find out more about me is just to follow me on Twitter; it's @garethmcc, nice and easy. And yeah, I don't really have a blog that I maintain at the moment; all my blogging seems to be at the serverless.com/blog page on our site, so you can keep up to date with anything I'm writing about there. But if anybody is interested in even just chatting about serverless development or anything like that, I'm always open to having discussions with folks, even if it's just DMs on Twitter. One way to also get in contact with the broader community of serverless developers is to take a look at the Serverless Framework community. I'll mention we have a Slack workspace at serverless.com/slack; that's an easy way to find us. We even have forums, at forum.serverless.com.
Great. I've been using those resources quite often to find answers or engage with other people on the Slack channel, and I find that quite useful. Again, thank you very much, Gareth. Hopefully things are not too bad where you are with the coronavirus, and stay safe.
Yeah, thanks very much. Things are going OK here, so yeah, we're just staying indoors, keeping clean, and staying safe.
All right, man, take care. Bye. So that's it for another episode of Real-World Serverless. I hope you've enjoyed this conversation with Gareth McCumskey from Serverless Inc. To access the show notes and the transcript, please go to realworldserverless.com, and I'll see you guys next time.