Code with Jason

289 - Lio Lunesu, CTO at Defang

Jason Swett


In this episode I talk with Lio Lunesu, CTO of Defang, about infrastructure as code, Docker, and Docker Compose. Defang compiles Docker Compose files into cloud infrastructure code.

Links:

A Paper Newsletter For Programmers

SPEAKER_01

Hey, it's Jason, host of the Code with Jason podcast. You're a developer. You like to listen to podcasts. You're listening to one right now. Maybe you like to read blogs and subscribe to email newsletters and stuff like that. Keep in touch. Email newsletters are a really nice way to keep on top of what's going on in the programming world. Except they're actually not. I don't know about you, but the last thing that I want to do after a long day of staring at the screen is sit there and stare at the screen some more. That's why I started a different kind of newsletter. It's a snail mail programming newsletter. That's right. I send an actual envelope in the mail containing a paper newsletter that you can hold in your hands. You can read it on your living room couch, at your kitchen table, in your bed, or in someone else's bed. And when they say, What are you doing in my bed? You can say, I'm reading Jason's newsletter. What does it look like? You might wonder what you might find in this snail mail programming newsletter. You can read about all kinds of programming topics like object-oriented programming, testing, DevOps, AI. Most of it's pretty technology agnostic. You can also read about other non-programming topics like philosophy, evolutionary theory, business, marketing, economics, psychology, music, cooking, history, geology, language, culture, robotics, and farming. The name of the newsletter is Nonsense Monthly. Here's what some of my readers are saying about it. Helmut Kobler from Los Angeles says thanks much for sending the newsletter. I got it about a week ago and read it on my sofa. It was a totally different experience than reading it on my computer or iPad. It felt more relaxed, more meaningful, something special and out of the ordinary. I'm sure that's what you were going for, so just wanted to let you know that you succeeded. Looking forward to more. 
Drew Bragg from Philadelphia says Nonsense Monthly is the only newsletter I deliberately set aside time to read. I read a lot of great newsletters, but there's just something about receiving a piece of mail, physically opening it, and sitting down to read it on paper that is just so awesome. Feels like a lost luxury. Chris Sonnier from Dickinson, Texas says just finished reading my first Nonsense Monthly snail mail newsletter and truly enjoyed it. Something about holding a physical piece of paper that just feels good. Thank you for this. Can't wait for the next one. Dear listener, if you would like to get letters in the mail from yours truly every month, you can go sign up at nonsensemonthly.com. That's nonsensemonthly.com. I'll say it one more time: nonsensemonthly.com. And now, without further ado, here is today's episode. Hey, today I'm here with Lio Lunesu, CTO of Defang. Lio, welcome.

SPEAKER_00

Hi Jason. Thanks for having me.

SPEAKER_01

Yeah, so we were talking a little bit pre-recording, and it turns out we speak a few of the same languages. What did we identify that we have in common?

SPEAKER_00

So Italian and Mandarin. You told me you were studying both a bit, which is interesting. I've lived in China. I have not lived in Italy, but my family's from there. And English, of course, we have that one.

SPEAKER_01

Right, yeah. So I just barely started learning Italian, like a number of weeks ago, so I would not say that I can speak Italian by any means. But my teacher and I can kind of have a very rough conversation occasionally, and then we have to switch back to English. I did study Mandarin for like three years, so I got to the point where sometimes I'll meet Chinese people at conferences and stuff like that, and we can have a conversation. But still, I can have a conversation in the sense that a child can have a conversation. We don't have to switch back to English, but my vocabulary is very limited. And then, you'll remember, we started off in French, because for some reason I got the idea that you speak French. I happened to be right, but I don't know why I thought that.

SPEAKER_00

Yeah, so growing up in the Netherlands, they teach us German, French, and English pretty much until age 16. I think you can choose not to after age 12, but everybody gets basic language lessons. I was born five minutes from Germany and ten minutes from French-speaking Belgium, so those languages come in handy. Every direction you go, you speak another language, right?

SPEAKER_01

Okay, interesting. And German and Dutch, are those fairly similar? Like, if you know one, does some of that transfer to the other?

SPEAKER_00

Well, I'm biased because I was born so close to the border. I wonder how people from Amsterdam feel about it, but for me, listening to German is pretty much normal. It doesn't feel very special to me. Speaking it is harder, mostly because German grammar is so much harder than Dutch grammar. They have like four cases, so depending on how you use a noun, you have different suffixes. Dutch doesn't have that.

SPEAKER_01

I see. Of course, Chinese doesn't have that, not even close. The grammar is so easy.

SPEAKER_00

Yeah, which is kind of interesting, right? They have no grammar, but then they make up for it by having this crazy complexity in the writing. So it kind of feels like the complexity stays equal across different languages.

SPEAKER_01

Right, and of course, for a Westerner, there's no overlap in the vocabulary, except for maybe like coffee. Coffee is the same in almost every language, and so is ma. In almost every language it's something close to that, like mother and father, right?

SPEAKER_00

Yeah, that's right.

SPEAKER_01

Yeah. And then maybe my tied-for-first strongest language, I think, would be Spanish. Do you speak Spanish at all?

SPEAKER_00

No, but it's a little bit the Dutch-German situation, right? Speaking Italian, listening to Spanish feels fine. Of course you'll have words that you don't recognize, and some false friends, but once somebody tells you, you'll know. Speaking it, though, is harder. I have an anecdote: I went with my friends to Italy to visit some friends there, and one of my Dutch friends speaks Spanish, and he was having a conversation with my Italian friend. From the outside, you see these two people speaking clearly different languages, but they were fine. They were having a conversation like one does. So German and Dutch, to me, has a bit of that same vibe.

SPEAKER_01

Yeah, I kind of guessed that maybe that was the case. With my Italian teacher, a lot of times if I don't know the Italian word, I'll say the Spanish word, maybe try to Italianify it, and quite a lot of the time the guy will know what I was trying to say. And I had a girlfriend many years ago, a Mexican girlfriend, and she said that her mom used to work for an Italian at an Italian restaurant or something like that. Same thing: the Mexican lady would speak Spanish, the Italian would speak Italian, and they could understand each other because it's close enough.

SPEAKER_00

Yeah, that's cool. I always like these kinds of situations. I really like languages, so I'd say it's kind of frustrating that after living 10 years in China, my Mandarin is still, I don't think, much better than yours, from the sound of it. But that's just the kind of language it is, right? Coming from Europe, there are not many connection points, so you're kind of starting from scratch.

From Clicks To Code: IaC 101

SPEAKER_01

Okay, there's a lot more that I could ask you around language and culture and China and all that stuff, and I really want to, but let's transition into the technical stuff. So tell me a bit about Defang and what it is and all that.

SPEAKER_00

Yeah, so I would say it is related in a way, right? Defang, just in a one-liner: if you're familiar with DevOps, you know when the cloud started 15 years ago, AWS started, Azure started, and you had all these platforms where you could click around, start your virtual machines, etc. Quickly that got out of hand, and so we got infrastructure as code. Ansible was maybe the first kind of thing there, and then Terraform, of course, the big one, and Pulumi, and there are others. But now all those clicks became code, and those hundreds of clicks became hundreds of lines of code. And so, having built different applications and having written this infrastructure code, we got the idea to create that code from a very simple description, and that's what Defang does. The input is a Docker Compose YAML file, which is a standard that many people are already familiar with, and that gets converted into infrastructure code based on the cloud you want to go to. So you kill two birds with one stone there: you have an input file that many teams already have, and the complicated infrastructure code gets, well, it's very tempting to say generated, but there's no language model there. I feel like "generated" now almost implies a language model. Compiled is maybe a better term. Compiled into infrastructure code. And then you have some knobs that you can turn, like I want to go to this cloud or that cloud, or I want to do it cheaply, or I want to go high availability, all bells and whistles. Those are a few parameters that you can set, but the whole thing is automated. That is what Defang does: compiling Compose YAML into infrastructure code. Under the covers, we use Pulumi.

Docker Demystified: Why Containers

SPEAKER_01

Sorry, if I may pause you there. I usually like to keep in mind the listener who may not be familiar with all the things that we're familiar with. So, infrastructure as code. I think I went kind of deep into this on a recent recording, but my memory is a little fuzzy about how much we went into it. The idea with infrastructure as code, to me, is that you have two ways you can do it. You can set up your infrastructure manually, clicking around in the console, or you can specify your infrastructure as code. The problem with doing it manually is that, let's say you provision a server manually, and then six months down the road you want to provision an identical server. Well, how do you make one that's identical? You have to just remember every single step that you did. And what if at some point after you initially provisioned it, you went in and manually made some change to something? There's no evidence that you did that, necessarily, and so it's all obscured. You manually perform these actions, and then the history is all lost, so you can't make a new one. And if your server goes down or has to be killed, you can't replicate it and bring an identical one to life to replace the original, for the same reason. So I think it's a lot better to specify your infrastructure as code. That way you can copy it indefinitely, the history doesn't get lost, and you know exactly what's there. It's just so much better all around.

SPEAKER_00

Yeah, you're totally right. Like many people in the cloud space building services and building apps, I may have learned the hard way. The server that you set up with a few clicks, maybe you assume this is going to be the only server you're ever going to need, that it's going to be running there for the next 20 years or whatever, but that never happens. The server crashes. Even the clouds change: that server type you created five years ago might no longer be offered, and now you have to start from scratch. They give you a deadline. You get this email all of a sudden that says, hey, by November 2026 this thing is gone, and now you have to click again, whether you want to or not. The other thing that maybe people don't think about, you mentioned it, is spinning up in different regions. Say I have a cluster running in us-west-2, which is a very popular AWS region, and now I have a customer that says, hey, I want to use your service, but my data cannot leave Europe, or cannot leave Canada (I'm in Canada now). And now you have to spin up a new cluster. Like you said, if you remember, or you went through the trouble to track exactly what you need to click, then maybe that's doable, but nobody does that. Once you get into clicking, nobody makes a record of it. In fact, the infrastructure as code would be the record, right? It would be this server, this type, this image, this region, everything expressed in code. So if any listener is dealing with cloud resources and not using infrastructure as code, it's totally worth looking into the tools that are out there.
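To make "infrastructure as code is the record" concrete, here is a hypothetical sketch of what such a record can look like, written as a Pulumi YAML program since Pulumi is the tool mentioned in the episode. The stack name, resource name, and AMI ID are placeholders, not anything from the conversation:

```yaml
# Hypothetical Pulumi YAML program: the server's type and image are
# captured in code instead of console clicks.
name: web-server-stack
runtime: yaml
resources:
  webServer:
    type: aws:ec2:Instance
    properties:
      instanceType: t3.micro      # "this type"
      ami: ami-0abcdef1234567890  # "this image" (placeholder ID)
      tags:
        Name: web-server
```

Because the region is per-stack configuration (e.g. `pulumi config set aws:region us-west-2`), standing up a copy in Canada or Europe becomes a new stack with a different region value rather than a fresh round of clicking.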

SPEAKER_01

Definitely. I wouldn't consider doing it any other way now that I've done it this way. Having said that, as we speak, some of my infrastructure that's out there is not infrastructure as code. Shame on me, but I'll have to convert that. But quite a lot of it is. Anyway, my next question we'll have to answer at a fairly high level, I think, because we could go as deep as we want. Big question: what is Docker, and what is Docker Compose?

Compose For Teams And Onboarding

SPEAKER_00

Oh, yeah. So Docker is a tool that helps you run containers. So now we have to explain what containers are. I think containers are often misunderstood. They're nothing but processes. A process is a program running on your computer. What makes containers special is that they are isolated processes. A container is a process that has its own file system and its own chunk of memory, and containers are not supposed to be able to talk to each other unless very explicitly allowed to do so. People are familiar with virtual machines, right? You have a physical machine, a CPU with memory, and you can run multiple virtual machines on that. Containers are a little bit like that, although maybe not as secure as virtual machines, but they also don't have the overhead of a virtual machine. A virtual machine is literally its own operating system, with its own background processes and all of that, whereas a container is just a program running. There is no overhead there, no emulation. So Docker is a tool that allows you to run containers, that is, isolated processes.

SPEAKER_01

And sorry, let me pause you there. Obviously everything you've said is correct, and that's the answer that we usually get when we ask what Docker is. I want to ask a slightly different question: why would somebody want or need to use Docker?

SPEAKER_00

That is a good question. So once you have a container, this isolated process, a thing that you can run separately from all the other things, you can run it on your local computer, you can run it in the cloud, and it would be exactly the same thing. Compare that to running a program on your Mac or whatever developer machine you use. You do some testing, you're all happy with it. Then you go to your cloud machine, a virtual machine or whatever you use, and you run that same program. But it's going to be a different processor architecture and a different operating system. If macOS is your developer machine, it's very likely that in the cloud you'll be using a Linux-based machine, and so you get a different executable. Now, all that testing that you've done locally, how much of that is actually applicable to this machine that's running in the cloud, if you have a different program running? By creating a container, you can actually run 100% the same code on your local machine and in the cloud. And in the cloud, you can make multiple copies of it. So you can say, hey, my web server is getting a lot of traffic, I'm going to spin up 10 of them, 10 web servers, and this becomes 10 containers across different virtual machines or physical machines, all of them 100% exact copies. That is one of the advantages of using containers: now you have this package that you can run everywhere. You can run it locally, do your testing locally, and know that all the testing you've done translates to the machines in the data center, because it's a hundred percent exactly the same code that is running.

SPEAKER_01

I've just got to say, this is a breath of fresh air, because these definitions and things are often explained so poorly, but I find your explanations very clear. And hopefully, dear listener, you do as well. Like, going to the official Docker documentation, forget about it. At least for me, it was really hard to understand when I was trying to learn what Docker is and what Docker Compose is. So, okay, now that we've talked about what Docker is, what's Docker Compose, and what's the difference between Docker and Docker Compose?

Dev Vs Prod: Beyond Single Machine

SPEAKER_00

Yeah, it's almost a logical next step, right? So now you have your container running. You can do it locally, you can do it in a virtual machine, you can do it in the cloud data center, and it's all the same thing. But take the web server example. Now you have your web server running, but almost every application has more than just a web server. There will be a database, there might be a cache running, there might be more than one process running. Like, alongside the web server, maybe there is a separate worker. Think of a worker as handling background stuff: when you click the buy-now button on Amazon, that kicks off a lot of background processes. An order is going out, some invoicing is happening. So now you have more than just your web server running, and you can imagine multiple containers communicating with each other. That is what Docker Compose describes. It's a composition of multiple containers.

SPEAKER_01

Yeah, and maybe we can talk through an example. Let's say we have our custom application written in Ruby, Python, whatever, and then we have a PostgreSQL database, and we have our local development environment and our production environment. Where would Docker Compose come into the picture there, and what would that look like?

SPEAKER_00

Yeah, those are typically very different. Locally, you want to say, okay, I'm going to run my whole cluster on my local machine for development, right? I'm going to test whether it does what it's supposed to. I'm going to have a web server, which is a front-end and back-end container, and I'm going to have a Postgres database. For those two things you can create a Docker Compose file, which is a YAML file. Now it's just called Compose: Docker open-sourced the specification and the loader for Compose, and the file is the compose YAML. In that file, I'm going to say I have two services. One is my web server, serving static files and doing some back-end processing. And then I'm going to have a Postgres database, a second service, and the web server is going to depend on the Postgres service. Now I do docker compose up on my local machine, I can go to localhost, and my web server can communicate with the database. It can create records, it can query, everything is fine. But now it's time to deploy. So how do I deploy this?
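The file Lio describes here could look something like the following minimal sketch (a hypothetical example, not Defang's required format; the port numbers and password are illustrative placeholders):

```yaml
# compose.yaml: a web service plus a Postgres service, as described above.
services:
  web:
    build: .              # build the app container from the local Dockerfile
    ports:
      - "8080:8080"       # reachable at http://localhost:8080
    depends_on:
      - db                # start the database before the web server
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential for local dev only
```

With this in place, `docker compose up` starts both containers, and the web server reaches Postgres at the hostname `db`.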

SPEAKER_01

Sorry, if I can pause you once again there. The way I think about this is: what's the alternative to using Docker Compose? Let's say we have a team of five developers. If we didn't have Docker Compose in this scenario, then every one of these developers who onboards onto the application would have to manually install PostgreSQL, because that's the service we're using as the example here. They'd have to manually install it, and they'd have to manually start that service every time they want to run their development environment. But with Docker Compose, you can just specify that PostgreSQL is one of those services. You can tell Docker Compose, hey, this is where you can find the image for PostgreSQL, and it'll just download it. And like you said, docker compose up is what will get your Docker Compose environment going. So that whole download, install, run, all that stuff, is automated. Everybody's had the experience where they start a new job and have to spend the first however many days setting up their development environment. With Docker Compose, you can, in principle, just have it be one single command and you're completely good to go.

Defang: Compile Compose To Cloud

SPEAKER_00

Yeah, that's right. And there are a few dimensions to that. Versioning is one of them. In your Docker Compose file, you'd also be very explicit: this is the version of Node.js or Ruby that I'll be using for my web server; this is the version of PostgreSQL that I'll be using for my database. When multiple colleagues, different people, install software, they might not end up with the exact same version. Take Homebrew, for example, a very popular package manager for macOS. If I do a brew install node today and you do one a month from now, we might end up with slightly different versions of Node.js or Ruby or Python or whichever. But in your compose file, there'll be a very specific version number. You'll say, okay, this is Node 22, this is Postgres 14, etc. And there's also what you mentioned: in your Docker Compose file, you give logical names to each of the services, and that's how they find each other. Your web server knows that the database is called db because you mentioned db in the compose file. When you said, this is the database, I'm going to give it the name db, the web server can then connect to the database by using that same name. So it's also a way for different services to discover each other. And once you have that compose file, I can run it, you can run it on your machine, you can run it on a Linux machine, you can run it on a Mac, you can run it in the cloud. It'll work everywhere.
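Both points Lio makes here, pinned versions and name-based service discovery, show up directly in the compose file. A hypothetical fragment (the `DATABASE_URL` variable, credentials, and database name are made-up placeholders):

```yaml
# Versions are pinned, and the service name doubles as the hostname.
services:
  web:
    image: node:22        # everyone on the team gets exactly Node 22
    environment:
      # Hypothetical connection string: "db" resolves to the database
      # service below, on Compose's internal network.
      DATABASE_URL: postgres://app:example@db:5432/app
  db:
    image: postgres:14    # pinned, unlike a plain `brew install postgresql`
    environment:
      POSTGRES_PASSWORD: example   # placeholder for local dev
```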

SPEAKER_01

So let's talk about the production environment aspect. As you said, that'll be quite a different picture, because in the local environment you're going to be running PostgreSQL, to continue the example, on the same machine as your application process. But I suspect at least part of why you say it's different is that in a production environment, you're probably going to have your application and your database running on separate servers, right?

Managed Databases Over DIY Clusters

SPEAKER_00

Yeah. And if you haven't done this manually, maybe you don't appreciate the complexity there. Like I mentioned, you can run that same compose file on a virtual machine, and it would run the different containers that are declared in the compose file. But now your Postgres is running on the same machine as your back end. If that machine reboots or crashes, and we are talking about data centers, where at that scale stuff always happens, like a server actually dies and never comes back, you have to consider that scenario, and then your database would be gone forever. So what can you do about that? You can run multiple copies of your database and cluster them. Each database has different ways of clustering, a primary-secondary kind of setup, where the web server is talking to the primary, and the primary and the secondary keep in sync in case the primary gets lost. Then the secondary gets upgraded to primary. You have these kinds of schemes to make sure you never lose data. But now we're outside of the Docker Compose realm, because this compose file that we want to run everywhere does not describe this primary-secondary arrangement. A compose file is really meant to describe what's running on a single machine. You can run the same compose file on different machines, that's one way to do it, but of course the machine that is running your primary database is not necessarily the same as the machine that is running your secondary database. That's going to be a very different thing. The secondary is typically a read-only node, and you have to have some orchestrator that knows when to upgrade, when to signal that the primary is lost. All this complexity is not expressed in the compose file that we use during development. And so this is where infrastructure as code comes in.

In infrastructure code, you can express this complexity. You can say, okay, I'm going to need two or three virtual machines. The first virtual machine is going to run the web server, and we give it a name. The second virtual machine, I'm going to call it database; it's going to be running Postgres. The third virtual machine is going to run a copy of the database. All of that I can express in this infrastructure code, whether that be Terraform or Pulumi or CloudFormation. But now this file is much more complex than the compose file we use for local development and local testing. The Docker Compose file on your local machine might be like 40 lines. Typically they're very small, because you'll need 10 to 20 lines to describe your web server and 10 to 20 lines to describe your Postgres database, and that's it. In infrastructure code, not only are you declaring what's running on each machine, you also have to create all those security groups. Maybe you'll want a load balancer, because your web server has to scale with the number of connections that you get. So you say, okay, in my infrastructure code I'm going to describe a load balancer and the auto-scaling for my web server. I'm going to describe the primary database, the secondary database, how they talk to each other, and some health checks for those databases. Now you get into a thousand lines of infrastructure code, even though your compose file already specified your intent. And so this is where Defang comes in. If you have your compose file, and you have written your intent there, that I want a web server, and I want a database, and I want them to talk to each other, then we can create the infrastructure code based off of that logical description.

So basically, we create the physical description of your cloud infrastructure based off of your logical description. That's what the tool tries to automate.

SPEAKER_01

Yeah, interesting. Because the essence of it is there. I'm thinking about layers of abstraction. There's a lot of that stuff that's essential, that you do want to think about, and then there's the stuff that's kind of below the water line that you'd rather not think about unless you have to or want to. Like, you can make reasonable assumptions about how you want the security groups to be configured, for example. To make somebody manually configure all that stuff is maybe not necessary; you can make reasonable assumptions about all of it. So you can think at the level of abstraction of the Docker Compose file, which, again, the way I think of it, is: what services do I have, and maybe how do they talk to each other a little bit, but mostly just what services need to exist. Everything else can be inferred or assumed from that.

SPEAKER_00

Yeah, exactly. And people who have worked in startups know how startups end up with the infrastructure they have, and it's really: let's get stuff working first. That's the attitude. So what you end up with is security groups that are not locking down what they could be locking down. You end up with IAM roles and policies that are not minimal. If people are not familiar with roles: those are the things that decide what service can connect to what, or what user can connect to which service. And typically you want them to be minimal, right? If the web server has to talk to the database, but nobody else has to talk to the database, then you'll want to make sure that the database only accepts connections from the web server. These kinds of things are typically afterthoughts, because a startup has bigger priorities. They have this customer that they want to be online yesterday, so let's get this thing up and running. And with this time pressure, ports are open that shouldn't be open, security groups are not locked down, they accept traffic from everywhere. All of these things can actually be automated, because you describe in your compose file that this server talks to that server. And there's also a concept of networks. In Compose, different services can be part of different networks, and networks are typically isolated. If your web server needs traffic from outside as well as to talk to the database, then the web server would maybe be the only service that is in both the public and the private network; everything else would be in the private network. All of this can be translated to the security groups and routing primitives of the cloud.

And there's another dimension here: the cloud itself. It's also very different for different clouds. You might be familiar with it on Amazon, but then if you want to do the same thing in Google for whatever reason, maybe it's cost, maybe your credits ran out, which is a big reason people switch, now you start from scratch, because the model is very different, and you have to learn this new cloud, these new names, these new concepts from scratch.
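The network split Lio describes can be sketched in Compose syntax like this (a hypothetical example; the service images and port mapping are placeholders, and how any given tool maps networks onto cloud security groups is that tool's own business):

```yaml
# Public/private network split: only the web service is exposed outside.
services:
  web:
    image: node:22
    ports:
      - "443:8443"              # illustrative public port mapping
    networks: [public, private] # the only service on both networks
  db:
    image: postgres:14
    networks: [private]         # reachable only from the private network
networks:
  public:
  private:
```

A compiler from Compose to infrastructure code can read this declaration and emit security groups that admit outside traffic only to `web`, and database traffic only from `web`.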

Who Defang Helps And Why

SPEAKER_01

So how does Defang work? You explained a little bit, but I'd like to understand a little deeper. You have your Docker Compose file, and then what happens? Does some Terraform or something like that get generated from that Docker Compose file, and then infrastructure gets provisioned based on that? How does all that stuff work?

CI Pain, SaturnCI, And Compose Config

SPEAKER_00

Yeah, so let's take our example: a web server with a database, right? On your local machine you run docker compose up, you do your testing against HTTP on localhost, and everything works. So we made Defang Compose-compatible. It's not actually using Docker, but it uses the same input files; Compose is an open standard. So you run defang compose up, and it shows you a picker: where do you want to deploy? Do you want to deploy to Amazon, DigitalOcean, or Google Cloud? Those are the three that we support. Then you pick one of the clouds, assuming your credentials are set up, because we use the same credentials that the respective CLIs use. So if you have the AWS CLI already configured, Defang picks those up as well. When you run defang compose up, it packages your project. Maybe another aspect we haven't talked about yet is building the applications. Even building the containers is a question: if I build on my machine and put the container in the cloud, and you build on your machine and put the container in the cloud, do we end up with exactly the same thing? Only if our Git repository is completely clean and we don't have any customizations that we didn't commit and push, right? So we try to solve that in Defang by doing completely centralized building. The CLI packages your sources, uploads them to the cloud that you picked, and kicks off CD in that cloud. CD stands for continuous deployment, as in CI/CD, alongside continuous integration. In the cloud that you picked, we kick off the deployment process, which includes any builds that need to happen. And because that's centralized, it'll be the same for everybody. It uses the sources that were captured, and the rebuild and the dependency resolution, downloading any dependencies, all happens in a centralized place in your own cloud.
And then we have the Compose file as part of that package, and in the cloud, that Compose file gets converted into infrastructure code. Under the covers, we use Pulumi. Pulumi is a great tool; nothing bad to say about them. Think of it as maybe a more modern Terraform, though even Terraform is not the Terraform of five years ago, right? They've added a lot of new stuff as well. But we picked Pulumi, and in that process running in your cloud, we generate Pulumi code based off of your Compose file and the cloud that you want to go to. So in the Amazon case, that means the different services get converted to their equivalents in Amazon. If you have a Postgres, we'll actually convert that into a managed Postgres cluster in Amazon. We won't be running Postgres in a container, because we know that doesn't scale. And maybe, if I may, a side note here: we talked earlier about running Postgres primary and secondary. Just setting up a Postgres cluster that is resilient, doesn't go down, and does an automatic failover from primary to secondary, what we talked about, just that is super hard, right? It's very hard to do. There are startups that do just that. And so, if you can offload this complexity to the cloud, you can rely on a managed version, and each cloud has a managed database offering. In Amazon that would be RDS, in Google that would be Cloud SQL, and DigitalOcean has their managed Postgres. You should totally rely on those managed services, because just running a resilient cluster with disaster recovery and point-in-time restore will keep your ops team busy all by itself, right? So always rely on those managed services. I think they're good value for money.
And in the Defang case, that means when we see a Postgres in the Compose file, we'll pick a managed Postgres to represent that service. But for the application it doesn't matter, because the only thing the application cares about is: I want to talk to a database called db, or whatever name you gave it, and it'll be able to do that in the cloud as well. So your web server will be talking to a managed cluster, but it shouldn't care, right? The code is the same.
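The point about the service name doubling as the hostname can be sketched like this; the names and the DATABASE_URL convention are illustrative, not from the episode:

```yaml
# Hypothetical Compose fragment: the app reaches Postgres via the service name "db".
services:
  web:
    build: .
    environment:
      # Locally, "db" resolves to the Postgres container. In the cloud, a tool
      # like Defang can point the same name at a managed instance instead, so
      # the application code never changes.
      DATABASE_URL: postgres://app:secret@db:5432/app
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
```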

SPEAKER_01

I've got three questions for you, and this time has really flown by; we only have about 10 minutes left at this point. But three questions. My first is: what makes a good Defang customer? How does somebody tell "this is for me"? Who tends to gravitate toward it? That's one question. And then after that, I have some shameless self-promotion of my own. And lastly, I want to ask you a question or two about the business side of Defang, if I may. But first, yeah, what makes a good Defang customer?

Wrap Up And Where To Find Defang

SPEAKER_00

Yeah, it's actually something that we're still learning. We started building Defang because it's a problem that we wanted solved. Before Defang, I ran my own startup studio, building projects for big companies and small companies, and I kept writing a lot of infrastructure code. These patterns appear where you think, you know, I'm literally copy-pasting this infrastructure code from another project because it's so much the same. So why can't we start from a high-level description? And we chose Compose because that was a file that many teams already use for local testing. So we said, hey, this is a good, not a perfect, but a good enough description of what a complex application can look like, to use as an input file. So to your question: teams that already use Compose are good potential customers. But that question is actually something we're still trying to figure out as a startup: where it resonates. It definitely resonates with people who have never dealt with cloud resources before. Last year we sponsored twenty-odd hackathons. And if anybody has ever been to a hackathon, most projects at a hackathon end up getting demoed from HTTP on localhost. Why? Because people don't want to spend their precious 24 hours, or six hours, clicking around in Amazon, right? So we sponsor those hackathons knowing that people can actually deploy and have their own domain, because SSL/TLS is another rabbit hole that Defang tries to take care of. And so those teams that had never dealt with cloud before, and don't know how complex it is, definitely appreciate a tool like Defang. On the flip side, you have teams, and we see this a lot, that already have some scripts, some duct-tape-and-chewing-gum solution to deploy to the cloud.
And unfortunately, they don't really feel like Defang is a tool they need. But once that duct-tape solution blows up, then they start to appreciate automating the complete infrastructure with a tool like Defang.

SPEAKER_01

Yeah. Something I found when I was doing consulting work, and I've talked about this on the show before: it's almost like the people who could use my help the most are the people who are least interested in it. You know? If your company is just an absolute disaster, probably the root cause is your attitudes and habits and things like that, and you're not going to be the kind of person who seeks outside help and adopts new ideas. So that phenomenon totally makes sense to me.

SPEAKER_00

Yeah, it's a bit frustrating to see. There are companies we talk to where we know our infrastructure, the one that gets created by the tool, will be more resilient and will often be cheaper, but convincing them to rely on a tool is hard. There are two challenges there. One is that we need to get better at sales, but the other is that we need to de-obfuscate what we're doing. Maybe the tool does too much; maybe there's a bit too much magic. So we're starting to add features that show you what's going to happen before we actually do it, what will be created in your account, to make it a bit more attractive. But yeah, that's a challenge.

SPEAKER_01

Yeah, man. I would love to go into a whole conversation about that if time permitted. The next thing I wanted to bring up: I'm noticing some similarities between your product and the product that I'm working on. I'm building a CI tool, and there are certain things that frustrate me about existing CI tools, you know, GitHub Actions, CircleCI, GitLab pipelines, all that stuff. And Lio, if you were to start using a new CI tool and you were going to start configuring it for your application, how would you expect the configuration to work, if that question makes sense? Based on what you've seen of various CI tools, how does the configuration usually work? What would you expect?

SPEAKER_00

I don't know anything about your tool, but I must say that with GitHub Actions, it's so complicated to get a pipeline together that you're happy with. I feel like we're still, two or three years in, iterating on that pipeline: the different stages, where you want to deploy to a sandbox environment, run some smoke tests in the sandbox environment, then go to a staging environment. I feel like everybody's trying to do this, and everybody has this thousand-line GitHub Actions workflow. Defang doesn't solve that problem at all; it's kind of out of scope. But I think it's a different dimension of the same field: why do we need to write a thousand-line GitHub Actions workflow when everybody builds the same kind of pipeline? Right.
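The sandbox, smoke-test, staging shape he describes might look something like this minimal sketch; the job names, environment URLs, and deploy scripts are all hypothetical placeholders:

```yaml
# Hypothetical GitHub Actions workflow: deploy to sandbox, smoke-test it,
# then promote to staging. Each stage gates the next via "needs".
name: pipeline
on:
  push:
    branches: [main]

jobs:
  deploy-sandbox:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh sandbox          # placeholder deploy script

  smoke-tests:
    needs: deploy-sandbox
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./smoke_tests.sh https://sandbox.example.com   # placeholder

  deploy-staging:
    needs: smoke-tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh staging
```

In practice each of those run steps tends to balloon into the nested shell scripting discussed below, which is where the thousand-line workflows come from.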

SPEAKER_01

Yeah, it's this big nasty YAML file, whether it's GitHub Actions or CircleCI or GitLab pipelines: all these giant YAML files. And so often it's even worse: it's sequentially executed YAML with nested shell scripting, shell scripts nested inside of YAML, and this is the way everybody does it. It's like, you guys think this is a good idea? This is insane; this is awful. So I wanted to do my CI tool a different way, instead of having this big YAML file that kind of cloaks what's going on. You can tell that the YAML file is being parsed and that they're using Docker under the hood; clearly that's how it's working. So can you guess how I did my config instead?

SPEAKER_00

Is it a Compose thing?

SPEAKER_01

Exactly. Yeah. The product's called SaturnCI, and so you have a SaturnCI subdirectory, and in there you put your Dockerfile and Docker Compose file. Usually the Dockerfile is just a variation on whatever Dockerfile you might already have, and same with the Docker Compose file, but if there's anything you want to be different specifically for the test environment, you can do that there. And then, of course, there are some things that the environment just expects to be there. But there's no custom proprietary format or anything like that. If you know Docker and Docker Compose, then you know how to get set up on this platform.
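Based only on that description, the setup might look something like this; the directory name, file layout, and contents are illustrative guesses, not taken from SaturnCI's actual documentation:

```yaml
# Hypothetical saturnci/docker-compose.yml: a test-environment variation of the
# project's existing Compose file, e.g. for a Rails app tested with RSpec.
services:
  app:
    build:
      context: ..
      dockerfile: saturnci/Dockerfile   # test-specific variant of the app's Dockerfile
    environment:
      RAILS_ENV: test
      DATABASE_URL: postgres://postgres:postgres@db:5432/app_test
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
```

The design choice being described is that the CI config is just Docker and Compose, so there is no separate proprietary pipeline format to learn.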

SPEAKER_00

Oh, that's cool. I will definitely check it out. And I feel like there might be some synergy there, because what I'm solving for infrastructure code, you're solving for CI YAML, right?

SPEAKER_01

Yeah, yeah. And, unfortunately, well, I shouldn't say unfortunately, but the reality is that I'm starting very narrow. It only works for Ruby on Rails, and only Ruby on Rails projects that use RSpec. And I'm doing that so that I can serve one market really well before I try to expand. Because a big part of why GitHub Actions and the rest serve everybody so poorly, I think, is that they're so general. They can't make any assumptions about what your job is running, because they'll let you run any kind of job. So I'm saying I'm going to make very specific assumptions about what you're doing so that I can take the work off your plate. You don't have to do all this configuration; I'm just going to make assumptions so you can be up and running in a short amount of time. Hopefully later I can expand. Okay, with that and my little pitch, I've used up all of our time. Lio, I'd love to have you back sometime if you're up for it. I've really enjoyed talking with you. Lastly, where can people go to find out about you, Defang, whatever you want to share?

SPEAKER_00

Yeah, find me on LinkedIn: Lio Lunesu. There aren't many with that name; I think I'm the only one. Defang.io is our website. We have a Discord. If you scroll down on the website, you'll find the different ways to contact us, the different socials. Our Discord is pretty much where people go for questions. There's a getting-started section with different ways to get started. Our installer is on the website. Try our one-click deploy, which will show you what it would look like in our environment as opposed to your own cloud. Yeah, just reach out to me on any of the socials.

SPEAKER_01

Well, we'll put that stuff in the show notes. And Lio, thanks so much for coming on the show.

SPEAKER_00

Thanks, Jason. That was great.