DevOps Topeaks

#26 - AWS Niche Services

Omer & Meir Season 1 Episode 26

Send us a text

This week we talked about the services you don't normally hear about when it comes to AWS: Lightsail, Rekognition, Kafka, OpenSearch, Prometheus and many more!

Links:

  • https://github.com/AlDanial/cloc
  • https://cupogo.dev
  • https://www.reddit.com/r/golang/comments/14xyido/the_gorilla_web_toolkit_project_is_being_revived/

Meir's blog: https://meirg.co.il
Omer's blog: https://omerxx.com
Telegram channel: https://t.me/espressops

And action. It's like a low energy episode. It's late. So, hello everyone and welcome to the 26th episode of DevOps Topeaks. Today we're going to talk about AWS niche services, if that's even a thing, or AI services, we'll see how it goes. Okay, so what's the first thing that comes to mind when I say AWS niche services? So, this week I had the pleasure of working with Rekognition. Rekognition with a K. It's probably not the most niche service on AWS, but for me it was, because it's this AI thing that's been there for ages. I mean, there's a trend now of talking about AI, going to OpenAI and Bard and Whisper and all kinds of new services. Hang on, I'm confused, I'm not sure what you're talking about. OpenAI, Bard, what are those? These are large language models, from OpenAI, GPT. We talked about it; everybody has probably heard of it in one connection or another. But Rekognition is a model that can do a few things. I wanted it specifically to take a photo and understand what it sees. One of the features of Rekognition is being able to take a serialized string of a photo and extract labels out of it. It can be a huge list of stuff; there's a complete list of labels on AWS that they export, telling you what it can identify. I think it's thousands of words. But the idea is the model takes the image and outputs a list based on what's called confidence. Say it takes our photo right now: it'll be 99% sure it sees two male human beings, 98% sure it sees two microphones, and then 50% sure that I'm outside, because maybe it sees a little bit of green here. That's the idea, and that's what I wanted to work with. Then I wanted to create some kind of text based on these labels.
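The label-extraction flow described here, sending an image and getting back labels with confidence scores, can be sketched with boto3's `detect_labels` call. The response shape below follows Rekognition's DetectLabels API; the bucket name, file name, and threshold are made-up illustration values, and a sample response stands in for the real API call so the snippet runs on its own.

```python
# Sketch of Rekognition label extraction. The real call would be roughly:
#   rekognition = boto3.client("rekognition")
#   response = rekognition.detect_labels(
#       Image={"S3Object": {"Bucket": "my-photos", "Name": "studio.jpg"}},
#       MaxLabels=20, MinConfidence=50,
#   )
# Below, a sample response in the DetectLabels shape stands in for the API.
sample_response = {
    "Labels": [
        {"Name": "Person", "Confidence": 99.1},
        {"Name": "Microphone", "Confidence": 98.3},
        {"Name": "Plant", "Confidence": 50.2},
    ]
}

def labels_above(response: dict, min_confidence: float) -> list[str]:
    """Keep only label names whose confidence meets the threshold."""
    return [
        label["Name"]
        for label in response["Labels"]
        if label["Confidence"] >= min_confidence
    ]

print(labels_above(sample_response, 90))  # ['Person', 'Microphone']
```

Lowering the threshold is exactly the "50% sure I'm outside" effect: weaker guesses start appearing in the list.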
But Rekognition doesn't have that ability. It does a lot of other weird stuff. It has a celebrity identification feature. I don't know why people use that; honestly, I have no idea. I think it's fun, maybe that's why AWS developed it. There's a celebrity mechanism: you can upload a photo and it tells you, with a certain amount of confidence, whether that's a celebrity, and if it is, who the celebrity is. Also, you can give it a photo. I can give it Meir's photo, then upload a lot of other photos, and it'll tell me whether that's Meir or not. And that's kind of useful, not only for fun, but probably for face recognition, authentication mechanisms, stuff like that. It's probably not as secure or sophisticated. And there are a few others. Anyway, I used the label extraction, but then I wanted to create some kind of message on top of those labels. Let's say I see you now: you're in your home, you're using a microphone, you're wearing a t-shirt and a necklace. I wanted to create a message that congratulates you on something, like: hi Meir, great to see you, wonderful t-shirt, congratulations on your new microphone, and good to see your home, something like that. Rekognition doesn't know how to do that. If anyone knows whether AWS has its own LLM, I'd be really happy to use it because I have a lot of credits, but I couldn't find one. So I went to OpenAI and used ChatGPT to do that. And I heard you did something with ChatGPT this week too, right? Yes. Actually, I wanted to experiment with the ChatGPT model, but we'll get back to Rekognition because I'm sure I'm going to ask you tons of questions about it. So I just purchased ChatGPT Plus and tried it, and for now I'm not sure I've seen many differences between the 3.5 model and the full model.
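Turning the extracted labels into a greeting via the OpenAI API could look something like the sketch below. The prompt wording is entirely made up, and the actual HTTP call is only shown in a comment (assuming the `openai` package) so the prompt-building part stands alone.

```python
def build_greeting_prompt(name: str, labels: list[str]) -> str:
    """Compose a chat prompt asking the model to congratulate someone
    based on labels extracted from their photo (wording is invented)."""
    seen = ", ".join(labels)
    return (
        f"Write a short, friendly greeting for {name}. "
        f"An image model saw the following in their photo: {seen}. "
        "Congratulate them on one of these things."
    )

prompt = build_greeting_prompt("Meir", ["Microphone", "T-Shirt", "Necklace"])
print(prompt)

# The real request (assuming the openai package, gpt-3.5-turbo as in the
# episode) would be something like:
#   client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": prompt}],
#   )
```

A short prompt like this stays well under the ~100 tokens a day mentioned later, which is why the cost difference between 3.5 and 4 barely matters here.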
Well, I heard on the internet, and all around, that version four of the model is crazy and way better at everything. Did you experience any differences? Have you tried version four of ChatGPT? So the thing is, I'm using the API, and I'm using what they call 3.5 Turbo. There's 3.5 Turbo and there's 4, and 4 costs a little bit more, and it's really nothing: I'm using no more than 100 tokens a day, so it's not even cents per day, right? But still, GPT-4 is a little bit more expensive, so I didn't go there. I just use 3.5; I don't think I'd see any difference. The thing is, GPT-4 is trained on exponentially larger data sets, so it can produce better code, I think. If you're using it for code generation, that'll probably be great. Whether it's better at generating just speech, human messages, and strings, I'm not sure. I'll try, maybe. Okay, so we also talked before this call that we've both used, or are using, some image AI too. So first I used DALL-E. Do you know DALL-E? Yeah, I mean, that's OpenAI. I feel like we're talking about the latest kind of AI because it's all around, the stock market, every social talk. So, have you used DALL-E? I didn't like it. I feel like DALL-E is better at either generating something from scratch or extending an existing photo. But if I want to take just your head right now and generate a completely different thing, like I want Meir to be on a beach in San Francisco, it's not as convenient as providing the same photo to something like Midjourney and telling it: put Meir on a beach in San Francisco. What do you think? To me, DALL-E is very weird. I tried, like, I wrote a very simple thing.
Please show a person who takes vital signs with a selfie. And I got very, very weird pictures in DALL-E. What was that picture for? For fun. I just wanted to see if it could depict the product of the company I work for. And it was very weird; I got very weird results. But when I took the exact same phrase to Midjourney, and I think I'll post a link to the photos in the description because it's very nice to see the differences, I got exactly what I wanted. But, and I also want to ask you, maybe AWS Rekognition has weird things like this too. So Midjourney, when it showed me the person taking the selfie, you won't believe how many fingers the person had. Seven? Yeah, it was like six, six fingers on the mobile device. It was very weird. Is that common? It's very common. It's something they've always wanted to solve, and I think there was this wave of rumors that they'd solved it completely, and then they uploaded the new version and it went back to square one, in terms of fingers at least. It got better at generating random photos and making things look very human again, but it was still bad with the fingers. Yeah. I think that's the only glitch I saw in the picture; I didn't see any other weird things. But seeing a person taking a selfie, and then noticing the extra finger, makes you wonder. Maybe I can tell it: okay, take the same picture, but now count the fingers. I'm not sure if that works or not. So, moving back to AWS Rekognition. Have you seen any quirks like that? First of all, no. Okay, I did see quirks, but all I'm using it for is extracting labels from the photo it gets, right? That's it. Yes.
Does it have any recurring mistakes? Like, every time there's a person with sunglasses it shows them as headphones, something like that? Nothing like that, but it does very weird things. For example, I like climbing, so I tested it in the climbing gyms that I go to. It's mostly indoor, where it sees climbing holds, literally plastic climbing holds. It's very obvious that I'm indoors. It tells me that I'm in nature, climbing a mountain, and everything it sees is like leaves. There are no leaves, no trees, nothing like that. But it makes connections as if I were in nature, probably because it associates climbing with nature. There's something in the model that's used to that. I guess it processed tons of data, tons of photos, that were nothing like indoor climbing and were more about climbing a mountain, like in the movies, stuff like that. So that's the weird thing. But as we were speaking, I wanted to ask you: we work with AWS in our line of profession, and they do have AI tools. One of them is Rekognition. Lex, I think, is for audio detection and stuff like that. It doesn't feel like they're jumping on the wagon of competing with ChatGPT and Google's Bard and the rest, and I wanted to ask if you feel they're going to. And then, as I was thinking about asking Meir the question, I figured: should they? I mean, does it have anything to do with AWS? Would anyone use it? Do you think they have a place in this market of AI? Does AWS have a place in the market of AI? Yeah. Maybe Rekognition is a useless or pointless service, right? It's not pointless, but it's very initial, very small. I mean, what it does, you can easily do today with many other tools.
It used to be really great. I remember going to AWS, they have these days where they invite engineers. It wasn't in Tel Aviv at the time. They invite engineers and you have this, they call it a race day or something like that. It's like a huge competition, mostly DevOps engineers, obviously. You sit around a huge table and they separate you into small groups of two or three people, and then you compete with each other. Obviously the competition is something around AWS services, and one of the challenges was using Rekognition to identify celebrity faces. I think that's the only reason the service is still there. So I remembered it, but that was like 2017, right? Long before that. So I'm wondering whether they're going to do anything else. Rekognition is not new by any means. Do you remember what the last hype was before AI? What was the huge hype in the world? Wait, hang on. Before AI. It has to do with money. Not crypto. Yeah, crypto. So I'm asking because I remember everyone was talking so much about crypto, and I was into it. I think before AI it would be the metaverse, not crypto; crypto was even before that. It was like crypto, metaverse, AI. That's debatable. I heard that even Zuckerberg is now kind of ditching the idea of the metaverse. That's yet to be discovered. In any case, AWS has their own crypto service. You can launch an Ethereum node on AWS. Did you know that? If you want to run your own Ethereum node, you can do that on AWS. There's a service for that, an AWS blockchain service, I think. I think you can run it as a service: you have blockchain, and Bitcoin and Ethereum. I don't know if they have others, but you can get that as a service. Yes, you can run a node on AWS. So they did kind of jump on the wagon of the latest trend back then.
And I'm wondering if they're going to do something now; I haven't heard anything. They're usually pretty quick to go to market with stuff like that, so I feel like they'd say something. I'll take you to another service. I want to talk with you about another niche service that maybe you've heard they provide: Prometheus as a service. Have you ever heard of anyone using it? No. Why would you use Prometheus as a service? By the way, I can't get it, other than for running tests. Anything you build around Kubernetes today will probably at some point use Prometheus, whether as part of the product or by just exporting metrics for Prometheus, because everyone else reads metrics from Prometheus, so you need to integrate with it. Other than that, why would you ever use it as a service? I don't know. But wait, what is Prometheus? Maybe I lost you there. Prometheus is an open source service, written by SoundCloud originally, I think. In any case, Prometheus is a service that can take any kind of metrics. It built its own convention for metrics. It takes key-value pairs, obviously over time, and it goes to /metrics on your service and picks them up. There are two ways to integrate with it, right? One is push and one is pull, if I'm not mistaken. It's used to scrape services. You just tell Prometheus: please scrape all the services with this or that annotation, and it will scroll through them, usually every minute, and pick up the metrics. Based on that you can build graphs and see trends and stuff like that, usually with something like Grafana on top, I'm guessing. Okay, so that's Prometheus. So, any idea why anyone would use Prometheus as a service?
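What Prometheus scrapes from /metrics is just plain text in its exposition format: a metric name, optional labels in braces, then a value. A minimal sketch of rendering that format (the metric and label names here are invented for illustration):

```python
def render_metrics(metrics: dict[str, float], labels: dict[str, str]) -> str:
    """Render key/value metrics in Prometheus' text exposition format,
    e.g. http_requests_total{app="shop"} 42.0"""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    lines = [
        f"{name}{{{label_str}}} {value}"
        for name, value in sorted(metrics.items())
    ]
    return "\n".join(lines) + "\n"

body = render_metrics(
    {"http_requests_total": 42.0, "queue_depth": 7.0},
    {"app": "shop"},
)
print(body)
```

A real service would serve this body at GET /metrics, and Prometheus, configured to scrape targets carrying the right annotation, would pull it on its scrape interval, commonly every minute, as described above.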
Like you said, and it's important to mention, Prometheus is usually an internal service. When I say internal, it means only people from the same organization should access it; you're not exposing Prometheus externally. So usually, and this isn't a rule of thumb or anything, you don't need super high availability for it. You might need it for compliance, maybe, or backups, maybe set some retention on its data with S3 backups or whatever, but you don't need high availability or redundancy. So why would you use it as a service? Usually, when I pick something as a service, say Kubernetes as a service, my point of view is: oh man, I need to maintain the availability, durability and all of that, and I don't want to, so maybe I'll let AWS do it for me. So I use EKS, Kubernetes as a service, for them to manage the Kubernetes control plane, right? That's what most companies do, at least the companies I've encountered. So why take a service like Prometheus? I wanted to tell you that I have no idea, and not only that. You usually use Prometheus for custom metrics, right? Either custom metrics or picking up other services that support Prometheus; you want to read it all with one central service. CloudWatch supports custom metrics. It is expensive, but CloudWatch supports it. If you go to AWS solution architects and ask them how you'd apply the concept of custom metrics on their platform, that's the answer: you donate a kidney and you use CloudWatch for that. Exactly. But that's exactly what you have to do, because it costs so much. And now, thinking about it, your alternative would be Prometheus. With Prometheus as its own service, you probably don't pay per metric; you probably pay per usage.
I don't know, disk, something like that. Anyway, it's probably a different kind of pricing model, and you have a lot of other services that are already integrated with Prometheus, because it's open source; they're not integrated with CloudWatch. So it might make your life easier, and as AWS does, they probably got so many requests from so many large customers that they decided to do it. Then you ask me: but why? It's an internal service, I can deploy it on my own. More often than not it would sit inside the Kubernetes cluster and I'd just use it there. And then I thought to myself, as you were speaking about the control plane and having it all in place, scalable and durable: what if you were a company whose line of work makes those metrics the most critical thing ever? That's your data; it's what your customers depend on. Would you rather have that sit in your own cluster? Wow, that's a good idea, I liked it. So if you're basing your product on Prometheus and everything depends on it, I would let AWS manage it for me. But internally, I would do it myself. Yeah, that's my only thought. Let's take another niche service: Elasticsearch, or OpenSearch today. That's also something you can consume as a service from AWS. Some people are shouting right now: it's not a niche service, we use it every day. Sure, but I mean in regards to how many customers use it. Do you use it? Specifically, yes. Okay. But as you said, not all products use OpenSearch as a service. And in how many companies did you use Elasticsearch from AWS, as a service, in your whole career? Only one company. One company.
I think I've used it in one project, and we ditched it pretty quickly afterwards for Elastic, elastic.co. I don't think I've used the AWS service since, except for testing. So it's another niche service you can consume. I have a lot of ideas why you would, but just as with Prometheus, it's something you normally install inside your Kubernetes cluster. And it's very hard to manage; it has all those shards and indexes and everything. You know what you remind me of? What a big headache. Okay, for you, what's the most hectic service you've ever been told to deploy? If I name one technology that I ask you, as an ops engineer, to deploy, and it would be your biggest nightmare to deploy and manage, what would it be? Because I have one name in my mind. Oh, hang on. And you can consume it from AWS. Okay, now I'm leading you to my answer. But if you ask me, there is a service like that on AWS; you can consume it as a service. So let's guess what it's not. I don't think it's S3. No, no. Something way simpler, you say? It's not simpler. It's not simple at all, but it's hell to manage. In terms of ops, it's just hell. I'm talking about Kafka. So, I've never deployed Kafka. That's okay. First of all, good for you, and stay away from it like it's fire, because managing it on your own will cost you your life. And that's why you have huge companies like, who are the ones that invented Kafka? I mean, the inventors are working at Confluent, right? Yeah, Confluent. That's the company; what Redis Labs is for Redis, Confluent is for Kafka. So Kafka is now being offered as a service on AWS just because of that, because the ops part of it is so bad you don't ever want to touch it.
If you touch it, it's a full-time job, managing Kafka. So it's just another example. Okay, so just one word about Kafka: what's Kafka? How would you put it in words? It can be a pub/sub. To reduce it to nothing, it's basically a messaging service, right? But it's a messaging, streaming service. Kafka gives you very complex functionality, like replaying back in time the events that went through the system. You have consumers and producers, and you can write connectors for it. It's a very complex and very valuable service to have. Okay, so on the same topic: SQS. What about it? Well, I would say maybe SQS and SNS. We talked about pub/sub, so I'm like, wait, there's SNS, the Simple Notification Service, right? This whole talk is about remembering acronyms, right? SNS, Simple Notification Service. And also SQS, the Simple Queue Service. Yeah. Everything is simple. And scalable. Maybe scalable. Okay, so SNS, have you ever used it? I don't think you can use Lambda, not at a large scale, without constantly using SQS and SNS. It's literally based on that, because Lambdas are just short-living functions of code, of logic, that do something and then need to leave something for the next one. And the way to leave something for the next one is either through a system like Kafka, but more often it's going to be something like SQS, because it's kind of built into the system. So yes, I use SQS, too much I'd say. So now you have to say a word about SQS, the Simple Queue Service. SQS is what the name says: it's the simplest queue service you'd find. If you want alternatives, well, Kafka is the full-blown one.
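The "replaying back in time" property that distinguishes Kafka from a plain queue comes from it being an append-only log where consumers track their own offsets: consuming removes nothing, so any consumer can rewind. A toy in-memory illustration of that idea (not a Kafka client, just the concept):

```python
class ToyLog:
    """Append-only log, Kafka-style: consumers read by offset and nothing
    is deleted on read, so any consumer can replay from an earlier offset."""

    def __init__(self):
        self.records = []

    def produce(self, record):
        self.records.append(record)
        return len(self.records) - 1  # offset of the new record

    def consume(self, offset, max_records=10):
        """Read from a given offset without removing anything."""
        return self.records[offset:offset + max_records]

log = ToyLog()
for event in ["signup", "purchase", "refund"]:
    log.produce(event)

print(log.consume(0))  # full history: ['signup', 'purchase', 'refund']
print(log.consume(1))  # replay from offset 1: ['purchase', 'refund']
```

In a plain queue like SQS, by contrast, a message is deleted after successful processing; there is no history to rewind to.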
You have RabbitMQ, which is another amazing queuing system. RabbitMQ, exactly. SQS is simpler than that, though it's growing in the features it offers. For example, sometimes you want FIFO queues, first in, first out. But it's a bit more expensive. Yeah. For the majority of time, SQS didn't have that; it was just not an option. You'd have to build two queues and kind of mess with them using SNS in the middle. So yes, now you have that option, and it's a little more expensive, as you said, but it's a very simple queuing service. If all you need is to leave messages and consume them, it'll provide that, and it'll do it pretty well, by the way. It's a great service. It's reliable, but it's simple. If you're ever going to need stuff like what Kafka offers, with producers and consumers and the ability to replay over time, go to Kafka, but please do consume it from a third-party service and do not try to deploy it on your own. I mean, you can, and sometimes you can't go around it, but in my experience it's just horrific. It's like we're firing off AWS services, rat-tat-tat. The list is endless; I don't think Jeff Bezos can name all of them. But we're doing all the niche and interesting ones. By the way, I want to mention a service that is not niche by any means: Route 53. I just figured out what the name means. It's like a road, right? Route is probably referring to a road in the United States, and 53 is just the DNS port. Yeah, makes sense. It makes a lot of sense; I didn't think of it myself, and I didn't read it anywhere. Okay, but take me back to the Lambdas, right? You said you probably have to use SQS to make the Lambdas work, right? Okay.
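Wiring Lambdas together through SQS as described usually means sending batches, and SQS's SendMessageBatch accepts at most 10 entries per call, so senders commonly chunk their messages. The chunking helper below is our own illustration, not part of boto3, and the queue URL in the comment is invented:

```python
def chunk_for_sqs(messages: list[str], batch_size: int = 10) -> list[list[str]]:
    """SQS's SendMessageBatch caps at 10 entries per call, so split an
    arbitrary message list into batches of that size."""
    return [
        messages[i:i + batch_size]
        for i in range(0, len(messages), batch_size)
    ]

batches = chunk_for_sqs([f"msg-{n}" for n in range(23)])
print([len(b) for b in batches])  # [10, 10, 3]

# The actual send (queue URL invented) would be roughly:
#   sqs = boto3.client("sqs")
#   for batch in batches:
#       sqs.send_message_batch(
#           QueueUrl="https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue",
#           Entries=[{"Id": str(i), "MessageBody": m}
#                    for i, m in enumerate(batch)],
#       )
```

The consuming Lambda would then be triggered by the queue, process its batch, and leave its own messages for the next function in the chain.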
So, what about Step Functions? Have you ever used it? I use it a lot, too. Step Functions is your way, again, because Lambda functions are short-lived packages of logic that need to do something with one another, Step Functions is your way to arrange them in a pipeline. So this does that, and then this does this, and it gives you, what would you call it, a graph, a drawing of Lambda functions. Yeah, it creates a pipeline of functions, one after the other, which could easily be solved with a container, but never mind. That's what it does. So if you're using a lot of Lambda, at some point you're probably going to use Step Functions. Also, it's not so niche, but have you heard of the new thing in Lambda, where AWS can now help you find things like infinite loops in Lambda functions? Really? No. Okay, so with Lambda functions, as you said, you wrap some code and run it on the cloud, just a function, good. But what happens if your code contains a while-true loop that runs forever? Then you pay money the whole time, right? Well, you hit the 15-minute limit, I guess. Right. So AWS has this mechanism; it's very, very new, you'll probably Google it right after this talk. I don't remember where I read it, maybe on Reddit or in a blog announcement. We'll add a link here because it's interesting. I think it's also related to the AI hype, look how it's all connected, right? In this AI hype, AWS also implemented something that recognizes a bad pattern like that. Okay, interesting. And then it saves you money. Very nice. It's funny that they need to find that kind of bug for you, and interesting that they've implemented it; I'm now starting to wonder how they did that.
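The "pipeline of Lambdas" that Step Functions draws as a graph is defined in Amazon States Language, a JSON document of chained states. Here is a minimal two-step sketch tying the episode's example together, extract labels, then generate a greeting; the function ARNs are placeholders, not real resources:

```python
import json

# Minimal Amazon States Language definition chaining two Lambda tasks.
# The ARNs are placeholders; a real definition points at deployed functions.
definition = {
    "Comment": "Extract labels from a photo, then generate a greeting",
    "StartAt": "ExtractLabels",
    "States": {
        "ExtractLabels": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:extract-labels",
            "Next": "GenerateGreeting",
        },
        "GenerateGreeting": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:generate-greeting",
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))
```

Each state's output becomes the next state's input, which is exactly the "leave something for the next one" hand-off described earlier, just managed by Step Functions instead of a queue.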
But the last funny thing I heard they released around Lambda was at the last re:Invent. They announced that they now have a new improvement for Lambda functions that's going to be a hell of a lot faster, right? It's going to improve your startup time; it's going to be great and really, really fast. And then there was this asterisk at the end that said: only for Java. Why? Because the Java runtime is so heavy. If you're writing Lambdas, please don't do it in Java. But since people do, the system, at least when they released it, only supports Java. Maybe now it supports other things. But yeah, Lambda is going to be faster, asterisk, if you're writing Java. I'm not sure why AWS sometimes orients these things to a specific language, and you have no idea which one it's going to be. I mean, you can guess why: because Java is everywhere. You find it around enterprises, don't you? But you could say Go is also everywhere, in Kubernetes. No, not like Java; Java is really everywhere. In Kubernetes, like you said, you work with Go because Kubernetes and most services at Google are written in Go. But most large enterprises are not working with Go and stuff like that. The Kubernetes area is more like the startup field; I don't think enterprises are at the point where the entire thing is built around Kubernetes, not at the moment at least. And most enterprises work with Java, probably paying millions or tens of millions to AWS, so they have the weight to go to AWS and ask them to help with their Lambda functions. AWS can't tell them: go fix it yourself, just use something else. So they told them: no, we'll help you. And by the way, while you were talking, I did a very, very quick Google search about infinite loops in Lambda functions, and there's a blog post from like a week ago. Okay.
And the funny thing, without even getting into the details, is that I see the source is an SQS queue. So it also involves SQS. Remember you said Lambda functions basically must use SQS? So it's funny, I'm not sure if they implemented the detection with SQS, but in that blog post you see this SQS queue, or maybe they talked about how you can avoid the loop when using an SQS queue. It's something related to SQS; we'll read more about it. But it's cool, a very neat thing. Yeah, let's leave the link. Any other service you want to talk about, niche or not niche? It's like we did the Solutions Architect exam on steroids. Yeah, exactly. So I can throw names, right? When I did the Solutions Architect exam, like six years ago, I knew a huge list of things like Athena and QuickSight and Lex and all of those services. I did use Athena, I did use QuickSight like once, but it's not something I think I can talk about and say whether it's good or bad. Actually, Athena I wouldn't call niche today; it's very useful. Athena, if you don't know, can do a lot of things. I think its most useful feature is being able to integrate with an S3 bucket and take a look at the data sitting in it, whether that's Parquet or JSON; I think Parquet and JSON are among the supported formats. And I want to say something about Athena. Oh no. Yeah, I have a very bad experience with Athena. Okay. I'm not one month in the field; I'm about four years in, something like that. And as you said, Athena is used for querying S3 buckets. You can run SQL queries on top of that and get structured data, which is hard to get otherwise. But to me, up until this day, and also today, it's not easy. I think it's not easy.
I mean, it requires a lot of work. To give you a very short example: say you have a bucket with logs, maybe you save the logs of your site, or maybe Drone.io logs, and you store them in an S3 bucket, and then you're like, okay, I want to get at those logs. It's very, very hard to configure Athena for that, instead of it being one click of a button. Right. I mostly use it when I need to read ALB logs and I want to query the data. Exactly, that's one of the most useful use cases for Athena. And then you go take the logs; I mean, they're there for you, already thrown into an S3 bucket. All you have to do is tell Athena to read them. But when you go to Athena, it wants a structure. It wants you to build a table based on the fields that you have. And what are you going to do, read it field by field? Maybe someone in the audience can tell me a different way, but I always go to the same document on the AWS help site, where they have the structure ready for you. So you just copy-paste, build the table, fine, you can now read ALB logs. But what happens when you want something else, something a little bit different? The structures are complicated and hard to find. On the other hand, what would you do, just index everything and output the data? It would cost millions. By the way, the alternative on Google is called BigQuery, and that's an amazing service, but with BigQuery you can easily run a mistaken query that costs you. Honestly, they're also missing a point there. Look how sad it is: we didn't even take into consideration setting up Athena for multiple environments, because we both don't want to set up Athena in Terraform or CloudFormation or anything like that. It's just too complicated.
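The "structure" Athena demands is a CREATE EXTERNAL TABLE statement; for ALB logs, AWS's docs provide the full regex-based one to copy-paste. The heavily simplified sketch below only shows the shape of such a DDL, a few assumed columns and an invented bucket, not the real ALB column list or SerDe:

```python
# Heavily simplified sketch of the kind of DDL Athena needs for logs in S3.
# The real ALB-logs table in the AWS docs has many more columns and uses a
# RegexSerDe pattern; table, column, and bucket names here are invented.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS alb_logs_sketch (
    request_time string,
    client_ip    string,
    target_ip    string,
    status_code  int,
    request_url  string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
LOCATION 's3://my-alb-logs-bucket/AWSLogs/'
"""
print(ddl.strip().splitlines()[0])

# Running it (and later SELECTs) would go through boto3's Athena client:
#   athena = boto3.client("athena")
#   athena.start_query_execution(
#       QueryString=ddl,
#       ResultConfiguration={"OutputLocation": "s3://my-query-results/"},
#   )
```

The pain described above is exactly this step: for any log format without a published DDL, you're left writing that column list and parsing pattern yourself.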
So you only do it manually, ad hoc, when you want to check something; it's not part of your ecosystem. When you go into a company, you're not like: okay guys, we need to set up Athena for you, it's important. You're like: damn, we need to set up Athena because of regulation, and we need to get those logs, and how will we get them? Okay, so do it now, do it with Athena, ad hoc, and that's it. I think I've said many times, probably to you too on the podcast, that AWS takes you like 80, 85% of the way. They won't go all the way. Sometimes I believe they do that on purpose, to let startups thrive around them. They leave room for companies to build something more approachable; I mean, I work for a company that does that. Why would Heroku ever exist if AWS were the best thing ever? Why would Fly.io, which I talked about a few episodes ago and is kind of like Heroku, ever exist? It lets you deploy applications quickly. For developers who are not familiar with AWS, it's really hard to understand, right? If all you want in the world is to get your container running, then from scratch to having a container running, you either kill yourself on the way, or have someone hack your shit in two minutes, or lose money, because you have no idea what you're doing. You have to learn so much. So you have services that do that for you. And why would AWS not make that easy for everyone? I don't have a good answer. You brought up QuickSight, and AWS has at least three services that I know of right now for launching things quickly: QuickSight, which you mentioned, Lightsail, which we both know, and Beanstalk. How do you pick which one is best for you? I don't know. Somebody probably wrote a blog post about it, right? Beanstalk versus QuickSight. Wait, I messed up.
I wanted to say Lightsail and I said QuickSight, I like it. And I kept saying QuickSight, QuickSight. QuickSight does BI. I can't get it out of my head. So that's what I meant. Again, Lightsail versus Beanstalk, I'm not sure which one is better. I don't know either. I think most new developers would just go to Lightsail because it was built long after Beanstalk; Beanstalk is a little bit more advanced. It gives you scaling and deployment and it has its own CLI. It still is lightweight, but it's not as simple as Lightsail. Still, I mean, it feels like AWS is leaving room for companies to build something around that. If you want to start your own company today: all AWS would have to do, if they wanted to let you read your ALB logs, is provide a button that builds the structure, right? Everybody knows that; thousands of people are trying to read ALB logs. Why wouldn't you just provide a button that builds that? Because that's not part of the system. So what if you build your own company that takes the logs from S3, anyone's S3 bucket, and just provides a nice UI to read the logs, maybe, you know, provides some AI on top of that that figures out what went wrong or what they need to look at? Here's an idea for anyone listening: build your own company. Yes, seriously. I mean, look, Athena is a pain. Maybe somebody in the crowd, or at AWS, is listening to this and saying it's not a pain, it's easy, but every time I read the docs I'm like, it's not as easy as clicking deploy and that's it, which you have in a lot of services; you know, with CloudFormation you've got those templates where you can just deploy. So if you told me, listen, there's a good template for Athena where you can just deploy and that's it, I would do that. But there isn't. Everything is complicated. I agree.
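For anyone actually tempted by that idea: an ALB delivers its access logs under a predictable S3 key prefix, so a tool really only needs the bucket, account ID, and region to find them. A minimal sketch of building that prefix (layout per the documented ALB log delivery path; the account ID and any bucket-level prefix you configured are your own):

```python
from datetime import date

def alb_log_prefix(account_id: str, region: str, day: date) -> str:
    """Build the S3 key prefix where an ALB delivers access logs for one day.

    Documented layout: AWSLogs/<account-id>/elasticloadbalancing/<region>/yyyy/mm/dd/
    (any custom prefix configured on the load balancer goes in front of this).
    """
    return (f"AWSLogs/{account_id}/elasticloadbalancing/"
            f"{region}/{day:%Y/%m/%d}/")

prefix = alb_log_prefix("123456789012", "us-east-1", date(2023, 7, 14))
print(prefix)  # -> AWSLogs/123456789012/elasticloadbalancing/us-east-1/2023/07/14/
# From here, a hypothetical log-reading tool would call list_objects_v2
# on the bucket with Prefix=prefix and gunzip each object - no Athena table needed.
```

That's the whole "discovery" problem solved in one function; the hard part such a startup would actually sell is the parsing, indexing, and UI on top.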
Maybe, you know, you don't even have to provide the button; maybe there could be a marketplace with standard structures. I mean, yes, the logs are probably changing from time to time, it's not always the same structure, but just provide the marketplace, even an open source one, with standard things and logs that you can read. ALB traffic, probably other systems, firewalls, I don't know, DNS, whatever. There are probably tons of logs on AWS that have a standard structure and that you might want to read with Athena. So yes, here's a shout-out to anyone who's going to do that. Okay, so I think it's time for the corner, or do you want to talk about another service? No, no, I think that's enough. I only wanted to ask a question about Rekognition and look where we are, 35 minutes later. Yeah, it's always like that. You know, we don't really know what we're going to talk about, but it happens. Yeah. 35 minutes, 35 services. Yeah. Okay. So we are moving to the corner. Let's do it. Yes. Okay. Now you do the effects: corner of the week! Oh, you're, okay, sorry. You did it without feeling, you know. You need to put your guts into it. Next time we record in the morning, try me again, I'll do it. Okay. Okay. So, corner of the week, where Omer and I share experiences, things that we've learned, or anything like that that we've experienced this week. Omer, let's start with you. I have three. Can I do three? You can do even seven. Oh no, I don't have seven, so let's do three. Okay. First thing, there's just a cool tool I saw this week. It's called Cloc, C-L-O-C. Yeah. Sometimes you just want to count lines of code in a certain project. I'm not sure why; it used to be like a metric.
People used to look at it even when they were hiring; they would ask the interviewee how many lines of code they wrote in Golang or in Java or in Rust or whatever. I don't know why it still happens to this day. I just saw an ad someone posted. He's a CTO, and he posted: I want someone who wrote, what was it, 50K, something around 50K lines of Go code; if that's you, please reach out. It sounds stupid, I don't know. Anyway, there's Cloc, I think it's written in Go; I'll leave a link. It will do that for you, plus some other nice data it can export. That's one thing. The other thing: I'm writing in Go, so I'm listening to a Go podcast. It's called Cup o' Go, and I started listening just because I found it on Spotify, and it's really cool, like small bites about Go and new developments, new projects, stuff like that. Then I figured one of the guys' accents sounded familiar, and it turns out his name is Shay; he's Israeli, one of us. That was really cool to figure out. On that podcast I just heard that, if you are writing Go, you've probably heard of Gorilla, the web toolkit for Go. It has a lot of packages, like mux and sessions and cookies, stuff like that. It was archived like a year or two back, I'm not sure. Recently they found a new team, I think they said they're all coming from Oracle or something like that, and they took it back and are now developing it. That's really good news for the Go community, that Gorilla is starting back up. They have a long backlog of fixing issues and updating all the versions, stuff like that. That's it. Sorry, it was a long one. It was a good one; we'll put links for everything in the description. I don't think I did anything special or funny this week, I admit. I didn't encounter anything cool other than trying to make Conan, the C++ package manager, build for iOS, but again, it's so not related to DevOps, which is crazy.
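Incidentally, the core of what a line-counting tool like cloc computes can be sketched in a few lines. This is a deliberately naive Python version for illustration only (real cloc recognizes hundreds of languages, block comments, and mixed lines):

```python
def count_code_lines(source: str, comment_prefix: str = "#") -> int:
    """Count lines that are neither blank nor full-line comments (naive)."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

snippet = """\
# a comment
x = 1

y = 2  # a trailing comment still counts the line as code
"""
print(count_code_lines(snippet))  # -> 2
```

Run per file and grouped by extension, this is roughly the blank/comment/code breakdown cloc reports, which is also why "lines of code" is such a slippery hiring metric.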
I'll just leave myself out of the corner this week. It's good that you gave a few, because I didn't even provide one. We're done, that's it, just at the 40-minute mark. Thank you, everyone, and we'll see you next week. See you next week. Bye-bye. Bye-bye.