Real World Serverless with theburningmonk

#49: Making DynamoDB more accessible with Rafal Wilinski

March 03, 2021 Yan Cui Season 1 Episode 49

You can find Rafal on Twitter as @rafalwilinski.

Links:

For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.

To learn how to build production-ready serverless applications, check out my upcoming workshops.


Opening theme song:
Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0

 Yan Cui: 00:12  

Hi, welcome back to another episode of Real World Serverless, a podcast where I speak with real world practitioners and get their stories from the trenches. Today I'm joined by Rafal Wilinski from Stedi. Hi man, welcome to the show. 


Rafal Wilinski: 00:25  

Hi, yeah. Hey, thanks for having me.


Yan Cui: 00:27  

Yeah, so for audiences who haven't heard about Stedi, I spoke with their CEO and founder, Zack Kanter, a while back in episode 24. And he had some really great stories to tell about how he came from the manufacturing industry, how he's seen some of the same inefficiencies in tech that serverless technologies solve, and how there are a lot of parallels between the evolution of the manufacturing industry and the technology evolution that has happened with AWS and serverless. But today, I want to talk to Rafal about the project that he's been working on for a little while now called Dynobase, which I've actually been a customer of for some time now as well, and I've really enjoyed using it. Before we get into that, Rafal, can you tell us a bit about yourself and what you've been working on these days?


Rafal Wilinski: 01:16  

Yeah, sure. So hi, everyone. I'm Rafal. I'm 25, from Poland. I work as a serverless engineer at Stedi, a company that is 100% serverless, TypeScript, using the CDK. So we are locked into AWS, but it's a good story. Apart from that, I'm also running this side hustle called Dynobase, which aims to be a modern and professional client for DynamoDB.


Yan Cui: 01:44  

So let's talk about Dynobase then. I've been using it for a little while and I've really enjoyed it, and I think many of the listeners are going to find it very useful as well. Can you tell us what it does, and maybe the story behind it?


Rafal Wilinski: 01:58  

Yeah, sure. So Dynobase is basically the result of a scratch-your-own-itch problem. Basically, I was dissatisfied with the experience of the DynamoDB console, because I think we all know that AWS's priority is not always the best UI or UX. I was visiting the console on a daily basis, repeatedly issuing the same queries and scans, and I was basically frustrated with the experience and decided to do something about it. It all started as a hobby project, but turned out to be a real thing that now makes money. It all started probably in June or July 2019, when I became the lead for a project which was basically an API based on GraphQL and DynamoDB, with single table design. So we decided to bet on those technologies and go fully serverless. We considered a few options, for instance also AppSync, which you're teaching. But back then AppSync was lacking some features for us. For instance, single table design with AppSync and Amplify isn't so straightforward, even now, right? Back then, authorization also wasn't that easy, especially with custom authorizers, and there was also a lack of proper support for multi-tenancy. So we decided to roll our own GraphQL server on Lambda, which deals with DynamoDB directly. And of course we designed our tables very carefully, because that's what single table design requires from you. We started with the Lambda development, and at the very beginning we started making many of those silly data mistakes, like, for instance, you mistype the attribute name, or you use PK capitalised instead of lowercase, things like that. So we ended up visiting the DynamoDB console very frequently, and it was a really painful process, especially when you're changing regions, changing profiles. You cannot bookmark a query, you don't see the history of executed queries. So yeah, as I mentioned before, I decided that, hey, we can do better than that. I asked a few of my friends who were also working with DynamoDB if they shared the same pains as me, you know, did they also lack the ability to have, for instance, two regions open simultaneously, or two accounts simultaneously? And all of them said, yeah, that's exactly it. DynamoDB is a great database, infinitely scalable, easy to use once you know the API, the query language and the limitations, but visually it's not that good. So I started this as a project that was meant to be open source, because I had some experience with that: I was contributing to the Serverless Framework, I did some small contributions to the CDK, and I had many other serverless-focused projects on GitHub. And that's what I wanted to do with Dynobase. But I realised that, hey, if I'm going to execute this properly, this is probably going to consume hundreds or thousands of hours, and that's actually what happened. And I thought, hey, I've already done so much for the open source space, and in return I got almost nothing. So maybe it's a good opportunity to try productizing this vision, to try to productize this piece of software. And it actually gave me even more motivation to launch something that might make money while I sleep. So yeah, I started doing that. And I'm pretty sure you know this feeling of being excited when you start a new project, when you have these vivid visions of how it can turn out, how awesome it can be.
So I was really motivated, I was working something like 16, 17, 18 hours a day, after my day job, just to deliver this thing as fast as possible. And after two months of fighting with Electron, with React, with DynamoDB, with many data shapes, many tables, scenarios, and all sorts of stuff, I was already super frustrated. But I decided to release the first alpha version of Dynobase around September, I guess. And because I was kind of frustrated, my motivation went super low at this point. The initial release wasn't super good, but I felt kind of free, because it's delivered, right? So I could finally rest or focus on other things. And as you can imagine, the first release of Dynobase wasn't a big success, it wasn't even good. But it was a huge milestone, because it attracted some people, it gained some traction. It made probably, I don't know, $50 during the first three or four months, but I had an opportunity to connect with many people who provided very, very important feedback. And among those early adopters there was also a person called Pravin, who later actually became a co-founder of Dynobase. Basically, you know, he approached me saying, hey, this could be a really good project, but it's lacking this, it's missing that, the productization is not so good, you have zero marketing, you don't know how to do advertising and all sorts of that stuff. So I decided, hey, I have nothing to lose, so maybe I can just partner up with this random guy that I've just met on the Internet, and maybe we can kick-start this project together. And that's what we did. We signed an agreement that was shorter than one page, but basically, you know, we are doing everything in our best interest, and we started working together. So we spent the next three or four months redesigning the product. We cut the scope, we focused on the very important core features that are essential for the product, but we solidified them and made sure they were better polished. And around January or February 2020 we released the beta version, completely redesigned, a little bit smaller, but better. And I was amazed, because it was very, very well received. And it started making money, which was kind of crazy to me, that you can start some kind of project that is based on open source tools that you can find on the Internet, you can just have a little bit of vision, a little bit of luck, find your niche, publish it, and yeah, there is also some work with payments, licensing and all sorts of stuff, but yeah, it worked. And Dynobase right now gives you all those features that, in our opinion, the DynamoDB console is lacking, like code generation, history, bookmarking, import and export capabilities from CSV or from JSON. We also recently added support for PartiQL, which allows you to query DynamoDB in this very SQL-like syntax. And, yeah, that's pretty much it.
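
[Sidebar: for readers who haven't used PartiQL, here is a minimal sketch of what such a SQL-like statement looks like through the AWS SDK for JavaScript v3. The table and attribute names are made up for illustration; this is not Dynobase's code.]

```typescript
// Hypothetical example: running a PartiQL statement against DynamoDB with the JS SDK v3.
import { DynamoDBClient, ExecuteStatementCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({ region: 'us-east-1' });

// SQL-like syntax; note that if customerId is not a key attribute,
// DynamoDB will execute this as a full table scan under the hood.
const result = await client.send(new ExecuteStatementCommand({
  Statement: 'SELECT * FROM "Orders" WHERE customerId = ?',
  Parameters: [{ S: 'CUSTOMER#123' }],
}));

console.log(result.Items);
```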


Yan Cui: 10:29  

Okay, there's a lot of different things to unpack there. I guess one of them was what you mentioned at the start of the conversation about AppSync and single table design, and how it's kind of difficult, still is, in fact. For most of the people that I find who are struggling with AppSync, the root cause often comes down to whether or not they're using single table design. When you've got multiple tables, or at least you use, I guess, a consolidated table, let's call it that, sparingly and where it makes sense, I think most people have no problems working with AppSync and DynamoDB. It's where you're squeezing everything into one single table that a lot of the complexity comes from, because you have to write more and more custom VTL code, and that's where people tend to struggle. That being said, AWS has released the RFC for, I guess, VTL v2, which is really just JavaScript as a templating language for AppSync. So hopefully that will alleviate a lot of the problems people have with using AppSync with single table design, which is not really to do with AppSync itself, but just with the VTL scripting language. So hopefully that will make things more accessible. That being said, there are still a lot of reasons not to go down the route of single table design, but that's a conversation for another day. And for anyone who wants to learn about AppSync and how to build applications using AppSync, Lambda and DynamoDB, check out appsyncmasterclass.com, where I've got a video course that teaches you how to use those technologies by building a Twitter clone from scratch. Another thing you mentioned is the continued struggle you had with Electron. So I guess that brings up the question of why build it as a desktop app when most things are built as web apps nowadays?
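
[Sidebar: at the time of recording, JavaScript resolvers for AppSync were only an RFC, so the shipped API may differ; this is a minimal sketch of the general shape, with a hypothetical field that reads one item from a DynamoDB data source.]

```typescript
// Hypothetical AppSync JavaScript resolver (instead of VTL) for a DynamoDB data source.
import { util } from '@aws-appsync/utils';

export function request(ctx: any) {
  // Build a GetItem request from the GraphQL field arguments.
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ id: ctx.args.id }),
  };
}

export function response(ctx: any) {
  // Return the DynamoDB item as the field's result.
  return ctx.result;
}
```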


Rafal Wilinski: 12:20  

Yeah, sure. So I think the most important reason is the technical one: you need to interact with DynamoDB, and we decided to do that from the Node process that is basically running on your machine. So that part is doing the heavy lifting, and Electron, which is basically just a webview, displays the data it gets back from that heavy-lifting process in the visual process. So yeah, there would probably be some security problems if we went with a web page. For instance, what about AWS credentials? How can you be sure that we are not sending them to some kind of malicious server, or doing some kind of bad things? If the application is running on your computer, I think it's more secure, or at least it gives you a notion of security. And this is essential when you're dealing with data, especially when you're dealing with production data. Another reason is that I kept closing the DynamoDB console tab, because it's just a web tab in my Chrome or Safari, and I wanted to have something more persistent, something that always sits as an application in my tray bar. I feel like this is the same reason you run a Slack application instead of visiting Slack in the browser, right, because it's more permanent, more solidified. And Electron also gives you a little bit more flexibility around, for instance, keyboard shortcuts, or opening multiple browser windows. And yeah, that's it.
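
[Sidebar: a minimal sketch of the split Rafal describes, with the Electron main (Node) process doing the DynamoDB work and the renderer only receiving results over IPC. The channel name, table handling and window setup are illustrative assumptions, not Dynobase's actual code.]

```typescript
// main.ts - Electron main (Node) process: does the DynamoDB heavy lifting.
// Hypothetical sketch; the IPC channel and window setup are made up for illustration.
import { app, BrowserWindow, ipcMain } from 'electron';
import { DynamoDBClient, ScanCommand } from '@aws-sdk/client-dynamodb';

// AWS credentials are resolved locally and never leave the user's machine.
const dynamo = new DynamoDBClient({ region: 'us-east-1' });

// The renderer asks for data over IPC; the main process talks to AWS and returns plain JSON.
ipcMain.handle('scan-table', async (_event, tableName: string) => {
  const result = await dynamo.send(new ScanCommand({ TableName: tableName, Limit: 50 }));
  return result.Items ?? [];
});

app.whenReady().then(() => {
  const win = new BrowserWindow({
    // The renderer calls ipcRenderer.invoke('scan-table', ...) via a preload script.
    webPreferences: { preload: `${__dirname}/preload.js` },
  });
  win.loadFile('index.html'); // the webview just renders what the main process sends back
});
```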


Yan Cui: 14:13  

Yeah, gotcha. The security concern is definitely, probably, number one on my mind. If I'm using a third party service and I have to somehow trust them with my database credentials, then that requires a lot more convincing. Not to mention that if data has to traverse through your backend somehow, there are also concerns around various regulatory requirements, around data security and locality, like data can't travel outside of certain countries and so on, which you can completely sidestep by making this a local application that runs on your machine instead. And if I remember correctly, you released Dynobase around the same time, or at least not long after, AWS released the NoSQL Workbench for DynamoDB. So what's the difference between the two, and when should you use which one?


Rafal Wilinski: 15:07  

Yeah, so a little bit of backstory. When I was about to release Dynobase, probably like a week before, I realised that AWS had just released NoSQL Workbench. And I was kind of devastated, because, you know, I'd just spent two or three months working super hard on this vision that I had in my head, and now suddenly this big corporation is coming to kill my dreams, right? So yeah, the day they released it wasn't a super happy day. But the day after that, I decided, hey, maybe let's do a little bit of espionage, let's see what they actually came up with. So I downloaded it, and I realised that the way NoSQL Workbench approaches DynamoDB and the way I approach DynamoDB are radically different. NoSQL Workbench is in some way a competitor to Dynobase, but I feel like they are complementary tools. Where I feel NoSQL Workbench shines is when you're planning to start with DynamoDB, and especially when you're willing to use single table design. NoSQL Workbench has really good support for that. It allows you to model your data before you actually start developing your application, and it gives you the flexibility to design how things are going to be structured. NoSQL Workbench also has this very cool operation builder, which allows you to construct complex queries and basically generate code that is going to modify your DynamoDB data. And up to some point, NoSQL Workbench was not working super well with offline distributions of DynamoDB. To me, trying to mirror the AWS structure in the cloud locally is kind of an anti-pattern, but I still get that many people would like to have something like a copy of DynamoDB locally, and NoSQL Workbench was lacking that support for a long time. When it comes to Dynobase, I feel like Dynobase is much better when you're actually in development or in a production phase. So if you'd like to find some item where you don't know its key, or maybe you'd like to edit a few items, I feel like exploring your data set and manipulating your items is much, much better in Dynobase than in NoSQL Workbench. Also, we expose some more functionality around bookmarking, history, imports and exports. So yeah, to sum it up, I feel NoSQL Workbench is super good for planning, and Dynobase is much better when it comes to development, tweaking some values and exporting your data.
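
[Sidebar: not from the episode, but for context on "offline distributions of DynamoDB": pointing the AWS SDK at DynamoDB Local usually only takes an endpoint override, assuming DynamoDB Local is already running on port 8000, for example via the amazon/dynamodb-local Docker image.]

```typescript
// Hypothetical sketch: aiming the AWS SDK at DynamoDB Local instead of the cloud.
// Assumes DynamoDB Local is already listening on http://localhost:8000
// (e.g. started with: docker run -p 8000:8000 amazon/dynamodb-local).
import { DynamoDBClient, ListTablesCommand } from '@aws-sdk/client-dynamodb';

const local = new DynamoDBClient({
  region: 'us-east-1',                  // DynamoDB Local ignores the region
  endpoint: 'http://localhost:8000',    // the only real difference from a cloud client
  credentials: { accessKeyId: 'fake', secretAccessKey: 'fake' }, // any non-empty values work locally
});

const { TableNames } = await local.send(new ListTablesCommand({}));
console.log(TableNames);
```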


Yan Cui: 18:11  

Yeah, absolutely. I totally agree that the two are more complementary to each other than anything else. NoSQL Workbench is really good for modelling data, but for a lot of the day-to-day interactions you have with DynamoDB, I think Dynobase is where the action is, and it gives you a much more polished experience compared to what you get with the AWS console. Even though they keep trying to reinvent the console, there are still a lot of basic things it doesn't do easily. And the fact that you've got different tables in different accounts and regions, having to jump across those multiple accounts and regions to look at things, that's always been a bit of a hassle for me, because I've worked with many different customers and so many different AWS accounts and regions. So, when we chatted before, you said that one of the goals you had for Dynobase is to lower the barrier to entry for DynamoDB newbies by abstracting away some of the, I guess, complexities of AWS and DynamoDB in particular. Can you elaborate on how you plan to do that?


Rafal Wilinski: 19:20  

Yeah, sure. So imagine you're a developer who has spent his whole life dealing with SQL databases, and you've just been thrown into this new project, a new team, which is using DynamoDB and the NoSQL world. And you don't know what the heck a partition key, a sort key or a GSI is. You don't know anything about those things. And, for instance, you have to find some record, and you don't know the difference between a scan and a query. So Dynobase abstracts away the concept of scan and query, and it automatically adjusts whether it should do a scan or a query depending on what kind of attributes you're looking for. So if I'm, for instance, searching for a member by email, and I don't know whether that's an indexed field or not, I just search by email equals some value. Then Dynobase figures out whether that attribute is indexed, and if it is, it knows it can use a query instead of a scan. If not, it uses a scan. And after you've got the result, you can go to the Codegen tab and get the code that was used to generate that exact query. So when you're developing a feature, when you've found some data and you'd like to have the same kind of operation in your application, you can basically grab the code that Dynobase used to fetch those values, paste it, in Python, JavaScript or TypeScript, straight into your application, and you're good to go. Another thing is that when you're dealing with some really big tables, especially single table designs, they tend to have many attributes, sometimes even hundreds, because there are multiple entity types in one table. So when you are searching for something, you can type just a prefix of the attribute name, and Dynobase will suggest, for instance, if you type "e", that, hey, this table has attributes like email, or address, maybe title. So it suggests what kind of attributes you can look for in this table, so you don't have to know the context of the table you're dealing with. Or maybe it's going to prevent silly mistakes, like wrong capitalization, or using an underscore instead of a dash, all sorts of things like that. Apart from that, there is also slightly easier navigation. So, for instance, when you're looking at some kind of collection, you can right-click on the row that interests you and narrow down the filtered results by saying, hey, filter by this attribute equalling the value that interests you. This way you can narrow down your results by navigating through the data set. Yeah, I think those are the most important examples.
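
[Sidebar: a minimal sketch of the kind of query-versus-scan decision Rafal describes, based on the table description returned by the AWS SDK. The helper itself is hypothetical and deliberately simplified (it only checks partition keys); it is not Dynobase's implementation.]

```typescript
// Hypothetical sketch: choose Query vs Scan based on whether the searched attribute
// is a partition (HASH) key on the table or one of its global secondary indexes.
import {
  DynamoDBClient,
  DescribeTableCommand,
  QueryCommand,
  ScanCommand,
} from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({});

async function findByAttribute(tableName: string, attr: string, value: string) {
  const { Table } = await client.send(new DescribeTableCommand({ TableName: tableName }));

  // Is `attr` the partition key of the table itself?
  const isTableHashKey = Table?.KeySchema?.some(
    (k) => k.AttributeName === attr && k.KeyType === 'HASH',
  );
  // ...or the partition key of one of its GSIs?
  const gsi = Table?.GlobalSecondaryIndexes?.find((i) =>
    i.KeySchema?.some((k) => k.AttributeName === attr && k.KeyType === 'HASH') ?? false,
  );

  if (isTableHashKey || gsi) {
    // Indexed: issue a cheap, targeted Query.
    return client.send(new QueryCommand({
      TableName: tableName,
      IndexName: gsi && !isTableHashKey ? gsi.IndexName : undefined,
      KeyConditionExpression: '#a = :v',
      ExpressionAttributeNames: { '#a': attr },
      ExpressionAttributeValues: { ':v': { S: value } },
    }));
  }

  // Not indexed: fall back to a (much more expensive) filtered Scan.
  return client.send(new ScanCommand({
    TableName: tableName,
    FilterExpression: '#a = :v',
    ExpressionAttributeNames: { '#a': attr },
    ExpressionAttributeValues: { ':v': { S: value } },
  }));
}
```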


Yan Cui: 22:40  

Okay, so the autocomplete feature sounds like a pretty great idea. But I'm dubious about the auto-switching between queries and scans, because the performance and cost implications there are huge. That's one of the things that PartiQL also offers, that it will automatically switch between scans and queries based on the attributes that you're selecting, which I think is actually incredibly dangerous, because it gives the illusion of simplicity, but at the same time potentially traps you in a much more dangerous world where your cost and performance are highly unpredictable, and probably significantly worse compared to if you had spent some time actually learning the difference between queries and scans. Because, you know, as people that use DynamoDB day to day, we have a saying that friends don't let friends use scans, because it's just terrible for performance and for cost. So I guess, on that, what's your feeling in that case? On one hand, you are making it simpler for people to use DynamoDB, but on the other hand, for me it's akin to handing them a loaded gun that they can really easily point at their foot and just blow it off. Do you think people should be investing more time in actually learning DynamoDB, maybe reading Alex DeBrie's book? Or is it just a case of having to deal with it, having to meet customers where they are, even though by doing so there's a very good likelihood that they can hurt themselves?


Rafal Wilinski: 24:14  

Yeah, I agree with you that there is some danger there, and we decided to try to educate users a little bit. So, for instance, if you have an email attribute that is indexed, when you type the email we display a small label underneath the form saying, hey, this field is indexed, so we are going to issue a query instead of a scan. And we kind of believe that maybe users will then become a little bit interested in, hey, what's the difference between a query and a scan, and maybe when they issue a scan they will notice that it's happening really, really slowly. Apart from that, we are also trying to educate people by publishing articles about, you know, what is DynamoDB? What is a query? What is a scan? Why shouldn't you be using scans? So on our page we've committed hundreds of hours to trying to explain some of those constructs. Of course, you can also buy Alex's book, and I think it's a super good resource for that. But you know, a tool is a tool, it can be used in a good and a bad way, and we cannot take 100% responsibility for what you're going to do with it. We are going to be as explicit as possible about our actions. For instance, when we are doing a scan, there's a big prominent button that says scan instead of query. And I think this is still a little bit more explicit than PartiQL where, correct me if I'm wrong, you don't have feedback on what happens underneath, because when you issue a PartiQL query, if you don't know the table structure, you don't know whether it's going to be converted to a scan or a query. You can protect yourself a little bit by restricting IAM permissions to, for instance, forbid scans altogether, but it's still kind of a sophisticated technique, and I feel like not many people are using it. And sometimes scans are still useful.
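
[Sidebar: a minimal sketch of the IAM guardrail Rafal mentions, written with the CDK in TypeScript since that's the stack discussed earlier in the episode. The construct names are made up; the point is the explicit deny on dynamodb:Scan, which overrides the broader grant.]

```typescript
// Hypothetical CDK sketch (CDK v2 import style): explicitly deny Scan on a table
// for an application role, so an accidental scan (e.g. via PartiQL) fails instead
// of silently reading the whole table.
import * as cdk from 'aws-cdk-lib';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'NoScanStack');

const table = new dynamodb.Table(stack, 'UsersTable', {
  partitionKey: { name: 'PK', type: dynamodb.AttributeType.STRING },
  sortKey: { name: 'SK', type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
});

const appRole = new iam.Role(stack, 'AppRole', {
  assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
});

table.grantReadWriteData(appRole); // normal CRUD access...

appRole.addToPolicy(new iam.PolicyStatement({
  effect: iam.Effect.DENY, // ...but an explicit deny always wins over the grant above
  actions: ['dynamodb:Scan'],
  resources: [table.tableArn, `${table.tableArn}/index/*`],
}));
```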


Yan Cui: 26:27  

Scans are definitely useful in some cases, but they should be used in very specific cases where you do need to scan a table, not for your everyday transactions where you want to get data from a table based on some attributes. The fact that you don't know you need to index it means it does a full scan on every single user request, and that would be a terrible thing to happen. And I think what you mentioned there in terms of having some visual feedback, that's great. Me personally, I would be more interested to see something like a red warning when you work out that this is going to be a scan, and at that point, that would be, I guess, more of a signal for me that, okay, I need to understand what a scan is and why this tool is telling me that a scan is probably not a good thing to do. Because doing it inside the Dynobase tool itself is not a big deal. But if I was to take what I'm doing, the code you generate, and put it into my application code that's going to run every time someone hits my application, that is where the problem is, because that's where it's going to get out of hand really quickly. If you've got an application that gets hit, I don't know, 100 requests per second, and every single one of them becomes a scan, then that's going to be a problem for your application in terms of performance and cost. So I guess it's less of a problem for Dynobase, because I'm using it to look at data myself, to understand it, to play around with it. It's more about what happens if I translate the same practice into my application, which is going to be handling actual real user requests at volume, not just a single person looking at the data trying to understand it. So I guess that's where my particular concern with PartiQL is, because it's something that people are going to try to use, at least in their applications. And I guess I'm waiting to hear more anecdotal evidence to see whether or not that's something people actually run into, because they accidentally turned a query into a scan, because they didn't realise they were missing the indices and things like that. Another thing I want to mention is that I saw a post you wrote on Twitter recently that you have, for the first time ever, hit $10,000 of monthly revenue, a year after releasing Dynobase. That's a pretty good achievement within such a short time, and probably more than a lot of people make in a full-time job in a month. So not bad for a side hustle. How did you do it? Are there any tips and tricks you can share with anyone who has similar aspirations to beat AWS at the UX game?
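
[Sidebar: some illustrative back-of-the-envelope numbers on why "every request becomes a scan" blows up, assuming a 1 GB table and eventually consistent reads; these figures are assumptions, not from the episode. A Scan is charged for all the data it reads, at roughly 0.5 read units per 4 KB.]

```typescript
// Illustrative maths only; the table size and traffic are assumptions.
const tableSizeKB = 1 * 1024 * 1024;               // assume a 1 GB table
const readUnitsPerScan = (tableSizeKB / 4) * 0.5;  // ≈ 131,072 read units per full scan
const requestsPerSecond = 100;
const readUnitsPerSecond = readUnitsPerScan * requestsPerSecond; // ≈ 13.1 million read units/s

// Compare with a Query fetching one small item: roughly 0.5 read units per request,
// i.e. about 50 read units/s for the same traffic - several orders of magnitude cheaper.
console.log({ readUnitsPerScan, readUnitsPerSecond });
```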


Rafal Wilinski: 29:07  

Yeah, I have lots of ideas in mind, and I feel like the most important part is to not go all in on your first idea. I always have this backlog of ideas that I jot down in my notebook, and then, during showers, I have these, you know, imaginary conversations with myself about whether I should do it or not. And before committing to something, you should really validate whether it's going to be needed. For instance, in Dynobase's case, I asked a few engineers that work with AWS and they were super excited and said, yes, this is exactly what we are looking for. So I had this validation that gave me the initial motivation. Apart from that, I think you shouldn't be putting all your eggs into one basket. I think you shouldn't be quitting your day job when you are trying to build on top of some really vague idea that may end up, you know, not so good. So I'm a really big fan of Daniel Vassallo's idea of a portfolio of small bets, right? You can start this side hustle, you can still be working full time or part time, maybe you can start doing some video courses or writing a book, which is kind of popular now. This way, you can diversify your strategy and become more antifragile. When it comes to the actual product, I think the most important part is to make something that people actually want, you know, apart from yourself. Basically, there needs to be a market for it. You need to find your niche. In Dynobase's case, we realised that there is a big gap between the technology and the accessibility of that technology, so we decided that with improved UX and UI we were going to close this gap, and that's what happened. When it comes to actually promoting your business and making sure that it's well marketed, I think a very good strategy that's being promoted lately is to learn in public and build in public, because it basically encourages other people to start their own journeys. It's like a free way to market things, especially in the era of Twitter, when you have this kind of infinite leverage. You can show your work in progress, you can share your ideas, you can share some thoughts, and this is free marketing. And I feel like, for instance, you, or Alex DeBrie, or other people that are quite famous in the AWS space understand this perfectly. Twitter is a super powerful tool in that regard, and it can help you create this initial pool of super engaged people who are interested in your work. They are likely to convert into paying customers, and maybe they will even become ambassadors for your product. So that's super good. You also cannot forget about SEO. I don't like this part, because, you know, when it comes to building SEO, you need to write a lot of articles. So, for instance, you need to explain what is a query, what is a scan, but you also have to, for instance, create an article about how to run an offline distribution of DynamoDB. There are probably hundreds of articles like that, it has already been well explained, but you have to do it in order to be ranked by the search engines. It's kind of a dirty game, but it is what it is. No one said that starting your own business is only a pleasure and only building product. There is a lot of value extraction, not just value creation.
I think it's also super important to just, you know, build a good product, build something that you would like to use, build something that you would give to, for instance, your friends and that would be useful to them. It's kind of a silly idea, but I feel like some people are neglecting that in some way. They are sometimes putting too much focus on the marketing, or just trying to hustle, trying to pretend how busy they are, instead of, in fact, shipping the features, which is, you know, actually the core essence of the product. And also, if you're starting a small project, a small startup, you can be honest. You can say, hey, it's just me. You don't have to disguise yourself under the name of a big corporation or pretend it's some kind of company. You can be very personal with people, you can engage in personal relations, you can joke. There's a whole new level of things in how you can market yourself and how you can market and promote the product. Lastly, I think it's also important to charge more, because at the very beginning Dynobase was a free tool that had a premium version with some extra features, and basically no one paid for that. Once we disabled the free tier, made Dynobase paid only and raised our pricing by 300%, we started earning much, much more. And the reasoning behind that is: if you are going to create a tool that saves you, for instance, just five minutes per day, and you multiply that by the work days in a year, it is going to pay off massive dividends in time savings. So especially when you're dealing with IT professionals, or other domains that are well paid, you can really charge people more.


Yan Cui: 35:01  

Again, so many small but really good things to talk about, because you've mentioned Daniel Vassallo's small bets strategy, something that I've actually accidentally also ended up doing, even though I didn't realise that's what I was doing. But when I read the stuff he was posting, it made perfect sense, because that's basically how I've been structuring my business, my consulting business: across workshops, across the consulting and advisory work, as well as the video course, as well as actually doing hands-on development work with clients, and also doing some contract work with Lumigo as a developer advocate. And for now, it's been almost a perfect split in terms of revenue across all five of them. And yeah, it has not only reduced the risk, but also massively increased my revenue, and it makes a nice mix of passive as well as active income from all the different activities. And definitely, in terms of being independent, that is a much better and safer way to succeed and to reach financial independence, compared to putting all of your eggs into one basket and hoping that one idea flies off the shelves, which, oftentimes, probably won't be the runaway success that you hoped for. I guess that's the truth with startups, that there's a very small chance of succeeding, at least succeeding wildly. You also talked about the whole idea of charging more, and that's something that Corey Quinn, and many of the consulting books I've read, all say as well. And you can apply that to your products as well as to your services. That's something I've been gradually working into my practice as well, in terms of how I negotiate prices and understanding the value I provide versus the cost savings it's going to bring to the customers. So, awesome, all very good stuff. Oh, yeah, and the whole learning in public thing, Shawn and many others have been very vocal advocates of it. I think Nader is another one who has often talked about learning in public and also building in public. Like you said, that's something that I've also been doing for a long time, and it has definitely helped a lot in creating what is, for now at least, a pretty successful new video course, in terms of adoption as well as revenue, with the AppSync Masterclass. So I want to circle back to the whole UX thing, because that's where, essentially, I think a lot of people can still make a living. In fact, there's a massive market for SaaS providers that essentially compete against Amazon on UX, or DX, developer experience. So what is your strategy for staying at least one step ahead of AWS? Because they are trying to catch up constantly, even though it's a question of how good a job they are doing. Do you have some specific strategy in mind for how you want to stay one step ahead of AWS?


Rafal Wilinski: 38:05  

I think it's kind of hard to stay ahead of AWS, because basically they created DynamoDB, right? They have an internal roadmap, they know the majority of the use cases, and they are probably the biggest client of DynamoDB, because amazon.com is running on DynamoDB. So it's hard to predict their next steps. But because we are just two engineers working on one product, we can be much more agile, we can be faster, we can react to things super quickly, unlike Amazon, which is this big corporation with six-, nine- or twelve-month plans, with leadership meetings, with massive plans. They cannot do that. Every step must be calculated, must be planned. There is a big overhead when it comes to implementing even the smallest features, while we can do it in a day. For instance, there was a case where AWS announced the support for PartiQL, or actually they hadn't announced it yet, but it was leaked in the changelog notes for CloudFormation, or maybe for the SDK. So when I saw that the newest AWS SDK was using PartiQL, I actually managed to implement it before the public release was announced. So this is kind of the benefit that you can use when fighting against the Goliaths. You can be this small warrior that can pull some dirty tricks in order to outsmart them, or just be faster. Apart from that, as I mentioned before, you can also be more direct, you can have more personal relations with people, with developers. And yeah, I think that's pretty much it.


Yan Cui: 40:06  

Yeah, I love your point about agility, because as much as people talk about how fast AWS moves, they move fast in the context of being this 5,000-pound gorilla, or, like you said, a Goliath. They are really fast for a really large company, but that's nothing compared to how fast you can move as a much smaller operator, in this case two engineers working on this by yourselves. You can move 100 times faster than AWS. They will take however many meetings to decide how to do something when you could have just done it and shipped it to your customers. And of course, like you said, you can be much more personal in terms of how you interact with your customers as well. And I guess that kind of brings me to where I hear a lot of people talk about vendor lock-in and how the only way to compete with AWS is to not use AWS, and things like that, which is just silly, because the way to compete with AWS is to use the best tools that you can get, and sometimes that is using AWS, and you compete by being faster and more agile than they are. And you can do that very easily if you don't have, you know, 10,000 people that you have to go through to get approval on something. I mean, try to get AWS people to come and talk about what they're doing and stuff like that; it requires so many levels of approval for something to happen. And it's similar on the product side of things as well: there are a lot of meetings, and a lot of time has to pass before they can decide to act on something. You can be much more agile, and that's your best weapon against a massive gorilla like Amazon. So, I guess with that, what's next for Dynobase? Are you working on any exciting new features at the moment?


Rafal Wilinski: 41:49  

So Dynobase is already pretty well established. I mean, if you look at it, there are not that many features that are lacking. We can always improve the UX and UI a bit, we can always fix some bugs. We are working on Codegen support for more languages. Right now, when you issue a query or a scan, as I mentioned before, we generate code in JavaScript or TypeScript that you can copy and paste into your application, and it works. So we would like to add support for more languages. For instance, I've seen that PHP is being used with serverless more and more, there is also Go, which claims to be the language of the cloud, and Rust is getting traction. So these are all languages that we may also auto-generate code for, to make working with DynamoDB a little bit easier. And circling back to single table design, I feel like there's still a big gap between the performance and the benefits of single table design and how it is used or structured. What I mean by that is that when you're using single table design, you probably have the partition key named PK, the sort key named SK, then you have many keys like GSI1PK, and it all sounds super cryptic. It's not intuitive, it's not easy to use. It kind of reminds me of using assembly language, where you have to target an exact register or memory address by its identifier. To me it seems like there is some kind of abstraction missing between the application layer and the database layer. And also, this whole idea of putting all entities into one table, I get it, it works, it works very well, but at least intuitively for me, even after two years of using that technique, it kind of feels wrong. So I feel like there is something that needs to be put in between those two worlds, and we at Dynobase are trying to invent some kind of pattern, maybe facets, maybe something like virtual tables, which is going to present that information in a more digestible format, so it can be better understood by developers, especially those who are just onboarding to DynamoDB.
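
[Sidebar: to illustrate the generic key naming Rafal is describing, a made-up example of two entity types sharing one single-table design; this is not Stedi's or Dynobase's actual schema.]

```typescript
// Hypothetical single-table items: generic key names carry the real meaning,
// which is exactly the "assembly language" feel described above.
type SingleTableItem = {
  PK: string;      // partition key, e.g. "USER#123"
  SK: string;      // sort key, e.g. "PROFILE" or "ORDER#2021-03-01"
  GSI1PK?: string; // overloaded GSI keys for alternate access patterns
  GSI1SK?: string;
  [attribute: string]: unknown; // the actual entity attributes
};

const user: SingleTableItem = {
  PK: 'USER#123',
  SK: 'PROFILE',
  email: 'jane@example.com',
  GSI1PK: 'EMAIL#jane@example.com', // lets you look the user up by email via GSI1
  GSI1SK: 'USER#123',
};

const order: SingleTableItem = {
  PK: 'USER#123',         // same partition as the user, so one Query returns both
  SK: 'ORDER#2021-03-01',
  total: 49.99,
};
```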


Yan Cui: 44:33  

Yeah, I have so many gripes about single table design in DynamoDB. When you talk to people like Alex, and even Rick Houlihan as well, they're all very clear about when you should be using single table design, but often that specific nuance is missing in a lot of talks, at least at re:Invent. Oftentimes people come to me and say, yeah, but that's what AWS tells us to do, that's the best practice, right? Use single table design, completely oblivious to the context in which single table design works and the benefits it brings, which, for most people, means you're just not operating at a large enough scale for any of that benefit to actually mean anything to you, in terms of performance or in terms of actual cost optimization and cost savings. But you are going to feel the complexity, the pain, every day. So there's no return on investment for most companies, unless you're operating a very high-traffic website or application. But, yeah, that's probably something that I've ranted about too many times on various different platforms. I'd love to grab Alex one day and just have a one-to-one with him on this podcast, maybe. But yeah, thank you so much, Rafal, and I look forward to the idea of virtual tables and seeing how you guys pull it off, because that's a very interesting idea. And hopefully that will, well, I guess I have to wait until I see the execution to see how well it helps in simplifying the complexity overhead of using single table design.


Rafal Wilinski: 46:18  

Yeah, I will make sure that you will be one of the first engineers to get the alpha preview for that feature.


Yan Cui: 46:24  

Thank you. And, yeah, so again, thank you so much for taking the time to talk to us today. And best of luck with Dynobase and Stedi, and say hi to everyone at Stedi for me.


Rafal Wilinski: 46:35  

Yes, thank you, bye.


Yan Cui: 46:37  

Cheers. Bye bye. 


Yan Cui: 46:51  

So that's it for another episode of Real World Serverless. To access the show notes, please go to realworldserverless.com. If you want to learn how to build production ready serverless applications, please check out my upcoming courses at productionreadyserverless.com. And I'll see you guys next time.