Streaming Audio: Apache Kafka® & Real-Time Data

Capacity Planning Your Apache Kafka Cluster

August 30, 2022 | Season 1, Episode 231

How do you plan Apache Kafka® capacity and Kafka Streams sizing for optimal performance? 

When Jason Bell (Principal Engineer, Dataworks, and founder of Synthetica Data) begins to plan a Kafka cluster, he starts with a deep inspection of the customer's data itself—determining its volume as well as its contents: Is it JSON, straight pieces of text, or images? He then determines whether Kafka is a good fit for the project overall, a decision he bases on volume, the desired architecture, and potential cost.

Next, the cluster is conceived in terms of some rule-of-thumb numbers. For example, Jason's minimum number of brokers for a cluster is three or four. This means he has a leader, a follower and at least one backup.  A ZooKeeper quorum is also a set of three. For other elements, he works with pairs, an active and a standby—this applies to Kafka Connect and Schema Registry. Finally, there's Prometheus monitoring and Grafana alerting to add. Jason points out that these numbers are different for multi-data-center architectures.

Jason never assumes that everyone knows how Kafka works, because some software teams include specialists working on a producer or a consumer, who don't work directly with Kafka itself. They may not know how to adequately measure their Kafka volume themselves, so he often begins the collaborative process of graphing message volumes. He considers, for example, how many messages there are daily, and whether there is a peak time. Each industry is different, with some focusing on daily batch data (banking), and others fielding incredible amounts of continuous data (IoT data streaming from cars).  

Extensive testing is necessary to ensure that the data patterns are adequately accommodated. Jason sets up a short-lived system that is identical to the main system. He finds that teams usually have not adequately tested across domain boundaries or the network. Developers tend to think in terms of numbers of messages, but not in terms of overall network traffic, or in how many consumers they'll actually need, for example. Latency must also be considered: if the compression on the producer's side doesn't match the compression on the consumer's side, latency will increase.

Kafka Connect sink connectors require special consideration when Jason is establishing a cluster. Failure strategies need to be well thought out, including retries and how to deal with the potentially large number of messages that can accumulate in a dead letter queue. He suggests that more attention should generally be paid to the Kafka Connect elements of a cluster, something that can actually be addressed with bash scripts.

Finally, Kris and Jason cover his preference for Kafka Streams over ksqlDB from a network perspective. 

Kris Jenkins: (00:00)
Hello, you're listening to the Streaming Audio podcast. And today we're talking about the realities of going into production with an old friend of Streaming Audio's and an old friend of mine, Jason Bell. One of the things that makes Jason interesting is he's spent a lot of time with Kafka from an unusual angle to me. He doesn't so much use it as he makes sure it's available to be used. He's the operations guy. He spends his time planning clusters and setting them up, maintaining them as they live and grow. And understandably, he's developed some hard-won knowledge about where it can go wrong and what you need to watch out for and what you should plan in advance.

Kris Jenkins: (00:41)
So we thought we'd get him in for some advice. And he has battle scars, yes indeed. He has scars from the whole system, from brokers and topics to Kafka Streams and connectors and ksqlDB. And we start with one of my personal favorite axes to grind, understanding your data model. We should probably actually ask him to write a blog post about that one day. It would make a good guest post on Confluent Developer, which is our education site. We put everything from blog posts and old episodes of this podcast to free courses covering Kafka internals, security, stream processing, and lots more. You can check it out at developer.confluent.io, but for now let's get the episode started. I am your host, Kris Jenkins. This is Streaming Audio. Let's get into it.

Kris Jenkins: (01:36)
My guest today is Jason Bell. Jason, welcome back to Streaming Audio.

Jason Bell: (01:40)
Thank you, Kris. Thanks for having me.

Kris Jenkins: (01:43)
Good to have you. You're a return guest to Streaming Audio.

Jason Bell: (01:46)
This is my third time.

Kris Jenkins: (01:48)
Third time. But your first time with me, so behave.

Jason Bell: (01:52)
Let's try. Let's see where we go. Is this the reason that Tim left? He just ran off?

Kris Jenkins: (01:59)
There is rumor that Tim Berglund was broken by [crosstalk 00:02:03].

Jason Bell: (02:05)
Understandable.

Kris Jenkins: (02:06)
We'll find out if I'm made of more resilient stuff on the Jason Bell axis and we'll see how it goes.

Jason Bell: (02:15)
Considering how much time we're already into this, in the edit, we're doing well.

Kris Jenkins: (02:23)
We're doing well. But you and I have a background.

Jason Bell: (02:24)
We do.

Kris Jenkins: (02:25)
It goes beyond Kafka. Because we first met years ago at Clojure Exchange.

Jason Bell: (02:31)
2016, I believe.

Kris Jenkins: (02:33)
16, that's a long time in internet years.

Jason Bell: (02:36)
It is a very long time in internet years. And it was around about the time when I started using Kafka.

Kris Jenkins: (02:43)
Really?

Jason Bell: (02:44)
Yeah, of course. I'm quite old.

Kris Jenkins: (02:52)
No. I was a bit too slow on that. No, you're not old, honest.

Jason Bell: (02:55)
Stop. You're old. So 2016, I was doing the talk at ClojureX about a technology called Onyx, which was a masterless distributed system.

Kris Jenkins: (03:09)
Sounds familiar.

Jason Bell: (03:11)
And it could read from Kafka and that's why I was interested in it. And it was built in pure Clojure, which was really, really, really, really interesting to me. And as you are well aware, I was working with a lovely gentleman and still a dear friend of mine, Bruce Durling.

Kris Jenkins: (03:26)
That would be amazing.

Jason Bell: (03:29)
Yes, the amazing Bruce Durling. We're trying to get that changed by [inaudible 00:03:33].

Kris Jenkins: (03:34)
If you're listening to this and you don't know Bruce, he has been the beating heart of the London Clojure scene for a long time and he's a great guy.

Jason Bell: (03:41)
There's a good group of people around all of that stuff. And I still use Clojure on a daily basis for my own stuff. So I'm still involved, kind of. So that's how I know... and you were sat in the front row, not heckling. That came later.

Kris Jenkins: (04:02)
I'm a front row pay attention kind of guy.

Jason Bell: (04:04)
In all honesty, most of that audience were, because that was the first time I did that talk. And then three years later they were heckling me. It was hilarious. Because I actively encourage heckling in all my talks as you are fully aware. So it's all good fun.

Kris Jenkins: (04:24)
So take me on the journey from 2016, you first discovered Kafka to where you are today?

Jason Bell: (04:31)
So it was actually working with Bruce that I got into Kafka. So we were supporting a customer who was sending quite a lot of data through and then using Kafka for it. So it's just one of those things I picked up and this was early Kafka. This was back when Kafka still kept consumer offsets in ZooKeeper.

Kris Jenkins: (04:52)
Oh right. Okay.

Jason Bell: (04:53)
So we're going back. And there was no streaming API, there was no Kafka Streams. There was no Kafka Connect. There was no KSQL or anything like that. It was just here's brokers, here's ZooKeepers. Messages didn't even have keys at that point.

Kris Jenkins: (05:09)
Gosh.

Jason Bell: (05:10)
That's how far back we're going. So it's been interesting. My journey through Kafka... and I think Tim and I spoke about this as well. I've watched these technologies come and get added to the ecosystem and it's been really interesting. Things like the streaming API were really important to me. One of the first jobs that I built was on the streaming API. And I actually did another ClojureX talk on it. It was, How I Bled All Over Onyx, I think is... that's what we called the talk. Because I was pushing way too much data through Onyx, to be fair. I was breaking it left, right, and center. And Kafka Connect didn't exist at that point. So I was using the streaming API for writing persistence out to sink stores like S3 or databases, that kind of thing. So they were all handwritten at the time. And then Kafka Connect came out and then KSQL came out. And I saw Michael Molyneaux do the KSQL demo at the Strata Data Conference, the O'Reilly conference in London. And he managed to write off 80% of my streaming API jobs in one talk. I was like, that's my career over.

Kris Jenkins: (06:29)
That's a mixed blessing when those things come along. It's like, all my work's going to disappear and that's good and bad.

Jason Bell: (06:36)
Interestingly, I became more known for being able to run Kafka than necessarily do development work on it. So I came to a fork in the road where I was doing less development and more advising on putting the clusters together. And that's how I landed the gig with Digitalis in 2019 as a Kafka expert. So I've been working with them for the last three years and I just finished last week. And I'm now working for a company called Dataworks, who don't use Kafka, they use Pulsar.

Kris Jenkins: (07:12)
Oh interesting. So we're going to stay on the topic of Kafka for now, unsurprisingly.

Jason Bell: (07:20)
Well it's contractually the way isn't it?

Kris Jenkins: (07:23)
It's not a hundred percent guaranteed, but we do have a natural gravity towards that topic.

Jason Bell: (07:29)
I think the shareholders may approve of that.

Kris Jenkins: (07:33)
In the long run. But the question is, do the listeners approve? That's the big question, that we always focus on. So we're going to talk about Kafka. Tell me about Digitalis first. Why were they using Kafka? What did they use it for?

Jason Bell: (07:47)
They were a managed services company, so a lot of their clients were using Kafka.

Kris Jenkins: (07:50)
Oh, okay.

Jason Bell: (07:53)
So that's how I landed with them.

Kris Jenkins: (07:56)
So you must have ended up planning a lot of different clusters for them?

Jason Bell: (08:00)
Yeah, and a lot of throughput conversations and a lot of design work around data. Once we started talking, and you'll probably get the picture that I'm quite a pen-and-paper guy, I need to know the process from one side to the other. Where does the data start and where does it end and where does Kafka fit in the middle? That's the way I see things. So we planned a lot of things around that. We did customer development work as well. And just as I was leaving, Debezium was coming up in conversation an awful lot as the next thing for change data capture stuff as well. So I had a great time with Digitalis, they're a great bunch, but I decided it was time for a new challenge and time to move on.

Kris Jenkins: (08:49)
We've all hit that point in our careers. The best thing you can get is that you're moving on for good reasons.

Jason Bell: (08:55)
Absolutely. And I am on this one. Definitely, definitely. I love the team at Digitalis, they're a fantastic bunch. And they're one of the rare companies where I can say, hand on heart, I never had a disagreement with anyone in three years. Everyone was professional, knew their stuff and just put the customers first. It was brilliant. And that's rare, I've found, in my career anyway.

Kris Jenkins: (09:21)
It can be a roller coaster at times.

Jason Bell: (09:22)
It can be. I've worked with some very good companies, I've worked with some bad ones as well. That's the way it goes.

Kris Jenkins: (09:29)
Let's not go onto that because-

Jason Bell: (09:30)
Let's not, let's not.

Kris Jenkins: (09:31)
... whenever you get enough-

Jason Bell: (09:32)
Because you'll get me to start naming names and then I'll have lawyers and things like that.

Kris Jenkins: (09:38)
Programmers have those conversations a lot, but they're not recorded and they're not broadcast.

Jason Bell: (09:44)
There should be a speakeasy for developers somewhere.

Kris Jenkins: (09:47)
There probably is, somewhere.

Jason Bell: (09:48)
There probably is. We don't know about it then.

Kris Jenkins: (09:51)
If we did, we couldn't say.

Jason Bell: (09:54)
Or if you know about it, what have you said about me?

Kris Jenkins: (09:57)
Moving on, moving on. Because this is what I really want you to tell me. I want you to teach me about planning a cluster.

Jason Bell: (10:04)
Ah right. Haven't [crosstalk 00:10:07].

Kris Jenkins: (10:09)
No, come on.

Jason Bell: (10:10)
Any questions?

Kris Jenkins: (10:12)
A great many questions. Thank you very much. Where do you start? New client comes along, they've got to plan a cluster, where do you even begin?

Jason Bell: (10:24)
I deep dive on actually what the data is. I go a step before and say, "What is it we're actually dealing with here?" And that's actually quite a relevant conversation now based on just what I said about Debezium. If we want to acquire data from a database, then Kafka Connect's coming first, with Debezium obviously being a connector. And I drill down to the very core components of, what's the message, what is it? Is it a JSON thing? Is it just a straight piece of text? It could be an image. Nothing to stop you sending an image through Kafka. And have those conversations first. And then it becomes, how many are you sending? Because I've noticed... and I did a Cleveland meetup during lockdown because Dave Klein kept asking me to do meetups in Cleveland. Just so I could say, "Hello Cleveland."

Kris Jenkins: (11:31)
Of course that's why you did it, of course.

Jason Bell: (11:34)
Had to. I just have to, it was great. No, I think three people got the joke, but it was great. I was happy.

Kris Jenkins: (11:41)
With a bit of luck on this podcast another three people just got that joke.

Jason Bell: (11:44)
We'll put a link in there.

Kris Jenkins: (11:46)
Which brings your annual audience total up to six. Seven if you include your wife.

Jason Bell: (11:50)
I did that talk in Northern Ireland. Someone fell off the chair. There was one person in the room who got it and they fell off. It was hilarious. Anyway. So data, what is it? What size is it and how is it shaped? How many are you sending? And at that point you start getting a picture. And one thing I have learned is there's a lot of instances where the first question is, and please don't take this the wrong way, "Do you actually need Kafka?"

Kris Jenkins: (12:22)
It's a very fair question.

Jason Bell: (12:27)
Because volume... we're talking about a volume game here. And we're usually talking megabytes per second. And I know a lot of institutions, companies and clients that have not gone through that volume at all. And the question then remains, why are you doing it like this? What's the base reason for doing it? And some of them are legitimate. It's like, we have this source of data, but we have all these different departments that want to consume this data. Or there is the argument of, there's this data, we want to transform it. We want to do X, Y, and Z with it and we want to persist out here. So that's where I start. I try and get an end to end of what's actually involved. Because like I was saying at the start, Kafka's evolved over the last seven, eight years, especially with the amount of tooling that's been built around the core concepts of the brokers. So when I came on board originally it was, there's these brokers and there's these messages and they go through and you monitor Zookeeper with your life and then that's it. You may laugh. When it goes wrong it goes wrong.

Kris Jenkins: (13:49)
No, I can believe.

Jason Bell: (13:53)
So when things like streaming API came along, it was like, well that means we can do this. And then there's transforms and filtering. Now it gets interesting. We can [crosstalk 00:14:04]. And then with Kafka Connect it's like, we can sink out to this. That's great but that comes with a whole different set of considerations to make. Especially with side-effecting things like database connections and HTTP endpoints with rate limiting and all that kind of thing. So there's lots of things that people don't talk about. We had clients that would persist out to Splunk, for example, via HTTP. And we used to just DDoS Splunk.

Kris Jenkins: (14:35)
I can see how you'd manage that.

Jason Bell: (14:37)
It's not the done thing. It's quite impolite. You lose friends very quickly. It's a lot of Christmas cards to write later on in the year, that kind thing.

Kris Jenkins: (14:49)
I think there's a connector for that.

Jason Bell: (14:51)
There probably is now.

Kris Jenkins: (14:52)
A Christmas card sink. That would be perfect.

Jason Bell: (14:53)
Christmas card sink, plus the source connector. The source connector of how many people I've offended this year will automatically create the Christmas card for me and send it out. I like the sound of that already. There's probably a talk in that somewhere.

Kris Jenkins: (15:08)
Yeah actually. No, actually you probably technically could do that with something like Moonpig.

Jason Bell: (15:15)
Hey, that'd be fab.

Kris Jenkins: (15:16)
That would be a really good curve ball conference talk.

Jason Bell: (15:23)
Keep talking. I've got a VC in mind already. That comes... anyway. So going back to cluster planning.

Kris Jenkins: (15:35)
Start with the context. That's your number one tip.

Jason Bell: (15:37)
Absolutely. And start with the customer, start with the customer context and be honest, if you think it's not a thing for Kafka then say so.

Kris Jenkins: (15:44)
There should be a good reason for all the technology we use.

Jason Bell: (15:49)
Absolutely. And I think pre-Confluent Cloud... obviously it's now a lot easier to scale something up on the cloud and be metered per gigabyte per month or however, but I'm so used to on-prem stuff where cost is a factor. So even a base cluster would be three ZooKeepers, so you've got quorum, three or four brokers, and then you'd have two Kafka Connect nodes, one active, one standby, distributed to take the load. Same with Schema Registry, if you're into Schema Registry: you'd have those behind a load balancer, one active, one on standby. And then there's Prometheus monitoring and Grafana alerting. And so what seems to be a fairly simple thing, where you think, we'll spin up a Kafka cluster, has now become 15 nodes and is costing X amount a month.
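
For reference, here's that baseline layout as a rough tally, in the bash register Jason favors. The exact counts are illustrative (multi-DC layouts differ, as he notes later):

    # Rough node tally for the baseline HA cluster described above
    zookeepers=3        # quorum
    brokers=4           # leader plus followers, survives a broker loss
    connect_workers=2   # distributed mode: one active, one standby
    schema_registry=2   # behind a load balancer
    monitoring=2        # Prometheus and Grafana hosts
    total=$((zookeepers + brokers + connect_workers + schema_registry + monitoring))
    echo "minimum node count: $total"   # 13 here, creeping toward 15 with load balancers and admin hosts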

Kris Jenkins: (16:50)
If you want the whole thing to be redundant, then-

Jason Bell: (16:52)
It's going to cost.

Kris Jenkins: (16:54)
... those numbers add up.

Jason Bell: (16:55)
They do. I'm not going to say it's something we don't talk about. It's just something, I don't think, we highlight enough. Put it in the cloud. The five words that I hate most of all are, put it in the cloud.

Kris Jenkins: (17:14)
You got to justify that, come on.

Jason Bell: (17:19)
It's just a bunch of servers, isn't it? How can I put it lightly? No, the clients I've worked with have all got sensitive data, so it's not going to go onto the cloud. It has to be run on-prem with big concrete walls and men stood next to it... and people, not men. With spanners and axes and spades and stuff just protecting the data. I don't think that actually works in banking, but it's [crosstalk 00:17:51].

Kris Jenkins: (17:51)
I don't think that's how data security is generally done. I think you're thinking of Dungeons and Dragons, but that's another podcast.

Jason Bell: (17:59)
So anyway, we rolled the D20 to see how many brokers we need and that was swift. You've got to admit that was swift.

Kris Jenkins: (18:08)
I like that. I've heard of worse planning strategies.

Jason Bell: (18:14)
It's not a bad one.

Kris Jenkins: (18:17)
I'm going to push you onto some serious planning strategies, with broker numbers.

Jason Bell: (18:27)
Four's always a good start. Three's a good starting point. Obviously you need a leader and a follower.

Kris Jenkins: (18:34)
Yes.

Jason Bell: (18:37)
And then add another one because if you've got two and the follower goes or the leader goes, you've got one leader, but no follower. It's not going to end well, it won't end well. So three is obviously your magic number to start with. Same with ZooKeeper: quorum, three at the minimum.

Kris Jenkins: (18:54)
Same with the monarchy, same with the monarchy, heir and a spare. That's what they say.

Jason Bell: (18:58)
I'd never heard that before.

Kris Jenkins: (18:58)
Have you not?

Jason Bell: (18:58)
No.

Kris Jenkins: (19:04)
That used to be the rule for having Royal children, heir and a spare.

Jason Bell: (19:07)
So that's why he went to California. Where were we before you said that?

Kris Jenkins: (19:14)
Three being the magic number.

Jason Bell: (19:16)
Three being the magic number. This is going so well isn't it?

Kris Jenkins: (19:22)
We're roughly on the rails.

Jason Bell: (19:25)
We are, I'm against them at the minute, but anyway. So three, three, back to three, that was it.

Kris Jenkins: (19:31)
Sorry.

Jason Bell: (19:32)
Leader, two followers: you can take a follower out and still maintain a cluster. It can still work. That's what we're after here. Things like stretch clusters, two DCs. So data center A, data center B is fine. You can have two brokers on each and take a data center out and you'd be okay. There's topic considerations to take into account, minimum in-sync replicas of two and all that kind of thing. Which when we go through the planning, we go through all of that as well. The conversation then really... when you have two data centers is, what happens with ZooKeeper? So you do actually need a third data center for your quorum node. You can have two ZooKeepers in one data center, two in the other, and then the fifth, your quorum one, in a third data center somewhere. And whether this is on-prem or on the cloud doesn't really matter, it's just the numbers have to work out in that way. So that's the conversations we end up having first. And that [crosstalk 00:20:37].
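
A hedged sketch of what those numbers look like as a topic definition: replication factor three across the brokers, with a minimum of two in-sync replicas so losing one follower doesn't stop acknowledged writes. The topic name, partition count, and broker address are placeholders:

    kafka-topics.sh --bootstrap-server broker1:9092 \
      --create --topic payments.events \
      --partitions 12 \
      --replication-factor 3 \
      --config min.insync.replicas=2
    # Producers that care about durability pair this with acks=all,
    # so a write only succeeds once two replicas have it.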

Kris Jenkins: (20:37)
Do you always do it that way? Would all your clients want that level of high availability?

Jason Bell: (20:41)
Yes.

Kris Jenkins: (20:42)
Always. Fair enough.

Jason Bell: (20:45)
It's the nature of the data. And this is it. So going back to when I was working with Bruce, for example, we put a project together for a company. It was a proof of concept. I've done a talk about it actually. I've done another Confluent meetup talk. I can't even remember what I called it. Anyway, this was originally the, How I Bled All Over Onyx, talk. This was a talk about putting this cluster together that completely failed. I rewrote everything in Kafka Streams. I didn't tell Bruce about it till the day after because that's what you do with CTOs, you just don't tell them.

Kris Jenkins: (21:24)
Need to know basis.

Jason Bell: (21:26)
Absolutely. I'll tell you what you need to know, it's working now. Now I'll tell you.

Kris Jenkins: (21:34)
I can neither confirm nor deny that I've employed that strategy.

Jason Bell: (21:44)
That says it all really doesn't it?

Kris Jenkins: (21:44)
We've all been there. We've all been there.

Jason Bell: (21:44)
We've all been there. It's the nature of the job. It's just, we don't tell customers, oh, too late. So we had Onyx jobs running. They were failing and it was my fault entirely. I was just pushing too much data through and these files were Gzip files of CSV data. Now you don't know how much CSV data you're getting, because of the nature of the data. It was flight search data. So you don't know if it's a small airport or a big airport, so you might get three rows or 20,000 rows of Gzip. So therefore your message size concertinas in and out per message. And Onyx couldn't hack it and I'm not surprised Onyx couldn't hack it. And I spoke to Michael and Lucas about this quite a lot. And bless them, they spent the time with me going through everything. And I learned so much about Clojure and the Aeron binary transfer protocol, which is also really well written. And there were all these little settings. This is how you get into the nitty-gritty, same with Kafka: when you're, for want of a better phrase, bleeding all over everything and it's broken and you're trying to figure out what's going on, this is when you do most of your learning.

Kris Jenkins: (23:04)
That's exactly how I learnt Nix, by bleeding all over it and scar tissue.

Jason Bell: (23:09)
Exactly. And this is how I learned Kafka. It was Coface stuff. And like I say, at that time, the cluster revolved around offsets being stored on ZooKeeper, which was also another complete pain point for Kafka 0.8 back then. So ZooKeeper would fill up and you'd go, "What's going on? Why has it all died?" And it was in MISys, so all we did was just tear the whole thing down, bring it all back up again. Anyway, the point I'm getting to is SLAs, basically. The SLA for this proof of concept was only nine till five, Monday to Friday. Which is all very well. That's what they're paying for. It doesn't bother me. They just have to be aware of, what happens when something goes down? If the database connection or S3... because S3 drops, Amazon endpoints drop, they do, it's a fact of life. What do you want to do if this happens? And we sat down and went back to them. Bruce and I went back and said, the biggest period of time you've got after something failing at five o'clock is Easter. So you've got Thursday night, Friday, Saturday, Sunday, Monday and then we're back in the office nine o'clock Tuesday morning.

Jason Bell: (24:32)
If it was failing between that point and that point, you want to save the data. And we calculated the amount of storage that they'd need for that volume. And the volume they were predicting, because of the nature of the data, was 12 terabytes a day.

Kris Jenkins: (24:46)
Crikey.

Jason Bell: (24:47)
Yes. It was quite a lot.

Kris Jenkins: (24:49)
This is all flight data?

Jason Bell: (24:50)
Yes. It's all flight search data. So I won't say from whom, and I won't say for whom, but that's what it was. And then you've got replication to take into account and... the next thing is, we need around about 280 terabytes for four days. That's going to cost.
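
One plausible way that figure falls out, as back-of-envelope arithmetic. The replication factor and headroom multiplier are assumptions, not stated in the episode:

    # 12 TB/day of raw search data, held over a ~4-day outage window
    # x 3 for the replication factor
    # x 2 for headroom and contingency
    echo $((12 * 4 * 3 * 2))   # 288 (TB), in the region of the 280 TB quoted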

Kris Jenkins: (25:13)
That's in 2016.

Jason Bell: (25:13)
Yeah. 2016, 2017.

Kris Jenkins: (25:18)
Even today that's a cost, but back then.

Jason Bell: (25:21)
It's a cost and it has to be taken into account. If you had a problem one minute past five on a Thursday night and you have 280 terabytes to deal with when you powered everything back up on Tuesday morning, you've then got the... this has all got to go through again, so you've then got this back-pressure problem as well. So it's all those kinds of considerations. We've moved on now. Obviously topic data's held on volumes, not in ZooKeeper, but you still have to plan that kind of thing. I have alerts set up at 50% disk volume because I'm paranoid. Because what we found at Digitalis is if someone does stress testing, they don't know what they're stress testing against. If you default to seven days' retention, you can fill up brokers really quick just by stress testing. I'm going to send a million messages through, even though my prediction is 4,000 a day. Fine, but you're going to fill my test brokers up and I'm not going to be too chuffed about that. And it has happened.
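
A hedged example of pre-empting that: cap retention on a stress-test topic at creation so a load test can't fill the broker disks. Names and limits are illustrative:

    kafka-topics.sh --bootstrap-server broker1:9092 \
      --create --topic loadtest.scratch \
      --partitions 6 --replication-factor 3 \
      --config retention.ms=3600000 \
      --config retention.bytes=10737418240
    # retention.bytes is per partition: ~10 GiB x 6 partitions x 3 replicas
    # bounds the worst case at roughly 180 GiB, instead of seven days of everything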

Kris Jenkins: (26:28)
I suppose it's in the nature of stress testing that you're trying to find out what breaks first?

Jason Bell: (26:33)
Yeah. Usually my resolve. I'm good for a quote today. It's those kinds of things you need to take into account. And the stress testing does come with its own set of problems. And now we know in Kafka there's the producer performance test tool and the... sorry, and the consumer performance test tool. And they're great, but something I said in the very first podcast, which I think might have surprised Tim at the time when we were talking about it is, not to assume that everyone knows how Kafka works. You have production teams, software production teams that are building apps to produce and consume from Kafka but they may not necessarily know how it works. You don't need to know how it works. It's like, here's my producer, it sends a message. I want to send 20 million and see what happens. Do I get acks back for all of those or is it just fire and forget, and those kinds of things. And that's all very well, it would be handy if you told us.
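
The tools he's referring to ship with Kafka. A minimal sketch of using them to agree on numbers up front, reusing the placeholder topic and broker address from earlier:

    # Producer side: 20 million 1 KB records, as fast as it will go,
    # with acks=all so you actually get acknowledgements back
    kafka-producer-perf-test.sh --topic loadtest.scratch \
      --num-records 20000000 --record-size 1024 --throughput -1 \
      --producer-props bootstrap.servers=broker1:9092 acks=all

    # Consumer side: read them back and report MB/sec
    kafka-consumer-perf-test.sh --bootstrap-server broker1:9092 \
      --topic loadtest.scratch --messages 20000000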

Kris Jenkins: (27:44)
So you've got to do capacity planning for the people that don't know what they're actually sending?

Jason Bell: (27:49)
Absolutely. So what we ended up doing at Digitalis was a sheet of message size. How many a day? Is there a peak time? My other giant slide from the meetup was this graph of volume over time or volume over... anyway. Volume against time. Let's call it that. And I think the misconception is when you're using Kafka, everything is just flat like that, it's a streaming data platform. It does this. It's all over [crosstalk 00:28:25].

Kris Jenkins: (28:24)
Yeah, of course.

Jason Bell: (28:25)
It's all over the shop, unless it's [crosstalk 00:28:27].

Kris Jenkins: (28:26)
And different industries have different natural peaks.

Jason Bell: (28:30)
Exactly. And so I'm using the full width of my screen here as a graph, X and Y. So [crosstalk 00:28:39].

Kris Jenkins: (28:38)
That's going to be great for people listening to this.

Jason Bell: (28:41)
It is isn't it?

Kris Jenkins: (28:41)
Jason is drawing a graph with his finger folks.

Jason Bell: (28:46)
I'm very sorry. I'll stop doing that because that's just not really going to work. Retail, for example, you'll have nothing till nine o'clock in the morning, you'll have peak traffic at three o'clock or lunchtime or what have you and then it drops off at six o'clock in the evening.

Kris Jenkins: (29:00)
Do you know what I was thinking about recently, where I just don't know how they planned this? The music festival Glastonbury.

Jason Bell: (29:07)
It's immense isn't it?

Kris Jenkins: (29:08)
Yeah. They sell nothing all year and then there's a 10, 20 minute window where they sell 200,000 tickets.

Jason Bell: (29:15)
How?

Kris Jenkins: (29:15)
And it's the most bursty traffic I've ever heard of.

Jason Bell: (29:20)
So I wrote a blog post in 2002 on that.

Kris Jenkins: (29:29)
Oh really?

Jason Bell: (29:29)
And I can't even remember what I said. It was about Ticketmaster and it might have been Michael Jackson tickets, because people were struggling to get them, because obviously systems just break. If ever there was an advert for elastic computing, it's that.

Kris Jenkins: (29:44)
2002 though, you didn't have that many options?

Jason Bell: (29:47)
No, you didn't, you didn't, you really didn't.

Kris Jenkins: (29:49)
I remember 2002 as some people still advocating shell scripts as your main web server. That's how dire things were.

Jason Bell: (29:58)
I've never come across that before.

Kris Jenkins: (30:00)
And your traffic was limited by the rate at which you could fork a new shell.

Jason Bell: (30:05)
I've never come across that before.

Kris Jenkins: (30:08)
Afraid so, afraid so.

Jason Bell: (30:09)
Wow, I'm still a Perl hacker at the end of the day as well. No, I'm serious. For big search and replaces in large files I will still crack open Perl, I won't use Oracle. I still use Perl, it's just in my head.

Jason Bell: (30:24)
So anyway, it's this elasticity in terms of volumes going through. Banking's an interesting one. It's like a [inaudible 00:30:34] but it just goes whoop and that's it, it's [inaudible 00:30:36] for the day. A huge burst of traffic through batch times. And are we treating this as batch or are we treating it as streaming? And traditionally you'll find a lot of organizations will just dump data into Kafka in one massive swoop then let the brokers deal with the backlog and process everything through. Fine, it's fine. It works. It works. Things like IoT data, car data, is just a huge volume game, streaming continually while the car is moving at high velocity. There's loads of it.

Kris Jenkins: (31:10)
Not just the car, but the data coming out of the car is at high velocity.

Jason Bell: (31:15)
Yeah. Telemetry data coming from cars is just insane. Loads of [crosstalk 00:31:24].

Kris Jenkins: (31:23)
I can believe.

Jason Bell: (31:24)
And there'll be more of it. [crosstalk 00:31:24].

Kris Jenkins: (31:24)
And that's two very bursty rush hours, that's got to spike massively.

Jason Bell: (31:28)
Bursty hour, edge computing basically.

Kris Jenkins: (31:32)
So what have you done to deal with those sorts of spikes in different industries?

Jason Bell: (31:37)
I usually end up weeping in a skip.

Kris Jenkins: (31:44)
It's a sound strategy. I was hoping for something slightly more technical.

Jason Bell: (31:49)
It goes back to testing. It does go back to testing. We sat down with teams and said, "You're going to have to stress test all of this stuff out." But what we'd do is change the retention time of the topic so you weren't impacting the volumes on the disks. So I don't delete topics, I flush them out. What I'll do is I'll change the retention time down to three milliseconds, wait for Kafka to do its work and rebalance everything. And then bring the retention time back up to something sensible.
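
A minimal sketch of that flush-don't-delete trick, assuming the stress-test topic from earlier. He says three milliseconds; any tiny value works, and deletion waits on the log cleanup cycle:

    # 1. Drop retention so Kafka deletes the backlog
    kafka-configs.sh --bootstrap-server broker1:9092 --alter \
      --entity-type topics --entity-name loadtest.scratch \
      --add-config retention.ms=100

    # 2. Wait for log cleanup to run, then put it back
    sleep 120
    kafka-configs.sh --bootstrap-server broker1:9092 --alter \
      --entity-type topics --entity-name loadtest.scratch \
      --delete-config retention.ms   # falls back to the broker default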

Kris Jenkins: (32:20)
That's an interesting strategy. So for stress testing, you just make it a short lived but identical system?

Jason Bell: (32:27)
An hour. Did the data go through, did the consumers on the other side pick it up? It's those kinds of testing strategies. And also what I've impressed on development teams is when they're writing... so what I've found is, especially in banking, with all these different departments, it might be credit, it might be mortgages, it might be... all this stuff, that the department writing a producer will not necessarily be testing what's coming out the other side.

Kris Jenkins: (33:04)
Yes.

Jason Bell: (33:05)
And it's the little things. It's not so much the technicalities of Kafka at this point, it's just human contact. It's, "Jase, can you tell me how many messages have gone through?" No I can't. I can't give you a definitive number of this amount. I can do that per partition, yes, but I can't give you an absolute total. There's only one way really to do that. Robin Moffatt did a post on it, which was, use KSQL to stream in and do an aggregated count on the messages going through.

Kris Jenkins: (33:36)
Count them as they go through.

Jason Bell: (33:36)
That's the way to do it, but I can't do that. Plus it's banking data, the nature of the messages. I'm actually not privy to the data and I should not be privy to the data. I can't be. I've signed NDAs, that kind of thing. So I can't see Mrs. so and so's mortgage payments go through. I just don't see that. I'm not allowed to. So the responsibility's on both teams to measure what goes out and what goes in and that's another conversation as well.
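
For what it's worth, the per-partition numbers he can give come straight out of the stock tooling; summing them is only an approximation of "messages through", since retention and compaction move the low end. Broker address and topic name are placeholders:

    # End offset per partition (-1 means "latest"); prints topic:partition:offset
    kafka-run-class.sh kafka.tools.GetOffsetShell \
      --broker-list broker1:9092 --topic payments.events --time -1

    # Summing the third field gives an upper bound, not an exact count
    kafka-run-class.sh kafka.tools.GetOffsetShell \
      --broker-list broker1:9092 --topic payments.events --time -1 \
      | awk -F: '{sum += $3} END {print sum}'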

Kris Jenkins: (34:14)
Because that is a tricky thing to watch for. There's a great thing that you can decouple producers and consumers. One department can write, many other departments can read, but it does introduce the problem that you've actually got to have someone in the writing department checking that it's useful.

Jason Bell: (34:29)
Yes. Is the data correct? Did it deserialize properly? And once you start putting these things in place, then latency becomes interesting as well because then have you got performant consumers? Do you have enough consumers running against the partitions? While the main metric for Kafka is megabytes per second, I don't think developers think of it that way. They think of how many messages they can process, but they don't think in megabytes per second or something like that. So there's no real consideration about network traffic going through in terms of saying, I need to process three megabytes a second or a gigabyte per second or whatever, I need 50 consumers. No one really thinks like that. I can guarantee you, most people will only put 10 consumers up at the most and the partition count will be fairly low to start off with. Because obviously you can go up, but you can't go down. These all feel like old school things, but they're still really important to me.

Kris Jenkins: (35:38)
Because I think as programmers, we have a tendency to assume that it's an entirely logical level analysis until we get problems. The network is perfect until you find it isn't and then you start thinking about it.

Jason Bell: (35:52)
Yes. So I always think about latency in terms of, you've got compression on the producer side. If the compression doesn't match on the consumer side, then that adds latency. It's also fun from a broker's point of view because it obviously has to decompress the message, validate the metadata [inaudible 00:36:15] key, add a timestamp, compress it back down again, then send it through. Anna McDonald and I have spoken quite a lot about this. And then you've obviously got Schema Registry stuff. And then we get to Kafka Connect.
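
One way to keep broker CPU out of that decompress-recompress loop is to let the topic store whatever the producer sent, rather than forcing a different codec. A hedged sketch against the placeholder topic from earlier:

    # Topic keeps batches exactly as the producer compressed them
    kafka-configs.sh --bootstrap-server broker1:9092 --alter \
      --entity-type topics --entity-name payments.events \
      --add-config compression.type=producer

    # Producer side (e.g. in producer.properties):
    #   compression.type=lz4
    # Consumers decompress whatever codec the batch header declares.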

Kris Jenkins: (36:29)
Oh yeah.

Jason Bell: (36:32)
The sink side is an entirely different conversation.

Kris Jenkins: (36:41)
And for those of you not watching, Jason has just completely broken down at the thought and fallen off his chair. And he's back.

Jason Bell: (36:49)
Hi.

Kris Jenkins: (36:49)
Tell me about that?

Jason Bell: (36:56)
I'm here. I'm okay. Source connectors are not a major problem. Never really had any problem with source connectors and things like the... the main connector I ended up working with obviously was the JDBC connector. People like reading stuff from databases and sending it to somewhere else. It seems to be a hobby.

Kris Jenkins: (37:16)
It's an obvious use case.

Jason Bell: (37:18)
It is, it is, absolutely. So Kafka Connect, sink side, there's a bunch of considerations to take into account. Not necessarily on the capacity side, just generally. If it's schema'd, then what do we do on message failure, what do we do on conversion failure? Because obviously if you're running DLQs, if you're running dead letter topics, [crosstalk 00:37:44] that's a different argument. What are we going to do with those messages if they end up in there? I know a lot of people that put messages in dead letter queues and then never read them again.

Kris Jenkins: (37:57)
Find out six months later, what are these 200,000 messages doing?

Jason Bell: (38:02)
And also they were quite important because it was a process of open account, send money to account, close account. We didn't realize they closed the account. It's really interesting. So the two conditions, and I think the API may have changed since, I know there was a KIP for it, I'm fairly sure there was. Conversion and transformation were kind of the two things where if there was an exception thrown, then the message would go to the DLQ, fine. Question one, what are you going to do with the DLQ? Are you actually going to read it or are you just going to ignore it? We used to set up Prometheus alerts on the DLQ so if more than 10 messages went in, it would actually alert us and say, there's a problem here. And the reason for that was if you have something side-effecting, like writing to a database table, your connector's obviously connected to the database via JDBC. Networks fail.

Kris Jenkins: (39:06)
They do, they very much do.

Jason Bell: (39:07)
And they don't necessarily come back up the way you want them to. And therefore Kafka Connect is flailing around and going, "Yeah, yeah." Because errors.tolerance set to all within the Kafka Connect configuration is great, but everything just carries on like nothing had happened. And there was no way to capture it. Now this may have changed, hence me mentioning a KIP, because I remember seeing it. That when you push your message to the sink, if it threw an exception there, you really didn't have much choice on what was going to happen next. If your error tolerance was all, it would just process the batch and keep going. So if the database was throwing an SQL exception back, it didn't care, the offset would get updated and the data wasn't written to the database. So anyway, retry logic, yes, that's fine. But you can go through 10 or 20 retries pretty quickly and the whole thing's collapsed.

Jason Bell: (40:10)
If you let your retries carry on over a minute, then obviously you're starting to get back pressure from the brokers. Not a big deal, but what I found is monitoring Kafka Connect is not everyone's biggest priority. They tend to just focus on the brokers and not the connectors. So when a connector fails, which is actually another interesting point, the logging for Kafka Connect is central to the node, it's not per worker. If we could branch out worker-level metrics and worker-level logging, it would be far more effective. Because then I can look at the logs for that worker and go, it failed because of this. Because what happens, say you have 20 workers running on Kafka Connect, your log files are massive at that point. And then you've got to go and root through them. I know people that won't pay for Elasticsearch. So my superpower is to be able to read through raw log files with grep.

Kris Jenkins: (41:14)
Everyone has that in their back pocket, sooner or later.

Jason Bell: (41:18)
You need it. You need it. And my advice to anyone is to go with the tools that are available to you for free, without paying, and learn those first. Learn the CLI, learn to read the logs in a terminal window when you need to. Because when all the cloud services vanish and disappear and that's all you've got and you need to fix it, that's where you start. So I've always made those things my friend first. And me and Kafka Connect logs are really friendly now, we know where to look and we know where to go. We know where things are hidden. And bringing up connectors and just watching everything go through to make sure everything settles down is actually quite an important job. And it's one that not many people talk about, that I'm aware of. Or if they do, they don't blog about it or they don't shout about it or anything like that. They're probably just weeping in the same way that I do.

Kris Jenkins: (42:08)
Do you have any good tips for monitoring connectors?

Jason Bell: (42:16)
Bash scripts do work. No, I'm not joking actually. I would write a bash script to go through the active connector names. So was it slash connectors? So yeah, curl to just /connectors. And that'll tell you the connectors that are running. And then use sed to parse out those names, the connector names themselves. So it meant if connectors were being added and removed, then you weren't having to update files all the time. That was the reasoning behind it. And then do a for loop on each of those connector names, do a curl again. So slash connectors, slash the name, slash status, and that would tell you how many tasks are running and the state of those tasks and the state of the worker. And they should say, "Running." If they say, "Failed," something's happened. So we had a job that would do that every two minutes. And then if it failed, we got alerted, then we could go and have a look. That was the best time to go and have a look at the logs, because otherwise you're 30,000 lines down the chute at that point. Then figure out whether to halt the connector, tell the client, what do they want to do? Because interestingly we found in a couple of cases, once a year this would happen: database tables fill up.
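
As a rough reconstruction of that check against the Kafka Connect REST API, with nothing but curl, sed, and grep. The Connect URL is an assumption:

    #!/usr/bin/env bash
    # Sketch of a two-minute connector health check: list connectors,
    # then ask each one for its status and flag anything FAILED.
    CONNECT=http://connect1:8083

    # GET /connectors returns a JSON array like ["jdbc-sink","debezium-src"];
    # strip brackets, quotes, and commas to get shell-iterable names
    names=$(curl -s "$CONNECT/connectors" | sed -e 's/[][",]/ /g')

    for name in $names; do
      status=$(curl -s "$CONNECT/connectors/$name/status")
      # Connector and task states appear as "state":"RUNNING" or "FAILED"
      if echo "$status" | grep -q '"state" *: *"FAILED"'; then
        echo "ALERT: $name has a failed connector or task"
      fi
    done

Run from cron every couple of minutes and wired to an alert, that's essentially the job he's describing.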

Kris Jenkins: (43:40)
Yes, they do. I've had that a few times.

Jason Bell: (43:45)
So your connectors are fine and writing data away, but the database isn't getting your data at all. So we used to get a request once a year about rolling back the offset to when the fault happened on the database side and then having to [crosstalk 00:43:58]. It does happen. And that's an art form.

Kris Jenkins: (44:04)
And this is the thing, you can test Kafka, you can test the connector, but sometimes testing the external database having problems is really tricky.

Jason Bell: (44:13)
Network failure, always test for network failure. Network failure on the brokers, network failure on ZooKeeper, network failure on any side-effecting databases, sinks, sources, whatever. This is it. And it goes back to what I said at the start. If we're talking about end to end, Kafka's the bit in the middle, as far as I'm concerned. It's a tool, it's a great tool. I love it. It's done me well as a career for the last six, seven years. But we have to think about the end to end and everything that affects it. And even coming back to capacity planning, now we've got the transaction API, which obviously creates a topic which then is replicated and there's more partitions, therefore it's more data. If it's a high volume topic with transactions, then you have to take that into account as well. RocksDB on stateful transformations in the streaming API and in KSQL. Global KTables copy across networks. Not like, yay. There's a lot not talked about that we learn the hard way. And I'm only scratching the surface based on this conversation, I think. And it is on a case by case basis. There has to be a bunch of KPIs to say, we need to know that... most important metric to monitor, consumer lag.

Kris Jenkins: (45:36)
Why is that the most important?

Jason Bell: (45:43)
I want to know how far behind my consumers are. Because if they are far behind... and we're talking not a difference of one or two messages, that I can tolerate. It's when it's 40,000 messages, that usually says to me there's a problem. So bash scripts for Kafka Connect, connectors, workers, but also run the consumer group list job and describe each consumer group, and that will give you the clients running, the offset positions and the lag. And I used to do that every day. That's a cron job to run every day and it would email me so I could go-

Kris Jenkins: (46:22)
You ran it in batch.

Jason Bell: (46:26)
Well I'm not allowed to touch the cluster. My job is to make sure the cluster's up, not to write to it.
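
A minimal sketch of that daily lag report, using only the stock consumer-groups tool. Bootstrap address and mail recipient are placeholders:

    #!/usr/bin/env bash
    # Daily consumer-lag report: describe every group and mail the output.
    BS=broker1:9092

    for group in $(kafka-consumer-groups.sh --bootstrap-server "$BS" --list); do
      echo "== $group =="
      # Prints client ids, current offset, log-end offset, and LAG per partition
      kafka-consumer-groups.sh --bootstrap-server "$BS" --describe --group "$group"
    done | mail -s "kafka consumer lag report" ops@example.com

Scheduled from cron, it needs only read access to the cluster, which fits the "keep it up, don't write to it" remit.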

Kris Jenkins: (46:33)
Black box testing of Kafka clusters.

Jason Bell: (46:35)
My methods may seem archaic, but that's the only way we could do it.

Kris Jenkins: (46:40)
I know there's someone who's mentally rewriting that script in Python right now. And there's someone else wondering why the API doesn't use GraphQL and you can get it all in one query. But in the meantime, a bash script will do the job.

Jason Bell: (46:55)
I could say something really controversial, but I'm not even going to go there because I will never get left alone.

Kris Jenkins: (47:01)
You've dangled it out now.

Jason Bell: (47:05)
I have, haven't I? GraphQL, really? Sorry, said it. Python, really?

Kris Jenkins: (47:20)
Oh, that's a much harder one to argue with when you're championing bash scripts. Let's not descend into a language flame war because that's a-

Jason Bell: (47:31)
I learn whatever I need for the job. At Dataworks I'm learning Go. I won't go on about it now, but I really like it. It took my brain a bit of a left at the lights because it's been functional programming with Clojure for so long.

Kris Jenkins: (47:47)
You've got to go back to imperative.

Jason Bell: (47:49)
Yeah. It's like, oh this is a bit weird. I actually like it, really like it, anyway.

Kris Jenkins: (47:55)
Just for balance, for journalistic balance, I'm going to say, Go, really?

Jason Bell: (48:00)
Really. Rust, really?

Kris Jenkins: (48:04)
I've had some fun with Rust. I haven't haven't dabbled in Go yet, but I've had some good fun with Rust.

Jason Bell: (48:10)
You're a big TypeScript guy aren't you?

Kris Jenkins: (48:13)
Am I?

Jason Bell: (48:15)
Most of the talks I've seen you do recently they've involved that kind of thing haven't they?

Kris Jenkins: (48:19)
I guess my heart's in PureScript, but when you want to interact with the JavaScript world, which I often do, then TypeScript is where I head.

Jason Bell: (48:28)
Do you want to know a secret?

Kris Jenkins: (48:29)
Yeah.

Jason Bell: (48:30)
I installed Emacs on Windows yesterday.

Kris Jenkins: (48:34)
Different listeners will find different parts of that to be sinful.

Jason Bell: (48:39)
I don't care. Anyway, where were we?

Kris Jenkins: (48:46)
We were about to talk about the problems with streams because that's another part of the picture you haven't talked about.

Jason Bell: (48:53)
Streams are just as interesting as Kafka Connect really because their error tolerance is defaulted to all. It's very difficult to stop a stream. You have to have a really, really good reason for doing so. So DLQs work in the same kind of way. Now there's an exception mechanism to decide what you're going to do with a message but it's that long since I've looked at it. This is the interesting part of not doing a huge amount of development work for so long: when we did Kafka Jeopardy, so that was Neil Buesing, Anna McDonald and myself, they knew all the development stuff and I knew nothing, weird. But they didn't know the port number to ZooKeeper. So I came last, but I don't care.

Kris Jenkins: (49:40)
This is why it's an interesting perspective though because you're coming at it from a, it's got to stay up, but it's almost a black box to you.

Jason Bell: (49:47)
It is a black box to me. It is a black box to me. And it's like, can we have a look at this message? And it's like, no, go and consume it. Some support tickets, people just don't like me. They think I'm being difficult. It's like, no, this is really hard to do. Running Kafka with a set of blinkers on basically. It's like that. Sorry, for those watch... not watching, I'm putting my hands up in blinker fashion.

Kris Jenkins: (50:21)
He's miming a horse.

Jason Bell: (50:23)
Neigh. That could have gone a bit left there, Kris, to be fair.

Kris Jenkins: (50:30)
Oh, I'm not sure it could.

Jason Bell: (50:38)
Go on, you can do it. I've totally lost my train of thought now. Streams, streams, that was it.

Kris Jenkins: (50:46)
Streams.

Jason Bell: (50:46)
That was it. That was it. Streams I've found difficult. Actually Neil Buesing's probably a really good person to ask and so is Anna, on how they would handle it. So Anna, Neil, how would you handle it?

Kris Jenkins: (51:02)
We'll splice their answer in later.

Jason Bell: (51:04)
Leave a comment.

Kris Jenkins: (51:08)
Let me ask you this then, do you think... have you found that streams or KSQL work better from an operational point of view?

Jason Bell: (51:23)
I prefer streams.

Kris Jenkins: (51:25)
Not the answer I was expecting.

Jason Bell: (51:27)
I see the logic and the appeal of KSQL, do not get me wrong. I like Streams in small container services, run separately. Because if you have a stream that's hogging the resources of a node, the other applications tend to struggle. So you may have a network-heavy streaming job, which may also be CPU-heavy depending on the transformations and things that are going on. And if you have another one that's just doing some fairly basic aggregations on a fairly low-level, low-velocity topic, it gets pushed to the back of the queue.

Kris Jenkins: (52:12)
So you're saying that it's easier to manage individual streams processes if you split them out that way?

Jason Bell: (52:19)
Yes, I think so.

Kris Jenkins: (52:21)
Because KSQL it's all sitting there in one lump.

Jason Bell: (52:24)
And you have that same problem, that if you have a KSQL query that's really intensive, then it tends to hog the resources.

Kris Jenkins: (52:33)
I had not thought of that, but there is that to be balanced with the convenience of KSQL?

Jason Bell: (52:38)
It is there. I won't call it an issue, it's just a consideration. And it all comes back to design again. It's knowing the data that's passing through the system, and if you can start with that and you know the domain. And actually, case in point: if you bring domain knowledge in, and don't assume, you can have an easier ride with all this. If you're in retail, how many transactions come from a POS system per hour? Oh right. How many basket items is that? It's this many and it's this size. We can model that then, we can work it out. Even with a calculator you can work it out.

Kris Jenkins: (53:23)
I would've thought a lot of customers don't actually know that data.

Jason Bell: (53:28)
Some do, some don't. But there must be a way of finding out. The data's got to come from some point. There's a starting point somewhere. And it's just a case of knowing which person to ask. It's not a case of just not knowing. Let's take an example. POS data is actually probably a good one. Within a basket there's N number of items. That could be two K in size or 10K in size, but we've got a lower and upper bound that we can work with. I'm now dancing with my hands. Sorry, but we've got a lower and upper bound of size. And even if you then say, let's double it so we've got some contingency. This is another thing where people go, it's not going to be any more than 10K. That's fine, and then we start seeing 30K messages being pushed through.
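
As a worked example of that kind of envelope math. Every number here is hypothetical, just to show the shape of the calculation:

    # 500 stores x 120 baskets/store/hour at peak = 60,000 messages/hour
    # Upper-bound basket size 10 KB, doubled for contingency = 20 KB
    # 60,000 msg/h x 20 KB / 3600 s, in MB/s:
    echo "scale=2; 60000 * 20 / 1024 / 3600" | bc   # prints .32 (MB/s) sustained at peak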

Kris Jenkins: (54:22)
This is the old thing of order of magnitude planning.

Jason Bell: (54:25)
Exactly. So I always put in a 50% minimum contingency for sizing, especially when it's high volume and across a large cluster. And the other thing to keep in mind is... I've probably said that a hundred times in this podcast so far, the other thing to keep in mind, the other thing to keep in mind.

Kris Jenkins: (54:46)
There's a lot of things to keep in mind.

Jason Bell: (54:48)
There are a lot. Something to write down then and not keep in mind, because that's the worst place you can keep it. If it's a shared service, you have multiple departments writing to the same Kafka cluster. Some departments are going to write more data than others. And it's very difficult to know with the performance of the cluster when there's so many different tenants of that cluster. Not everyone has the luxury of having their own cluster basically. Quotas are important, but that's a post-production thing. You measure it afterwards. Anna might actually argue with me on that one because she's a quota queen. Let's call her the quota queen.

Kris Jenkins: (55:46)
I'll check with her how she feels about that nickname.

Jason Bell: (55:48)
She's going to throttle me after that one.

Kris Jenkins: (55:51)
That seems like a quota issue, throttling. I went there.

Jason Bell: (55:56)
You went there. You went there. That was really good. That's good. This is turning into The Mighty Boosh, isn't it? It's just like, oh. So quotas are important, but I find them very difficult to monitor in the first instance. You won't know until the team's gone live into production. And then you're looking at the network traffic. Hence Grafana is really important.

Kris Jenkins: (56:21)
For getting those messaging graphs in real time.

Jason Bell: (56:23)
Bytes in and out. And seeing how things actually look, that will govern the quotas. So topic-level quotas don't exist anymore. Well, they're being deprecated, so it's really on the producer and the consumption side. Mainly on the producer side, to stop all that stress testing volume going through. You do get bursts. You have to plan for bursts happening. At least with the quotas, it will throttle, it won't just shut down. You have to give it a fair amount of data in order for a quota to completely block you out and throw an exception. But that's where the producer performance tool comes in on the command line. You can at least say, I want a million messages every 10 seconds of a hundred K in size, go at it and see how it performs.
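
A hedged sketch of putting a default producer quota in place and then deliberately bursting past it with the perf tool. Rates, sizes, and addresses are illustrative:

    # Throttle any producer client to 10 MiB/s by default
    kafka-configs.sh --bootstrap-server broker1:9092 --alter \
      --entity-type clients --entity-default \
      --add-config 'producer_byte_rate=10485760'

    # Probe it: 100 KB records pushed well past the quota;
    # expect throttling to show up as rising produce latency, not errors
    kafka-producer-perf-test.sh --topic loadtest.scratch \
      --num-records 100000 --record-size 102400 --throughput -1 \
      --producer-props bootstrap.servers=broker1:9092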

Kris Jenkins: (57:16)
To wrap this up into a TLDR, if we can, we're saying, as much as you can, get the picture of your data and your data model before you even begin?

Jason Bell: (57:28)
Absolutely, absolutely. It's an end to end process. It is a process. I like to think of it as a graph. Point to point to point to point to point. And a directed graph is nice in the sense that things will split out. You might have a process where the data splits out in KSQL into a new stream, or a streaming API job into another topic, that kind of thing. And they all have considerations down the chain. So things like RocksDB on the streaming side. And if you're creating a new topic out of a streaming API job, how's that replicated, how's that partitioned, what's the throughput of that, what's the volume of that going to be? Is it going to be a 10th of what's going through the streaming API job? We built some streaming API jobs to look at source data, because it was, say, 10 million messages a day going in and we were only interested in seven of them. It does happen.

Kris Jenkins: (58:27)
I can believe that.

Jason Bell: (58:30)
There's two options at that point. You're either going to say, I want to filter it with a streaming API job or I'm going to let Kafka Connect do all the hard work. I wouldn't want Kafka Connect to be consuming like that. I'd let the streaming API job do the work for me and then only persist what I need to persist.

Kris Jenkins: (58:49)
Until the day that you find that there's another type of message that you need to filter out as well.

Jason Bell: (58:55)
But it's a new evolutionary process at that point. It always is. I don't think anything sits still for too long. Data [crosstalk 00:59:01].

Kris Jenkins: (59:01)
The whole of software is reacting to changing requirements.

Jason Bell: (59:03)
Precisely. Precisely. And these requirements can come... it might be a fortnightly sprint. I know some projects that take two years to turn around. It's the nature of the business really more than whether we do agile or anything like that.

Kris Jenkins: (59:22)
So know your domain, plan it out as far as you can, load test it before you put it in production and write some bash scripts. Those are your four tips?

Jason Bell: (59:35)
Now you've put it like that, I'm going to reconsider my career options. Thanks.

Kris Jenkins: (59:43)
If the bash scripts don't work out, you can always play those Chapman sticks in the background.

Jason Bell: (59:50)
I love playing them. They're great.

Kris Jenkins: (59:50)
We'll get you back in to record the outro music, but for now Jason Bell, thank you very much for joining us.

Jason Bell: (59:56)
Kris, it's been an absolute honor. It's good to see you.

Kris Jenkins: (01:00:00)
Cheers to you.

Jason Bell: (01:00:00)
Thank you.

Kris Jenkins: (01:00:00)
Bye.

Jason Bell: (01:00:00)
Take care.

Kris Jenkins: (01:00:00)
And there we leave it. You know, I'm going to confess, I have this dream that one day, maybe at a future Kafka conference, Jason and I will be on stage playing a gig to close out the night and he'll be on the Chapman stick and I'll be on the Theremin. I don't actually play the Theremin yet, but I can buy one and I will learn it if there is that slot available, you got to have a dream. For now, we're going to have to leave Jason to his cluster maintenance. Before you go, if you've enjoyed this episode now is an excellent time to let us know. My Twitter handle's in the show notes or you can always leave a comment or a like or whatever. We always appreciate hearing from you.

Kris Jenkins: (01:00:40)
Between now and the next episode, if you want to learn more about Kafka and how it can help you get your data moving Confluent Developer is here to help you. Head to developer.confluent.io, where you'll find everything from getting started guides to architectural patterns. It's all free and it's written by some truly great minds that I'm very happy to work with, so check it out. And if you feel ready to use Kafka in production, but you don't want to manage it yourself, head to Confluent Cloud, which is our managed service for Apache Kafka. You can sign up and get a cluster running in minutes. And if you add the code, PODCAST100, to your account, you'll get $100 of extra free credit to run with. And with that, it remains for me to thank Jason Bell for joining us and you for listening. I've been your host, Kris Jenkins, and I will catch you next time.

CHAPTER MARKERS

Intro
Kafka Cluster Capacity Planning—where to begin
Put it in the cloud
Three is the magic number: Kafka leader and 2 followers
Multi-data center architectures
Extensive testing is necessary
Kafka message volumes
Kafka Connect sink connectors
Kafka Streams vs ksqlDB for DevOps
It's a wrap!