Kafka and RabbitMQ blog posts I wrote elsewhere in 2019

Since I started working at companies that run Messaging-as-a-service (84codes) or actually build the messaging systems themselves (VMware, Splunk), I have been writing blog posts, but not on my own blog. I don’t want the confusion of double posting, so I’m just going to start posting links to this content on my blog and perhaps add some commentary. So here goes for 2019:

Why I'm Not Writing Much On My Blog These Days

Firstly, I joined the RabbitMQ core team, which is a demanding job that takes most of my energy. Secondly, I pretty much only blog about RabbitMQ now, and those posts go on the RabbitMQ blog. So if you are interested in my writing about RabbitMQ, then please head over to our blog.

I also have posts I’d like to write about Apache Pulsar, Apache Kafka, Pravega, Redis and NATS. But I don’t have much time, and while I think I would be impartial, I wouldn’t expect others to think so. I have skin in the game now.

But I still spend time understanding how other systems work and how they are positioned in the market. Knowing how the industry evolves and what customers expect helps us evolve RabbitMQ while keeping it “rabbity”. RabbitMQ will always aim to be a general-purpose message broker, not a data platform nor a big data complex event processing system. But just like object-oriented languages have benefited from incorporating some functional language paradigms, RabbitMQ can benefit from incorporating aspects of other messaging paradigms - but without losing its soul or the reasons why users already love it.

Back to writing… blog posts can be a bit like benchmarks: if it’s one vendor vs another then your scepticism level should go through the roof, probably into orbit. Not only might it be an apples to oranges comparison, but a biased one. Likewise if I am writing about why I don’t like some aspect of another messaging system, is that biased or is it an impartial analysis? So I’ll stick to RabbitMQ for now.

If you like my writing about RabbitMQ, I will be posting at least monthly on the RabbitMQ blog about things that I find interesting and that I think will be valuable to the community. Feel free to suggest subjects to me that you’d like me to cover.

A Look at Multi-Topic Subscriptions with Apache Pulsar

This is a sister post to one I am writing about multi-topic subscriptions with Apache Kafka, which you will soon be able to read on the CloudKarafka blog (link coming soon). I will provide a summary of those results before we get started with Apache Pulsar. I run the same tests against both technologies.

The objective is to get an understanding of what to expect from multi-topic subscriptions; specifically, we are testing message ordering. Message ordering is a fundamental component of messaging systems, and even though cross-topic ordering is not guaranteed by Pulsar or Kafka, I find it interesting and useful to know what to expect.
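If you haven’t used multi-topic subscriptions before, here is a minimal sketch of what one looks like with the Pulsar Java client. The service URL, topic names and subscription name below are just placeholders; the actual test setup is described in the post itself.

```java
import java.util.Arrays;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class MultiTopicConsumerSketch {
    public static void main(String[] args) throws Exception {
        // Connect to a local Pulsar broker (placeholder service URL).
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // A single consumer subscribed to multiple topics at once.
        Consumer<byte[]> consumer = client.newConsumer()
                .topics(Arrays.asList("orders-topic-1", "orders-topic-2", "orders-topic-3"))
                .subscriptionName("multi-topic-sub")
                .subscriptionType(SubscriptionType.Exclusive)
                .subscribe();

        // Messages from the different topics arrive interleaved; ordering is only
        // guaranteed within a topic (or partition), not across topics.
        for (int i = 0; i < 100; i++) {
            Message<byte[]> msg = consumer.receive();
            System.out.printf("topic=%s key=%s%n", msg.getTopicName(), msg.getKey());
            consumer.acknowledge(msg);
        }

        consumer.close();
        client.close();
    }
}
```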

Building A "Simple" Distributed System - It's the Logs Stupid

In previous posts we covered designing the protocol and verifying it with TLA+. Then we designed the implementation with Apache ZooKeeper. In this post we’ll look at a very important prerequisite for testing and release to production - good logging. The links to the rest of the series are at the bottom of this post.

It’s the Logs Stupid

Without good logging, you’re in for a world of pain and wasted hours trying to figure out why something failed. Forget the debugger, put it to one side and embrace logging as part of the development and test process. The logs will be the way in which you can identify what was going on in the environment and in each node at the time of failure. Your code will fail, over and over again, in new and surprising ways until finally towards the end of the development process you start to see it cope with everything you throw at it. We’ll be throwing a lot of nasty behaviour at the code and it will need to handle it.
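To make that concrete, here is a hypothetical sketch of tagging every log line with the node’s identity so that the per-node logs can be correlated after a failed test run. This is not the library’s actual logging; SLF4J and all the names below are just assumptions for illustration.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class NodeLoggingSketch {

    private static final Logger log = LoggerFactory.getLogger(NodeLoggingSketch.class);

    public static void main(String[] args) {
        // Put the node's identity (and any other run-wide context) into the MDC so
        // that every log line can be attributed to a specific node when comparing
        // the logs of all nodes after a failure. The values here are made up.
        MDC.put("nodeId", "node-3");
        MDC.put("runId", "chaos-run-42");

        log.info("Session established with the registry");
        log.warn("Session expired, stopping access to all allocated resources");
        log.info("Rejoined the group, waiting for a new resource allocation");

        MDC.clear();
    }
}
```

The MDC values only appear in the output if the logging pattern includes them, for example %X{nodeId} in a Logback pattern.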

Building A "Simple" Distributed System - The Implementation

In previous posts we’ve identified the requirements for our distributed resource allocation library, including one invariant (No Double Resource Access) that must hold 100% of the time, no matter what, and another (All Resources Evenly Accessed) that must hold after a successful rebalancing of resources. We documented a protocol that describes how nodes interact with a central registry to achieve the requirements, including how they deal with all the failure scenarios we conceived of. Then we built a TLA+ specification and used the model checker to verify the designed protocol, identifying a defect in the process.

In this post we’ll tackle the implementation and in the next we’ll look at testing.

Building A "Simple" Distributed System - Formal Verification

In the last post, we described a protocol that should satisfy the requirements and invariants established in the first post. Today we will look at formal verification with TLA+.

Formal verification is just another (niche) tool in the toolbox. Some tools require more skill than others to use. Some tools are more expensive than others. It is up to the practitioner to decide if/when/how to use them.

The hard part is that you won't necessarily know whether it is beneficial for a given problem you face if you aren't already skilled in it. If a tool is very difficult to learn, then you might never invest enough in it to be able to make that call. Or you might invest a lot of time in it, only to find it isn't a great match for your problem. At which point it gets stowed in your toolbox, where it may or may not get used again. I expect many software engineers see learning formal methods as a difficult (it is) and high-risk venture.

So, given the above, my aim for this post is for software engineers without prior experience of TLA+ to be able to get the gist of the spec and see why it was useful for this project. Please give me feedback on whether or not I succeeded.

Building A "Simple" Distributed System - The Protocol

In the last post we covered what our distributed resource allocation library, Rebalanser, should do. In this post we’ll look at a protocol that could achieve those requirements, always respecting our invariants (described in the last post).

A protocol is basically a set of rules that govern how each node in a Rebalanser group acts in order to achieve the desired behaviours. Each node must communicate with the others in such a way that the group can reach consensus on the resource allocations, and each node must also guarantee that it does not start accessing a resource until the node currently accessing it has stopped.
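The post itself walks through the real protocol in detail. Purely as a flavour of that last rule, here is a toy sketch (not the Rebalanser protocol) of a node refusing to access a resource until the previous holder’s ephemeral “barrier” node in ZooKeeper has gone away, ZooKeeper being the registry used later in the series. All the names and paths below are made up for illustration.

```java
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ResourceBarrierSketch {

    public static void main(String[] args) throws Exception {
        // Connect to a local ZooKeeper (placeholder address) and wait for the session.
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // Root-level znode used as a barrier for one resource (made-up name).
        String barrier = "/demo-resource-1-barrier";

        while (true) {
            try {
                // The barrier is an ephemeral node: it vanishes when the holder
                // releases it or when the holder's session dies.
                zk.create(barrier, "node-3".getBytes(),
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                System.out.println("Barrier acquired, safe to start accessing the resource");
                break;
            } catch (KeeperException.NodeExistsException e) {
                // Another node still holds the resource; wait until its barrier node
                // is deleted, then try again.
                CountDownLatch released = new CountDownLatch(1);
                boolean stillHeld = zk.exists(barrier, event -> {
                    if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                        released.countDown();
                    }
                }) != null;
                if (stillHeld) {
                    released.await();
                }
            }
        }

        zk.close();
    }
}
```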

Building a "Simple" Distributed System - The What

This is a blog series where I share my approach and experience of building a distributed resource allocation library. As far as distributed systems go, it is a simple one and ideal as a tool for learning about distributed systems design, programming and testing.

The field of distributed systems is large, encompassing a myriad of academic work, algorithms, consistency models, data types, testing tools/techniques, formal verification tools and more. I will be covering just the theory, tools and techniques that were relevant for my little project.