NServiceBus has excellent features and, while not free, can lower the total cost of ownership if you have a large messaging-based platform. In this first part of the series on NServiceBus and the RabbitMqTransport, we'll look at the default RabbitMq topologies generated by NServiceBus. All source code is on GitHub.
How to Deal with Unroutable Messages - RabbitMq Publishing Part 3
In Part 2 we saw how we can detect that a message was unroutable; in this part we'll look at how you can deal with that situation.
RabbitMq offers us the Alternate Exchange for this purpose. When we declare an exchange we can specify the name of an alternate exchange to which unroutable messages will be forwarded. We just need to make sure that we bind a queue that accepts all messages to that exchange.
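To make the idea concrete, here is a minimal sketch using the RabbitMQ .NET client (not the series' sample code; the exchange and queue names are just placeholders):

```csharp
using System.Collections.Generic;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    // A fanout exchange plus a catch-all queue that will receive anything unroutable
    channel.ExchangeDeclare("unrouted", ExchangeType.Fanout, durable: true);
    channel.QueueDeclare("unrouted.messages", durable: true, exclusive: false, autoDelete: false);
    channel.QueueBind("unrouted.messages", "unrouted", routingKey: "");

    // The main exchange names "unrouted" as its alternate exchange
    var args = new Dictionary<string, object> { { "alternate-exchange", "unrouted" } };
    channel.ExchangeDeclare("orders", ExchangeType.Topic, durable: true, autoDelete: false, arguments: args);
}
```

Any message published to "orders" whose routing key matches no binding now ends up in "unrouted.messages" instead of being silently dropped.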
Sending Messages in Bulk and Tracking Delivery Status - RabbitMq Publishing Part 2
This is a console application that will create an exchange and queue for you, and allow you to send messages in bulk with message delivery status tracking.
First we'll look at a design decision you will likely encounter when performing reliable bulk send operations.
Performance/Duplication Trade-Off
When sending messages in a bulk operation you want both decent performance and best-effort reliability - two conflicting concerns. When I say reliability I mean that you need every message to be delivered, while ideally avoiding message duplication.
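As a rough illustration of how delivery status can be tracked during a bulk send, here is a sketch using publisher confirms with the RabbitMQ .NET client (the exchange and routing key names are made up; this is not the application's actual code):

```csharp
using System;
using System.Collections.Concurrent;
using System.Text;
using RabbitMQ.Client;

var messages = new[] { "message 1", "message 2", "message 3" };
var outstanding = new ConcurrentDictionary<ulong, string>(); // publish sequence number -> message

var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    channel.ConfirmSelect(); // enable publisher confirms on this channel

    // Acks mean the broker has taken responsibility for the message
    channel.BasicAcks += (sender, ea) => RemoveUpTo(ea.DeliveryTag, ea.Multiple);
    // Nacks mean the broker could not accept it - these are candidates for a republish
    channel.BasicNacks += (sender, ea) => RemoveUpTo(ea.DeliveryTag, ea.Multiple);

    foreach (var message in messages)
    {
        outstanding[channel.NextPublishSeqNo] = message;
        channel.BasicPublish("orders", "order.created", null, Encoding.UTF8.GetBytes(message));
    }

    // Block until everything published so far has been confirmed or nacked
    channel.WaitForConfirmsOrDie(TimeSpan.FromSeconds(30));
}

void RemoveUpTo(ulong deliveryTag, bool multiple)
{
    if (!multiple)
    {
        outstanding.TryRemove(deliveryTag, out _);
        return;
    }
    // multiple == true confirms every message up to and including deliveryTag
    foreach (var tag in outstanding.Keys)
        if (tag <= deliveryTag) outstanding.TryRemove(tag, out _);
}
```

The trade-off shows up in how you wait: confirming after every message is safest but slow, while confirming once per batch is much faster but, on a failure, leaves a window of messages whose status is unknown and that must be republished - which is exactly where duplicates come from.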
Types of Publishing Failures - RabbitMq Publishing Part 1
In this first part of the series we'll just go over the different failure scenarios when publishing messages and how they can be detected. In the following parts we'll look at example code for tracking message delivery status when performing bulk send operations and single message send operations. We'll also take a look at performance and message duplication.
There are many scenarios where things can go wrong when publishing messages to RabbitMq - the connection can fail, the exchange might not exist, no queue may be bound, a queue might be full, an Erlang process could crash, etc. This post goes through the various scenarios and whether, and how, you can detect them.
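To give a flavour of what detection looks like, here is a small sketch (illustrative names, not the post's code) that publishes with the mandatory flag so the broker hands back unroutable messages, and listens for the channel being shut down, which is what happens when you publish to an exchange that doesn't exist:

```csharp
using System;
using System.Text;
using System.Threading;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    // Fires when a message published with mandatory=true could not be routed to any queue
    channel.BasicReturn += (sender, ea) =>
        Console.WriteLine($"Returned: {ea.ReplyCode} {ea.ReplyText}, routing key '{ea.RoutingKey}'");

    // Fires when the broker closes the channel, e.g. a 404 for a non-existent exchange
    channel.ModelShutdown += (sender, ea) =>
        Console.WriteLine($"Channel shut down: {ea.ReplyCode} {ea.ReplyText}");

    channel.BasicPublish(exchange: "orders",
                         routingKey: "a.key.with.no.matching.binding",
                         mandatory: true,
                         basicProperties: null,
                         body: Encoding.UTF8.GetBytes("hello"));

    Thread.Sleep(1000); // both notifications arrive asynchronously
}
```

Not every failure is this visible: a full queue or a crashed node typically only surfaces through publisher confirms or connection-level errors.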
Have you got chronic eye pain like me?
I have suffered chronic eye pain for about 12 years and to this day I have no diagnosis. I'm writing this post in case there are other people out there who have had a similar experience and who might want to get in contact. I have been pretty lazy for the last two years and have given up on my hunt for answers, at least for now.
So if you or someone you know has had constant chronic eye pain for years, and no neurologist, ophthalmologist, pain specialist or optometrist has been able to help, then read on.
new SqlConnection - The requested Performance Counter is not a custom counter
Building Synkronizr - A SQL Server Data Synchronizer Tool - Part 1
Origins of Synkronizr
In a recent post I described a method I used at work for synchronizing a SQL Server slave with a master. Because the master is hosted by a partner and that partner does not offer any replication, mirroring or log shipping, I opted for a replication technique loosely based on how some distributed NoSQL databases do it - the generation and comparison of hash trees (Merkle Trees).
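As an illustration of the idea (a toy sketch, not Synkronizr's code): each block of rows gets hashed and the block hashes are folded up into a single root; matching roots mean the tables are in sync, and a mismatch is narrowed down level by level to the divergent blocks.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Example input: each "block" stands in for the concatenated key/checksum columns of a row range
var blocks = new[] { "rows 1-1000 ...", "rows 1001-2000 ...", "rows 2001-3000 ..." };
var leaves = blocks.Select(b => Hash(Encoding.UTF8.GetBytes(b))).ToList();
Console.WriteLine(BitConverter.ToString(BuildRoot(leaves)));

static byte[] Hash(byte[] data)
{
    using (var sha = SHA256.Create())
        return sha.ComputeHash(data);
}

static byte[] BuildRoot(List<byte[]> level)
{
    // Pair up hashes and hash each pair until a single root hash remains
    while (level.Count > 1)
    {
        var next = new List<byte[]>();
        for (int i = 0; i < level.Count; i += 2)
        {
            var right = i + 1 < level.Count ? level[i + 1] : level[i];
            next.Add(Hash(level[i].Concat(right).ToArray()));
        }
        level = next;
    }
    return level[0];
}
```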
Generating SQL from a Data Structure
How to Kill a Keep Alive with a Weak Reference (C#)
Taskling.NET uses a keep alive, or heartbeat, to signal that it is still running. This is useful because, when running batch jobs in unstable hosts like IIS, the process can be killed off with a ThreadAbortException and the job isn't always able to log its demise. With a keep alive we know that the job really died if a few minutes pass without one while the status of the job is still "In Progress".
But one problem remains: how do you reliably kill a keep alive?
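A rough sketch of the approach (illustrative only, not Taskling's actual implementation): the keep-alive timer holds nothing but a WeakReference to the job, so once the host tears the job down and the garbage collector reclaims it, the keep alive stops sending signals on its own.

```csharp
using System;
using System.Threading;

public class KeepAlive
{
    private readonly WeakReference _owner;
    private readonly Action _sendKeepAlive;
    private readonly Timer _timer;

    public KeepAlive(object owner, Action sendKeepAlive, TimeSpan interval)
    {
        // Only a weak reference - the keep alive must not keep the job alive itself
        _owner = new WeakReference(owner);
        _sendKeepAlive = sendKeepAlive;
        _timer = new Timer(OnTick, null, interval, interval);
    }

    private void OnTick(object state)
    {
        if (_owner.IsAlive)
        {
            // e.g. update a LastKeepAlive timestamp against the task execution
            _sendKeepAlive();
        }
        else
        {
            // The job has been garbage collected (for example after a ThreadAbortException),
            // so stop claiming that it is still running
            _timer.Dispose();
        }
    }
}
```

The one gotcha is that the sendKeepAlive delegate must not capture the job instance itself, otherwise the closure keeps the job reachable and the weak reference never dies.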
How Row Locking Makes Taskling Concurrency Controls Possible
Your Taskling jobs can be configured with concurrency limits, and a job will never have more than the configured number of executions running at any time.
Some batch and micro-batch jobs need to be singletons, with only one execution running at any point in time. This may be to avoid data consistency issues when persisting results, or because only a single session can be opened to a third party service, and so on. Other batch processes need more than one execution running at the same time in order to cope with the data volume, but have a concurrency limit so as not to overwhelm downstream systems or third party services.
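To give a flavour of the row locking involved (a simplified sketch with made-up table and column names, not Taskling's real schema): every execution that wants to start takes an update lock on the same task row, so the check-then-increment of the execution count cannot race.

```csharp
using System;
using System.Data.SqlClient;

public class ConcurrencyTokenStore
{
    public bool TryClaimExecutionSlot(string connectionString, string taskName)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                var select = connection.CreateCommand();
                select.Transaction = transaction;
                // ROWLOCK + UPDLOCK: hold an update lock on this row until the transaction ends,
                // blocking any other execution that tries to claim a slot for the same task
                select.CommandText = @"SELECT ConcurrencyLimit, ActiveExecutions
                                       FROM TaskDefinition WITH (ROWLOCK, UPDLOCK)
                                       WHERE TaskName = @TaskName";
                select.Parameters.AddWithValue("@TaskName", taskName);

                int limit, active;
                using (var reader = select.ExecuteReader())
                {
                    if (!reader.Read()) { transaction.Rollback(); return false; }
                    limit = reader.GetInt32(0);
                    active = reader.GetInt32(1);
                }

                if (active >= limit)
                {
                    transaction.Rollback();
                    return false; // the concurrency limit has been reached
                }

                var update = connection.CreateCommand();
                update.Transaction = transaction;
                update.CommandText = @"UPDATE TaskDefinition
                                       SET ActiveExecutions = ActiveExecutions + 1
                                       WHERE TaskName = @TaskName";
                update.Parameters.AddWithValue("@TaskName", taskName);
                update.ExecuteNonQuery();

                transaction.Commit(); // releases the row lock
                return true;
            }
        }
    }
}
```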