
Building Synkronizr - A SQL Server Data Synchronizer Tool - Part 1

Origins of Synkronizr

In a recent post I described a method I used at work for synchronizing a SQL Server slave with a master. Because the master is hosted by a partner, and that partner does not offer any replication, mirroring or log shipping, I opted for a technique loosely based on how some distributed NoSQL databases do it: the generation and comparison of hash trees (Merkle trees).

The data set is large but not Big Data sized: in the hundreds of gigabytes range, big enough that I couldn't simply extract all the data from both databases and compare it. The hash tree method means that only a few kilobytes of data are sent over the network during the data comparison phase. Anyway, it's best you read the post that describes the general strategy.
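As a taster of that comparison phase, here is a minimal sketch in T-SQL. It assumes the per-block hashes from each side have already been gathered into two hypothetical tables, MasterBlockHashes and SlaveBlockHashes (these names and columns are my illustration, not part of the original strategy); only the blocks this query returns would need any row-level attention.

-- Hypothetical tables holding the per-block hashes computed on each side.
-- Only block numbers and hashes ever cross the network, not row data.
SELECT COALESCE(m.BlockNo, s.BlockNo) AS BlockNo
FROM dbo.MasterBlockHashes m
FULL OUTER JOIN dbo.SlaveBlockHashes s ON m.BlockNo = s.BlockNo
WHERE m.BlockNo IS NULL            -- block exists only on the slave
	OR s.BlockNo IS NULL           -- block exists only on the master
	OR m.BlockHash <> s.BlockHash; -- block contents differ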

Synkronizr

I've decided it would be a fun exercise to build a tool that is able to show you the differences in data between two tables or two databases and then perform a synchronization if requested. I have created an empty repo on GitHub called Synkronizr.

The synchronization process that I built at work is pretty generic but does have some custom logic related to the data stored in the database that gets replicated. Over the next few weeks I will be porting that code and making it totally generic. The first order of the day is to build a generic block partitioner.

Partitioning a Table into Consistent Blocks

We need to be able to partition a table into a number of blocks so that we can hash each one. We need the following characteristics from our blocks:

  1. The blocks are more or less evenly sized

  2. The block metadata used to identify the bounds of each block is sufficient to guarantee that each block is isolated (no overlap)

  3. The block metadata used to identify the bounds of each block is consistent between the two data sources. For example, if we have a Person table, the record for Bob cannot be in Block 1 in one data source and in Block 2 in the other.

Let's forget heaps (tables without a primary key) for now and see how we could partition a table with a primary key. I will use the example of an imaginary TripLeg table, which contains data about the legs of trips. For example, a trip to Paris might have various legs: a bus to the ferry port, the ferry crossing and then a train to Paris.

For this example, let's say we have 3 million rows and we want 300 blocks, so we'll go for a block size of 10,000. We can generate our 300 blocks and the primary key values at the bounds of each block. To make the example more interesting, the TripLeg primary key is composed of three columns.

We'll go for a two stage tactic:

  1. Use ROW_NUMBER to assign a number to each row, ordered by the primary key

  2. Partition that data set into the 300 blocks and get the primary key values of the bounds of each block


DECLARE @BlockSize int = 10000;

WITH RowNumbers AS (
	SELECT ROW_NUMBER() OVER (ORDER BY TripId, LegId, PassengerId) AS RowNo,
		TripId,
		LegId,
		PassengerId
	FROM dbo.TripLeg
),
Blocks AS (
	-- RowNo starts at 1, so shift it down by one before the integer
	-- division to get exactly @BlockSize rows per block
	SELECT (RowNo - 1) / @BlockSize AS BlockNo,
		COUNT(*) AS BlockSize,
		MIN(RowNo) AS StartRowNo,
		MAX(RowNo) AS EndRowNo
	FROM RowNumbers
	GROUP BY (RowNo - 1) / @BlockSize
)
-- Join back to the numbered rows so that each bound is the full composite
-- key of a real row; taking MIN/MAX of each key column independently would
-- mix values from different rows and break the block bounds
SELECT b.BlockNo,
	b.BlockSize,
	s.TripId AS StartPk1Val,
	s.LegId AS StartPk2Val,
	s.PassengerId AS StartPk3Val,
	e.TripId AS EndPk1Val,
	e.LegId AS EndPk2Val,
	e.PassengerId AS EndPk3Val
FROM Blocks b
JOIN RowNumbers s ON s.RowNo = b.StartRowNo
JOIN RowNumbers e ON e.RowNo = b.EndRowNo
ORDER BY b.BlockNo;

This will give 300 rows, one per block. The screenshot below shows the last 20 or so of those blocks.

We now have the blocks and the bounds we can use to generate our hash trees. The number of blocks will depend on the size of the table, but the above approach will work for any table or block size.
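As a rough sketch of that hashing step (my illustration, not the final Synkronizr implementation), here is one way a single block could be hashed given its bounds. The bound values below are made up, and CHECKSUM_AGG over BINARY_CHECKSUM is only a cheap stand-in; a real implementation would want a stronger hash, such as HASHBYTES computed per row and combined.

-- Made-up bounds for one block, as produced by the Blocks query above
DECLARE @StartPk1Val int = 9001, @StartPk2Val int = 1, @StartPk3Val int = 1;
DECLARE @EndPk1Val int = 9850, @EndPk2Val int = 4, @EndPk3Val int = 2;

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS BlockHash
FROM dbo.TripLeg
-- T-SQL has no row-value comparisons, so the composite key range
-- (Start <= key <= End) has to be expanded column by column
WHERE (TripId > @StartPk1Val
		OR (TripId = @StartPk1Val AND LegId > @StartPk2Val)
		OR (TripId = @StartPk1Val AND LegId = @StartPk2Val AND PassengerId >= @StartPk3Val))
	AND (TripId < @EndPk1Val
		OR (TripId = @EndPk1Val AND LegId < @EndPk2Val)
		OR (TripId = @EndPk1Val AND LegId = @EndPk2Val AND PassengerId <= @EndPk3Val));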

I'll have to think about how to achieve the same with heaps, given that we only have the RID, which is not consistent: for the same row the RID will be different in each database and so cannot be used for our purposes.

Now that we have a generic block partitioning method I'll need to start porting the rest of my original code to Synkronizr.

NOTE: I have not yet started work on this. It is on my todo list, but I am busy with Docker and ASP.NET Core work. I do intend to port the code I wrote for a client to this open source project. Watch this space... patiently.

NEW NOTE: It would be fun to work on this but I have no current plans. So many projects, so little time.