Jack Vanlightly

Incremental Jobs and Data Quality Are On a Collision Course - Part 2 - The Way Forward

So what should we do instead?

This is less a technology problem than a structural one. We can’t just add some missing features to data tooling; it’s about solving a people problem: how we organize, how team incentives line up, and how we apply well-established software engineering principles that have yet to be realized in the data analytics space.

Incremental Jobs and Data Quality Are On a Collision Course - Part 1 - The Problem

Big data isn’t dead; it’s just going incremental

If you keep an eye on the data ecosystem like I do, then you’ll be aware of the rise of DuckDB and its message that big data is dead. The idea comes from two industry papers and their associated datasets, one from the Redshift team and one from Snowflake. Each paper analyzed the queries run on its platform, and some surprising conclusions were drawn, one being that most queries run over quite small data. DuckDB’s conclusion was that big data is dead and you can use simpler query engines rather than a data warehouse. It’s far more nuanced than that, but the data does show that most queries run over smaller datasets.

Why?

Forget the table format war; it’s open vs closed that counts

Apache Iceberg is a hot topic right now and looks to be the future standard for representing tables in object storage. Hive tables are overdue for a replacement. People talk about table format wars: Apache Iceberg vs Delta Lake vs Apache Hudi and so on, but the “war” at the forefront of my mind isn’t which table format will become dominant; it’s the battle between open and closed: open table formats vs walled gardens.

The teacher's nemesis

A few months ago I wrote Learning and Reviewing System Internals - Tactics and Psychology. One thing I touched on was the need to build a mental model in order to grok a codebase or learn how a complex system works. The mental model gets developed piece by piece, using layers of abstraction.

Today I am also writing about mental models and abstractions, but from the perspective of team and project leaders and their role in onboarding new members. In this context, the team lead and senior engineers are teachers, and how effective they are has a material impact on the success of the team. However, there are real challenges, and leaders can fail at this without being aware of it, with potentially poor outcomes if left unaddressed.

The curse of Conway and the data space

Conway’s Law:

"Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure."

This is playing out worldwide across hundreds of thousands of organizations, and nowhere is it more evident than in the split between software development and data analytics teams. These two groups usually have different reporting structures, right up to, or immediately below, the executive team.

This is a problem now and is only growing.

Table format interoperability, future or fantasy?

In the world of open table formats (Apache Iceberg, Delta Lake, Apache Hudi, Apache Paimon, etc.), an emerging trend is to provide interoperability between formats by cross-publishing metadata. It allows a table to be written in format X but read in format Y or Z; a brief sketch follows the list below.

Cross-publishing is the idea of a table having:

  • A primary table format that you write to.

  • Equivalent metadata files of one or more secondary formats that allow the table to be read as if it were of that secondary format. 
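
One concrete incarnation of this idea is Delta Lake’s UniForm feature, where Delta is the primary format and Iceberg metadata is cross-published alongside it. The following is a minimal PySpark sketch, assuming a Spark session with Delta Lake 3.x configured and a hypothetical “orders” table; the exact table property names can vary between versions, so treat it as illustrative rather than definitive.

    # Minimal sketch: cross-publishing via Delta Lake UniForm.
    # Assumes Spark with the Delta Lake 3.x extensions on the classpath;
    # property names may differ between Delta versions.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("cross-publish-sketch")
        .config("spark.sql.extensions",
                "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    # Primary format: Delta. UniForm asks Delta to also maintain Iceberg
    # metadata so that Iceberg readers can query the same data files.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS orders (id BIGINT, amount DOUBLE)
        USING DELTA
        TBLPROPERTIES (
          'delta.enableIcebergCompatV2' = 'true',
          'delta.universalFormat.enabledFormats' = 'iceberg'
        )
    """)

    # Writes go through the primary (Delta) format as usual...
    spark.sql("INSERT INTO orders VALUES (1, 9.99), (2, 19.99)")
    # ...while an Iceberg-aware engine reads the cross-published metadata.

The data files are written once by the primary format; only the secondary format’s metadata is generated in addition, which is what allows the same table to be read as if it were of the secondary format.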

Table format comparisons - Change queries and CDC

This post, and its associated deep dives, will look at how changes made to an Iceberg/Delta/Hudi/Paimon table can be emitted as a stream of changes. In the context of the table formats, this is not a continuous stream but the capability to consume changes incrementally by performing periodic change queries.

These change queries can return full Change Data Capture (CDC) data or just the latest data written to the table. When people think of CDC, they might initially think of tools such as Debezium that read the transaction logs of OLTP databases and write a stream of change events to something like Apache Kafka. From there the events might get written to a data lakehouse. But the lakehouse table formats themselves can also generate a stream of change events that can be consumed incrementally. That is what this post is about.
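
As a minimal illustration of a periodic change query, here is a hedged PySpark sketch against a Delta table with the change data feed enabled; the “orders” table name and the checkpointing are hypothetical, and other formats (Iceberg, Hudi, Paimon) expose their own incremental read options.

    # Minimal sketch of a periodic change query, assuming a Delta table named
    # "orders" created with 'delta.enableChangeDataFeed' = 'true' and a Spark
    # session already configured for Delta Lake. The table name and checkpoint
    # handling are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("change-query-sketch").getOrCreate()

    # In a real job the last processed version would be persisted between runs
    # (e.g. in a small state table); a literal is used here for illustration.
    last_processed_version = 10

    changes = (
        spark.read.format("delta")
        .option("readChangeFeed", "true")
        .option("startingVersion", last_processed_version + 1)
        .table("orders")
    )

    # Each row carries _change_type (insert / update_preimage /
    # update_postimage / delete), _commit_version and _commit_timestamp
    # alongside the table's own columns.
    changes.show()

Running a query like this on a schedule, advancing the stored version after each run, gives the incremental consumption pattern described above.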
