Greenplum Going to Hybrid Transactional Analytical Processing


I ran across this research paper from the folks at VMware where they propose Greenplum as a hybrid database for transactional and analytical workloads (HTAP). I had no idea that Greenplum was going in that direction (granted, I don’t follow Greenplum that closely). What’s more interesting to me is HTAP possibly becoming a thing and the implications for the industry at large.

A little bit about HTAP

Hybrid Transactional/Analytical Processing is a phrase coined by Gartner back in 2014. In the research abstract, the authors claim that by being able to do transactional and analytical processing in the same place, businesses will be able to produce insights, and more robust products, faster. A more recent Gartner report suggests that traditional architectures, meaning the Extract-Transform-Load workflows used to transfer data from transactional databases to analytical databases, result in businesses missing out on critical moments when serving their own customers. Gartner even came up with the phrase analytical latency to describe the very slow time-to-insight delivered by these traditional architectures.

I get it. The argument for HTAP databases holds an allure for business leaders: all their data doesn’t have to move around, and they can get rid of costly and brittle ETL workflows to speed up decision making and product development. This sounds like a panacea: reduce your infrastructure complexity and get more value out of it at the same time!

Let’s talk Greenplum & HTAP

If you’re not familiar with Greenplum, it’s a Massively Parallel Processing (MPP) style database. Its sweet spot has typically been analytical workloads (OnLine Analytical Processing). In MPP-style systems, the data is distributed amongst a number of worker machines within a cluster, with a main node (also known as the coordinator) overseeing the entire cluster. The job of the main node is to build the query plan and send the instructions down to the workers. Typically in these types of databases, the data is stored in a columnar format. The columnar format allows the database to skip entire swaths of information, thereby increasing the speed at which it can execute your query. AWS Redshift and Google BigQuery are similar systems.
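
To make the storage model concrete, here’s a minimal sketch in Greenplum syntax. The table and column names are mine, and the appendonly/orientation storage options are the ones I believe Greenplum uses for column-oriented tables, so treat the details as an assumption:

    -- A column-oriented, hash-distributed table: rows are hashed on sale_id
    -- and spread across the worker segments.
    CREATE TABLE sales (
        sale_id bigint,
        sold_at date,
        amount  numeric(10,2)
    )
    WITH (appendonly=true, orientation=column)
    DISTRIBUTED BY (sale_id);

    -- An aggregate like this only needs to read the sold_at and amount column
    -- files; the other columns can be skipped entirely.
    SELECT sold_at, sum(amount) FROM sales GROUP BY sold_at;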

Unbeknownst to me, after a little bit of digging (basically searching for Greenplum and HTAP), I found that Greenplum has been touting support for transactional data since November 2019! The authors are taking a bold approach, asserting that they can bring OLTP capabilities to an OLAP system, when most of the industry has come from the other direction: adding OLAP capabilities to an OLTP system.

How Greenplum plans to get there

  1. Getting rid of the two-phase commit

    Well… not entirely. Since Greenplum is set up with a coordinator and worker nodes, the proposal is that if all of the data being updated happens to reside entirely on a single worker, then that transaction is a candidate for a one-phase commit. The distribution key is what determines which type of commit gets used.

    The example in the paper illustrates this nicely:

    CREATE TABLE t (c1 int, c2 int) DISTRIBUTED BY (c1);
    INSERT INTO t (c1, c2) SELECT 1, generate_series(1,10);
    

    Because the value of c1 is the same throughout the entire statement (i.e. 1), every affected row lands on the same worker, so the one-phase commit would be used.

    Going to a one-phase commit reduces the number of network round trips between the coordinator and the worker, thereby increasing the speed of the transaction.
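
    For contrast, here’s a quick sketch of my own (not from the paper): if the distribution key varies within the statement, the rows land on multiple workers and the two-phase commit still applies.

    -- c1 now takes the values 1 through 10, so the rows hash to several
    -- segments and the coordinator must fall back to a two-phase commit.
    INSERT INTO t (c1, c2) SELECT generate_series(1,10), 1;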

  2. Detect global deadlocks

    Given Greenplum’s distributed nature (coordinator and workers), deadlocks across all of the workers can become more of an issue, since data may need to be mutated across different workers. The proposal in the paper has the coordinator spawning an extra process that monitors wait-for graphs across all of the workers, checks whether a global deadlock has occurred, and then breaks the deadlock. The exact algorithm used can be found in Algorithm 1 in the paper.
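
    To make that concrete, here’s a sketch of my own (not taken from the paper) of two sessions deadlocking across workers, assuming rows with c1 = 1 and c1 = 2 live on different segments:

    -- Session A:
    BEGIN;
    UPDATE t SET c2 = c2 + 1 WHERE c1 = 1;  -- locks rows on the segment holding c1 = 1

    -- Session B:
    BEGIN;
    UPDATE t SET c2 = c2 + 1 WHERE c1 = 2;  -- locks rows on the segment holding c1 = 2

    -- Session A:
    UPDATE t SET c2 = c2 + 1 WHERE c1 = 2;  -- blocks, waiting on Session B

    -- Session B:
    UPDATE t SET c2 = c2 + 1 WHERE c1 = 1;  -- blocks, waiting on Session A; each segment's
                                            -- local detector sees only half of the cycle,
                                            -- so only the global detector can break it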

  3. Provide resource isolation

    To me, this is probably one of the biggest areas of concern. Analytical systems tend to have very long-running queries that can consume large amounts of resources. Imagine a greedy query that starts to impact transactional processing! Interestingly, the approach the team has taken is to leverage cgroups for CPU isolation and a custom implementation, Vmemtracker, for memory isolation. Disk and network IO weren’t considerations, as the team tied those concerns to CPU usage (/shrug). The mechanics of making sure queries stay within their resource limits is to tie resource groups to roles in the database, like so:

    CREATE RESOURCE GROUP olap_group WITH (CONCURRENCY=10, MEMORY_LIMIT=35, MEMORY_SHARED_QUOTA=20, CPU_RATE_LIMIT=20);
    CREATE RESOURCE GROUP oltp_group WITH (CONCURRENCY=50, MEMORY_LIMIT=15,  MEMORY_SHARED_QUOTA=20, CPU_RATE_LIMIT=60);
    
    CREATE ROLE dev1 RESOURCE GROUP olap_group;
    ALTER ROLE dev1 RESOURCE GROUP oltp_group;
    
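    If I’m reading the Greenplum docs right, you can then check how the groups are configured via a gp_toolkit view; the view and column names below are from memory, so treat them as an assumption:

    -- Assumed view/column names; double-check against your Greenplum version's docs.
    SELECT groupname, concurrency, cpu_rate_limit, memory_limit
    FROM gp_toolkit.gp_resgroup_config;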

At the end of the paper, the authors do acknowledge that they’re in for a long road ahead:

We acknowledge that this work is just the first step and more work is needed to make a system like Greenplum, designed for long-running analytical queries, to achieve performance comparable to dedicated OLTP databases, such as PostgreSQL.

Overall, the changes they’re proposing seem reasonable, and they might make Greenplum able to handle both transactional and analytical workloads.

Final Thoughts

When taking a step back and thinking about what HTAP represents, I can appreciate the push, from a business perspective, to make one database rule them all. It means applications and tooling would be built to support everything that happens in that single database. But I think there are many more problems to solve in the respective domains, analytical and transactional, before trying to bring both worlds together. For example, Vitess and CockroachDB are a class of transactional systems tackling the very real problems that show up when an application is used all over the world. That’s a hard problem to solve, and it’s way more valuable than trying to bring two different types of workloads together in the same place.

Maybe the time will come when being able to do OLAP and OLTP in the same place makes a lot of sense. Until then, I prefer to see databases purpose-built for their specific workload, with something like change data capture being leveraged to move data from one place to the next.