Will the Real "Modern Data Stack" Please Stand Up?
November 08, 2021

Colleen Tartow
Starburst Data

Over the past few years, the "modern data stack" has entered the vernacular of the data world, describing a standardized, cloud-based data and analytics environment built around some classic technologies. In its simplest form, this looks like:

1. A data pipeline (ETL or ELT) moving data from its source into an analytics-focused environment

2. A target data warehouse or data lake

3. An analytics tool for creating business value out of the data

This technology stack is based on the fundamental idea that data must be moved away from its source into a centralized location in order to gain value from it. One thing to note, however, is that what we call the "modern data stack" is essentially a re-envisioned cloud-and-SaaS version of the "legacy data stack" with better analytics tools. What started out as a stack with a database + enterprise ETL tool + analytics-focused storage + reporting system became a modern version of the same functional process.


Despite new cloud-based and SaaS tools, the paradigm remains the same

A Flawed Paradigm

The "modern data stack” is a reimagining of the legacy data flow with better tools. The original stack was largely driven by hardware limitations: production transactional systems simply weren't designed to support an analytics workload. By moving the data from the production system into a replicated analytics-focused environment, you can tailor your data for reporting, visualizations, modeling, etc. However, there are still some pretty serious flaws in both versions:

1. Moving data away from the source introduces inherent latency, along with complex and fragile data pipelines. Getting back to "real-time analytics" can be incredibly challenging and involve large data engineering efforts.

2. While modern cloud-based data warehouses and data lakes allow for the separation of storage and compute resources and both horizontal and vertical scaling, the true separation of these resources (meaning private storage and shared compute) remains a challenge.

3. Complex enterprise environments with many operational systems struggle with the idea of bringing all data together in a cloud data warehouse under a common data model; in practice, this rarely works.

4. The recent focus on tools, rather than functionality, ultimately leads to vendor lock-in and blocks optionality.

The bottom line is that there is still a large disconnect between the source data and the final business value.

In thinking about these flaws inherent in the modern data stack, we've started to wonder instead: what is a truly modern data stack?

Imagine you're dropped into a company and asked to build a system to easily access data for the purpose of deriving business value. Your employees are data literate: they understand SQL and want to use it, perhaps alongside a visualization tool like Tableau, to answer business questions with data. You've got today's modern infrastructure, your production transactional databases are all in the cloud, and you want to separate storage and compute. You're not hardware-bound at all. What would you build?

Introducing the Four S's

Let's focus on what the business user wants:


The four S's of data: speed, scalability, simplicity, and SQL

In the end, these are the goals, and you want to focus your architecture and data ecosystem on these principles. If you can achieve each of these "four S's" with your new data system, you'll be a hero!

Building Something Truly Modern

To build a system that meets your goals while focusing on simplicity, start with a product that allows you to run SQL directly on your source data, wherever it lives, without a large up-front data infrastructure effort. Then add in the analytics layer with a visualization tool, machine learning, etc. This actually isn't drastically different from the modern data stack, but the distance between the data and the value is shortened; in a complex, scaling organization, this efficiency gain can be significant.
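As a concrete sketch of what "SQL directly on source data" can look like, consider a federated query engine such as open source Trino (the engine underlying Starburst), which can join live data across systems in a single statement. The catalog, schema, and table names below are hypothetical:

-- Hypothetical federated query: joins live transactional data in PostgreSQL
-- with customer data in a data lake, with no pipeline moving data first.
SELECT c.region,
       count(*)     AS order_count,
       sum(o.total) AS revenue
FROM postgresql.sales.orders AS o        -- operational database, queried in place
JOIN datalake.analytics.customers AS c   -- data lake catalog (e.g., Hive or Iceberg)
  ON o.customer_id = c.customer_id
WHERE o.order_date >= DATE '2021-01-01'
GROUP BY c.region
ORDER BY revenue DESC;

No data is copied or staged in advance; the engine pushes work down to each source, and the query runs against the data where it lives.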


The truly modern data stack focuses on the four S's, reduces latency and complexity, and is vendor-agnostic, ultimately shortening the path between the data and the business value derived from it

Benefits of this simpler stack include less batch processing and lower latency, since analytics can run against live or cached source data. Governance, including data lineage, also becomes more transparent with fewer intermediary tools and datastores. The smaller number of tools and storage requirements, not to mention the true separation of storage and compute resources, also lends itself to a streamlined and more cost-effective data ecosystem.

Enter Data Mesh

As complexity and the need for data maturity grow at an organization, an enterprise will often have many different domains, each with its own unique data ecosystem and analyses. However, when data is a primary factor in business strategy, it's the analysis of data across the organization that brings true exponential power. At this point, companies need to think about a global data strategy and proactively treat data as a first-class business product rather than a happy afterthought. The business goal is to embrace agility in data in the face of complexity and accelerate the time-to-value for data.

Organizationally and architecturally, Data Mesh marries the ideas of a truly modern data stack with the concept of data as a top-tier product. Data producers treat data consumers as first-class stakeholders of their work, and the consolidation of the technologies for data consumption brings about a revolutionary simplicity of the data and analytics model at scale. With its guiding tenets, Data Mesh is firmly cementing its place as the future of the business data ecosystem:


Core principles of Data Mesh: domain-oriented ownership, data as a product, self-serve data infrastructure as a platform, and federated computational governance

While Data Mesh defines a global socio-technical architecture for an enterprise's overall data strategy, there is a place for the "truly modern data stack" within this architecture: each domain will be required to pull data from the operational plane, transform it for analytics, and provide access in the analytical plane. The movement and transformation of that data is itself a data stack, driving the concept of data as a first-class product of a domain, at the same level of importance as code. This is a key driver of Data Mesh and an incredibly important concept, as it cements the idea of data as a primary concern for both the business and the domain. The "truly modern data stack" can be considered a key piece of the data infrastructure domains use to provide data products in the analytical plane, with global data governance marrying these domains' stacks together through access control.
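For illustration only (all names are hypothetical), a domain team using a SQL engine like the one sketched above might publish its data product in the analytical plane as a curated view over its operational data:

-- Hypothetical data product: the orders domain publishes a curated,
-- consumer-ready dataset in the analytical plane.
CREATE VIEW datalake.orders_domain.daily_order_summary AS
SELECT order_date,
       count(*)   AS order_count,
       sum(total) AS gross_revenue
FROM postgresql.sales.orders   -- operational-plane source
GROUP BY order_date;

Consumers in other domains can then query the view like any other table, without needing to know anything about the operational system behind it.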

What's Next?

The so-called "modern data stack" has its roots in outdated architectures built for antiquated hardware, and stands to be reimagined. The combination of the "four S's" and the four driving tenets of the Data Mesh provides a framework for simplicity and resiliency within a data ecosystem, as well as optionality across domains. As many organizations mature their data and analytics strategy, considering all of the data stacks within the company as a whole is an important step.

The goal is to architect a solution that can be used both within the domains as a data product creation technology and across the domains as an analytical query engine, creating a data ecosystem that combines the "truly modern data stack" with the Data Mesh. A solution that provides a self-service data infrastructure, like the one defined as a pillar of Data Mesh, can be used cross-functionally in an organization. With a flexible data environment to create data products, query across data products, and even derive additional data products from existing ones, mirroring a more complex ecosystem becomes straightforward.
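As a final hypothetical sketch (again, all names are invented), deriving a new data product from two domains' existing data products is just another query:

-- Hypothetical derived data product: combines two domains' existing
-- data products into a new, cross-domain product.
CREATE VIEW datalake.growth_domain.revenue_per_active_user AS
SELECT o.order_date,
       CAST(o.gross_revenue AS double) / u.active_users AS revenue_per_active_user
FROM datalake.orders_domain.daily_order_summary AS o
JOIN datalake.product_domain.daily_active_users AS u
  ON o.order_date = u.activity_date;

Governance then reduces to controlling access to these product views across domains, consistent with the federated access control described above.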


A Data Mesh incorporating the "truly modern data stack" can raise the bar, streamlining the path between data producers and business value. Providing direct SQL access to a wide array of data sources is key to unlocking the power of data to drive business strategy.

Colleen Tartow, Ph.D., is Director of Engineering at Starburst Data