Over the past few years, the “modern data stack” has entered the vernacular of the data world, describing a standardized, cloud-based data and analytics environment built around some classic technologies. In its simplest form, this looks like:
1. A data pipeline (ETL or ELT) moving data from its source into an analytics-focused environment
2. A target data warehouse or data lake
3. An analytics tool for creating business value out of the data
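To make the flow concrete, here is a minimal sketch of that three-part stack in Python, using SQLite files as stand-ins for the operational source and the cloud warehouse; the table names and the final query are invented for illustration, not tied to any particular product.

```python
import sqlite3

# Hypothetical stand-ins: in practice these would be a production OLTP
# database and a cloud data warehouse, not two local SQLite files.
source = sqlite3.connect("orders_oltp.db")       # the operational source
warehouse = sqlite3.connect("analytics_dwh.db")  # the analytics target

source.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL, region TEXT)")

# 1. The pipeline: extract raw rows from the source and load them into the
#    warehouse (ELT style: transform later, in SQL, inside the warehouse).
rows = source.execute("SELECT id, amount, region FROM orders").fetchall()
warehouse.execute("CREATE TABLE IF NOT EXISTS raw_orders (id INTEGER, amount REAL, region TEXT)")
warehouse.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", rows)
warehouse.commit()

# 2-3. The analytics layer: a BI tool would issue queries like this one
#      against the warehouse copy, never against the production system.
report = warehouse.execute(
    "SELECT region, SUM(amount) AS revenue FROM raw_orders GROUP BY region"
).fetchall()
print(report)
```

Every hop in this chain is a copy of the data, which is exactly where the flaws discussed below creep in.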
This technology stack is based on the fundamental idea that data must be moved away from its source into a centralized location in order to gain value from it. One thing to note, however, is that what we call the “modern data stack” is essentially a re-envisioned, cloud-and-SaaS version of the “legacy data stack” with better analytics tools. What started out as a database + enterprise ETL tool + analytics-focused storage + reporting system became a modern version of the same functional process.
Despite new cloud-based and SaaS tools, the paradigm remains the same
A Flawed Paradigm
The "modern data stack” is a reimagining of the legacy data flow with better tools. The original stack was largely driven by hardware limitations: production transactional systems simply weren't designed to support an analytics workload. By moving the data from the production system into a replicated analytics-focused environment, you can tailor your data for reporting, visualizations, modeling, etc. However, there are still some pretty serious flaws in both versions:
- Moving data away from the source introduces inherent latency, along with complex and fragile data pipelines. Getting back to “real-time analytics” can be incredibly challenging and can require large data engineering efforts.
- While modern cloud-based data warehouses and data lakes allow for the separation of storage and compute resources, and for both horizontal and vertical scaling, true separation of these resources (meaning private storage and shared compute) remains a challenge.
- Complex enterprise environments with many operational systems struggle with the idea of bringing all data together in a cloud data warehouse under a common data model; in practice, this rarely works.
- The recent focus on tools, rather than functionality, ultimately leads to vendor lock-in and blocks optionality.

The bottom line is that there is still a large disconnect between the source data and the final business value.
Thinking about these flaws inherent in the modern data stack, we've started to wonder instead: what is a truly modern data stack?
Imagine you're dropped into a company and asked to build a system that makes it easy to access data for the purpose of deriving business value. Your employees are data-literate: they understand SQL and want to use it, perhaps alongside a visualization tool like Tableau, to answer business questions with data. You've got today's modern infrastructure, your production transactional databases are all in the cloud, and you want to separate storage and compute. You're not hardware-bound at all. What would you build?
Introducing the Four S's
Let's focus on what the business user wants:
The 4 Ss of data: speed, scalability, simplicity, and SQL
In the end, these are the goals, and you want to focus your architecture and data ecosystem on these principles. If you can achieve each of these “four S's” with your new data system, you'll be a hero!
Building Something Truly Modern
To build a system that meets your goals while focusing on simplicity, start with a product that lets you run SQL directly on your source data, wherever it lives, without a large data infrastructure effort. Then add the analytics layer: a visualization tool, machine learning, and so on. This isn't drastically different from the modern data stack, but the distance between the data and the value is shortened; in a complex and scaling organization, that efficiency gain can be significant.
The truly modern data stack focuses on the four Ss, reduces latency and complexity, and is vendor-agnostic, ultimately shortening the path between the data and the business value derived from it
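As a sketch of what “SQL directly on the source data” can look like, the snippet below uses DuckDB purely as a stand-in for a federated query engine (engines such as Trino or Presto fill the same role at enterprise scale); the export file and the commented-out Postgres attachment are hypothetical examples, not a recommendation of any specific product.

```python
import duckdb
from pathlib import Path

# A hypothetical export from an operational system, created here only so the
# sketch runs end to end; in practice the engine would point at live sources.
Path("orders_export.csv").write_text("id,amount,region\n1,120.0,EMEA\n2,80.0,AMER\n")

con = duckdb.connect()  # in-process engine; nothing is copied into a warehouse first

# The SQL runs directly against the data where it lives.
revenue = con.execute("""
    SELECT region, SUM(amount) AS revenue
    FROM read_csv_auto('orders_export.csv')
    GROUP BY region
""").fetchall()
print(revenue)

# The same pattern extends to operational databases: a federated engine
# attaches to the source and pushes queries down instead of replicating data.
# (The connection string below is a hypothetical example.)
# con.execute("INSTALL postgres; LOAD postgres;")
# con.execute("ATTACH 'dbname=shop host=prod-db user=analyst' AS shop (TYPE postgres)")
# con.execute("SELECT COUNT(*) FROM shop.public.orders")
```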
Benefits of this simpler stack include reduced batch processing, and the ability to use live or cached source data reduces latency. Governance, including data lineage, also becomes more transparent with fewer intermediary tools and datastores. Fewer tools and storage requirements, not to mention the true separation of storage and compute resources, also make for a streamlined and more cost-effective data ecosystem.
Enter Data Mesh
As complexity and the need for data maturity grow within an organization, an enterprise will often have many different domains, each with its own unique data ecosystem and analyses. However, when data is a primary factor in business strategy, it's the analysis of data across the organization that brings true exponential power. At this point, companies need to think about a global data strategy and proactively treat data as a first-class business product rather than a happy afterthought. The business goal is to embrace agility in data in the face of complexity and to accelerate time-to-value for data.
Organizationally and architecturally, Data Mesh marries the ideas of a truly modern data stack with the concept of data as a top-tier product. Data producers treat data consumers as first-class stakeholders in their work, and the consolidation of technologies for data consumption brings a revolutionary simplicity to the data and analytics model at scale. With its guiding tenets, Data Mesh is firmly cementing its place as the future of the business data ecosystem:
Core principles of Data Mesh
While Data Mesh defines a global socio-technical architecture for an enterprise's overall data strategy, there is a place for the “truly modern data stack” within this architecture, as each domain will be required to pull data from the operational plane, transform it for analytics, and provide access in the analytical plane. The transition and transformation of that data is itself a data stack, and it drives the concept of data as a first-class product of a domain, at the same level of importance as code. This is a key tenet of Data Mesh and an incredibly important concept for the business, as it cements the idea of data as a primary concern for both the business and the domain. The “truly modern data stack” can be considered a key piece of the data infrastructure that domains use to provide data products in the analytical plane; global data governance then marries these domains' stacks together through access control.
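As a hypothetical illustration of that domain-level responsibility, the sketch below shows one domain publishing a data product: it takes its operational table, applies the analytical transformation, and exposes the result under a governed, versioned name. The table, view, and role names are invented, and in a real Data Mesh the publishing and access-control steps would go through the platform's self-service infrastructure rather than ad hoc code.

```python
import duckdb

con = duckdb.connect("sales_domain.duckdb")  # hypothetical domain workspace

# Operational-plane data owned by the sales domain (toy stand-in).
con.execute("CREATE TABLE IF NOT EXISTS orders (id INT, amount DECIMAL, ordered_at DATE)")

# The data product: an analytics-ready, versioned view that the domain
# publishes for consumers in the analytical plane.
con.execute("""
    CREATE OR REPLACE VIEW sales__daily_revenue_v1 AS
    SELECT ordered_at AS day, SUM(amount) AS revenue
    FROM orders
    GROUP BY ordered_at
""")

# Governance hook (illustrative only, not a real API): global policy decides
# who may read the product; the domain decides what the product contains.
# grant_read("sales__daily_revenue_v1", role="analytics_consumers")
```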
What's Next?
The so-called "modern data stack” has its roots in outdated architectures built for antiquated hardware, and stands to be reimagined. The combination of the "four S's” and the four driving tenets of the Data Mesh provides a framework for simplicity and resiliency within a data ecosystem, as well as providing optionality across domains. As many organizations mature their data and analytics strategy, considering all of the data stacks within the company as a whole is an important step.
The goal is to architect a solution that can be used both within the domains, as a data product creation technology, and across the domains, as an analytical query engine, creating a data ecosystem that combines the “truly modern data stack” with the Data Mesh. A solution that provides self-service data infrastructure, along the lines of the one defined as a pillar of Data Mesh, can be used cross-functionally across an organization. With a flexible data environment in which to create data products, query across data products, and even derive new data products from existing ones, mirroring a more complex ecosystem becomes straightforward.
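A minimal, self-contained sketch of that cross-domain usage follows: two invented domains each publish a product, and a single engine queries across them and registers the result as a new derived product. DuckDB and every name involved are stand-ins for illustration; any federated SQL engine with global access control could play this role.

```python
import duckdb

# Set up two hypothetical domain product stores so the sketch runs end to end;
# in practice each domain publishes and maintains its own products.
setup = {
    "sales_domain.duckdb":
        "CREATE OR REPLACE VIEW sales__daily_revenue_v1 AS "
        "SELECT DATE '2024-01-01' AS day, 100.0 AS revenue",
    "marketing_domain.duckdb":
        "CREATE OR REPLACE VIEW marketing__daily_spend_v1 AS "
        "SELECT DATE '2024-01-01' AS day, 40.0 AS spend",
}
for path, ddl in setup.items():
    db = duckdb.connect(path)
    db.execute(ddl)
    db.close()

# One engine, querying the domains' published products in place.
con = duckdb.connect()
con.execute("ATTACH 'sales_domain.duckdb' AS sales (READ_ONLY)")
con.execute("ATTACH 'marketing_domain.duckdb' AS marketing (READ_ONLY)")

# A cross-domain query whose result is itself registered as a derived product.
con.execute("""
    CREATE OR REPLACE VIEW exec__daily_roi_v1 AS
    SELECT s.day, s.revenue, m.spend, s.revenue / m.spend AS roi
    FROM sales.sales__daily_revenue_v1 s
    JOIN marketing.marketing__daily_spend_v1 m USING (day)
""")
print(con.execute("SELECT * FROM exec__daily_roi_v1").fetchall())
```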
A Data Mesh incorporating the “truly modern data stack” can raise the bar, streamlining the path between data producers and business value. Providing direct SQL access to a wide array of data sources is key to unlocking the power of data to drive business strategy.