Jellyfish announced the launch of Jellyfish Benchmarks, a way to add context around engineering metrics and performance by introducing a method for comparison.
Rockset unveiled a major product release that makes real-time analytics on streaming data from sources like Apache Kafka, Amazon Kinesis, Amazon DynamoDB, and data lakes a lot more accessible and affordable for every enterprise.
With this launch, customers can use standard SQL to perform real-time data transformations and pre-aggregations continuously as new data is ingested from any source. This significantly reduces engineering effort on real-time data pipelines, while cutting both storage and compute costs for real-time analytics at cloud scale. As a result, any developer can build real-time, interactive dashboards and data-intensive applications on massive data streams in record time, at a fraction of the cost.
For today’s digital disruptors striving to harness the power of streaming data, the old approach of preparing and loading data into a traditional database and manually tuning each query no longer works. Onboarding new streaming data sets quickly with continuous SQL transformations and rollups frees developers from managing complex real-time data pipelines, and eliminating that complexity makes real-time analytics accessible to anyone who speaks SQL. Combined with Rockset’s indexing approach, which delivers low-latency analytics regardless of the shape of the data or the type of query, this lets developers iterate faster and innovate more on streaming data applications. Until now, analyzing high-volume streaming data in real time has been prohibitively expensive; with this release, new data can be transformed and pre-aggregated as it arrives, cutting the cost of storing and querying that data by 10-100x.
Imagine you’re a payment processor, handling millions of payments between thousands of merchants and millions of customers. You would need to monitor all those transactions in real time and run advanced statistical models to detect anomalies and catch fraud. Storing raw events and constantly recalculating metrics would mean your storage footprint grows at an alarming rate and queries become prohibitively slow and expensive. Instead, with this release Rockset allows you to “roll up” data as it arrives, so your data remains queryable in real time, but at a fraction of the cost and with better performance.
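The payment-processor scenario above can be sketched in a few lines of Python. This is a hypothetical illustration of the rollup idea only, not Rockset’s actual API: instead of storing every raw payment event, the sketch keeps one aggregate row per merchant per minute, so storage grows with the number of (merchant, minute) buckets rather than with the raw event count. The event field names (`merchant_id`, `ts`, `amount`) are assumptions made for this example.

```python
from collections import defaultdict

def rollup(events):
    """Pre-aggregate raw events into (merchant, minute) buckets."""
    buckets = defaultdict(lambda: {"count": 0, "total": 0.0})
    for e in events:
        minute = e["ts"] - e["ts"] % 60  # truncate timestamp to the minute
        key = (e["merchant_id"], minute)
        buckets[key]["count"] += 1       # how many payments in this bucket
        buckets[key]["total"] += e["amount"]  # running sum of amounts
    return dict(buckets)

events = [
    {"merchant_id": "m1", "ts": 1000, "amount": 25.0},
    {"merchant_id": "m1", "ts": 1019, "amount": 75.0},  # same minute as above
    {"merchant_id": "m2", "ts": 1000, "amount": 10.0},
]
agg = rollup(events)
# Three raw events collapse into two stored aggregate rows.
```

Because only the aggregates are retained, metrics like per-merchant transaction counts stay queryable in real time without ever rescanning the raw stream.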
Built by the team behind the online data infrastructure that powers Facebook Newsfeed and Search, Rockset is inspired by the same indexing systems that power real-time analytics at cloud scale. Rockset automatically indexes all fields in a Converged Index™, delivering fast SQL queries on fresh data, for cloud-native speed, scale, and flexibility in real-time analytics. This is revolutionary across a broad range of digital platforms and apps, including e-commerce, logistics and delivery tracking, gaming leaderboards, fraud detection systems, health and fitness trackers, and social media newsfeeds.
“Your modern cloud data stack is incomplete without a real-time database purpose-built for ingesting, transforming, and analyzing streaming data. Warehouses simply don’t cut it — they are built for batch analytics and become prohibitively slow and expensive for high volume streaming data,” said Venkat Venkataramani, CEO and co-founder at Rockset. “Transforming massive torrents of raw data streams to accurate high-quality aggregates is essential for achieving real-time analytics at cloud scale. With this release, Rockset makes building massively scalable real-time aggregations as simple as writing a simple SQL query, and a lot more budget-friendly.”
New features available now on Rockset’s cloud service include the ability to:
- Continuously transform during ingestion: Customers can use SQL to transform streaming data as it is ingested, eliminating the time and effort required to maintain complex real-time data pipelines.
- Rollup data during ingestion: Customers can use SQL to pre-aggregate streaming data as it is ingested, reducing the cost of storing and querying data by 10-100x.
- Set time-based partitioning and retention: Customers can set highly efficient data retention policies for time series and streaming data, enabling automatic deletion of aging data to reduce costs.
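The third feature, time-based partitioning and retention, can be sketched as follows. This is a conceptual illustration under assumed names, not Rockset’s actual implementation: rows are bucketed into daily partitions by event time, and a retention policy drops whole partitions older than the window in one cheap operation instead of deleting rows one by one.

```python
from collections import defaultdict

DAY = 86_400  # seconds per day

def partition_by_day(rows):
    """Group rows into daily partitions keyed by day number."""
    parts = defaultdict(list)
    for r in rows:
        parts[r["ts"] // DAY].append(r)
    return dict(parts)

def apply_retention(parts, now, retention_days):
    """Keep only partitions newer than the retention cutoff."""
    cutoff_day = (now - retention_days * DAY) // DAY
    return {day: rows for day, rows in parts.items() if day >= cutoff_day}

now = 10 * DAY
rows = [
    {"ts": 1 * DAY + 5},  # old data, past retention
    {"ts": 8 * DAY + 5},
    {"ts": 9 * DAY + 5},
]
parts = apply_retention(partition_by_day(rows), now, retention_days=3)
# With a 3-day policy, only the day-8 and day-9 partitions survive.
```

Dropping a whole partition is much cheaper than row-level deletes, which is why time-based retention policies keep storage costs bounded for append-heavy streaming data.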