AWS Announces Series of New Database Capabilities
November 29, 2018

Amazon Web Services (AWS) announced significant new Amazon Aurora and Amazon DynamoDB capabilities along with two new purpose-built databases.

The new Amazon Aurora Global Database offers customers the ability to update a database in a single Region and have it automatically replicated to other AWS Regions for higher availability and disaster recovery. Amazon DynamoDB’s new On-Demand feature automatically manages read/write capacity, removing the need for capacity planning and enabling customers to pay only for the read/write requests they consume, while the launch of DynamoDB Transactions enables developers to build transactions with guarantees for multi-item updates, making it easier to avoid conflicts and errors when developing highly scalable, business-critical applications. AWS also announced two new purpose-built database services: Amazon Timestream, a fast, scalable, and fully managed time series database for IoT and operational applications, and Amazon Quantum Ledger Database (QLDB), a highly scalable, immutable, and cryptographically verifiable ledger.

“Hundreds of thousands of customers have embraced AWS’s built-for-the-cloud database services because they perform and scale better, are more cost effective, can be easily combined with other AWS services, and offer freedom from restrictive, overpriced, and clunky old-guard database offerings,” said Raju Gulabani, VP, Databases, Analytics, and Machine Learning, AWS. “Today’s announcements make it even easier for AWS customers to scale and operate cloud databases around the world. Whether it is helping to ensure critical workloads are fully available even when disaster strikes, instantly scaling workloads to Internet scale, maintaining application data consistency, or building new applications for emerging use cases like time series data or ledger systems of record, we are giving customers the features and purpose-built databases they need to support the most mission-critical workloads at lower cost, with better operational performance and less complexity.”

Amazon Aurora MySQL now supports Global Database (available today)

Amazon Aurora, the fastest-growing service in AWS history, is a MySQL and PostgreSQL-compatible relational database built for the cloud and used by tens of thousands of customers around the world. Amazon Aurora Global Database allows customers to update a database in a single AWS Region and automatically replicate it across multiple AWS Regions globally, typically in less than a second. This allows customers to maintain read-only copies of their database for fast data access in local regions by globally distributed applications, or to use a remote region as a backup option in case they need to recover their database quickly for cross-region disaster recovery scenarios.
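As a rough illustration of how this workflow looks in practice, the sketch below shows the shape of the request parameters a customer might pass to the RDS CreateGlobalCluster API (for example via the AWS SDK) to attach a primary cluster to a global database, and the parameters for a secondary cluster in another Region. All identifiers and the account number here are hypothetical placeholders, not values from the announcement.

```python
# Hypothetical sketch of RDS API request shapes for an Aurora Global Database.
# Cluster names, Region, and account ID are illustrative placeholders.
create_global_cluster_params = {
    "GlobalClusterIdentifier": "example-global-db",  # hypothetical name
    "SourceDBClusterIdentifier": (
        "arn:aws:rds:us-east-1:123456789012:cluster:example-primary"
    ),
    "Engine": "aurora",  # Aurora MySQL, as supported at launch
}

# A read-only secondary cluster in another Region joins the global database
# by referencing the same global cluster identifier.
add_secondary_params = {
    "DBClusterIdentifier": "example-secondary",
    "GlobalClusterIdentifier": "example-global-db",
    "Engine": "aurora",
}
```

In a disaster recovery scenario, the secondary cluster can be detached from the global database and promoted to accept writes in its own Region.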

Amazon DynamoDB introduces On-Demand and Transactions Capabilities (available today)

Amazon DynamoDB is a fully managed, key-value database service that offers reliable performance at any scale. More than a hundred thousand AWS customers use Amazon DynamoDB to deliver consistent, single-digit millisecond latency for some of the world’s largest applications. Many of these customers run large-scale applications that receive irregular and unpredictable data access requests, or have new applications for which the usage pattern is unknown. These customers often face a database capacity planning dilemma: over-provision capacity upfront and pay for resources they will not use, or under-provision and risk performance problems and a poor user experience.

For applications with unpredictable, infrequent, or spiky usage where capacity planning is difficult, Amazon DynamoDB On-Demand removes the need for capacity planning by automatically managing read/write capacity; customers simply pay per request for what they actually use. Amazon DynamoDB On-Demand delivers the same single-digit millisecond latency, high availability, and security that customers have come to expect from Amazon DynamoDB.

Amazon DynamoDB powers some of the world’s most high-scale applications that run globally. Sometimes, developers building those applications need support for transactions and have to write custom code for error handling that can be complex, error prone, and time consuming. Amazon DynamoDB Transactions enables developers to build transactions with full atomicity, consistency, isolation, and durability (ACID) guarantees for multi-item updates into their DynamoDB applications, without having to write complex client-side logic to manage conflicts and errors, and without compromising on scale and performance.
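To make the multi-item guarantee concrete, the sketch below shows the shape of a TransactWriteItems request that moves funds between two accounts: either both updates commit or neither does, and a condition expression rejects the transaction if the source balance is insufficient. Table names, keys, and amounts are hypothetical.

```python
# Hypothetical sketch of a DynamoDB TransactWriteItems request: debit one
# account and credit another atomically. Names and values are illustrative.
transact_params = {
    "TransactItems": [
        {
            "Update": {
                "TableName": "example-accounts",
                "Key": {"account_id": {"S": "alice"}},
                "UpdateExpression": "SET balance = balance - :amt",
                # If this condition fails, the entire transaction is rejected,
                # so the credit below never happens without the debit.
                "ConditionExpression": "balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
        {
            "Update": {
                "TableName": "example-accounts",
                "Key": {"account_id": {"S": "bob"}},
                "UpdateExpression": "SET balance = balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
    ]
}
```

Before this capability, developers typically had to emulate such all-or-nothing semantics with client-side locking and compensation logic, which is the complex, error-prone code the announcement refers to.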

Amazon Timestream provides a fast, scalable fully managed time series database (available in preview)

Developers are building IoT and operational applications that need to collect, synthesize, and derive insights from enormous amounts of data that changes over time (known as time series data). Common examples include DevOps data that measures change in infrastructure metrics over time, IoT sensor data that measures changes in sensor readings over time, and clickstream data that captures how a user navigates a website over time.

This type of time series data is generated from multiple sources in extremely high volumes and needs to be collected in near-real time in a cost-optimized and highly scalable manner, and customers need a way to store and analyze all this data efficiently. Today, customers typically turn to either their existing relational databases or existing commercial time series databases. Neither option is attractive, because neither was built from the ground up as a time series database for the scale needed in the cloud.

Relational databases have rigid schemas that must be pre-defined and are inflexible when an application needs to track new attributes. They require multiple tables and indexes, which leads to complex and inefficient queries as the data grows over time. In addition, they lack required time series analytical functions such as smoothing, approximation, and interpolation. Existing open source and commercial time series databases, meanwhile, are difficult to scale, do not support data retention policies, and require developers to integrate them with separate ingestion, streaming/batching, and visualization software.

To address these challenges, AWS is introducing Amazon Timestream, a purpose-built, fully managed time series database service for collecting, storing, and processing time series data. Amazon Timestream processes trillions of events per day at one-tenth the cost of relational databases, with up to one thousand times faster query performance than a general purpose relational database. Amazon Timestream makes it possible to get single-digit millisecond responsiveness when analyzing time series data from IoT and operational applications. Analytics functions in Amazon Timestream provide smoothing, approximation, and interpolation to help customers identify trends and patterns in real-time data. And, Amazon Timestream is serverless, so it automatically scales up or down to adjust capacity and performance, and customers only pay for what they use.
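For readers unfamiliar with the analytics functions named above, the short sketch below illustrates two of them in plain Python. This is a conceptual illustration only, not Timestream's query language or implementation: smoothing is shown as a simple moving average, and interpolation as linear interpolation of a sensor value between two readings.

```python
# Conceptual illustration of two time series analytics operations the
# release mentions; this is NOT Timestream's actual query interface.

def moving_average(values, window):
    """Smooth a series by averaging each point with up to `window - 1`
    preceding points (a simple trailing moving average)."""
    result = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        result.append(sum(chunk) / len(chunk))
    return result

def interpolate(t0, v0, t1, v1, t):
    """Linearly estimate a reading at time t between two known readings
    (t0, v0) and (t1, v1), e.g. to fill a gap left by a dropped sample."""
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```

In a purpose-built time series database, functions like these run server-side over the stored series, so customers can identify trends without exporting the data to a separate analytics tool.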

Amazon QLDB: A high performance, immutable, and cryptographically verifiable ledger database service (available in preview)

Amazon QLDB is a new class of database that provides a transparent, immutable, and cryptographically verifiable ledger that customers can use to build applications that act as a system of record, where multiple parties are transacting within a centralized, trusted entity. Amazon QLDB removes the need to build complex audit functionality into a relational database or rely on the ledger capabilities of a blockchain framework. Amazon QLDB uses an immutable transactional log, known as a journal, which tracks each and every application data change and maintains a complete and verifiable history of changes over time. All transactions must comply with atomicity, consistency, isolation, and durability (ACID) to be logged in the journal, which cannot be deleted or modified. All changes are cryptographically chained and verifiable in a history that customers can analyze using familiar SQL queries. Amazon QLDB is serverless, so customers don’t have to provision capacity or configure read and write limits. They simply create a ledger, define tables, and Amazon QLDB will automatically scale to support application demands, and customers pay only for the reads, writes, and storage they use. And, unlike the ledgers in common blockchain frameworks, Amazon QLDB doesn’t require distributed consensus, so it can execute two to three times as many transactions in the same time as common blockchain frameworks.
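The idea of a cryptographically chained, verifiable journal can be illustrated with a small sketch. This is a conceptual model only, assuming nothing about QLDB's actual internal format: each journal entry's digest covers the previous entry's digest, so altering any historical record invalidates every digest after it.

```python
import hashlib
import json

# Conceptual sketch of an append-only, hash-chained journal (NOT Amazon
# QLDB's actual implementation). Each entry's SHA-256 digest covers the
# previous digest, chaining the history together.

GENESIS = "0" * 64  # placeholder digest preceding the first entry

def append_entry(journal, change):
    """Append a change record, chaining it to the previous entry's digest."""
    prev_digest = journal[-1]["digest"] if journal else GENESIS
    payload = json.dumps({"change": change, "prev": prev_digest}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    journal.append({"change": change, "prev": prev_digest, "digest": digest})

def verify(journal):
    """Recompute every digest; any tampering breaks the chain."""
    prev = GENESIS
    for entry in journal:
        payload = json.dumps({"change": entry["change"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(payload.encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

journal = []
append_entry(journal, {"account": "alice", "balance": 100})
append_entry(journal, {"account": "alice", "balance": 75})
assert verify(journal)          # intact history verifies

journal[0]["change"]["balance"] = 999   # tamper with an old record
assert not verify(journal)      # verification now fails
```

The chaining is what makes the ledger's history verifiable without distributed consensus: a single trusted writer appends entries, and any reader can independently confirm that past entries were never modified.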
