
The intensifying battle between AWS and Oracle for enterprise database supremacy

Enterprise databases have evolved rapidly, and on many levels, in the cloud computing era. If you’ve been following tech industry news recently, you can’t help noticing that the war of words between Oracle Corp. and Amazon Web Services Inc. has reached a fever pitch.

Databases are at the heart of it. Though they compete in many segments, each company knows that its continued momentum in the enterprise market depends in part on showing that its database portfolio is robust, scalable and flexible enough to handle the most demanding cloud workloads. Some have referred to this status as that of a “tier one” database, though there are no formal definitions or certifications in this regard.

“Tier one” in the enterprise database world often comes down to marketing messages. More often than not, these play on information technology professionals’ perceptions of which platforms are most robust, scalable and performant in the market at any point in time.

Oracle has long prided itself on being perceived that way: not just as the enterprise database market share leader, but as sitting in the top tier for robustness. Clearly, its claims to that effect have a lot of substance to support them. But the cloud era has produced new claimants to this status — especially AWS — and Oracle is pulling out all the stops to keep its brand from fading.

What’s noteworthy about this competitive back-and-forth is that longtime market leader Oracle is citing its database’s enterprise track record as its chief advantage, while its upstart rival, AWS, is trying to portray that background as a disadvantage, as if Oracle’s field-proven engineering were a sign of encroaching obsolescence.

Anybody who’s following Oracle knows full well that it’s continuing to invest heavily in evolving its flagship database for autonomous scalability, reliability and performance in the cloud. Making sure that the market doesn’t forget that, Oracle Chairman and Chief Technology Officer Larry Ellison delivered an extended keynote at OpenWorld in late October in which he argued that his firm’s flagship database is more robust, secure, functional, fast and efficient than AWS’.

A few days later, in a television interview, Ellison explained why he feels that Oracle’s long track record in the database market will help it fend off challenges from AWS, the most confrontational of which is AWS’ plan to migrate its retail operations away from Oracle databases:

“Amazon runs their entire business on top of Oracle, on top of Oracle database. They have been unable to migrate to AWS. Because it’s not good enough….It took us a while to build a secure cloud. It’s really hard to build a secure cloud. It took us a while. We think we’re there now….We think we have [a] 10- to 20-year lead on Amazon in databases. We’ve been in this business for 20 years constantly making our database better.”

By almost anybody’s reckoning, AWS is already a powerhouse in enterprise databases, albeit primarily in the pay-as-you-go public cloud (let’s set aside its growing portfolio of hybrid-cloud offerings for the purposes of this discussion). And AWS has begun to eat its own database dogfood in the most public way possible, demonstrating that it’s literally betting its business on these claims.

Here’s Ellison’s counterpart in the CTO role, Amazon.com Inc. CTO Werner Vogels, in his keynote at AWS re:Invent a month later:

“My happiest day of this year was actually November 1 [a few days after Ellison’s public statements]. This was the moment we switched off one of [the] world’s largest, if not the largest, Oracle data warehouse. [There are] massive improvements in performance because we know how our customers are using our systems and that can drive the way we do innovation forward. Even in the past six months, [Amazon’s] Redshift [cloud data warehouse] has become 3.5 times faster. It is amazing that we can do that because we have that feedback loop in how our customers are using our systems. These [relational] databases [such as Oracle Database] are not cloud-native. They are not good, fundamental building blocks for database innovation, and are definitely not [equipped] for really massive scale. So when we started thinking about how can we build a database that would be the future of database innovation, basically we needed to move away from the models we created in the ’80s and ’90s for databases and go to [a] true cloud-native modern database.”

Vogels went on to discuss Aurora, the cloud-scale relational database management system that is front-and-center in the company’s ongoing migration away from Oracle databases. Aurora, which went live in 2015, can support internet-scale workloads that no other database can, according to Vogels.

In many ways paralleling Ellison’s deep technical keynote at OpenWorld, Vogels went in-depth on specific engineering features in Aurora that make it highly reliable, resilient, scalable, manageable and fast. He and other AWS execs, most notably S3 Vice President and General Manager Mai-Lan Tomsen Bukovec, exhaustively described how Aurora and the rest of AWS’ sprawling cloud data platform portfolio have been engineered for the most demanding hyperscaling requirements.

To reinforce its “we’re tier-one” message at re:Invent, AWS made several announcements around the expanding capabilities of its growing database portfolio, including many new features within Aurora, AWS’ DynamoDB key-value cloud database service, its S3 storage service and other data platforms. Chief among these announcements were:

  • Running a robust, high-performance global relational database in the cloud: The company announced general availability of Amazon Aurora Global Database, which enables users to update Aurora in a single AWS Region and automatically replicate the update across multiple AWS Regions globally in less than a second.
  • Managing a global key-value database cost-effectively and with transactional guarantees: AWS announced general availability of DynamoDB On-Demand, which offers reliable pay-per-request performance at any scale with no capacity planning (this and S3 Intelligent-Tiering are sketched in code after this list).
  • Automating bulk cloud storage management: It announced Amazon S3 Batch Operations, which, when it becomes available in early 2019, will automate management of thousands, millions or billions of data objects in bulk storage.
  • Archiving data securely and inexpensively in the cloud: It announced Amazon S3 Glacier Deep Archive, which, when it becomes available in early 2019, will provide a new secure storage class for users to archive large data sets cost-effectively while ensuring that their data is durably preserved for future use and analysis.
  • Intelligent tiering for nondisruptive storage cost optimization: AWS introduced S3 Intelligent-Tiering, a new S3 storage class that helps customers optimize storage costs automatically when their data access patterns change.
  • Speeding cloud data synchronization: AWS announced availability of AWS DataSync, a new online service that simplifies, automates and accelerates data transfers, including to and from the Amazon Elastic File System.
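For a sense of what two of these announcements mean in practice, here’s a minimal sketch, in Python with boto3, of creating a DynamoDB On-Demand table and storing an S3 object in the Intelligent-Tiering class. The table and bucket names are hypothetical placeholders, and the snippet assumes AWS credentials and a default region are already configured.

```python
# Minimal sketch with hypothetical resource names; assumes configured
# AWS credentials and a default region.
import boto3

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")

# DynamoDB On-Demand: PAY_PER_REQUEST billing means there is no read/write
# capacity to provision up front; the table scales with actual traffic.
dynamodb.create_table(
    TableName="demo-items",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # the on-demand capacity mode
)

# S3 Intelligent-Tiering: store an object in the INTELLIGENT_TIERING
# storage class so S3 moves it between access tiers automatically as
# its access pattern changes.
s3.put_object(
    Bucket="demo-bucket",  # hypothetical bucket name
    Key="reports/2018-q4.csv",
    Body=b"metric,value\nrevenue,100\n",
    StorageClass="INTELLIGENT_TIERING",
)
```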

Just in case anyone missed the point about AWS having learned how to migrate away from Oracle databases, Vogels noted that Amazon had used its own AWS Database Migration Service to move its retail business’ Items & Offers data from 24 Oracle databases to DynamoDB.
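Amazon hasn’t published the code behind that migration, but the general lift-and-shift pattern (reading rows out of a relational source and writing them as DynamoDB items) might look roughly like the sketch below, again in Python with boto3. The source schema, table name and key design here are hypothetical illustrations, not Amazon’s actual Items & Offers model.

```python
# Hypothetical sketch of a relational-to-DynamoDB copy; not Amazon's code.
import sqlite3  # stand-in for the relational source being retired
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("items-and-offers")  # hypothetical target table

source = sqlite3.connect("retail.db")  # hypothetical source database
rows = source.execute("SELECT item_id, offer_id, price FROM offers")

# batch_writer buffers puts into BatchWriteItem calls and retries any
# unprocessed items, keeping the bulk copy simple and resilient.
with table.batch_writer() as batch:
    for item_id, offer_id, price in rows:
        batch.put_item(
            Item={
                "pk": f"ITEM#{item_id}",    # partition key: one item's data
                "sk": f"OFFER#{offer_id}",  # sort key: one offer per item
                "price": Decimal(str(price)),  # DynamoDB rejects raw floats
            }
        )
```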

To rub in the fact that it has learned a few lessons in this regard, Vogels also presented AWS’ “Well-Architected Framework” and corresponding “Well-Architected Tool,” which give customers guidance on building reliability, performance and other robust qualities into their database apps in its public cloud.

Another Ellison counterpart, AWS Chief Executive Andy Jassy, delivered the coup de grace in his own keynote. Specifically, Jassy stated:

“Old guard databases have been a miserable world over the past couple of decades for most enterprises, and that’s because these old guard databases – like Oracle and [Microsoft] SQL Server – are expensive, they are high lock-in, they’re proprietary and they’re not customer-focused. Forget the fact both of them will constantly audit and fine you, but they also make capricious decisions overnight that are good for them, but not so good for you. If Oracle decides they want to double the cost of Oracle software running on AWS or Azure, that’s what they do.”

Clearly, AWS is not backing down from this challenge. It will be interesting to see how Oracle responds going forward.

Check out Jassy’s comments on theCUBE at re:Invent.
