
Oracle CloudWorld (OCW) Unveils Ubiquitous Generative AI + Other Transformational Cloud Quakes


Oracle CloudWorld (OCW) 2023 was held at the Venetian Expo Center in Las Vegas, Nevada, from September 18 through 21. Roughly 17,000 people showed up to hear about the latest Oracle advances and innovations. They were not disappointed.

Most IT administrators are current or previous users of Oracle’s unparalleled, comprehensive databases, middleware, and applications. Even those who are not or have not been users are very much aware of them. But many are amazed to learn how Oracle has leveraged those peerless assets into the most technically advanced public cloud transformation. Oracle Cloud Infrastructure (a.k.a. OCI) is unique compared to other public clouds.

As a Gen2 cloud, OCI’s uniqueness starts with its RDMA-based, highly portable cloud architecture. It’s not built like any other cloud. OCI is designed to be implemented quickly, become operational fast, and scale up. Its servers and interconnect fabric are fundamentally different from those of other public clouds. There are three distinct networks within OCI. The first connects customers to OCI cloud services. The second connects OCI-native and customer applications to database cloud services and storage. The third is reserved for completely separate computers running OCI control software and is fully isolated from the customer’s software. That isolation protects both OCI and the customer: customers can’t tamper with or manipulate the control software, and Oracle can’t see the customer’s data.

The OCI architecture is a fundamental reason behind OCI’s consistently higher performance than other clouds. It has enabled Oracle to deploy more public cloud regions than AWS, Azure, or GCP, and it has made OCI infrastructure portable. Matching these advantages would require other public clouds to rebuild their cloud infrastructure. So far, none has.

This inimitable portability enables Oracle to offer a comprehensive portfolio of public cloud services in customer premises and co-locations. That portfolio offers:

  • OCI Dedicated Region
    • A complete OCI region dedicated to the customer on the customer premises.
    • All OCI services included at the same pricing as in the OCI public cloud including:
      • Autonomous Database services, Exadata Database Service, Oracle Base Database, MySQL HeatWave, and Fusion Cloud applications such as ERP, SCM, CX, HCM, and Fusion Analytics.
    • Implemented, operated, and managed by Oracle on the customer’s premises.
  • Exadata Cloud@Customer
    • The latest Exadata X10M, using 4th generation AMD EPYC™ CPUs in compute and storage servers, on the customer’s premises.
    • Runs Autonomous Database services and Exadata Database Service with software included and bring your own license (BYOL) pricing options.
    • Same pricing as Exadata services in OCI.
    • Implemented, operated, and managed by Oracle on the customer’s premises.
  • Compute Cloud@Customer
    • Utilizes the latest 4th generation AMD EPYC™ CPU servers, Oracle storage, and up to 800Gbps high speed direct interconnect to Exadata Cloud@Customer.
    • Implemented, operated, and managed by Oracle on the customer’s premises.

That OCI portability also enables the paradigm-shifting multi-cloud partnership with Microsoft Azure. Initially, that partnership provided a high-speed interconnect between OCI and Microsoft Azure data centers, allowing Azure users direct access to Oracle databases in OCI running on the Exadata platform. The partnership then added flexibility by letting Azure users provision, configure, and operate those OCI services through the Azure portal. To Azure users, the OCI database services look like Azure options.

Right before OCW, Oracle and Microsoft expanded their partnership to deploy OCI infrastructure, including Exadata hardware, Exadata Database Service, and Autonomous Database, within Azure data centers. Putting Exadata into the Azure data center reduces application-to-database latencies by at least 10x. The reduced latency drastically improves application response times, noticeably enhances user productivity, and shortens time-to-actionable-insights, time-to-market, and time-to-revenue.

Oracle’s industry-shifting and evolving partnership with Microsoft Azure sets a new standard for multi-cloud. However, it doesn’t take away from the highly advanced services within OCI, as seen in the chart below. Oracle is just as innovative and market-disruptive with many other cloud services, as revealed at OCW 2023.

Oracle Cloud Infrastructure (OCI)
  • Networking: RDMA, lowest latency, high bandwidth
  • Hardware: servers, virtual servers, Exadata, NVIDIA GPUs, Intel CPUs, AMD CPUs, Ampere CPUs (ARM), bare metal
  • Storage: block, file, object (S3)
  • Portability: Exadata Cloud@Customer, Compute Cloud@Customer
  • DB Cloud Services: Autonomous Database, Exadata Database Service, Base Database, BYODB, NoSQL, MySQL HeatWave Lakehouse, Data Lake, PostgreSQL, Big Data
  • Software: APEX, Java, Linux, KVM, Kubernetes, containers
  • Oracle Cloud SaaS Apps: ERP (Financial Mgmt., Procurement, Project Mgmt., Risk Mgmt. & Compliance, Enterprise Perf. Mgmt.), SCM (Supply Chain Planning, Inventory Mgmt., Product Lifecycle Mgmt., more SCM apps), CX, Human Resources (Talent Mgmt., Workforce Mgmt.), Fusion Data Intelligence Platform, Fusion Analytics
  • Multi-Cloud: OCI–Azure partnership, MySQL HeatWave on AWS, VMware Cloud, Gov’t Cloud, hybrid clouds, 46 OCI regions
  • Security: end-to-end
  • AI & ML: Cohere GenAI (beta), ML and vector DB within Oracle Database, ML and vector DB within HeatWave, AI integrated into SaaS apps
  • Other Services: IaaS, PaaS, DevOps, DevSecOps

Top 3 Takeaways from OCW 2023

The top 3 takeaways from OCW 2023 are:

  1. Ubiquitous Generative AI Large Language Models (LLMs) and retrieval-augmented generation (RAG) with vector search in Oracle Database and MySQL HeatWave that let enterprises combine internal data with LLMs to deliver more accurate results.
  2. APEX low-code to no-code application development for enterprise applications.
  3. Seamless multi-cloud.

Make no mistake, each of these takeaways represents significant innovation and technology advances for Oracle and the industry. A deeper look shows how.

OCI AI Hardware Foundation

It starts with the Oracle Cloud Infrastructure (OCI). As previously noted, OCI has the most advanced public cloud infrastructure in the market. That also includes the cloud industry’s most advanced AI infrastructure, called Superclusters. OCI Superclusters deliver a high-bandwidth, ultra-low-latency RDMA interconnect that links servers with NVIDIA GPUs, HPC storage, and OCI bare metal compute instances for AI and HPC cluster networking.

OCI Superclusters support from tens to tens of thousands of NVIDIA GPUs and up to 1,600 Gbps of bandwidth. Those NVIDIA Tensor Core GPUs include the H100, L40S (imminent), A100, A10, V100, and P100. This makes OCI Superclusters ideal for predictive, classification, regression, process, and generative AI LLM workloads, and it is why so many LLM startups are using them.

Oracle Database Cloud Services Foundation

Oracle has offered machine learning (ML), predictive, classification, regression, and process AI for many years. It has long been embedded in Oracle’s signature, industry-leading converged Oracle Database and, more recently, in the MySQL HeatWave database service. It’s an indispensable part of all of OCI’s database cloud services.

For Autonomous Database, it’s not just a tool. Oracle uses ML and predictive AI modeling to make those databases self-provisioning, self-operating, self-healing, self-tuning, and self-securing.

For MySQL HeatWave databases, it has been an essential integral aspect of the service since its inception in 2020. MySQL Autopilot uses ML to automate many of the most critical aspects in achieving high query performance at scale. These include provisioning, data loading, query execution, and failure handling. Autopilot also makes it easier for users to manage HeatWave Lakehouse by automating schema inference, sampling data to generate and improve query plans, predicting load time, and adapting to the performance of the underlying object store.

AutoML automates the ML lifecycle, which includes:

  • Model/algorithm selection.
  • Intelligent data sampling for model training using extremely small data samples.
  • Feature selection.
  • Hyperparameter optimization.
  • Automated explanation of how the model made decisions.
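The lifecycle above is essentially an automated search over models and hyperparameters, scored on sampled data. The toy sketch below mimics that loop with trivial stand-in models; it is not HeatWave AutoML's actual algorithms, just an illustration of the shape of the automation.

```python
# Toy AutoML-style loop: try candidate models and hyperparameters on a small
# training sample, then keep whichever scores best on held-out validation data.
# The "models" here are trivial stand-ins, purely for illustration.

def mean_model(train_y):
    # Baseline candidate: always predict the training mean.
    m = sum(train_y) / len(train_y)
    return lambda x: m

def linear_model(train_x, train_y, lr):
    # One-feature linear fit via a few SGD passes (illustrative only).
    w, b = 0.0, 0.0
    for _ in range(200):
        for x, y in zip(train_x, train_y):
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return lambda x: w * x + b

def mse(model, xs, ys):
    # Validation score: mean squared error.
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [1, 2, 3, 4], [2, 4, 6, 8]   # underlying relation: y = 2x
val_x, val_y = [5, 6], [10, 12]

candidates = [("mean", mean_model(train_y))]
for lr in (0.01, 0.05):                          # hyperparameter search
    candidates.append((f"linear(lr={lr})", linear_model(train_x, train_y, lr)))

best_name, best_model = min(candidates, key=lambda c: mse(c[1], val_x, val_y))
print(best_name)  # a linear candidate wins; the mean baseline scores far worse
```

Real AutoML systems add the remaining lifecycle stages (feature selection, data sampling, explanations), but the select-train-score-compare loop is the core.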

Oracle Database added AutoML in its 21c release two years ago. Introduced more recently in MySQL HeatWave, AutoML is extremely simple, intuitive, and completely transparent in how it explains its conclusions. As far as automating DBAs’ tasks goes, MySQL HeatWave is not completely autonomous like Oracle Autonomous Database, but it is getting much closer.

All of this makes the OCW AI announcements a natural evolution.

Oracle Ubiquitous Generative AI LLM and RAG OCW Announcements

Database Vector Search

Oracle is injecting generative AI capabilities into its Oracle Database tools and cloud services to enhance developer and user productivity. This enables developers to incorporate vector search functionality in the generative AI applications they develop. Vector search plays a key role in augmenting the accuracy of output from generative AI applications. In the upcoming Oracle Database 23c release update, AI Vector Search, a new set of features that includes a vector data type and vector indexes, will let Oracle Database efficiently store vectors and perform very fast vector similarity searches.
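The core operation behind any vector similarity search, Oracle's included, is ranking stored embeddings by their closeness to a query vector. The sketch below uses made-up three-dimensional vectors standing in for real embeddings and a brute-force scan; Oracle's vector indexes exist precisely to avoid scanning every row, so this is the concept, not the implementation.

```python
import math

# Toy in-memory "vector store": each entry pairs a document with a
# made-up embedding (real embeddings have hundreds of dimensions).
store = [
    ("invoice policy", [1.0, 0.0, 0.0]),
    ("travel policy", [0.0, 1.0, 0.0]),
    ("security policy", [0.0, 0.3, 0.9]),
]

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def nearest(query_vec, k=1):
    # Brute-force similarity search; a vector index avoids the full scan.
    ranked = sorted(store, key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(nearest([0.9, 0.1, 0.0]))  # → ['invoice policy']
```

Because similarity is geometric rather than keyword-based, semantically related data is found even when the query shares no exact terms with it.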

A major problem with generative AI is the phenomenon known as LLM hallucination. It can be attributed to the fact that foundational LLMs are trained on a large but general set of data at a point in time. Applications that rely solely on these LLMs to generate responses can deliver plausible-sounding false information. In other words, they can make up wrong answers.

Many generative AI LLM vendors recommend what’s called fine-tuning to minimize hallucinations. Fine-tuning is the process of refining an LLM’s parameters to improve performance and accuracy on specific tasks. It frequently requires additional or modified ML techniques. The problem with fine-tuning is that unless it’s an ongoing process, which is typically time-consuming and costly, the model becomes increasingly inaccurate over time. And candidly, few organizations want their proprietary data built into public LLMs that others can use.

Oracle concentrated on solving how businesses can use proprietary in-house data with generative AI LLMs, without making that data public, while mitigating LLM hallucinations. The technique is called Retrieval-Augmented Generation (RAG). RAG enables organizations to get answers to natural language questions by augmenting the LLM with private business data, without the LLM storing that data.

Implementing RAG requires that the augmenting data (which can be structured business data, text documents, images, videos, graphics, or audio clips) be encoded as vectors: arrays of numbers that represent the semantic meaning of each piece of data. These vectors are stored in a vector database and retrieved when needed. Which augmenting vectors should be provided to the LLM depends on the input or prompt given to the LLM. RAG typically sends the input/prompt to a vector embedding AI model, which converts the input into a vector. The input vector is then sent to the vector database, which searches for similar vectors, that is, pieces of data that share semantic similarities with the input. The input vector, along with the augmenting vectors, is then sent to the LLM, which combines that content with its trained general knowledge to provide a more accurate, informed answer. This noticeably reduces the LLM’s “hallucinations.”
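That flow can be sketched end to end. In the minimal sketch below, `embed()` is a hypothetical stand-in for a real embedding model (here it just counts letters), the vector database is a plain Python list, and a real system would send the augmented prompt that `answer()` assembles to an LLM.

```python
import math

def embed(text):
    # Hypothetical embedding model: a 26-dimensional letter-count vector,
    # purely for illustration; real systems use trained embedding models.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

store = []  # the "vector database": (document, vector) pairs

def index(doc):
    store.append((doc, embed(doc)))        # encode once, store alongside the text

def retrieve(prompt, k=1):
    qv = embed(prompt)                     # embed the input/prompt
    ranked = sorted(store, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]  # the most similar augmenting data

def answer(prompt):
    context = " | ".join(retrieve(prompt))
    # A real system would send this augmented prompt to the LLM.
    return f"Context: {context}\nQuestion: {prompt}"

index("Refunds are processed within 14 days.")
index("Support is available 24/7 via chat.")
print(answer("How long do refunds take?"))
```

The key property the sketch preserves: the private data stays in the store and is merely supplied as context per question, never trained into the model.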

Bear in mind that little to no data originates in a vector database. That means it has to be copied from somewhere else, vectorized, and placed in the vector database. That translates into major time and cost problems that escalate as data scales. Since much of that data already resides in Oracle databases, both in on-premises data centers and in the cloud, storing vectors in Oracle Database, alongside the source data for the vectors, means no copies, no extra storage, no data movement, no time lag, and no extra costs. By incorporating vector store and search, Oracle Database solves those substantial problems and helps organizations implement RAG-enabled solutions.

This may sound like the Oracle Database would get bogged down. It doesn’t. In fact, the Oracle Database accelerates AI vector searching in three ways. The first is with sophisticated vector indexes. The second comes from transparent vector processing scalability across all of the database instances in a Real Application Cluster (RAC). The third comes from Oracle Exadata where vector searches are transparently offloaded to its smart storage server cores. This greatly accelerates the search while freeing up resources in the Exadata database servers for more queries and applications.

AI Vector Search also benefits and takes advantage of many other core Oracle Database capabilities such as end-to-end security, anonymizing PII and other proprietary data, sharding, transactions, parallel SQL, analytics, and disaster recovery. These are critically important to enterprises.

Cohere Partnership

Oracle doubled down by emphasizing its strategic partnership with Cohere at OCW 2023. Oracle chose Cohere because its generative AI LLMs are specifically designed for enterprises, require much less data to train, use RAG, and are more efficient, requiring much less hardware infrastructure.

OCI is now offering its Generative AI Service based on Cohere. The key to this service, and what differentiates it from all other public cloud generative AI services, is that it is designed for businesses, enterprises, and governments; tailored to the customer’s data; comes with production-hardened security and privacy; and, best of all, has predictable performance and pricing. Currently, the OCI Generative AI Service is in limited availability.

One fundamental value from the user’s perspective: the data used is their data. It’s not available to anyone else unless they choose to share it, and Oracle personnel do not have access to it. The models, too, belong only to them.

Embedded Generative AI in Fusion Applications

Oracle made it clear to all attendees that it is focused on providing AI value to its customers. That principle is behind Oracle’s efforts to embed generative AI LLMs with RAG in its Fusion Cloud SaaS applications. Oracle’s goal is to make them as transparent and intuitive as its existing ML, predictive AI, classification AI, and anomaly detection AI capabilities.

Twenty-five Fusion applications embedded with Generative AI are in preview in OCI, with 25 more imminent. The plan is to make it part of every Fusion application. Based on the demos Oracle presented, it’s a massive productivity game changer.

And they’re not stopping with Fusion applications. Cerner applications are also being rewritten to take advantage of embedded Generative AI LLMs to reduce user tasks, improve outcomes, and once again radically improve productivity.

One example Oracle gave was a built-in medical scribe. Physicians today spend up to 70% of their time writing reports for every examination, treatment, recommendation, and follow-up instruction. Either many screens have to be filled out for every task, or a scribe accompanies the physician and takes notes. A built-in scribe that, when activated, listens to the entire appointment means those reports are filled in automatically. The physician only needs to edit them or add comments. That enables physicians to see more patients and/or spend more time with each patient. The productivity gained is enormous.

Oracle has also infused the Fusion Data Intelligence Platform (also in preview) with Generative AI LLMs, RAG, and Synthesia, which turns text into realistic avatar-based videos. It’s an incredible use of AI to increase productivity in many industries.

Oracle is distinctly on its way to making Generative AI LLMs and RAG practical for companies and governments of all sizes. In other words, ubiquitous AI.

APEX Low-Code To No-Code Application Development

APEX low-code to no-code application development has been around for quite some time. It’s currently the most popular low-code enterprise application development platform, and it keeps getting better and easier. APEX enables rapid development of cloud and mobile applications, turns spreadsheets into applications, modernizes Oracle Forms, and supports SaaS extensions, E-Business Suite (EBS) extensions, external data sharing, and data mart reports. It’s intuitive for most developers as well as non-technical users. Oracle made it clear that soon APEX will use AI to convert natural language prompts into SQL queries and then create the desired app components, further accelerating the development process.

But as the saying goes, the proof is in the pudding. Oracle used APEX to write applications for the U.S. NIH and CDC during the Covid-19 pandemic to help those agencies manage the crisis. And Oracle has been using APEX on its own applications: ever since Oracle acquired Cerner, it has been rewriting all of the Cerner healthcare applications with APEX. Cerner customers have been so delighted with the increased application usability and performance that Oracle is now using APEX for all future application development, including Fusion applications.

Besides making application development markedly simpler than Java or other more complex programming languages, APEX applications also run much faster because they reside in the database. Application performance is up to 50x faster, according to Oracle at OCW 2023, while Oracle generally claims APEX applications are 20x faster to develop with 100x less code. And because those applications run in the Oracle Database, there is no need to write security, scalability, or disaster recovery into the applications; it’s already there in the database.

APEX is changing the way applications are developed. It makes development faster, easier, more secure, and more scalable while increasing application performance. It’s a win-win for everyone.

Seamless Multi-Cloud

Several of the public cloud service providers give lip service to multi-cloud. The problem is they hold the customer’s data hostage with egress fees. If an application is not in the same vendor’s cloud as the data, the public cloud provider will likely charge an egress fee every time the application reads or transfers the data from that cloud. Not exactly a ringing endorsement of multi-cloud.

Oracle has been actively working to break down those barriers, starting with its partnership with Microsoft Azure. That partnership initially allowed Azure customers to use OCI database services with Azure applications via a 10 Gbps interconnect between Azure and OCI data centers, and egress fees were eliminated. The partnership evolved to where Azure users purchase, configure, and use OCI database services as if they were Azure services. OCI database services appear to Azure users like any other Azure service, with similar latencies of ≤ 2 ms.

And the evolution continues with Oracle Database Service@Azure, the first true multi-cloud implementation. Oracle is now deploying OCI infrastructure, including Exadata X10M platforms, in Azure data centers. Exadata X10M is the best and fastest server and storage platform for Oracle Database available anywhere. It runs Oracle Database 23c, Autonomous Database, and Exadata Database Service. This move reduces application-to-database latencies by an order of magnitude, which translates into much faster application response times. And, of course, no egress fees.

One more thing: purchasing Oracle Database services on Exadata X10M through Azure now counts toward the customer’s contractual Azure spend commitments. If purchased through OCI, it counts toward the customer’s OCI spend commitments.

That’s not the only thing Oracle is doing to eliminate multi-cloud barriers. It has also made the fastest-growing database cloud service, MySQL HeatWave, available on AWS. AWS has not yet agreed to play nice in a multi-cloud environment; it still has the highest egress fees in the industry. To make it simpler and lower cost for AWS application users to take advantage of the peerless performance of MySQL HeatWave Lakehouse, Vector Search, AutoML, Autopilot, analytics, and more, Oracle developed a native implementation for AWS. All of the functionality and performance of MySQL HeatWave on OCI is also available on AWS, at a slightly higher cost, but still lower than the functionally equivalent, though not performance-equivalent, Amazon Aurora and Redshift database services.

Oracle is serious about making transparent multi-cloud, without egress fees, an industry-wide standard. Expect more announcements about this in the future.

Conclusion – OCW 2023 Final Takeaway

OCW 2023 delivered exceptional information, and a lot of it. There were nearly four dozen announcements covering highly innovative, thought-provoking technology and industry advances, along with enlightening user experiences.

However, the three biggest takeaways from OCW 2023 were:

  1. Ubiquitous Generative AI Large Language Models (LLMs) and RAG.
  2. APEX low-code to no-code application development.
  3. Seamless multi-cloud.

The age of business practical Generative AI is here, and Oracle is all in.
