Wikibon AWS re:Invent 2017 Trip Report: AWS Sets the Pace in the Public Cloud

With Stu Miniman, David Floyer, and George Gilbert

Amazon Web Services Inc. continues to dominate the public cloud market. At its sixth annual re:Invent conference this past week in Las Vegas, AWS discussed how it differentiates through deepening investments in its core infrastructure-as-a-service and cloud database offerings. Going forward, the company made it clear that it intends to stoke ongoing growth through strategic investments in game-changers such as artificial intelligence, streaming media and the “internet of things.”

In his re:Invent 2017 Day One keynote, AWS Chief Executive Andy Jassy (pictured) highlighted AWS’s staggering growth and runaway momentum in the global cloud market. In the past year its revenue rose 42 percent to an annual run rate of $18 billion. It now has millions of active customers, ranging from successful startups that have built their businesses from scratch on AWS to large private-sector, government, academic and other customers in every vertical.

Behind this momentum is the fact that many users are beginning to adopt public cloud for their core applications and workloads. In addition, customers are putting new workloads such as machine learning and deep learning in AWS’s cloud. To address the ongoing growth in demand everywhere on the planet, Jassy announced AWS’s plans to roll out its cloud services to more regions worldwide and to more availability zones per region. In addition, much of the mainstage discussion was of AWS’s significant financial commitment to expanding its partner ecosystem to deliver cloud-based solutions in every region, vertical and application scenario.

As discussed by Jassy and other mainstage presenters, AWS is making strategic investments in the following core areas:

Infrastructure As A Service

With its myriad IaaS-related announcements at re:Invent 2017, AWS deepened its core cloud-computing value proposition. It also moved more forcefully into platform as a service to contend with Microsoft Corp., IBM Corp., Google Inc. and others. But it did not diversify into software-as-a-service segments to compete directly with Microsoft, Oracle Corp., Salesforce.com Inc. and others. Nor did it offer any credible new announcements focused on multiprovider, multicloud scenarios, though that may not yet be a huge customer priority, because very few customers have more than one primary cloud (usually AWS).

AWS has moved well beyond its prior focus on encouraging enterprise clients to migrate legacy apps, data and workloads to the cloud; it is now more keenly focused on enabling clients to develop greenfield, high-value and disruptive cloud applications. Accordingly, at this year’s re:Invent it placed a correspondingly greater emphasis on developers and on its independent software vendor ecosystem. To the extent that AWS is responding to the growing challenge from Azure and Google Cloud Platform in public cloud IaaS, it is doing so indirectly, by aggressively emphasizing its innovations in development tools, most notably machine learning, deep learning, AI and analytics, areas where those public cloud rivals are particularly well-positioned.

At re:Invent 2017, AWS announced several new instances for its core EC2 infrastructure as a service. These are designed to provide improved price-performance on customers’ wide range of infrastructure as a service workloads:

  • Amazon EC2 P3 Instances: These general-purpose graphics processing unit computing instances target deep learning and other AI workloads. Each instance provides up to eight NVIDIA Tesla V100 (Volta) GPUs to accelerate customers’ most advanced workloads at lower cost and with greater agility.
  • Amazon EC2 H1 Instances: These provide customers with high-speed sequential access to multiple terabytes of data, more virtual central processing units, and more memory per terabyte of local magnetic storage, as well as up to 25 gigabits per second of network bandwidth with logical groupings or clusters of instances in the selected AWS region.
  • Amazon EC2 M5 Instances: These support twice as many floating point operations per core with superior price-performance relative to M4 instances.
  • Amazon EC2 T2 Unlimited Instances: These let burstable T2 instances sustain high CPU performance for as long as a workload requires, with any usage beyond the baseline billed automatically.
  • Amazon EC2 Bare Metal Instances: These allow an operating system to run directly on the underlying hardware while retaining access to all AWS cloud benefits.

In addition, AWS announced streamlined access to EC2 spot instance capacity, which it claims can help save as much as 90 percent off on-demand pricing by helping customers provision smaller chunks of cloud instances for the new generation of containerized, serverless and other cloud microservices.
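To make the mechanics concrete, here is a minimal sketch of how a customer might request Spot capacity through the EC2 `run_instances` API via boto3. The AMI ID, instance type and price cap below are placeholders, and the actual call (commented out) requires AWS credentials and the boto3 package.

```python
def spot_run_params(ami_id, instance_type, max_price):
    """Build run_instances parameters that request Spot capacity
    instead of On-Demand. All argument values are illustrative."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "InstanceMarketOptions": {
            "MarketType": "spot",
            "SpotOptions": {
                "MaxPrice": max_price,           # price cap in USD per hour
                "SpotInstanceType": "one-time",  # do not re-request after interruption
            },
        },
    }

# With credentials configured, the request would be:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.run_instances(**spot_run_params("ami-0123456789abcdef0", "m5.large", "0.04"))
```

Because the Spot request rides on the same `run_instances` call as On-Demand capacity, existing provisioning tooling needs only this extra parameter block to start bidding for discounted instances.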

AWS also launched its new Systems Manager. AWS Systems Manager provides a unified dashboard that helps customers to operate and manage EC2 infrastructure at scale. It supports logical grouping of compute and storage resources, automates common deployment and administration workflows, and enables secure management of cloud infrastructure.

One thing that Wikibon expected, but AWS didn’t address in its announcements, was incorporation of AI into its service management tools to drive more fine-grained, dynamic monitoring and optimization of IaaS, containers, databases and other cloud resources. This missing piece was puzzling, considering how extensively AWS has integrated ML and other AI features across its sprawling solution portfolio.

VMware-based Hybrid Cloud

Customers want hybrid-cloud consistency between public clouds and on-premises data centers. The key to that is using the same software to manage infrastructure on both ends.

That explains why AWS’s partnership with VMware is so strategic for both companies. In support of expanded AWS/VMware public/private hybrids, the companies announced expanded regional availability of VMware Cloud on AWS. Joint customers in the U.S. East region can now use VMware Site Recovery and VMware vMotion to move, run and protect production cloud workloads at scale.

VMware Cloud on AWS today supports clusters of up to 32 hosts and multiple software-defined data centers per organization, and will soon support 10 clusters per SDDC. This will enable a single customer to run environments as large as tens of thousands of virtual machines. Customer SDDC environments run on dedicated, high-performance, highly secure next-generation AWS hardware infrastructure.

At re:Invent, the partners also announced the following new functionality to boost hybrid-cloud functionality, performance, availability, migration and flexibility:

  • AWS and VMware are expanding the scale, network connectivity and security capabilities of VMware Cloud on AWS to further support the most resource-intensive applications, such as Oracle, Oracle RAC, Microsoft SQL Server, Apache Spark and Hadoop.
  • The new VMware vSphere vMotion service, along with new L2 stretched networking features and AWS Direct Connect, enables customers to migrate applications from on-premises VMware clusters into VMware Cloud on AWS without any disruption to the application and without having to make any changes to the network configuration.
  • The new VMware Hybrid Cloud Extension add-on SaaS offering for VMware Cloud on AWS supports large-scale migration between on-premises environments running vSphere 5.0+ and VMware Cloud on AWS with no replatforming, retesting or change in tooling.
  • The new VMware Wavefront cloud service allows customers to visualize, alert on and troubleshoot applications running on VMware Cloud on AWS. It provides an open API platform supporting more than 80 integrations, from application metrics collectors for Java, Ruby, Python and Go to service metrics collectors for MySQL, Pivotal, Kubernetes, AWS and more.

Cloud-Native Computing Services

AWS significantly beefed up its platform as a service offerings for customers looking to run containerized microservices, function as a service and other cloud-native applications on EC2. In this way, AWS is positioning itself as a more full-featured platform as a service cloud provider.

The most important announcement in this regard was the new Amazon Elastic Container Service for Kubernetes, marketed under the acronym EKS. As an alternative to AWS’s existing ECS offering, EKS provides a fully managed Kubernetes orchestration service on AWS without the need to set up, operate and maintain Kubernetes clusters. AWS is working closely with the Cloud Native Computing Foundation to keep its Kubernetes support closely aligned with the standard open-source codebase.

Related to this was the launch of Amazon Fargate, which supports streamlined deployment and management of containers on ECS and EKS. Fargate enables scaling of the orchestration to tens of thousands of containers in a matter of seconds without the need to manage the underlying infrastructure. With Fargate, AWS customers no longer have to provision, configure and scale clusters of virtual machines to run containers. Instead, they can upload container images and specify resource requirements, with Fargate launching containers instantaneously.
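As a sketch of what “no cluster management” looks like in practice, the parameters below describe a Fargate-compatible task definition for the ECS `register_task_definition` API: the customer declares CPU and memory for the task instead of sizing EC2 hosts. The family name, container image and sizing values are illustrative.

```python
def fargate_task_definition(family, image, cpu="256", memory="512"):
    """Build register_task_definition parameters for a Fargate task.
    With Fargate, capacity is declared per task; there is no EC2
    cluster to provision. All values here are illustrative."""
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",  # Fargate tasks require awsvpc networking
        "cpu": cpu,               # CPU units as a string; "256" = 0.25 vCPU
        "memory": memory,         # memory in MiB, as a string
        "containerDefinitions": [
            {"name": "app", "image": image, "essential": True}
        ],
    }

# With boto3 and credentials configured:
# ecs = boto3.client("ecs")
# ecs.register_task_definition(**fargate_task_definition("web", "nginx:latest"))
# ...then launch it with ecs.run_task(..., launchType="FARGATE", taskDefinition="web")
```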

Filling out the PaaS and middleware capabilities in AWS’s portfolio, the company announced a new security-threat detection service (Amazon GuardDuty), new preconfigured application-security rules (AWS Web Application Firewall Partner Managed Rules), a new message brokering service (Amazon MQ) and a serverless application discovery resource (AWS Serverless App Repository).

For developers, AWS expanded its support for simplified programming of stateless, event-driven microservices that run in its AWS Lambda serverless cloud. It launched AWS Cloud9, an integrated development environment that runs inside a browser and supports collaborative coding, execution and debugging of Lambda functions. Cloud9 provides a preconfigured software development kit with libraries, plug-ins and a shared repository for team-based development of complex cloud serverless apps.
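A Lambda function of the kind edited and debugged in Cloud9 is just a stateless handler invoked once per event. A minimal sketch, assuming an API-Gateway-style event shape (the field names follow that convention and are otherwise illustrative):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler: stateless, invoked per event.
    Echoes a greeting taken from an API-Gateway-style query string."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "hello, " + name}),
    }
```

The same function can be wired to S3 uploads, DynamoDB streams or Kinesis records simply by changing the event source; only the shape of `event` differs.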

This new offering should be welcome news to the hundreds of thousands of AWS customers using serverless functions, who in aggregate have boosted AWS Lambda usage by around 300 percent over the past year. The fact that AWS has embedded Lambda across many of its services should stimulate even deeper customer adoption of the capability in their cloud applications.

Data Storage, Processing, and Management Services

AWS’s Aurora relational cloud database, which is fully compatible with the open-source MySQL and PostgreSQL engines, is the fastest-growing AWS service.

At re:Invent, AWS announced significant feature enhancements to Amazon Aurora and its other existing cloud databases, added a new specialized cloud database for graph analysis, and introduced other data-protection and management features into its service portfolio. These launches filled out an already impressive range of data management offerings that AWS customers are leveraging for a wide range of high-performance, flexible cloud-computing services.

AWS announced two new Aurora-based services. The new Aurora Multi-Master supports scale-out of database reads and writes across multiple data centers, allowing applications to read and write to multiple database instances in a cluster and ensuring zero application downtime if any AWS instance or Availability Zone fails. The new Amazon Aurora Serverless provides on-demand database auto-scaling for applications with variable workloads. It starts up on demand, shuts down when not in use and scales automatically, with no instances to manage; customers pay per second only for the database capacity they use.

On its Amazon DynamoDB cloud NoSQL database, the company introduced two new services:

  • DynamoDB Global Tables: This creates multi-master tables that automatically replicate across two or more AWS regions. The benefits include the ability to build high-performance, globally distributed cloud applications, support for low-latency reads and writes to locally available tables, and ensured application availability through multiregion redundancy. These features are easy to set up and don’t require application rewrites.
  • DynamoDB Backup and Restore: This automates on-demand and continuous backups of hundreds of terabytes of data instantaneously with no performance impact on applications. It also enables point-in-time data restore.
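As a sketch of how simple the setup is, creating a global table through the DynamoDB `create_global_table` API amounts to naming the table and listing its replica regions. The table name and regions below are placeholders; the table must already exist in each region with DynamoDB Streams enabled.

```python
def global_table_params(table_name, regions):
    """Build create_global_table parameters. Assumes the table already
    exists in every listed region with streams enabled; the table name
    and region list here are illustrative."""
    return {
        "GlobalTableName": table_name,
        "ReplicationGroup": [{"RegionName": r} for r in regions],
    }

# With boto3 and credentials configured:
# dynamodb = boto3.client("dynamodb")
# dynamodb.create_global_table(
#     **global_table_params("orders", ["us-east-1", "eu-west-1"]))
```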

The newly launched Amazon Neptune graph database targets applications, such as recommendation engines, fraud detection and social networking, that work with highly connected datasets. Neptune can store billions of graph relationships, auto-scale capacity, support low-latency queries, replicate data across Availability Zones and support full backup and restore. It supports both the Apache TinkerPop property-graph model (queried with Gremlin) and the W3C RDF model (queried with SPARQL).

On its core Amazon S3 data lake service, AWS introduced the new S3 Select API. This enables applications to retrieve data subsets, which can significantly improve performance on many applications by eliminating the need to retrieve entire objects when all that’s required is processing of subsets of the content.
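A minimal sketch of an S3 Select request via the `select_object_content` API, which pushes a SQL expression down to S3 so that only matching rows, rather than the whole object, come back over the wire. The bucket, key and query below are placeholders.

```python
def s3_select_params(bucket, key, sql):
    """Build select_object_content parameters for a CSV object with a
    header row, returning results as JSON lines. Bucket, key and SQL
    here are illustrative."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ExpressionType": "SQL",
        "Expression": sql,
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
        "OutputSerialization": {"JSON": {}},
    }

# With boto3 and credentials configured:
# s3 = boto3.client("s3")
# resp = s3.select_object_content(**s3_select_params(
#     "my-bucket", "logs.csv",
#     "SELECT s.status FROM S3Object s WHERE s.status = '500'"))
# for event in resp["Payload"]:          # results stream back as events
#     if "Records" in event:
#         print(event["Records"]["Payload"].decode())
```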

Other new cloud data management features include server-side database encryption and automatic synchronization of mobile app data to the AWS cloud.

Analytics, Machine Learning, Deep Learning, And Artificial Intelligence

The banner announcements at re:Invent were AWS’s many additions to its portfolio of analytics, ML, DL and AI offerings. Many of these announcements were designed to accelerate the development of an open ecosystem for developers building sophisticated AI applications around the AWS cloud. In addition, AWS announced a new program of research grants and a laboratory to stimulate development and commercialization of ML in the cloud.

The most noteworthy new AI-related product launch was Amazon SageMaker. This cloud offering launches AWS into the growing market for AI development tools that incorporate built-in DevOps workflows. The fully managed service provides an abstraction layer through which teams of data scientists and developers can collaborate on building and deploying sophisticated AI-driven apps. SageMaker enables developers to pull data from their S3 data lake, leverage a library of preoptimized algorithms, build and train models at scale, optimize them through ML-driven hyperparameter optimization, and deploy them in real time into production EC2 cloud instances. SageMaker automatically configures and optimizes TensorFlow and Apache MXNet, so customers don’t have to do any setup to start using those frameworks. Customers can also bring any other framework to SageMaker by packaging it in a Docker container stored in the Amazon EC2 Container Registry.

The service is agnostic to the underlying development framework and runtime libraries used to build and train models. Developers access SageMaker through hosted Jupyter notebooks, can use it with their choice of AI modeling frameworks (including MXNet, TensorFlow, CNTK, Caffe2, Theano, Torch and PyTorch), and can take advantage of built-in autoscaling of their deployed models in EC2.
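Underneath the SageMaker console and notebook SDK sits the `create_training_job` API. A hedged sketch of its request shape follows; all job names, image names, ARNs and S3 URIs are placeholders, and the live call (commented out) requires AWS credentials.

```python
def training_job_params(job_name, image, role_arn, train_s3, output_s3):
    """Build create_training_job parameters. The training image can be
    a built-in SageMaker algorithm or any customer Docker image from
    the EC2 Container Registry. All argument values are illustrative."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image,
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": train_s3,
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3},
        "ResourceConfig": {
            "InstanceType": "ml.p3.2xlarge",  # a GPU training instance
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

# With boto3 and credentials configured:
# sm = boto3.client("sagemaker")
# sm.create_training_job(**training_job_params(
#     "demo-job", "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest",
#     "arn:aws:iam::123456789012:role/SageMakerRole",
#     "s3://my-bucket/train/", "s3://my-bucket/output/"))
```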

Recognizing that natural language processing is at the core of many AI applications, AWS announced several new services that build on and extend the Amazon Polly (text-to-speech), Amazon Lex (automatic speech recognition and natural-language understanding) and Amazon Rekognition (computer vision) offerings it announced at re:Invent 2016. The new Amazon Transcribe, in preview, performs speech-to-text on audio objects stored in S3, recognizing different speakers, supporting custom vocabularies, applying correct punctuation and formatting, and timestamping the outputs. The new Amazon Translate performs real-time neural machine translation between human languages. And the new Amazon Comprehend is an NLP service that uses ML to identify entities, key phrases, topics and sentiment in text.
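Calling Comprehend is a single API request per document. A minimal sketch of the `detect_sentiment` request shape (the sample text is illustrative, and the live call, commented out, requires AWS credentials):

```python
def sentiment_request(text, language="en"):
    """Build a detect_sentiment request. The response carries a
    Sentiment label (POSITIVE, NEGATIVE, NEUTRAL or MIXED) plus
    per-label confidence scores."""
    return {"Text": text, "LanguageCode": language}

# With boto3 and credentials configured:
# comprehend = boto3.client("comprehend")
# comprehend.detect_sentiment(**sentiment_request("The keynote was great"))
# The same request shape works for detect_entities and detect_key_phrases.
```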

In another set of announcements that build on the previous year’s re:Invent announcements in AI, AWS launched Amazon Rekognition Video. This new service uses DL to perform object and activity detection, person tracking, face recognition, content moderation and celebrity recognition on streaming and at-rest video content. In a related announcement, the newly announced AWS DeepLens (available for preorder) provides a fully programmable video camera that developers can use, along with SageMaker, prebuilt models and code examples, to build and train video analytics for streaming in the AWS cloud.

Streaming Media

Much of AWS’s AI focus is on real-time processing of media streams. To provide the complementary EC2 back-end for that new generation of low-latency rich-media applications, AWS upgraded an existing streaming offering and launched a new family of streaming infrastructure services.

AWS added the new Video Streams offering to its Kinesis real-time stream computing service. Kinesis Video Streams ingests video and other time-encoded data so that low-latency ML, DL and other analytics can be applied to that content, either at rest or in motion. It simplifies development of video-enabled cloud services: it can ingest video from millions of devices, it provides secure, durable, searchable storage of time-indexed media and other content objects, and it can be programmed through serverless Lambda functions.

The new Elemental family of infrastructure services handles origination, development, publishing, optimization and administration of monetizable video assets in the cloud. The new solutions include AWS Elemental MediaStore (a repository that presents a consistent URL for media content), AWS Elemental MediaPackage (just-in-time packaging of streaming media objects), AWS Elemental MediaLive (real-time video encoding and compression), AWS Elemental MediaConvert (file-based video processing) and AWS Elemental MediaTailor (server-side ad insertion to personalize and monetize video content).

Internet Of Things

Many new cloud applications are accessed in edge scenarios, including mobile, embedded and internet of things devices. With this week’s re:Invent, AWS has significantly expanded its IoT portfolio to support device management, security, analytics and other edge-enabling infrastructure services.

In terms of edge analytics, AWS clearly is positioning its serverless Lambda capabilities as fundamental enablers for developers building edge applications while maintaining a degree of centralized control over all things within the public cloud. However, as with the rest of AWS’s announcements, these new IoT and edge initiatives don’t extend to private, hybrid or multicloud environments.

In IoT-related announcements at re:Invent 2017, AWS launched enhancements to Greengrass for more sophisticated edge deployments. The new AWS Greengrass ML Inference enables ML models to be deployed directly to devices, where they can drive local inferencing whether or not a device is currently connected to the cloud. With this release, AWS Greengrass now supports device-level Lambda functions to load models and do inferencing locally. In addition, AWS Greengrass now supports enhanced data and state synchronization, device security and over-the-air updates. Greengrass also now includes a protocol adapter for OPC-UA and supports deployment of optimized edge-based ML models to target devices running on Intel Corp. or Nvidia Corp. hardware.

To support development, deployment, optimization and management of AI-infused edge applications, AWS announced the following new offerings:

  • Amazon FreeRTOS: This free, open-source microcontroller operating system makes small, low-powered edge devices easier to program, deploy, secure and maintain. It connects easily to AWS Greengrass core devices and includes local AWS Greengrass, AWS IoT Core and security libraries, along with prepackaged microcontroller drivers from Microchip, TI, ST and NXP.
  • AWS IoT 1-Click: This new service simplifies building of new IoT applications that leverage local and cloud-based Lambda functions. Through single-click programming, developers can build customized serverless capabilities that trigger various embedded device actions.
  • AWS IoT Device Management: This new service helps businesses rapidly onboard, efficiently organize, continuously monitor and remotely manage connected IoT devices. It supports smart batch device onboarding, over-the-air updating, metadata and state indexing, fine-grained monitoring, logging and fleetwide search.
  • AWS IoT Device Defender: Coming in 2018, this service will enforce IoT fleetwide auditing, protection, authentication, encryption and policy assurance. It enables fleetwide device audits, security configuration, behavior baselining and monitoring, vulnerability assessment, anomaly detection, alerting and remediation.
  • AWS IoT Analytics: In preview now, this new service supports easy analysis of IoT device data. It can collect IoT data from multiple devices and other cloud sources, preprocess and enrich this data, store it in raw or time-series formats in the AWS cloud, and support ad hoc queries as well as more sophisticated analytics and visualizations through Amazon QuickSight. Analysts can explore and model this data in depth with prebuilt Jupyter notebooks. AWS IoT Analytics includes those notebooks, as well as an optimized SQL query engine and automatic integration with Amazon SageMaker.

Device-Centric Cloud Solutions

Some of the splashier announcements at re:Invent 2017 focused on new solutions that converge sophisticated new devices with AWS’s suite of data-driven AI cloud services.

Most notably, AWS announced Alexa for Business. This new offering consists of tools for incorporating Alexa devices, skills and users securely and at scale into business applications. It includes an API for building context-aware voice skills for knowledge worker applications, such as calendaring, meetings and database queries. It supports centralized enrollment of employees’ own personal Alexa devices into their accounts within an Alexa-using business environment. It also includes prepackaged Alexa skills that can be customized by developers to conform to organizational requirements.

As noted above, the newly released AWS DeepLens, in limited preview, provides a fully programmable video camera that developers can use, along with SageMaker, prebuilt models and code examples, to build and train video analytics for streaming in the AWS cloud.

Last but not least, AWS previewed Amazon Sumerian, a toolkit that enables, within a browser, a developer to create and run virtual reality, augmented reality and 3D applications quickly and easily without requiring any specialized programming or 3D graphics expertise. Sumerian enables composition of highly immersive and interactive scenes that run on popular hardware such as Oculus Rift, HTC Vive, and iOS mobile devices, with support for Android ARCore imminent. Sumerian’s integration with Amazon Lex and Amazon Polly lets developers construct engaging spoken interactions between virtual characters and human users.

For in-depth color commentary on all this from AWS, its partners, customers, the venture capital community and analysts, check out theCUBE interviews from re:Invent 2017. And check out SiliconANGLE’s exclusive in-depth interview with Andy Jassy.
