
Breaking Analysis: Moore's Law is Not Dead & AI is Ready to Explode

Contributing Author: David Floyer

Moore’s Law is dead, right? Think again. While the historical ~40% annual improvement in CPU performance is slowing, the combination of CPUs packaged with alternative processors is improving at a rate of more than 100% per annum. These unprecedented, massive improvements in processing power, combined with data and AI, will completely change the way we think about designing hardware, writing software and applying technology to businesses. Every industry will be disrupted. You hear that all the time. Well, it’s absolutely true, and we’re going to explain why and what it all means. 

In this Breaking Analysis we’re going to unveil some data that suggests we’re entering a new era of innovation where inexpensive processing capabilities will power an explosion of machine intelligence applications. We’ll also tell you what new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade. 

Is Moore’s Law Really Dead? 

We’ve heard it hundreds of times in the past decade. EE Times has written about it, as have the MIT Technology Review, CNET and even industry associations that marched to the cadence of Moore’s Law. But our friend and colleague Patrick Moorhead got it right when he said:

Moore’s Law, by the strictest definition of doubling chip densities every two years, isn’t happening anymore. -Patrick Moorhead

And that’s true. He’s absolutely correct. However, he couched that statement with “by the strictest definition” for a reason… because he’s smart enough to know that the chip industry is masterful at figuring out workarounds. 

Historical Performance Curves are Being Shattered

The graphic below is proof that the death of Moore’s Law by its strictest definition is irrelevant.

The fact is that the historical outcome of Moore’s Law is actually accelerating, quite dramatically. This graphic digs into the progression of Apple’s SoC developments from the A9 and culminating in the A14 5nm Bionic system on a chip. 

The vertical axis shows operations per second and the horizontal axis shows time for three processor types: the CPU, measured in terahertz (the blue line, which you can hardly see); the GPU, measured in trillions of floating point operations per second (orange); and the NPU (neural processing unit), measured in trillions of operations per second (the exploding gray area). 

Many folks will remember that historically, we rushed out to buy the latest and greatest PC because the newer models had faster cycle times, i.e. more GHz. The outcome of Moore’s Law was that performance would double every 24 months or about 40% annually. CPU performance improvements have now slowed to roughly 30% annually, so technically speaking, Moore’s Law is dead. 
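For those who like to check the math, here’s a quick back-of-the-envelope sketch (purely illustrative) of how a doubling period translates into an annual growth rate – doubling every 24 months works out to roughly 41% per year, the “~40%” figure we cite.

```python
# Back-of-the-envelope: convert a performance doubling period into an
# equivalent compound annual growth rate. Doubling every 24 months
# works out to roughly 41% per year.

def annual_rate_from_doubling(months_to_double: float) -> float:
    """Annual compound growth rate implied by a given doubling period."""
    years = months_to_double / 12.0
    return 2 ** (1.0 / years) - 1.0

if __name__ == "__main__":
    print(f"Double every 24 months -> {annual_rate_from_doubling(24):.0%} per year")
    print(f"Double every 18 months -> {annual_rate_from_doubling(18):.0%} per year")
```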

Apple’s SoC Performance Shatters the Norm

Combined, the improvements in Apple’s SoC since 2015 have come at a pace of more than 118% annually. More than, because 118% is the figure for just the three processor types shown above. We’re not even counting the impact of the DSPs and accelerator components of the system, which would push this higher. 
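To be clear about what that 118% means: it’s a compound annual growth rate across the combined processor types. The sketch below shows how such a figure is derived; the throughput numbers are hypothetical placeholders, not Apple’s actual A9 or A14 specs.

```python
# Illustrative only: how a combined annual improvement rate (CAGR) is
# derived from endpoint throughput figures. The start/end numbers below
# are hypothetical placeholders, not Apple's actual A9/A14 specs.

def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate between two throughput measurements."""
    return (end / start) ** (1.0 / years) - 1.0

if __name__ == "__main__":
    combined_2015_tops = 0.2   # hypothetical combined CPU+GPU+NPU throughput, 2015
    combined_2020_tops = 10.0  # hypothetical combined throughput, 2020
    print(f"Combined improvement: {cagr(combined_2015_tops, combined_2020_tops, 5):.0%} per year")
```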

Apple’s A14, shown above on the right, is quite amazing with its 64-bit architecture, multiple cores and alternative processor types. But the important thing is what you can do with all this processing power – in an iPhone! The types of AI continue to evolve, from facial recognition to speech/NLP, rendering videos, helping the hearing impaired and eventually bringing augmented reality to the palm of your hand. 

Quite incredible. 

Processing Goes to the Edge – Networks & Storage Become the Bottlenecks

We recently reported Microsoft CEO Satya Nadella’s epic quote that we’ve reached peak centralization. The graphic below paints a telling picture. We just shared above that processing power is accelerating at unprecedented rates. And costs are dropping like a rock. Apple’s A14 costs the company $50 per chip. At its v9 announcement, Arm said it will have chips that can go into refrigerators to optimize energy use and cut power consumption by 10% annually. It said that chip will cost $1. A buck to shave 10% off your electricity bill from the fridge. 

Processing is plentiful and cheap. But look at where the expensive bottlenecks are. Networks and storage. So what does this mean? 

It means that processing is going to get pushed to the edge – wherever the data is born. Storage and networking will become increasingly distributed and decentralized, with custom silicon and processing power placed throughout the system and AI embedded to optimize workloads for latency, performance, bandwidth, security and other dimensions of value. 

And remember, most of the data – 99% – will stay at the edge. We like to use Tesla as an example. The vast majority of data a Tesla car creates will never go back to the cloud. It doesn’t even get persisted. Tesla saves perhaps 5 minutes of data. But some data will connect occasionally back to the cloud to train AI models – we’ll come back to that. 

But this picture above says if you’re a hardware company, you’d better start thinking about how to take advantage of that blue line, the explosion of processing power. Dell, HPE, Pure, NetApp, etc. are either going to start designing custom silicon or they’re going to be disrupted in our view. AWS, Google and Microsoft are all doing it for a reason. As are Cisco and IBM. As Sarbjeet Johal said, “this is not your grandfather’s semiconductor business.” 

This is not your grandfather’s semiconductor business. – Sarbjeet Johal

And if you’re a software engineer, you’re going to be writing applications that take advantage of all the data being collected and bring to bear this immense processing power to create new capabilities like we’ve never seen before. 

AI Everywhere

Massive increases in processing power and cheap silicon will power the next wave of AI, machine intelligence, machine learning and deep learning.

We sometimes use artificial intelligence and machine intelligence interchangeably. This notion comes from our collaborations with author David Moschella. Interestingly, in his book “Seeing Digital,” Moschella argues there’s “nothing artificial” about it. He prefers the term machine intelligence, saying there’s nothing artificial about machine intelligence, just as there’s nothing artificial about the strength of a tractor. A nuance, but interesting nonetheless. 

There’s nothing artificial about machine intelligence just like there’s nothing artificial about the strength of a tractor. – David Moschella

It’s a nuance, but precise language can often bring clarity. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get “smarter” – for example, to build better models that can lead to augmented intelligence and better decisions by humans or machines. These models improve as they get more data and iterate over time. 

Deep learning is a more advanced type of machine learning that uses multi-layered neural networks and more complex math.
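For readers who want something concrete, here’s a toy sketch of that “models improve as they get more data and iterate” idea – a one-parameter linear model refined batch by batch with plain gradient descent. The data and the model are illustrative only, not from any vendor’s stack.

```python
# A toy illustration of "models get better as they see more data and
# iterate": a one-parameter linear model fit by gradient descent,
# updated batch by batch. Everything here is illustrative.
import random

random.seed(0)

def make_batch(n: int):
    """Synthetic data: y = 3x plus a little noise."""
    xs = [random.uniform(0, 1) for _ in range(n)]
    ys = [3.0 * x + random.gauss(0, 0.1) for x in xs]
    return xs, ys

def update(weight: float, xs, ys, lr: float = 0.1, epochs: int = 50) -> float:
    """Refine the model weight on a new batch of data."""
    for _ in range(epochs):
        grad = sum(2 * (weight * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        weight -= lr * grad
    return weight

if __name__ == "__main__":
    w = 0.0
    for batch in range(1, 4):          # each batch = "more data over time"
        xs, ys = make_batch(20)
        w = update(w, xs, ys)
        print(f"after batch {batch}: learned weight ~= {w:.2f} (true value 3.0)")
```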

The right side of the chart above shows the two broad elements of AI. The point we want to make here is that much of the activity in AI today is focused on building and training models. And this is mostly happening in the cloud. But we think AI inference will bring the most exciting innovations in the coming years. 

AI Inference Unlocks Huge Value

Inference is the deployment of the model: taking real-time data from sensors, processing it locally, applying the training that was developed in the cloud and making micro adjustments in real time.

Let’s take an example. We love car examples, and observing Tesla is instructive as a model for how the edge may evolve. So think about an algorithm that optimizes the performance and safety of a car on a turn. The model takes inputs with data on friction, road conditions, angle of the tires, tire wear, pressure, etc. And the model builders keep testing, adding data and iterating the model until it’s ready to be deployed. 

Then the intelligence from this model goes into an inference engine – a chip running software – that goes into the car, takes data from sensors and makes micro adjustments in real time to steering, braking and the like. Now, as we said before, Tesla persists the data for a very short period of time because there’s so much of it. But it can choose to selectively store certain data if needed to send back to the cloud and further train the model. For example, if an animal runs into the road during slick conditions, maybe Tesla persists that data snapshot, sends it back to the cloud, combines it with other data and refines the model further to improve safety. 
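To make that flow concrete, here’s a hypothetical sketch of what an edge inference loop might look like: sense, infer locally, act, keep a short rolling buffer, and persist a snapshot for the cloud only when something unusual happens. This is not Tesla’s actual pipeline; every name and threshold below is made up for illustration.

```python
# Hypothetical sketch of an edge inference loop: sense, infer locally,
# act, keep a short rolling buffer, and only persist/upload a snapshot
# when something unusual happens. NOT any vendor's real pipeline;
# all names and thresholds are illustrative.
from collections import deque
import random

BUFFER_SECONDS = 300            # "keep roughly 5 minutes of data"
LOW_FRICTION = 0.1              # hypothetical trigger for saving a snapshot

def read_sensors() -> dict:
    """Stand-in for real sensor input (friction, tire angle, etc.)."""
    return {"friction": random.random(), "tire_angle": random.uniform(-30, 30)}

def run_model(reading: dict) -> float:
    """Stand-in for the trained model; returns a steering correction."""
    return -0.01 * reading["tire_angle"] * (1.0 - reading["friction"])

def is_anomaly(reading: dict) -> bool:
    """Hypothetical rule: very low friction counts as a notable event."""
    return reading["friction"] < LOW_FRICTION

def inference_loop(ticks: int = 1000) -> list:
    buffer = deque(maxlen=BUFFER_SECONDS)   # rolling window; old data falls off
    snapshots_for_cloud = []
    for _ in range(ticks):
        reading = read_sensors()
        correction = run_model(reading)      # micro adjustment applied locally
        buffer.append((reading, correction))
        if is_anomaly(reading):
            # persist the recent window and queue it for upload/retraining
            snapshots_for_cloud.append(list(buffer))
    return snapshots_for_cloud

if __name__ == "__main__":
    saved = inference_loop()
    print(f"snapshots queued for the cloud: {len(saved)}")
```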

This is just one example of thousands of AI inference use cases that will further develop in the coming decade. 

AI Value Shifts From Modeling to Inferencing

This conceptual chart below shows percent of spend over time on modeling versus inference. And you can see some of the applications that get attention today and how these apps will mature over time as inference becomes more mainstream. The opportunities for AI inference at the edge and in IoT are enormous. 


Modeling will continue to be important. Today’s prevalent modeling workloads in fraud, adtech, weather, pricing, recommendation engines, etc. will just keep getting better and better. But inference, we think, is where the rubber meets the road, so to speak, as shown in the previous example.

And in the middle of the graphic we show the industries, which will all be transformed by these trends. 

One other point on that. Moschella, in his book, explains why vertical industries historically remained pretty stovepiped from each other. They each had their own “stack” of production, supply, logistics, sales, marketing, service, fulfillment, etc. Expertise tended to reside and stay within that industry, and companies, for the most part, stuck to their respective swim lanes. 

But today we see so many examples of tech giants entering other industries: Amazon entering grocery, media and healthcare; Apple in finance and EVs; Tesla eyeing insurance. There are many examples of tech giants crossing traditional industry boundaries, and the enabler is data. Auto manufacturers over time will have better data than insurance companies, for example. DeFi – decentralized finance – platforms using the blockchain will continue to improve with AI and disrupt traditional payment systems, and on and on. 

Hence we believe the often-repeated bromide that no industry is safe from disruption.

Snapshot of AI in the Enterprise

Last week we showed you the chart below from ETR.

This data shows Net Score, or spending momentum, on the vertical axis. The horizontal axis is Market Share, or pervasiveness in the ETR data set. The red line at 40% is our subjective anchor – anything above 40% is really good in our view. 
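For readers new to the methodology, here’s a simplified sketch of how a net-score-style metric can be computed from survey responses: positive spending intentions minus negative ones. ETR’s precise definition is covered in the tutorial linked at the end of this post; treat this as an approximation for intuition only.

```python
# Simplified sketch of a "net score"-style spending-momentum metric:
# share of respondents adopting or increasing spend minus the share
# decreasing or replacing. An approximation for intuition, not ETR's
# exact methodology.

def net_score(responses: dict) -> float:
    """responses: counts keyed by 'adopting', 'increasing', 'flat',
    'decreasing', 'replacing'."""
    total = sum(responses.values())
    positive = responses.get("adopting", 0) + responses.get("increasing", 0)
    negative = responses.get("decreasing", 0) + responses.get("replacing", 0)
    return (positive - negative) / total

if __name__ == "__main__":
    sample = {"adopting": 20, "increasing": 40, "flat": 25,
              "decreasing": 10, "replacing": 5}
    print(f"Net Score: {net_score(sample):.0%}")   # 45% here, above the 40% line
```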

ML/AI is the number one area of spending velocity and has been for a while, hence the four stars. RPA is increasingly an adjacency to AI, and you could argue cloud – where most of the ML action is taking place today – is another adjacency, although we think AI continues to move out of the cloud for the reasons we just described. 

Enterprise AI Specialists Carve out Positions

The chart below shows some of the vendors in the space that are gaining traction. These are the companies CIOs and IT buyers associate with their AI/ML spend.

This graph above uses the same Y/X coordinates – Spending Velocity on the vertical by Market Share on the horizontal axis. Same 40% red line. 

The big cloud players, Microsoft, AWS and Google dominate AI and ML with the most presence. They have the tooling and the data. As we said, lots of modeling is going on in the cloud but this will be pushed into remote AI inference engines that will have massive processing capabilities collectively. We are moving away from peak centralization and this presents great opportunities to create value and apply AI to industry. 

Databricks is seen as an AI leader and stands out with a strong Net Score and a prominent Market Share. SparkCognition is off the charts in the upper left with an extremely high Net Score, albeit from a small sample; the company applies machine learning to massive data sets. DataRobot does automated AI – they’re super high on the Y axis. Dataiku helps create ML-based apps. C3.ai is an enterprise AI company founded and run by Tom Siebel. You see SAP, Salesforce and IBM Watson just at the 40% line. Oracle is also in the mix with its autonomous database capabilities, and Adobe shows as well.  

The point is that these software companies are all embedding AI into their offerings. And incumbent companies that are trying not to get disrupted can buy AI from software companies. They don’t have to build it themselves. The hard part is how and where to apply AI. And the simple answer is – follow the data. 

Key Takeaways

There’s so much more to this story but let’s leave it there for now and summarize.

We’ve been pounding the table about the post-x86 era and the importance of volume in lowering the costs of semiconductor production, and today we’ve quantified something we haven’t really seen much of: the actual performance improvements we’re seeing in processing today. Forget Moore’s Law being dead – that’s irrelevant. The original premise is being blown away this decade by SoC and the coming system-on-package designs. And who knows what the future holds for performance increases with quantum computing. 

These trends are a fundamental enabler of AI applications and, as is most often the case, the innovation is coming from consumer use cases… Apple continues to lead the way. Apple’s integrated hardware and software approach will increasingly make its way into the enterprise mindset. Clearly the cloud vendors are moving in that direction; you see it with Oracle. It just makes sense that optimizing hardware and software together will gain momentum, because there’s so much opportunity for customization in chips, as we discussed last week with Arm’s announcement – and it’s the direction Pat Gelsinger is taking Intel. 

One aside – Pat Gelsinger may face massive challenges with Intel, but he’s right that semiconductor demand is increasing and there’s no end in sight. 

If you’re an enterprise, you should not stress about inventing AI, rather your focus should be on understanding what data gives you competitive advantage and how to apply machine intelligence and AI to win. You’ll buy, not build AI. 

Data, as John Furrier has said many times, is becoming the new development kit. He said that 10 years ago and it’s more true now than ever before. 

Data is the new development kit. – John Furrier

If you’re an enterprise hardware player, you will be designing your own chips and writing more software to exploit AI. You’ll be embedding custom silicon and AI throughout your product portfolio and you’ll be increasingly bringing compute to data…data will mostly stay where it’s created. Systems, storage and networking stacks are all being disrupted. 

If you develop software, you now have processing capabilities in the palm of your hands that are incredible, and you’re going to write new applications to take advantage of this and use AI to change the world. You’ll have to figure out how to get access to the most relevant data, secure your platforms and innovate. 

And finally, if you’re a services company, you have opportunities – many of them – to help companies that are trying not to be disrupted. You have the deep industry expertise and horizontal technology chops to help customers survive and thrive. 

Privacy? AI for good? That’s a whole topic on its own and is extensively covered by journalists. We think for now it’s prudent to gain a better understanding of how far AI can go before we determine how far it should go and how it should be regulated. Protecting our personal data and privacy should be something that we most definitely care about – but generally we’d rather not stifle innovation at this point.

Let us know what you think.

Keep in Touch

Remember these episodes are all available as podcasts wherever you listen.

Email david.vellante@siliconangle.com | DM @dvellante on Twitter | Comment on our LinkedIn posts.

Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail.

Watch the full video analysis:

Image credit: Buffaloboy

Note: ETR is a separate company from Wikibon/SiliconANGLE.  If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at legal@etr.ai.
