In between meeting with customers, crowdchatting with our communities and hosting theCUBE, the research team at Wikibon finds time to meet and discuss trends and topics regarding digital business transformation and technology markets. We look at things from the standpoints of business, the Internet of Things, big data, application, cloud and infrastructure modernization. We use the results of our research meetings to explore new research topics, further current research projects and share insights. This is the sixth summary of findings from these regular meetings, which we plan to publish every week.
The combination of faster, cheaper, memory-rich hardware and unprecedented streams of data has renewed interest in an old favorite: artificial intelligence. But this time AI, along with its offshoots machine learning and deep learning, is generating real returns in a wide array of industries and applications. We’ve written about a number of them at Wikibon: machine learning systems that extend the useful life of ERP systems in the grocery business; digital twin software that can dramatically improve automation in complex operations; and rapidly evolving technologies for accelerating productivity in IT operations management (ITOM), without which advances in other digital business domains would be impossible.
Last week, Apple announced the iPhone X and, with it, Face ID, a new security feature based on facial recognition. How is Apple delivering this capability? Deep learning. The new phone’s A11 Bionic chip includes a dedicated neural engine capable of performing 600 billion operations per second, and Apple is throwing some of that power at deep learning algorithms that perform facial recognition locally, on the device.
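Apple has not published its implementation, but the general shape of on-device face verification with deep learning is well understood: run a compact neural network locally to turn the camera's view of a face into an embedding vector, then compare that embedding with one enrolled earlier on the same device. The Python sketch below illustrates only that comparison step; the embed_face function is a hypothetical stand-in for the on-device network, and nothing here reflects Apple's actual code.

```python
# Sketch of on-device face verification: compare a freshly computed face
# embedding against a template enrolled on the same device. The embedding
# network is a placeholder; a real system would run a compact deep network
# locally, so no face image ever needs to leave the device.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for an on-device network that maps a face
    image to a unit-length embedding vector."""
    vec = image.mean(axis=(0, 1))            # placeholder, not a real model
    return vec / np.linalg.norm(vec)

def verify(candidate: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """Unlock only if the cosine similarity of the embeddings clears a threshold."""
    return float(np.dot(candidate, enrolled)) >= threshold

enrolled_template = embed_face(np.random.rand(112, 112, 3))  # stored at enrollment
attempt = embed_face(np.random.rand(112, 112, 3))            # captured at unlock time
print("unlocked:", verify(attempt, enrolled_template))
```

The important architectural point is that both the embedding computation and the comparison happen on the device itself, which is exactly where the phone's dedicated neural-processing hardware earns its keep.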
That got the Wikibon research team thinking: where will deep learning processing take place? Is all data going to be moved to central cloud locations for processing? Or, as Wikibon’s Jim Kobielus observed, will deep learning be baked into all edge endpoints?
Deep Learning Will Run at the Edge
We believe that the architecture for deep learning, machine learning, and other data-rich variants of AI will be:
- Centralized training and testing.
- Distributed, edge-based execution (a pattern sketched in the example below).
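A minimal Python sketch of that division of labor follows. A scikit-learn logistic regression stands in for whatever deep network would actually be trained; the feature data, file name, and edge-side code are illustrative assumptions, not a reference to any specific product.

```python
# Minimal sketch of "train centrally, execute at the edge".
# The model, data, and file paths are illustrative stand-ins.
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Central cloud: train and test on pooled historical data ---
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))                         # e.g., sensor feature vectors
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)    # toy labels

model = LogisticRegression().fit(X_train, y_train)

with open("edge_model.pkl", "wb") as f:                      # export the trained artifact
    pickle.dump(model, f)

# --- Edge device: load the trained artifact and score data locally ---
with open("edge_model.pkl", "rb") as f:
    edge_model = pickle.load(f)

local_reading = rng.normal(size=(1, 8))                      # data that never leaves the device
print("edge prediction:", edge_model.predict(local_reading))
```

In practice the exported artifact would more likely be a compressed deep-learning model in a format such as ONNX, TensorFlow Lite, or Core ML, but the split is the same: heavy training and testing where data and compute can be pooled, lightweight execution next to the point of action.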
Why? Because, as Jim Kobielus again points out, the costs of moving data in real time are extreme, to the point of being prohibitive wherever latency is a problem. Moreover, as Wikibon’s David Floyer explains, the rapid advances in hardware technology that are powering the development of the cloud are also reshaping computing possibilities at the edge, in local machines and human-friendly, mobile devices.
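A rough back-of-envelope calculation shows why. The numbers below (a 4 Mbit/s compressed 1080p video feed, 1,000 cameras, a 60 ms round trip to a cloud region, and roughly 10 ms of local inference) are illustrative assumptions, not measurements, but the orders of magnitude are what matter.

```python
# Back-of-envelope comparison of cloud vs. edge inference for video analytics.
# All numbers below are illustrative assumptions, not measurements.

per_camera_mbps = 4.0          # compressed 1080p stream sent to the cloud
cameras = 1_000                # cameras in a hypothetical deployment
cloud_round_trip_ms = 60.0     # network round trip to a cloud region
cloud_inference_ms = 15.0      # model execution time in the cloud
edge_inference_ms = 10.0       # model execution time on a local device

uplink_gbps = per_camera_mbps * cameras / 1_000
cloud_latency_ms = cloud_round_trip_ms + cloud_inference_ms

print(f"Sustained uplink required: {uplink_gbps:.1f} Gbit/s")
print(f"Cloud path latency per frame: {cloud_latency_ms:.0f} ms")
print(f"Edge path latency per frame:  {edge_inference_ms:.0f} ms")
```

Even under these generous assumptions, the cloud path demands several gigabits per second of sustained uplink and adds tens of milliseconds to every decision; for workloads like facial recognition or industrial control, that difference is the whole argument for executing the model at the edge.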
If advanced software functions for data-rich applications are going to be processed at the edge, closer to the point of action, what does that mean for the future of devices? It means we’re going to see a lot of demand for increasingly powerful clients across a wide range of form factors, many of which, as Jim Kobielus explains, don’t immediately evoke the notion of a computer.
All of this will be brought to you by an increasingly diverse set of specialized roles. Some, like the data scientist, are still nascent; others, like the application developer, are about to undergo rapid transformation to tackle these new challenges, as Jim Kobielus observes.
Ultimately, these “intelligent” technologies will catalyze accelerating demand for deeper business collaboration among an expanding array of disciplines. Without smart people working together to conceive of intelligent systems, architect them around the realities of their data, and build or buy them, we’ll end up spending a lot of money on a brand-new generation of high-tech paperweights.