
Wikibon’s 2018 Artificial Intelligence Predictions


In 2018, artificial intelligence (AI) development will shift its focus toward solution-oriented development environments, robust model training tools, edge-based inferencing architectures, generative applications, and optimized commodity chipsets.


Developers increasingly build AI into their applications. Key aspects of the AI development process include model training, open development environments, building customized AI applications, and pushing AI functions toward the edges of the Internet of Things and People (IoT&P).

With these overall trends in mind, Wikibon makes the following predictions for AI platforms, frameworks, and tooling in 2018:

  • AI development toolkits will shift toward solution-domain orientation.
  • AI model training will become a robust platform segment.
  • Local AI inferencing will become standard in edge applications.
  • Generative AI will give creative professionals new power tools.
  • Low-cost AI chipsets will take the mobility market by storm.

AI Development Toolkits Will Shift Toward Solution Orientation.

Wikibon Prediction: Developers will adopt tools that enable fast development of AI applications for specific solution domains. By year-end 2018, leading AI solution providers will offer domain-specific software development kits (SDKs) for the majority of principal commercial AI use cases.

AI frameworks are fundamental to next-generation applications. Thus, they are becoming fundamental to developer productivity. Over the past several years, the AI market has been converging on an increasingly vendor-agnostic development environment. In the coming year, key layers of this general-purpose vendor-agnostic framework will become standard in most general-purpose AI development tools.

In 2018, vendors will launch a growing range of AI SDKs that take open-source tools to the next level of solution orientation. Ultimately, the AI tool vendors who prevail will be those who recognize that each domain requires a development framework suited to its special requirements. For example, the requirements of developers of AI-infused industrial robots tend not to overlap with those of developers who embed AI in mass-market smartphones, IoT edge devices, or embedded e-commerce chatbots. For one thing, some solution domains (such as robotics) make extensive use of reinforcement learning, as opposed to the supervised and unsupervised learning that prevail in other AI domains. For another, some domains (such as interactive chatbots) deliver AI models that drive conversational user interfaces, whereas others (such as drones) enable highly autonomous robotic endpoints. Furthermore, AI use cases vary widely in their target hardware, cloud, and application deployments.

By year-end 2018, the more diversified AI solution providers will offer domain-specific SDKs, or have partners who extend those vendors’ general-purpose SDKs, for a diverse array of use cases. Already, Wikibon is seeing an expanding range of AI SDKs geared to mobile, chatbot, IoT, drone, gaming, and robotics applications. Figure 1 provides an overview of solution-oriented AI SDKs.

Going forward, vendors will differentiate solution-oriented development tools on their ability to speed development, training, and deployment of finished AI apps. Key differentiators will include APIs, statistical modeling interfaces, algorithm and code libraries, pre-trained models, reference applications, and other functional components suited to the most common commercial, industrial, and public-sector use cases. To support the entire AI app-dev pipeline, the new generation of solution-oriented AI toolkits will also include embedded DevOps, collaboration, governance, training, and data management features suited to their various domains.

In addition, these tools will provide role- and task-oriented development interfaces tailored to the needs of the technical and domain specialists in each domain. And they will allow developers to extend and customize every interface, feature, and component to address domain-specific AI challenges.

Figure 1: AI development toolkits will shift toward solution orientation.

AI Model Training Will Become A Robust Platform Segment.

Wikibon Prediction: Developers will adopt robust tools for training AI models for disparate applications and deployment scenarios. By year-end 2018, AI model training will emerge as the fastest growing platform segment in big data analytics. To keep pace with growing developer demand, most leading analytics solution providers will launch increasingly feature-rich training tools.

Maintaining AI applications’ fitness for purpose often involves training them with data from the solution domain into which they will be deployed. In 2018, developers will come to regard training as a potential bottleneck in the AI application-development process and will turn to their AI solution providers for robust training tools.

During the year, we’ll see AI solution providers continue to build robust support for a variety of AI-model training capabilities and patterned pipelines in their data science, application development, and big-data infrastructure tooling. Many of these enhancements will build out the automated machine learning (ML) capabilities in their DevOps tooling. By year-end 2018, most data science toolkits will include tools for automated feature engineering, hyperparameter tuning, model deployment, and other pipeline tasks. At the same time, vendors will continue to enhance their unsupervised learning algorithms to speed up cluster analysis and feature extraction on unlabeled data. And they will expand their support for semi-supervised learning in order to use small amounts of labeled data to accelerate pattern identification in large, unlabeled data sets (see Figure 2).
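To make the automation concrete, here is a minimal sketch of automated hyperparameter tuning via grid search. The `score` function is a hypothetical stand-in for a real model’s validation accuracy, contrived for illustration; production tools search far larger spaces with smarter strategies.

```python
import itertools

# Hypothetical stand-in for validation accuracy, contrived to peak
# at lr=0.1, depth=4. A real tool would train and evaluate a model.
def score(lr, depth):
    return 1.0 - abs(lr - 0.1) - 0.05 * abs(depth - 4)

def grid_search(lrs, depths):
    """Return the (lr, depth) pair with the best validation score."""
    return max(itertools.product(lrs, depths), key=lambda p: score(*p))

best_lr, best_depth = grid_search([0.01, 0.1, 1.0], [2, 4, 8])
```

Automated tuners in commercial suites wrap exactly this loop, swapping exhaustive search for Bayesian or evolutionary strategies.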

In 2018, synthetic (aka artificial) training data will become the lifeblood of most AI projects. Solution providers will roll out sophisticated tools for creating synthetic training data, along with the labels and annotations needed to use it for supervised learning.
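The appeal of synthetic data is that labels come for free at generation time. A toy sketch, with invented class names and cluster centers:

```python
import random

# Toy synthetic-training-data generator: labeled 2-D points clustered
# around invented class centers. Real tools produce far richer data
# (images, speech, sensor traces) plus annotations.
def make_synthetic_samples(n, label, center, spread=0.5, seed=None):
    rng = random.Random(seed)
    return [((center[0] + rng.gauss(0.0, spread),
              center[1] + rng.gauss(0.0, spread)), label)
            for _ in range(n)]

# Two synthetic classes, labeled as they are generated -- no manual
# annotation effort required.
data = (make_synthetic_samples(100, "cat", (0.0, 0.0), seed=1)
        + make_synthetic_samples(100, "dog", (5.0, 5.0), seed=2))
```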

The surge in robotics projects and autonomous edge analytics will spur solution providers to add strong reinforcement learning to their AI training suites in 2018. This will involve building AI modules that can learn autonomously with little or no “ground truth” training data, though possibly with human guidance. By the end of the year, more than 25 percent of enterprise AI app-dev projects will involve autonomous edge deployment, and more than 50 percent of those projects will involve reinforcement learning.
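The “no ground truth” point is the key difference from supervised training: the agent learns from reward signals alone. A minimal illustration, using tabular Q-learning on a made-up five-state corridor task:

```python
import random

# Toy tabular Q-learning on a five-state corridor (goal at state 4).
# No labeled training data: the agent learns purely from rewards.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Epsilon-greedy: occasionally explore a random action.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: Q[(s, b)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0
            # Standard Q-learning update rule.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2

train()
# The learned greedy policy should step right toward the goal everywhere.
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(GOAL)}
```

Robotics and drone suites apply the same update rule, replacing the table with a deep neural network.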

During the year, more AI solution providers will add collaborative learning to their neural-net training tools. This involves distributed AI modules collectively exploring, exchanging, and exploiting optimal hyperparameters so that all modules may converge dynamically on the optimal trade-off of learning speed vs. accuracy. Collaborative learning approaches, such as population-based training, will be a key technique for optimizing AI that’s embedded in IoT&P edge devices. They will also be useful for optimizing distributed AI architectures such as generative adversarial networks (GANs) in the IoT, clouds, or even within server clusters in enterprise data centers. Many such training scenarios will leverage evolutionary algorithms, in which AI model fitness is assessed emergently by collective decisions of distributed, self-interested entities operating from local knowledge with limited sharing beyond their neighbor entities.
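Population-based training can be sketched in a few lines. Here the “fitness” function is a contrived stand-in for a trained model’s accuracy, and each worker holds a single hyperparameter:

```python
import random

# Toy population-based training (PBT) sketch: a population of workers
# each holds one hyperparameter h; the contrived fitness peaks at 0.3.
def fitness(h):
    return -abs(h - 0.3)

def pbt(pop_size=8, rounds=30, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(rounds):
        ranked = sorted(population, key=fitness, reverse=True)
        best = ranked[0]
        # Exploit: the bottom half copies the best worker's value.
        # Explore: each copy is perturbed to search nearby settings.
        population = ranked[:pop_size // 2] + [
            best + rng.gauss(0.0, 0.05) for _ in range(pop_size // 2)]
    return max(population, key=fitness)

h = pbt()   # converges near the fitness peak at 0.3
```

The exploit/explore exchange is the collaborative element: weak workers inherit knowledge from strong ones rather than searching independently.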

Another advanced AI-training feature we’ll see in AI suites in 2018 is transfer learning. This involves reusing some or all of the training data, feature representations, neural-node layering, weights, training method, loss function, learning rate, and other properties of a prior model. Typically, a developer relies on transfer learning to tap into statistical knowledge that was gained on prior projects through supervised, semi-supervised, unsupervised, or reinforcement learning. Wikibon has seen industry progress in using transfer learning to apply the hard-won knowledge gained in training one GAN to GANs in adjacent solution domains.
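A drastically simplified sketch of the weight-reuse idea: a previously learned feature weight is frozen, and only a new one-parameter “head” is trained on the target task. All numbers here are invented for illustration.

```python
# Toy transfer-learning sketch: freeze a prior model's feature weight
# and retrain only a new one-parameter "head" on the target task.
def features(x, w_frozen):
    return w_frozen * x                    # frozen pretrained extractor

def train_head(data, w_frozen, lr=0.1, epochs=100):
    """Fit head weight h so that h * features(x) approximates y."""
    h = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = h * features(x, w_frozen)
            h -= lr * (pred - y) * features(x, w_frozen)  # gradient step
    return h

W_SOURCE = 2.0                             # knowledge from the prior task
target_data = [(x, 6.0 * x) for x in (0.5, 1.0, 1.5)]  # target: y = 6x
h = train_head(target_data, W_SOURCE)      # head learns h near 3.0
```

In real suites the frozen part is many layers of a pretrained network; only the small head needs the target domain’s (often scarce) data.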

Also during the year, edge analytics will continue to spread throughout enterprise AI architectures, and edge-node, on-device AI training will become a standard feature of mobile and IoT&P development tools. Already, we see it in many leading IoT and cloud providers’ AI tooling and middleware.

Figure 2: AI model training will become a robust platform segment.

Local AI Inferencing Will Become Standard In Edge Applications.

Wikibon Prediction: Edge-based inferencing will become a foundation of all AI-infused applications in the IoT&P. By year-end 2018, the majority of new IoT&P application-development projects will involve building the AI-driven smarts for deployment to edge devices for various levels of local sensor-driven inferencing.

AI is rapidly being incorporated into diverse applications in the cloud and at the network’s edge, especially in embedded, mobile, and IoT&P platforms.

By year-end 2018, IoT&P development will shift toward applications that perform edge-based local inferencing on locally sensed data. This inferencing will encompass the full range of decisions that may be required of edge devices, including performing high-speed correlation, prediction, classification, recognition, differentiation, and abstraction based both on sensor-sourced machine data and on data acquired from clouds, hub gateways, and other nodes.
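The essence of local inferencing is that each decision is computed on-device, with no per-reading round-trip to the cloud. A hypothetical sketch, with the centrally trained model reduced to a hard-coded mean and tolerance:

```python
# Hypothetical edge-inferencing loop: a model trained centrally (here
# reduced to a hard-coded mean and tolerance) is evaluated locally
# against each sensor reading on the device itself.
MODEL = {"mean": 20.0, "tolerance": 5.0}   # shipped from central training

def infer_locally(reading, model=MODEL):
    """Classify one sensor reading entirely on-device."""
    if abs(reading - model["mean"]) > model["tolerance"]:
        return "anomaly"
    return "normal"

readings = [19.5, 21.0, 40.2, 18.8]        # simulated local sensor data
decisions = [infer_locally(r) for r in readings]
```

Only the model parameters cross the network; the latency-sensitive decision loop stays at the edge.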

Local inferencing will be the core workload for all systems-of-agency applications, which enable continuous real-time decision support and automated recommender systems throughout the digital online economy. Local inferencing is the foundation of all modes of edge-based agency, including various degrees of autonomous operation, augmented human decisioning, and actuated environmental contextualization. Figure 3 illustrates AI-driven device-level local inferencing in a tiered IoT&P edge-computing architecture.

During 2018, the following applications, all of which rely on local inferencing, will become standard in IoT&P applications built for edge deployment:

  • multifactor authentication,
  • speech recognition,
  • natural language processing,
  • conversational user interfaces,
  • digital assistants,
  • recommenders,
  • computer vision,
  • face recognition,
  • object recognition,
  • geospatial and propriocentric awareness,
  • mixed reality,
  • image manipulation,
  • emotion detection,
  • sentiment analysis, and
  • cybersecurity protection.

Where IoT&P intersects with robotics and industrial systems, AI-driven local inferencing will also drive edge-node physical responses, including various forms of locomotion, manipulation, shapeshifting, absorption, fabrication, dispensing, and delivery.

For these and other IoT&P edge applications, adaptive learning and federated training will become essential for embedded AI models to continually assure accuracy in local inferencing. Nevertheless, most edge-AI training will continue to be managed centrally for IoT&P applications, with the standard process being to distribute trained models to edge devices for local inferencing.
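Federated training inverts the centralized pattern: each device computes a model update on its own data, and only those updates, never the raw data, are combined centrally. A toy federated-averaging sketch on a one-parameter regression (all data values invented):

```python
# Toy federated-averaging sketch: each edge device takes a local
# gradient step of a one-parameter regression (y = w * x) on its own
# data; only the resulting weights -- never the raw sensor data --
# are averaged centrally into the shared model.
def local_update(global_w, local_data, lr=0.1):
    w = global_w
    for x, y in local_data:
        w -= lr * (w * x - y) * x        # one gradient step per sample
    return w

def federated_round(global_w, devices):
    updates = [local_update(global_w, d) for d in devices]
    return sum(updates) / len(updates)   # central averaging step

# Three hypothetical devices, each holding slightly different readings.
devices = [[(1.0, 2.0)], [(1.0, 2.2)], [(1.0, 1.8)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, devices)      # w converges near 2.0
```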

Where federated edge-based training gains a foothold, it will be for more complex, distributed, and autonomous AI applications that must rely on collaborative learning to attain a common cross-node service level. In addition, due to its distributed nature, federated edge-based AI training is likely to become common in industrial IoT, IT operations management, and autonomous vehicle infrastructure management.

Figure 3: Local AI inferencing will become standard in edge applications.

Generative AI Will Give Creative Professionals New Power Tools.

Wikibon Prediction: Generative AI will drive the next generation of apps for auto-programming, content development, visual arts, and other creative, design, and engineering activities. By year-end 2018, most of the leading AI solution providers will offer tools and libraries for building AI-powered natural language generation, image manipulation, and other generative use cases.

AI can generate fresh patterns in data with astonishing speed, efficiency, and verisimilitude. Over the past few years, it has become commonplace for AI to algorithmically generate any object that can be rendered digitally. Already, the technology has proven itself in the areas illustrated in Figure 4.

In 2018 and beyond, more solutions will come to market—in all verticals—that leverage generative adversarial networks (GANs) to algorithmically generate digital and analog objects of all sorts with astonishing accuracy. Before year-end 2018, more solution providers will roll out GAN-driven tools and workbenches for software programming, computer-aided design, web content development, music composition, image manipulation, video production, and other creative disciplines. And generative photo apps are likely to come to every smart camera application on mobiles and other mass-market IoT&P endpoints.
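The GAN idea is a two-player game: a generator tries to produce outputs the discriminator cannot distinguish from real data, while the discriminator tries to tell them apart. A drastically simplified caricature, in which both players are single numbers rather than neural networks:

```python
# Caricature of the GAN minimax game: the "generator" is a single
# number g trying to mimic data with mean 5.0, and the "discriminator"
# is a single threshold t separating real from fake. Real GANs use
# neural networks for both players and train on sampled batches.
REAL_MEAN = 5.0
g, t = 0.0, 0.0
for _ in range(200):
    t = (g + REAL_MEAN) / 2     # discriminator: best separating threshold
    g += 0.1 * (t - g)          # generator: nudge output toward "real" side
# After many rounds, the generator's output converges on the real data.
```

The alternating-update structure is the part that carries over to real GANs; everything else here is simplified past recognition.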

GANs were a massive global research focus in the AI community in 2017. The pace of advances in GAN technology is likely to accelerate in the coming year. And generative design techniques are likely to come into the core curricula of data science, creative, and engineering professions globally.

Figure 4: Generative AI will give creative professionals new power tools.

Low-Cost AI Chipsets Will Take The Mobility Market By Storm.

Wikibon Prediction: The next generation of commodity AI-optimized chipsets will gain mass-market edge deployment. By year-end 2018, the dominant AI chipmakers will all have introduced new generations of chipsets that densely pack tensor-processing components on low-cost, low-power systems on a chip.

Over the past several years, hardware manufacturers have introduced an impressive range of chip architectures—encompassing graphic processing units (GPUs), tensor processing units, field programmable gate arrays (FPGAs), and application-specific integrated circuits—that address AI workloads’ processing requirements. In 2018, a new generation of AI-optimized commodity chipsets will emerge to accelerate this technology’s adoption in edge devices. What emerges from this ferment will be innovative approaches that combine GPUs with CPUs, FPGAs, and a new generation of densely packed tensor-core processing units, exemplified by Google’s Tensor Processing Unit (one of many such architectures in development or on the market now).


In 2018, most of the AI chipset startups that have received funding in the past two years will come to market. The pace of mergers and acquisitions in this segment will increase as the incumbent AI solution providers (especially Google, AWS, Microsoft, and IBM) deepen their technology portfolios and the incumbent AI chip manufacturers—especially NVIDIA and Intel—defend their positions in what’s sure to be the fastest growing chip segment of the next several years.

Figure 5: Low-cost AI chipsets will take the mobility market by storm.

Action Item

Wikibon recommends that developers adopt solution-oriented tools to speed modeling, training, deployment, and optimization of AI-infused applications such as smart mobility, interactive chatbots, and computer vision. These solution-oriented tools should extend and deepen developers’ investments in popular AI frameworks such as TensorFlow, support the sophisticated model training needed by myriad AI applications, and enable fast-paced deployment of trained AI models to edge devices such as smartphones and IoT&P smart sensors.
