
Mitigating AI’s Many Risks to Society


Artificial intelligence (AI) is rife with risks. Some of these may stem from design limitations in a specific buildout of the technology. Others may be due to inadequate runtime governance over live AI apps. Still others may be intrinsic to the technology’s inscrutable “black-box” complexity.

Wikibon refers to this overarching societal concern as “AI risk management.” Generally, this refers to the myriad ways in which the technology may adversely impact society, as well as to the technological, procedural, regulatory, and other guardrails that can mitigate the most worrisome threats. Check out this recent Wikibon Action Item for a wide-ranging crowdchat on this topic.

AI’s principal risks to society include:

  • Can we prevent AI from invading people’s privacy?
  • Can we eliminate socioeconomic biases that may be baked into AI-driven applications?
  • Can we ensure that AI-driven processes are entirely transparent, explicable, and interpretable to average humans?
  • Can we engineer AI algorithms so that there’s always a clear indication of human accountability, responsibility, and liability for their algorithmic outcomes?
  • Can we build ethical and moral principles into AI algorithms so that they factor the full set of human considerations into decisions that may have life-or-death consequences?
  • Can we automatically align AI applications with stakeholder values, or at least build in the ability to compromise in exceptional cases, thereby preventing the emergence of rogue bots in autonomous decision-making scenarios?
  • Can we throttle AI-driven decision-making in circumstances where the uncertainty is too great to justify autonomous actions? (A minimal sketch of this kind of uncertainty gating appears after this list.)
  • Can we institute failsafe procedures so that humans may take back control when automated AI applications reach the limits of their competency?
  • Can we ensure that AI-driven applications behave in consistent, predictable patterns, free from unintended side effects, even when they are required to dynamically adapt to changing circumstances?
  • Can we protect AI applications from adversarial attacks that are designed to exploit vulnerabilities in their underlying statistical algorithms?
  • Can we design AI algorithms that fail gracefully, rather than catastrophically, when environmental data departs significantly from the circumstances for which they were trained?
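To make the uncertainty question above concrete, here is a minimal sketch, in Python, of how an application might gate autonomous actions on model confidence and hand low-confidence cases back to a human operator. The threshold, the function names (predict_proba, escalate_to_human), and the data structures are illustrative assumptions, not features of any specific product.

```python
# Hypothetical sketch: gate an automated decision on model uncertainty.
# UNCERTAINTY_THRESHOLD, predict_proba, and escalate_to_human are
# illustrative assumptions, not part of any product discussed above.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Decision:
    action: str          # what the system proposes to do
    confidence: float    # model's probability for that action
    automated: bool      # True if executed autonomously, False if deferred


UNCERTAINTY_THRESHOLD = 0.85  # below this confidence, a human must decide


def gated_decision(
    features: Sequence[float],
    predict_proba: Callable[[Sequence[float]], dict],
    escalate_to_human: Callable[[Sequence[float], dict], str],
) -> Decision:
    """Act autonomously only when the model is confident enough."""
    probabilities = predict_proba(features)
    best_action, best_p = max(probabilities.items(), key=lambda kv: kv[1])

    if best_p >= UNCERTAINTY_THRESHOLD:
        # Confidence clears the bar: the system may act on its own.
        return Decision(action=best_action, confidence=best_p, automated=True)

    # Uncertainty is too great to justify autonomous action: fail safe by
    # handing control back to a human operator.
    chosen = escalate_to_human(features, probabilities)
    return Decision(action=chosen, confidence=best_p, automated=False)


if __name__ == "__main__":
    # Illustrative stand-ins for a real model and a real human review queue.
    demo_model = lambda feats: {"approve": 0.62, "deny": 0.38}
    demo_human = lambda feats, probs: "deny"
    print(gated_decision([0.1, 0.2], demo_model, demo_human))
```

The essential design choice here is that the automated path is the exception that must be earned by high confidence, while deferral to a human is the default fallback.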

AI risk mitigation has become a popular topic on the main stage at tech conferences. Researchers can tap into a growing pool of grants that fund innovative approaches to the problem, much of it coming from the coffers of big technology vendors. And it’s a challenging time for legislators, policy analysts, and others trying to bring coherence to the confusing, overlapping, and sparse regulatory mechanisms for dealing with all of this. For a dissection of the likely global regulatory fallout around facial recognition, for example, check out my recent InformationWeek column on the topic.

AI risk management is the focus of a growing curriculum that’s essential study for the next generation of data scientists and other application developers. AI safeguards will almost certainly find their way into future waves of commercial devices, applications, and cloud services, though it’s clear that these will need to coalesce into a broader body of risk mitigation practices in order to be effective.

If you’re a developer, it is possible to certify an AI application, service, or product as a manageable risk. However, as I discussed in this recent Dataversity article, certification will need to holistically address the following risk factors:

  • AI rogue agency: AI must always be under the control of the user or a designated third party. Testing should certify that users can always rescind AI-driven decisioning agency in circumstances where the uncertainty is too great to justify autonomous actions.
  • AI instability: AI’s foundation in machine learning means that much of its operation will be probabilistic and statistical in nature, rather than governed by fixed, repeatable rules. It should be possible to certify that the AI fails gracefully, rather than catastrophically, when environmental data departs significantly from the circumstances for which it was trained.
  • AI sensor blindspots: When AI is incorporated into robots, drones, self-driving vehicles, and other sensor-equipped devices, there should be some indication to the consumer of the visuals, sounds, smells, and other sensory inputs the device is unable to detect under realistic operating conditions. Independent testing should uncover these risks, as well as any consequent risks from faulty collision-avoidance and defensive-maneuvering algorithms.
  • AI privacy vulnerabilities: Considering that many AI-driven products, such as Alexa, sit at the consumer end of the Internet of Things, there must be safeguards to prevent them from inadvertently invading people’s privacy, or from exposing people to surveillance or hacking by external parties.
  • AI adversarial exposure: Vulnerabilities in your deep neural networks can expose your company to considerable risk if they are discovered and exploited by third parties before you even realize they exist or have implemented defenses. Testing should be able to certify that AI-infused products can withstand the most likely sources of adversarial attack. (A minimal robustness probe of this kind is sketched after this list.)
  • AI algorithmic inscrutability: Many safety issues with AI may stem from the “black-box” complexity of its algorithms. Independent testing of an AI product should call out the risks a consumer faces when using products that embed such algorithms. And there should be disclaimers on AI-driven products that are not fully transparent, explicable, and interpretable to average humans.
  • AI liability obscurity: Just as every ingredient in the food chain should be traceable back to a source, the provenance of every AI component of a product should be transparent. Consumer confidence in AI-infused products rests on knowing that there’s always a clear indication of human accountability, responsibility, and liability for their algorithmic outcomes. In fact, this will almost certainly become a legal requirement in most industrialized countries, so testing labs should start certifying products that ensure transparency of accountability.
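Because adversarial exposure is ultimately an empirical property, a testing lab could probe it directly. Below is a minimal, hedged sketch of an FGSM-style (fast gradient sign method) robustness check against a toy logistic-regression model; the synthetic data, the model, and the epsilon perturbation budget are all illustrative assumptions, not the certification procedure any vendor actually uses.

```python
# Hypothetical sketch: an FGSM-style probe of a simple logistic-regression
# model's adversarial exposure. The data, model, and epsilon budget are
# illustrative assumptions for testing purposes only.

import numpy as np

rng = np.random.default_rng(0)

# Toy "training" data: two Gaussian blobs, labels 0 and 1.
X = np.vstack([rng.normal(-1.0, 1.0, size=(200, 2)),
               rng.normal(+1.0, 1.0, size=(200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Fit logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y=1)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def accuracy(X_eval):
    p = 1.0 / (1.0 + np.exp(-(X_eval @ w + b)))
    return float(np.mean((p > 0.5) == y))

# FGSM-style perturbation: nudge each input in the direction that increases
# the cross-entropy loss, within an epsilon budget.
epsilon = 0.5
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = (p - y)[:, None] * w[None, :]        # d(loss)/d(x) for each sample
X_adv = X + epsilon * np.sign(grad_x)

print(f"clean accuracy:       {accuracy(X):.2f}")
print(f"adversarial accuracy: {accuracy(X_adv):.2f}")
```

In practice a certification suite would run many perturbation budgets and attack families against the production model, but the clean-versus-adversarial accuracy gap reported here is the basic quantity such testing would measure.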

There is a perfect storm of AI nasties just waiting to happen. The human race has barely begun to work through the disruptive consequences of this bubbling cauldron of risk. And let’s not overlook the trend toward AI’s weaponization, which poses an existential threat any way you look at it. I explored the technology’s central role in military initiatives everywhere in my recent SiliconANGLE column. And check out my recent Datanami article on the threat that AI-driven drones pose to civil defenses, in which I go into depth on advances in counterdrone technology.

Yes, we can protect society from many, but not all, of these AI downsides. However, many tradeoffs must be made, and many people may find the resulting technological, regulatory, and other remedies disproportionate to the peril. And we need political leaders everywhere who are not themselves going rogue on these matters.

But we would be naïve to believe that society can ever fully protect itself from all the adverse consequences that may befall us from our AI inventions. The sounder minds among us will have to erect guardrails to keep it all in check without denying humanity the many amazing fruits that AI promises.
