
Artificial Agency: The “Action” Side of AI

Artificial intelligence is all the rage. Today, computing can perform feats that resemble what humans might do if we were smarter (i.e., had more computing power). But just because a computing system can do something, should it? What actions do we want computers performing on our behalf?

We can treat this as a question of “agency.” Not unlike a non-employee insurance agent who provides customer service on behalf of an insurance company, we are venturing into an era in which computing systems will be expected to perform work – to act – on behalf of brands (or other legal entities). Artificial intelligence has demonstrated that it can perform incredible processing feats – recommending music, recognizing faces, predicting part failures – by recognizing patterns in data. Moreover, it can do these things at scale, with a proficiency at least equal to that of a group of humans, and at very low cost.

But most of these tasks involve trivial agency questions. As AI is put into service for more complex tasks – where trade-offs are unresolvable or can’t even be formulated – agency becomes a central feature of system design, build, and operation. Indeed, artificial intelligence’s success is spawning artificial agency problems.

The methods for determining whether a task is simple – and therefore whether agency is a potential problem – are really weak. Generally, the rule has been: if agency questions are a problem, don’t implement AI. That’s why so many of today’s powerful AI solutions operate in domains that already accept automation from engineering, legal, and moral perspectives, but are looking for better ways to automate.

But we know that’s going to change. Elon Musk’s decision to add self-driving to Teslas didn’t involve a lot of public discussion, to the best of my knowledge. His “algorithm” appears to have been “shoot first and ask questions later.” I don’t think that’s a particularly good approach for the commons, despite Musk’s clear intellectual gifts. Interestingly, Musk is one of the loudest voices in the pitched debate about limiting AI, essentially claiming that AI will bury humanity one way or another. Perhaps Musk figured that by moving first he could both reap extraordinary profits and get a jump on ingratiating himself to his future robot masters.

More likely Musk, like many others seeking greater public discourse on complex topics, wants a bit of government cover as he pushes AI’s limits. But in any event, more discussion is necessary – especially at smaller scale, like inside companies. Here are a few of the questions business leaders have to start addressing:

  • Where do we want to deploy “systems of agency”? You have no doubt heard the terms systems of record and systems of intelligence. Well, let me introduce you to systems of agency. Wikibon is conducting significant research on this topic. Essentially, a system of agency is a system that performs work on behalf of a brand, especially work that makes that brand liable. Our research suggests a range of systems of agency, depending on the nature of the resultant liabilities and what – or who – determines behavior.
  • Where are the tools for designing and deploying agency? Tooling for application development and DevOps is advancing rapidly on method, orchestration, service, security, and other lines. Soon, nascent tools for translating a business’s appetite for agency risk into working AI will start to hit the market. Over time – probably pretty rapidly, actually – conventions for designing, developing, testing, deploying, and maintaining agency will be incorporated into apps. Indeed, it’s kinda started already, if you’re using blockchain in an app.
  • How will we price risk in systems of agency? This is a big issue, one that will gate a lot of AI-related innovation. There are two core challenges. First, complex decision-making domains by definition feature contingencies that cannot be designed or built away. Terrible choices have terrible consequences, and there’s nothing a great developer can do about that. Second, we still have trouble reasonably pricing the value of data. If we can’t put a price on the data used to build AI-driven systems of agency, then pricing the risk of a system based on that data is problematic. Simply put, we’ll hear about a lot of systems-of-agency proofs of concept that never see the light of day because we can’t bond them.

Action Item. AI is automating increasingly complex work, but business leaders need to look closely at AI agency questions. Just because an AI system can be built doesn’t mean it should be deployed.
