Free speech is a fundamental right in any democracy. But some people have loud voices and the proverbial megaphone, which can easily drown out everybody else’s speech.
Freedom of the press is also a fundamental democratic right, but it is a double-edged sword. The people who own and control the newspapers, magazines, TV networks, radio stations, and the rest of mass media have, in effect, far louder voices than the rest of us.
Social media and other online channels are the new mass media, and they give more of us a chance to grow vast audiences and expose them to our messages, including our political agendas. On some level, this new mass-media landscape requires that the people who run social media channels refrain from using them to hammer their own political views into our heads. There is also a growing call for them to monitor, moderate, and even police the usage of their channels for messages that run afoul of laws, sensitivities, and societal norms.
Among these problematic uses is the cynical manipulation of social media by “fake news” and other deceptive tactics designed to engineer mass opinion in order to achieve electoral results. As noted in this recent Wired article, “automated bot armies have artificially amplified perspectives and manipulated trending algorithms. These small, coordinated groups have deliberately gamed algorithms so that a handful of voices can mimic a broad consensus. We’ve seen online harassment used to scare people into self-censorship, chilling their speech and eliminating those perspectives from the debate. Fake likes, shares, comments, and retweets trigger algorithms into thinking that a piece of content is worthwhile or interesting, leading to that content appearing in the feeds of millions.”
As the US midterm elections approach, Americans are yet again bombarded by political messages from all partisans through all channels—especially social media. As a US-based firm, Facebook has been at the center of popular concern about the potential of social media to sway this election along the lines of what happened in the electoral cycle two years ago. Recently, Facebook stepped up to this responsibility by removing 559 inauthentic political pages of domestic US origin as well as 251 accounts for violating its terms of service on coordinated inauthentic behavior, which it defines as “networks of accounts or Pages working to mislead others about who they are, and what they are doing.” This is the first time it has taken such action against domestic disinformation campaigns, though it has taken similar action in the past against pages and accounts established by overseas parties.
However you might feel about the political leanings of Facebook or the appropriateness of its latest steps, the company clearly realizes that, whatever its future course of action, the response will require a combination of human curation/moderation and its most sophisticated artificial intelligence (AI). Building a sustainable, algorithmically powered workflow to deal with these tasks on an ongoing basis will be quite tricky.
For Facebook or any media company in its position, building algorithmic protections against engineered political opinions requires a three-pronged approach:
- Assessing account authenticity: Its AI will need to look at political-discussion behavioral patterns to assess the likelihood that the discussants are bona fide humans or simply bots doing brilliant (AI-stoked) impersonations of humans. AI-informed behavioral analytics and natural language processing are at the heart of this challenge (a minimal bot-scoring sketch follows this list). As the furor over the Google Duplex conversational AI technology has shown, this will be increasingly difficult, especially now that natural language processing has gotten scary-good at replicating the nuances of our spoken and textual conversations.
- Determining the organic flow of the narrative distribution pattern: AI-driven applications will need to identify the extent to which the political discussion has played out in a manner characteristic of normal human “word of mouth,” albeit in an entirely digital medium. On top of behavioral analytics and natural language processing, this will require AI-informed social graph analysis. Considering that no two human conversations are likely to spread in exactly the same way, AI will have to perform sophisticated anomaly detection to rate the likelihood that the next political discussion falls within the confidence bounds associated with organic flow (see the anomaly-detection sketch after this list). But because the evil botmasters who orchestrated that flow probably also have sophisticated AI, they’ll get scary-good at faking this too.
- Certifying the integrity of the sources of published opinions: The firm’s AI will need to leverage an arsenal of techniques for determining whether the domains orchestrating a campaign are legitimate interested parties rather than short-lived sites and distributed bots erected in a cynical attempt to manipulate popular opinion (a sketch of one such signal, domain age, follows this list). But considering how central such microsites and outbound algorithmic tactics are to legitimate campaigns in the marketing world, it will be hard to draw the line, both practically and in terms of free speech protections, regarding their use in political campaigns. And reputation assessment systems can be biased against unpopular opinions, a fact that should discourage their use in a society that guarantees freedom of dissent.
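To make the first prong concrete, here is a minimal sketch of behavioral bot scoring, assuming a labeled set of known bots and verified humans. The feature names and toy numbers are hypothetical; a production system would fold in far richer behavioral and language signals.

```python
# Minimal sketch of bot-vs-human scoring from behavioral features.
# The feature names and the labeled training set are hypothetical;
# a real system would add NLP-derived features on top of these.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = [
    "posts_per_hour",            # sustained posting rate
    "interpost_entropy",         # randomness of gaps between posts (bots are often too regular)
    "duplicate_text_ratio",      # share of near-identical messages
    "account_age_days",          # freshly created accounts are riskier
    "follower_following_ratio",
]

def score_accounts(X_train, y_train, X_new):
    """Return P(bot) for each row of X_new, columns ordered as FEATURES."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)    # y_train: 1 = known bot, 0 = verified human
    return clf.predict_proba(X_new)[:, 1]

# Toy example: two labeled accounts and one unknown account.
X_train = np.array([[40.0, 0.2, 0.90, 3, 0.1],    # bot-like profile
                    [0.5, 2.1, 0.05, 900, 1.2]])  # human-like profile
y_train = np.array([1, 0])
X_new = np.array([[25.0, 0.4, 0.70, 10, 0.2]])
print(score_accounts(X_train, y_train, X_new))    # high P(bot) expected
```

Note that a classifier like this only ranks accounts for review; the probability threshold at which action is taken is a policy decision, not a technical one.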
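For the second prong, anomaly detection over share cascades might look something like the following sketch. The cascade features and the corpus of “known organic” cascades are assumptions for illustration.

```python
# Minimal sketch of flagging inorganic spread with anomaly detection.
# Each row summarizes one discussion's share cascade; the features and
# the corpus of known-organic cascades are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: cascade_depth, unique_sharers, shares_in_first_hour, repeat_sharer_ratio
organic_cascades = np.array([
    [6, 1200, 80, 0.02],
    [4, 300, 25, 0.05],
    [8, 5000, 150, 0.01],
    [5, 900, 60, 0.03],
])

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(organic_cascades)  # learn what "organic flow" looks like

# A suspicious cascade: shallow, few unique sharers, explosive first hour,
# and heavy re-sharing by the same accounts.
suspect = np.array([[2, 150, 4000, 0.6]])
print(detector.predict(suspect))        # -1 = anomaly, 1 = inlier
print(detector.score_samples(suspect))  # lower score = more anomalous
```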
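For the third prong, one plausible signal among many is domain age: freshly registered domains pushing coordinated narratives are more likely to be short-lived microsites. This sketch relies on the third-party python-whois package, and the 90-day threshold is an assumed heuristic, not an established standard.

```python
# Minimal sketch of one source-integrity signal: domain age from WHOIS
# records, via the third-party python-whois package (pip install python-whois).
# Treating "registered within the last N days" as suspicious is an assumed
# heuristic; a real system would combine many reputation signals.
from datetime import datetime, timezone
import whois  # python-whois

def domain_age_days(domain: str) -> float:
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):   # some registrars return multiple dates
        created = min(created)
    if created is None:
        return 0.0                  # unknown age: treat as maximally suspicious
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).total_seconds() / 86400

def looks_short_lived(domain: str, min_age_days: int = 90) -> bool:
    return domain_age_days(domain) < min_age_days

print(looks_short_lived("example.com"))  # long-established domain: False
```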
Underlying all of these algorithmic responses must be utter transparency into how Facebook’s political-discussion moderation/policing efforts are managed. The platform must maintain a tamperproof log of every human decision, as well as every step in the data science pipeline used to build, train, deploy, and evaluate the AI models, in order to justify the actions taken and fend off the inevitable accusations of partisan bias. (A sketch of one way to build such a tamper-evident log appears below.)
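One way to implement such a log is a hash chain, in which each entry commits to its predecessor so that any retroactive edit is detectable. This is a minimal sketch; the entry fields are illustrative, and a production system would also anchor the chain in external, independently auditable storage.

```python
# Minimal sketch of a tamper-evident moderation log: each entry is chained
# to the previous one by a SHA-256 hash, so any retroactive edit breaks the
# chain. Entry fields are illustrative, not a real platform's schema.
import hashlib, json, time

class AuditLog:
    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64   # genesis hash

    def append(self, actor: str, action: str, detail: dict) -> None:
        entry = {
            "ts": time.time(),
            "actor": actor,         # human moderator or model/pipeline stage
            "action": action,
            "detail": detail,
            "prev_hash": self.last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False means some entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("model:authenticity-check", "flag_page", {"page_id": "123", "score": 0.97})
log.append("moderator:jdoe", "remove_page", {"page_id": "123"})
print(log.verify())  # True until any entry is modified after the fact
```

Publishing the latest chain hash periodically would let outside auditors confirm that the log has not been quietly rewritten.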
It’s absolutely essential that there be iron-clad accountability by all organs of the mass-media ecosystem—especially those run by algorithmic gears of stupefying complexity—to maintain the integrity of democratic governance as US political culture grows ever nastier.