Blaming Workers for AI Shortcomings — a New Corporate Strategy?

In a highly competitive AI market, blaming “human error” can help companies hide serious flaws in their systems.

A Microsoft logo is displayed on a smartphone with Artificial Intelligence (AI) symbols in the background.

Amid unprecedented inflation of Canadian grocery prices, which were up 9.1 percent year-over-year in June 2023, Microsoft recently posted an article to MSN.com offering travel tips for Canada’s capital, Ottawa. The article suggested checking out the Ottawa Food Bank, with this unusual advice: “Life is already difficult enough. Consider going into it on an empty stomach.”

After being mocked by several commentators, the article was taken down, and Microsoft stated that “the issue was due to human error … the content was generated through a combination of algorithmic techniques with human review, not a large language model or AI system.”

While there is no way to know for sure exactly what happened, attributing blame for this incident to a human reviewer is disingenuous. Perhaps the reviewer was asleep at the wheel, but the content was surely generated by a machine. It’s not hard to imagine artificial intelligence (AI) behind this incident, given Microsoft’s track record of algorithmic missteps. Consider the chatbot Tay, which spouted Nazi slogans not long after its launch. Or the rushed release of the Bing AI large language model, which has generated all kinds of bizarre behaviors that Bill Gates has blamed on users “provoking” the AI. Regardless of who is actually at fault in the Ottawa Food Bank incident, there’s something interesting about Microsoft’s blame game here.

Let’s contrast the Ottawa Food Bank incident with a 2017 episode in which the supposedly AI-powered startup Expensify was exposed for lacking the technological capacities it claimed to have. Reports revealed that Expensify was using Amazon Mechanical Turk — a platform where human workers complete small tasks that algorithms cannot — to process confidential financial documents.

The Expensify story provides fodder for a now common critique of the AI industry: that overhyped AI acts as a mere facade for necessary human labor behind the scenes. This has been dubbed “Potemkin AI” or “fauxtomation.” But Microsoft’s gaffe reveals a different operation at work. Instead of human workers being hidden behind a false AI, we see an AI being hidden behind an anonymous human error. The human worker is cast as a “fall guy” who takes the blame for a machine.

To think about this, we can draw on anthropologist Madeleine Clare Elish’s 2019 concept of “moral crumple zones,” which describes how “responsibility for an action may be misattributed to a human actor who had limited control over the behavior of an automated or autonomous system.” While a crumple zone in a car serves to protect the humans inside the vehicle, a moral crumple zone serves to protect “the integrity of the technological system” by attributing all responsibility to human error. Elish’s study does not consider AI-related moral crumple zones, but she does frame her research as motivated by a need to inform debates about the “policy and ethical implications of AI.” As Elish notes, one can occupy many different positions in relation to an automated system, with varying degrees of ability to intervene and thus varying culpability for the system’s failure. Moral crumple zones can therefore be weaponized by parties with an interest in limiting scrutiny of their machines. As the Ottawa Food Bank story shows, a faceless human error can deflect blame from the failure of machines in a complex automated system.


This is significant because it suggests the AI industry is moving from pretending to deploy AI to actually doing so. Often, driven by competition, these deployments occur before the systems are ready, with an increased likelihood of failure. In the wake of ChatGPT and the proliferation of large language models, AI is an increasingly consumer-facing technology, so such failures will be visible to the public, and their effects increasingly tangible.

The Ottawa Food Bank incident and its deployment of a moral crumple zone were relatively harmless, serving mainly to preserve public opinion of Microsoft’s technical capacities by suggesting that AI was not to blame. But other examples of algorithmic moral crumple zones show the potential for more serious uses. In 2022, an autonomous semi-truck made by the startup TuSimple unexpectedly swerved into a concrete median while driving on the highway. The overseer in the cab assumed control and a serious accident was averted. While TuSimple attributed the incident to human error, analysts disputed this account. Back in 2013, when Vine was a trending social media app, hardcore porn appeared as the “Editor’s Picks” recommended video on the app’s launch page. Again, a company spokesperson explicitly blamed “human error.”

It doesn’t really matter whether human error was actually to blame in these incidents. The point is that the AI industry will no doubt seek to use moral crumple zones to its advantage, if it hasn’t already. It is interesting to note that Elish is now the Head of Responsible AI at Google, according to her LinkedIn profile. Google is surely mobilizing the conceptual apparatus of moral crumple zones as it conducts its public-facing AI operations. The Ottawa Food Bank incident suggests that the users of AI, and those otherwise affected by its processing of data, should similarly consider how blame is attributed within complex sociotechnical systems. The first question to ask is whether explanations based on human error are too easy — and what other elements of the system they divert attention away from.