
Blaming Workers for AI Shortcomings — a New Corporate Strategy?

In a highly competitive AI market, blaming “human error” can help companies hide serious flaws in their systems.

[Image: A Microsoft logo displayed on a smartphone with artificial intelligence (AI) symbols in the background.]

Amid unprecedented inflation in Canadian grocery prices, which were up 9.1 percent year-over-year in June 2023, Microsoft recently posted an article to MSN.com offering travel tips for Canada’s capital, Ottawa. The article included a suggestion to check out the Ottawa Food Bank, with this unusual advice: “Life is already difficult enough. Consider going into it on an empty stomach.”

After being mocked by several commentators, the article was taken down and Microsoft stated that “the issue was due to human error … the content was generated through a combination of algorithmic techniques with human review, not a large language model or AI system.”

While there is no way to know for sure exactly what happened, attributing blame for this incident to a human reviewer is disingenuous. Perhaps the reviewer was asleep at the wheel, but the content was surely generated by a machine. It’s not hard to imagine artificial intelligence (AI) behind this incident, given Microsoft’s track record of algorithmic missteps. Consider the chatbot Tay, which spouted Nazi slogans not long after its launch. Or the rushed release of the Bing AI large language model, which has generated all kinds of bizarre behaviors that Bill Gates has blamed on users “provoking” the AI. Regardless of who is actually at fault in the Ottawa Food Bank incident, there is something telling about Microsoft’s blame game here.

Let’s contrast the Ottawa Food Bank incident with a 2017 episode in which the supposedly AI-powered startup Expensify was exposed for lacking the technological capacities it claimed to have. Reports revealed that Expensify was using Amazon Mechanical Turk, a platform where workers are hired to complete small tasks that algorithms cannot, to process confidential financial documents.

The Expensify story provides fodder for a now common critique of the AI industry: that overhyped AI acts as a mere facade for the necessary human labor behind the scenes. This has been dubbed “Potemkin AI” or “fauxtomation.” But Microsoft’s gaffe reveals a different operation at work. Instead of human workers hidden behind a false AI, we see an AI hidden behind an anonymous human error. The human worker is cast as a “fall guy” who takes the blame for the machine.

To think about this, we can draw on anthropologist Madeleine Clare Elish’s 2019 concept of “moral crumple zones,” which describes how “responsibility for an action may be misattributed to a human actor who had limited control over the behavior of an automated or autonomous system.” While a crumple zone in a car serves to protect the humans inside the vehicle, moral crumple zones serve to protect “the integrity of the technological system” by attributing all responsibility to human error. Elish’s study does not consider AI-related moral crumple zones, but she does frame her research as motivated by a need to inform debates about the “policy and ethical implications of AI.” As Elish notes, one can occupy many different positions in relation to an automated system, with varying capacity to intervene and thus varying culpability for the system’s failure. Moral crumple zones can therefore be weaponized by parties with an interest in limiting scrutiny of their machines. As the Ottawa Food Bank story shows, a faceless human error can absolve the machines in a complex automated system of their failures.

This is significant because it suggests the AI industry is moving from pretending to deploy AI to actually doing so. And often, driven by competition, these deployments occur before systems are ready, with an increased likelihood of failure. In the wake of ChatGPT and the proliferation of large language models, AI is an increasingly consumer-facing technology, so such failures will be visible to the public, and their effects increasingly tangible.

The Ottawa Food Bank incident and its deployment of a moral crumple zone was relatively harmless, serving mainly to preserve public opinion of Microsoft’s technical capacities by suggesting that AI was not to blame. But other examples of algorithmic moral crumple zones show the potential for more serious uses. In 2022, an autonomous semi-truck made by the startup TuSimple unexpectedly swerved into a concrete median while driving on the highway. The overseer in the cab assumed control and a serious accident was averted. While TuSimple attributed the incident to human error, analysts disputed this account. Back in 2013, when Vine was a popular social media platform, hardcore porn appeared as the “Editor’s Picks” recommended video on the app’s launch page. Again, a company spokesperson explicitly blamed “human error.”

It doesn’t really matter whether human error was actually to blame in these incidents. The point is that the AI industry will no doubt seek to use moral crumple zones to its advantage, if it hasn’t already. It is interesting to note that Elish is now the Head of Responsible AI at Google, according to her LinkedIn profile. Google is surely mobilizing the conceptual apparatus of moral crumple zones as it conducts its public-facing AI operations. The Ottawa Food Bank incident suggests that the users of AI, and those otherwise affected by its processing of data, should similarly consider how blame is attributed within complex sociotechnical systems. The first question to ask is whether explanations based on human error are too easy, and what other elements of the system they divert attention away from.
