Automation bias is the cognitive tendency to over-trust automated or AI-generated outputs, suspending one's own critical judgment and accepting recommendations without adequate scrutiny. It was first documented in aviation, where pilots were found to follow autopilot and automated alert systems even when those systems were producing erroneous outputs.
In the workplace, automation bias manifests as accepting AI-generated reports without checking the underlying data, trusting AI diagnostic outputs without clinical verification, following algorithmic hiring recommendations without human review, or approving AI-drafted legal documents without reading them carefully. As AI tools become more deeply integrated into workflows, the risk of automation bias grows.
The consequences can be severe: an AI medical diagnostic system confidently misclassifying a cancer scan, a financial algorithm producing an erroneous report that passes review unnoticed, or a contract-review AI missing a problematic clause that a human lawyer would have caught. Such errors happen precisely because the human in the loop was not functioning as a genuine check.
Counterintuitively, automation bias is often stronger in highly competent workers who are confident in their AI tools. The antidote is calibrated skepticism: not distrusting AI, but maintaining active critical engagement with its outputs, treating AI recommendations as hypotheses requiring verification rather than conclusions to accept. As AI tool use expands, this is an increasingly important professional skill.