Mony Aramalla investigates why AI is not a neutral decision maker and how the bias within these models impacts lives.
Artificial Intelligence is often described as objective; after all, it’s in the name: artificial, supposedly devoid of human error and bias. The belief that these systems are neutral decision makers guided by data rather than human judgement is commonplace. From hiring algorithms to medical risk scores, we are told that replacing people with machines removes bias from the equation. Yet again and again, these systems produce outcomes that mirror the same inequalities, assumptions and blind spots they were meant to avoid. This is not because AI is malfunctioning, but because objectivity was never truly on offer. Long before a model is trained, human choices shape what is measured, whose data is collected, how success is defined, and which errors are deemed acceptable. AI doesn’t stand outside society; it reflects it, encodes it, and amplifies it, often while claiming false neutrality.
Take hiring algorithms, which are increasingly deployed in workplaces. Many of these models are trained on past resumes to identify successful candidates. Where a company’s historical workforce is overwhelmingly male, the model learns to penalise CVs that include indicators associated with women, such as women’s colleges or women’s sports. The maths works exactly as intended; the framing is the problem. Data reflects the world as it has been recorded, and the world is uneven.
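To make this concrete, here is a minimal sketch, using entirely synthetic data and a hypothetical proxy feature rather than any real hiring system, of how a model trained on a skewed hiring history learns to penalise that proxy even though it says nothing about skill:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features: a genuine skill score, and a binary proxy such as
# "attended a women's college" (1 = yes). Skill is independent of the proxy.
skill = rng.normal(size=n)
proxy = rng.integers(0, 2, size=n)

# Historical hiring decisions favoured candidates without the proxy:
# candidates with the proxy were only hired when exceptionally skilled.
# The bias lives in these training labels, not in the model's maths.
hired = ((proxy == 0) & (skill > 0)) | ((proxy == 1) & (skill > 1.5))

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

print("weight on skill:", round(model.coef_[0][0], 2))  # positive, as expected
print("weight on proxy:", round(model.coef_[0][1], 2))  # strongly negative: learned bias
```

The model fits the data perfectly well; it simply learns that the proxy predicted past rejection, and carries that forward.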
In healthcare, a widely used algorithm was designed to identify patients who would benefit from extra care. Instead of measuring illness directly, it used past healthcare spending as a proxy. Because Black patients historically had less access to healthcare and lower medical spending, the algorithm systematically underestimated their health needs. As a result, Black patients who were just as sick as white patients were less likely to receive additional care.
The bias came from treating an unequal system as an objective baseline.
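A toy calculation, with invented numbers rather than real patient data, shows how the proxy goes wrong: two groups are equally sick, but the group with less access to care spends less, so a spending-based score flags fewer of its sickest members.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two groups with identical distributions of true health need.
group = rng.integers(0, 2, size=n)                   # 1 = historically underserved
illness = rng.gamma(shape=2.0, scale=1.0, size=n)

# Spending tracks illness, but the underserved group spends ~40% less
# at the same level of illness because of reduced access to care.
access = np.where(group == 1, 0.6, 1.0)
spending = illness * access * rng.lognormal(sigma=0.2, size=n)

# The "risk score" is spending itself; the top 10% are flagged for extra care.
flagged = spending >= np.quantile(spending, 0.90)

# Among the genuinely sickest 10% of patients, who actually gets flagged?
sickest = illness >= np.quantile(illness, 0.90)
for g in (0, 1):
    rate = flagged[sickest & (group == g)].mean()
    print(f"group {g}: {rate:.0%} of its sickest patients flagged")
```

Nothing in the score mentions race, yet the disparity appears anyway, because the proxy encodes the history of unequal access.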
Similar patterns appear in facial recognition technologies, which have repeatedly shown higher error rates for people with darker skin tones, particularly women. These systems were trained on datasets that underrepresented certain populations, yet were deployed in policing and surveillance under the assumption of neutrality. In some cases, false matches led directly to wrongful arrests. When data reflects inequality, AI systems trained on that data will reproduce and often amplify it.
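The underlying mechanism is easy to reproduce. In this sketch, a toy two-feature classifier stands in for a face-matching model: one group supplies 95% of the training data and the other just 5%, and the error rates diverge accordingly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, axis):
    # Each group's two classes separate along a group-specific direction
    # in feature space, a crude stand-in for differing image statistics.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2))
    X[:, axis] += 2.0 * y
    return X, y

# Group A supplies 95% of the training data; group B just 5%.
Xa, ya = make_group(1900, axis=0)
Xb, yb = make_group(100, axis=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
for name, axis in [("group A", 0), ("group B", 1)]:
    Xt, yt = make_group(2000, axis)
    print(f"{name} error rate: {1 - model.score(Xt, yt):.1%}")
```

The model optimises for the data it sees most of, so the underrepresented group inherits the errors.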
Bias does not stop at data collection. Labelling data requires human interpretation. What counts as “toxic language,” “fraud,” or “high risk” is rarely obvious. Ambiguity is flattened into right-or-wrong answers, disagreement is averaged out or discarded entirely, and the context needed to judge nuanced situations is lost.
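A short example, with invented annotator votes, shows how majority-vote labelling throws this nuance away: a genuinely contested comment enters the training set as a confident label, indistinguishable from a unanimous one.

```python
from collections import Counter

# Five hypothetical annotators label two comments as toxic (1) or not (0).
annotations = {
    "clear_insult":   [1, 1, 1, 1, 1],   # unanimous
    "ambiguous_joke": [1, 0, 0, 1, 0],   # genuine disagreement
}

for text, votes in annotations.items():
    label, count = Counter(votes).most_common(1)[0]
    # Only `label` survives into the training set; the 60/40 split does not.
    print(f"{text}: label={label}, agreement={count / len(votes):.0%}")
```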
Optimisation then turns these judgments into consequences. AI systems are trained to maximise metrics like accuracy, efficiency, or risk reduction, and an aggregate metric can look excellent while the errors pile up on a small group whose numbers barely move it. These outcomes are not technical failures. They are the predictable results of design choices about what to optimise and whose errors matter most. The issue is not that AI contains values; that is unavoidable. The problem is that those values are often hidden behind claims of objectivity.
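The arithmetic is stark. In this deliberately extreme sketch with invented numbers, a classifier that fails every member of a small group still reports 95% overall accuracy:

```python
import numpy as np

# 950 people in the majority group, 50 in the minority group.
group = np.array([0] * 950 + [1] * 50)
truth = np.ones(1000, dtype=int)          # everyone genuinely qualifies

# A model that is always right on the majority, always wrong on the minority.
pred = np.where(group == 0, 1, 0)

print(f"overall accuracy: {(pred == truth).mean():.0%}")   # 95%
for g in (0, 1):
    acc = (pred == truth)[group == g].mean()
    print(f"group {g} accuracy: {acc:.0%}")                 # 100% and 0%
```

Optimising the headline number says nothing about how the errors are distributed; that has to be measured deliberately.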
Calling AI “neutral” shifts responsibility away from designers, institutions, and policymakers. Harmful outcomes are treated as glitches rather than as consequences of human decisions, and bias becomes something to “fix later” instead of something to confront at the design stage.
A more responsible approach to AI begins by abandoning the myth of objectivity. Instead of asking whether a system is neutral, we should ask: whose values does it encode, who made these choices, and who bears the cost?
Transparency about assumptions, trade-offs, and limitations is essential. AI does not need to be value-free to be useful; it needs to be accountable. AI is not an impartial judge. It is a mirror that reflects the priorities, inequalities, and stereotypes of the systems that built it. Recognising that is the first step toward building technology that actually serves the society it claims to improve.
