AI Bias
- charlie0676
- Jul 26, 2025
- 2 min read

Humans have been naturally biased since the beginning of our species, when bias could serve as a tool for determining which plant might be poisonous or which person might be hostile. Since then, we have built civilizations and created technologies capable of modeling our own minds. As algorithms developed to automate various processes, the biases inherent in our cognitive structures began to find their way into the programs we designed. The largest contributor to bias in Artificial Intelligence systems remains biased or inaccurate input data used for training the models. Similar to how a child may absorb biases from a parent, these artificial systems shape their internal representations according to the biased data, thereby perpetuating the bias.
Methods exist for reducing the amount of bias in AI tools. Several AI models use a human-in-the-loop system, in which a human regulates the flow of data into the training pipeline in an effort to reduce bias in the system. Naturally, this is not a perfect solution, since the human may introduce biases of their own as they weigh data by importance; that said, it still presents a way to reduce bias. Additionally, companies developing AI can strive to use data that encompasses a broad spectrum of demographics, geographies, and contexts, which helps the resulting systems represent all users more fairly.
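One way to put the representative-data idea into practice is to audit a model's decisions across demographic groups. Below is a minimal sketch of one common fairness metric, a demographic-parity gap; the group names, decision lists, and function names are all illustrative assumptions, not drawn from any real system.

```python
# Hypothetical sketch: auditing a screening model for demographic bias.
# All group names and decision data here are made up for illustration.

def selection_rate(decisions):
    """Fraction of candidates the model selected (1 = selected, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups.

    A gap near 0 suggests the model selects candidates from each group
    at similar rates; a large gap is one warning sign of encoded bias.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Illustrative model outputs for two demographic groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 selected = 0.750
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # 3/8 selected = 0.375
}

gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")  # prints 0.375
```

A check like this is only a starting point: equal selection rates do not guarantee fairness, and a human reviewer still has to decide which metric matters for the application.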
Biased AI introduces many harmful real-world consequences, which makes us question whether AI is living up to its promise: to better humanity. Several instances exemplify the dangers that biased AI can create in the world. Launched in 2014 and discontinued in 2017, Amazon’s hiring tool for screening resumes downgraded resumes containing the word “women’s,” as in “women’s coding club leader,” and favored resumes containing strongly male-associated terms. Because of the historical precedent of favoring men over women for high-ranking positions, the AI learned to associate a good job application with a man’s application, encoding past biases into current algorithms. This case underscores the importance of training AI on a wide range of representative data.
Biased AI is not simply a technical defect, but also a violation of justice, which Aristotle considered the highest virtue. When algorithms fail to treat equals equally, they defy his definition of distributive justice: the fair allocation of burdens and goods in society. Only when we align AI with virtue rather than convenience will we create a technology that benefits all of humanity.