AI Deepfakes
- Nov 23, 2025
- 2 min read
In today’s technology-driven age, truth is increasingly separated from appearances. Artificial Intelligence (AI) deepfake technologies pose the greatest threat to truth as we know it online. Advances around 2014 and 2015 built the foundations for deepfakes: autoencoders and generative adversarial networks (GANs). Autoencoders made it possible to compress a face into a latent representation and reconstruct it with high accuracy, meaning one person’s face could be encoded and then reconstructed on another person’s body, essentially swapping the faces. Generative adversarial networks proved to be another pivotal technology on the path toward AI deepfakes. Through two components, a generator and a discriminator, these networks train themselves to produce increasingly realistic outputs: the generator aims to create images convincing enough to fool the discriminator, while the discriminator aims to discern which images are real and which are AI-generated. The term “deepfake” itself was coined in 2017 by a Reddit user named “deepfakes,” one of the first people to popularize AI face-swapping technology. This technology poses serious ethical risks, including consent violations, manipulation, and fraud.
Consent is central to the ethics of deepfaking; at stake is, in a sense, a person’s right to control their own identity, a right that nonconsensual deepfakes violate. A person’s face and voice represent their whole identity, and people deserve control over how they appear online. Many states have taken action to limit unethical deepfaking, and Congress most recently proposed the Deepfakes Accountability Act in 2023, though it has not passed. Laws are certainly needed to protect citizens from deepfake harm, but they must be carefully considered and evaluated to ensure that the laws themselves do not infringe on personal autonomy, a point on which many have criticized the Deepfakes Accountability Act.
Additionally, manipulation is a severe danger that accompanies the rise of AI deepfakes. The technology can be used to fabricate events or distort reality, undermining public trust in the media. In politics, deepfaked content could sway election results, often by impersonating a candidate and degrading their public image. Such content can spread rapidly, often faster than fact-checkers can catch it, and as deepfakes become increasingly indistinguishable from reality, they grow more effective at deceiving voters.
Deepfake-driven fraud has also spread at an alarming rate over the past year. Using hyper-realistic voice clones, criminals have impersonated executives to approve wire transfers and trick employees. Scammers also use AI-generated voice clips of someone in apparent urgent need to deceive that person’s family members into sending money as “aid.”
Deepfakes ultimately represent a shift in how our digital society perceives truth. As the ability to distort reality becomes widespread, preserving truth becomes vitally important. Solving this pressing philosophical crisis will require collaboration among technologists, educators, policymakers, and everyday citizens.