
AGI: When is it Human?

  • charlie0676
  • Jun 28
  • 3 min read


If a machine feels pain, dreams, or falls in love... is it human? The ethics of artificial intelligence remains an open question for modern society, one that raises possibilities earlier philosophers never conceived.

With companies pushing artificial intelligence (AI) technology forward, the world has entered an era of rapid AI development. Traditional AI operates using algorithms: predefined instructions written by a human programmer that the system follows. The output of a traditional AI model is therefore not necessarily original thought but rather a reproduction of the data it was given. Artificial general intelligence (AGI), although still theoretical, changes this by introducing the potential for original thought. An AGI could attain proficiency in a wide range of tasks, learn and adapt rapidly, and potentially exceed human intelligence in these fields.


Artificial General Intelligence (AGI)

There are several capabilities an AGI must demonstrate to be considered on par with the human mind:

  • Abstract reasoning

  • Learning from limited data

  • Creativity

  • Language


The Turing Test

In 1950, Alan Turing introduced the Turing Test as a method for assessing machine intelligence. The test consists of a human “judge” interacting with both another human and a machine through text-only messages; the judge’s job is to determine which is the human and which is the machine. Many modern AI systems can now pass versions of this test. Yet the test is not a true indicator of intelligence or moral worth; it measures only a machine’s ability to imitate human qualities. To explore these ideas further, we can look to metaphysics and the Chinese Room thought experiment.
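
As a rough illustration, the structure of the test can be sketched in code. This is a minimal sketch, not a real evaluation: the judge object and the respond functions are hypothetical stand-ins, not any established benchmark.

```python
import random

def turing_test(judge, human_respond, machine_respond, rounds=5):
    """Minimal sketch of Turing's imitation game: a judge exchanges
    text-only messages with two hidden respondents, then guesses
    which one is the machine."""
    # Hide the identities behind randomly assigned labels.
    respondents = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        respondents = {"A": machine_respond, "B": human_respond}

    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)          # judge poses a question
        for label, respond in respondents.items():
            transcript.append((label, question, respond(question)))

    guess = judge.guess_machine(transcript)       # "A" or "B"
    # The machine "passes" if the judge cannot reliably pick it out.
    return respondents[guess] is machine_respond
```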


Thinking Deeper With Metaphysics

Perhaps it is the metaphysical difference between humans and machines that defines their moral worth. Human brains consist of cells, while machines are made of metal. In humans, thoughts are generated by connections between life, the living cells of the brain; in machines, thought reduces to whether a set of transistors is supplied with sufficient voltage. On this view, it is being alive that confers moral worth, not the ability to imitate and impersonate.


The Chinese Room Argument

In his 1980 paper "Minds, Brains, and Programs," John Searle proposed the Chinese Room thought experiment. A person who does not understand Chinese sits in a room with exact instructions for producing replies to Chinese messages. Outsiders passing messages into the room cannot tell the difference; they believe the person understands Chinese, yet he does not. Searle used this experiment to argue that a machine following a program, however convincing, cannot thereby have a mind, understanding, or consciousness. If we accept this conclusion, then no matter how advanced AI becomes, it will never rival the complexity of the human mind.
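
The crux of Searle's argument, that manipulating symbols by rule is not the same as understanding them, can be sketched as a toy program. The rulebook below is a hypothetical stand-in for Searle's instruction book:

```python
# A toy "rulebook": maps incoming symbol strings to outgoing ones.
# Nothing here represents what the symbols mean.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，说得很流利。",   # "Do you speak Chinese?" -> "Yes, fluently."
}

def chinese_room(message: str) -> str:
    """Reply exactly as the person in the room would: by matching
    symbol shapes against the rulebook, never their meanings."""
    return RULEBOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

# From outside, the replies look fluent, yet neither this program nor
# a person executing its rules by hand understands any Chinese.
print(chinese_room("你好吗？"))
```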


René Descartes’ “I think, therefore I am” and AGI

Descartes famously claimed, “I think, therefore I am.” If an AGI begins to think, does it exist in the same meaningful sense as thinking humans? Descartes developed the theory of dualism, in which the mind or soul exists separately from the body. Had he lived in a world with AGI, he might have argued that, lacking a soul, an AGI does not exist in the same sense that humans do.


Applying the Utilitarian Framework to AGI

The basic principle of utilitarianism is that actions are right if they maximize overall well-being and happiness. Applying this framework to AGI: if AGI delivers major enhancements to human welfare, such as economic prosperity, scientific and medical advances that save lives, or solutions to climate change, then its development could be morally justified. Conversely, if AGI causes harm that outweighs these benefits, strict safeguards would need to be implemented.
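
In crude terms, this calculus amounts to summing well-being changes across everyone affected. The sketch below uses invented numbers purely for illustration; real welfare effects are not so easily quantified.

```python
def net_utility(effects: dict) -> float:
    """Strict act-utilitarian tally: sum the signed well-being
    changes for all affected parties."""
    return sum(effects.values())

# Hypothetical, invented magnitudes for an AGI deployment decision.
deploy_agi = {
    "medical breakthroughs":  +8.0,
    "climate solutions":      +6.0,
    "job displacement":       -5.0,
    "misuse and safety risk": -4.0,
}

total = net_utility(deploy_agi)
print(f"Net utility {total:+.1f}: development "
      f"{'justified' if total > 0 else 'not justified'} on this tally")
```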


If society deems AGI deserving of rights equivalent to those given to humans, utilitarianism would require balancing AGI’s rights against those of humans. For example: should an AGI be switched off, “killed” in a sense, if doing so benefits a human?

These are some of the prominent frameworks for evaluating AGI’s consciousness and moral status. If AGI becomes integrated into daily life, the lines separating the human mind from everything else will blur. As we develop this technology, we must not only engineer brilliance but also build the ethical frameworks that guide our technological vision.
