Cairo
Source: Al-Wafd Newspaper
Prof. Dr. Ali Mohammed Al-Khouri
Humanoid robots are no longer merely an engineering feat of industrial progress. Rather, they represent a profound philosophical shift that addresses human nature, the limits of consciousness, and the function of intelligence itself. We are living in a historical phase in which the lines between the natural and the artificial, between emotions and software, are blurring. The world stands on the threshold of a turning point where technological innovations intertwine with major existential questions.
What’s exciting is that projects to build robots with human-like bodies and behaviors that mimic natural interactions have become a daily occurrence in the laboratories of major tech companies, based on unprecedented developments in generative artificial intelligence, deep learning, and neuroengineering. Perhaps ironically, these robotic entities are no longer confined to factories or industrial environments, but are gradually infiltrating social, educational, healthcare, and even emotional spaces, bringing with them unprecedented questions about the nature of the “self,” the legitimacy of the “alternative,” and the limits of the relationship between humans and machines.
While robots in the recent past were designed to perform purely mechanical functions, the current landscape points to a radically different phase, with companies and research centers seeking to develop robots capable of linguistic interaction, facial recognition, interpreting gestures, and even mimicking emotional expressions. The famous “Sophia” and “ASIMO” models were not merely technical demonstrations; they were early attempts to represent “human-like existence” in electronic form. With the rise of artificial intelligence based on giant language models, it has become possible to generate natural conversations almost indistinguishable from human speech to the untrained ear, raising a strategic question about how close artificial intelligence actually is to human consciousness.
The importance of this progress stems from the fact that it is no longer confined to the laboratory or experimental model, but has become embedded in the core structures of society. Social robots are beginning to be used in elderly care, educational settings, and media and entertainment platforms, in an attempt to relieve pressure on human labor without abandoning the “human dimension” of interaction. Herein lies the most important challenge: To what extent can these artificial entities constitute an effective alternative to human relationships, without leading to a decline in social cohesion or a widening of the communication gap between individuals?
In parallel with this progress, language models such as ChatGPT and others are being developed. These models do not merely generate text or provide answers; they also demonstrate a growing ability to capture the speaker’s style, understand the context of a conversation, and respond in ways that convey the intended meaning, even lending a human touch through humor or emotional language. These technologies are no longer confined to entertainment applications or smart-assistant tools; they are now present in our daily lives as a new mediator in the relationship between humans and technology.
In the world of video games, digital characters, once pre-programmed to perform fixed roles, have evolved into digital entities capable of learning and interacting based on the user’s actions, creating new forms of interactive relationships within virtual environments. These environments are no longer mere entertainment platforms; they have become spaces where new aspects of communication and behavior emerge, with psychological and social dimensions that did not exist before.
This transformation reaches a more subtle and dangerous level when we move to discuss what is known as “artificial general intelligence,” a concept that refers to the construction of an intelligent system capable of performing any mental task that a human can accomplish, not within a specific scope, but rather across multiple and changing fields of knowledge.
This ambition, which has preoccupied researchers for decades, is not simply a new technical achievement. Rather, it represents an existential turning point that re-raises major questions about the meaning of thought, the limits of consciousness, and the human place in the intelligence equation. Once this type of intelligence is achieved, we will no longer be developing a tool; rather, we will be faced with a non-human entity possessing comprehensive cognitive capacity, qualifying it to be a true cognitive partner, not merely a controlled artificial product.
Robots designed to resemble humans in appearance and behavior—known as anthropomorphic robots—clearly embody this accelerating trajectory toward human emulation. These technological models no longer merely seek to mimic movements or voices; their goal has become to build an artificial body capable of social communication, emotional interaction, and even participating in reshaping the concept of the “other” within human perception. When confronted with this “artificial doppelganger,” humans are no longer dealing with a machine, but rather with a mirror that resembles them, inviting them to rethink the boundaries of their own self.
As the development of these semi-human entities accelerates, they are expected to gradually enter sectors previously considered to be among the most demanding of human interaction. From customer service to home care, from psychological support to administrative work, these models are approaching the performance of roles that were, until recently, the exclusive domain of humans. This makes it imperative to consider a new legal and social framework to keep pace with these changes and recalibrate the relationship between humans and machines within the institutional system.
Despite the broad prospects opened by these technological breakthroughs, the challenges they pose are equally grave and complex, raising ever more pressing ethical and societal questions. The first of these is: Who has the right to program these entities? Who is responsible for their behavior? Do they have the right to make ethical decisions that affect humans? What are the limits of their use in areas such as security, the judiciary, or education? These are not theoretical questions far removed from reality; they are legislative and political issues now being imposed on decision-makers in public policy and national legislation.
Another challenge is the risk of dual use. Just as robots are capable of providing care and assistance, they can also be used as tools for control or harm, whether in military or informational contexts. There are also long-term social impacts, such as the replacement of traditional jobs, the disintegration of social ties, and emotional detachment from reality—effects that require new societal understandings of what it means to “be human” in the age of artificial intelligence.
From this perspective, multi-level strategies are necessary: legislative, philosophical, educational, and technological. New governance frameworks must be established that treat these entities not as mere tools, but as agents with social, cultural, and psychological impact. These transformations must also be taught in universities, and educational curricula that address the concepts of intelligence, knowledge, and human interaction must be rethought.
Amid this accelerating progress, it’s important not to assume that everything that can be manufactured is necessarily progress, and not to be distracted by the glitter of technical achievement at the expense of fundamental questions that remain resistant to being answered by algorithms or reduced to mathematical equations: Who are we? What makes us human? Is there something at the core of the human experience that cannot be copied, translated into code, or presented as a commodity?
Ultimately, the question today is not whether robots will replace humans, but whether, in our quest to emulate ourselves, we have begun to redefine “human” according to artificial standards. At such a moment, preoccupation with artificial intelligence is no longer merely a technological passion, but a means of understanding the human self from a different angle: one that stems not from emotion or sensory experience, but from a rigorous, unsentimental analysis that forces us to view ourselves with a clear, rational eye. The fundamental question remains not what a machine can accomplish, but whether humans have a role beyond what has become programmable.