Source: Al-Wafd Newspaper
Prof. Dr. Ali Mohammed Al-Khouri
The world is witnessing remarkable leaps in the development of artificial intelligence (AI) technologies, paving the way for structural transformations in the global economy and even impacting our understanding of human thought and role. According to estimates by the United Nations Conference on Trade and Development (UNCTAD), the AI market is projected to grow from approximately $189 billion in 2023 to around $4.8 trillion by 2033—a nearly 25-fold increase in just one decade. This trajectory positions AI to become the most prominent emerging technology globally.
Meanwhile, economic studies indicate that artificial intelligence could add up to $15.7 trillion to global output by 2030—figures suggesting that its impact is not limited to creating new tools, but extends to bringing about fundamental changes in the ways wealth is generated and influence is distributed in the global economy.
In theory, artificial intelligence was initially designed as a support tool to handle tasks that humans might find difficult, such as processing massive amounts of data, identifying patterns, and accelerating prediction processes in economics, medicine, climate, and other fields. However, the rise of generative AI and large language models has presented the world with a different reality. These systems no longer simply execute predefined instructions; they are now capable of learning from newly input data, generating text, images, and code, and participating in complex decision-making in ways that rival the skills of human experts in some areas. International reports indicate that combining generative AI with other automation technologies could add between 0.5% (US$300 billion) and 3.4% (US$2 trillion) annually to productivity growth in advanced economies, if properly utilized.
This is where the philosophical and political tension begins regarding the implications of having an “artificial mind” capable of making decisions in areas related to human life, rights, and resources. The arena of conflict will be between two logics: a human logic based on experience, intuition, and values, and an algorithmic logic based on calculations, speed, and an almost unlimited capacity for data processing. The problem becomes even more apparent when certain decisions in sensitive areas are transferred to automated systems, such as assessing individuals’ eligibility for loans, screening job applicants, assisting in legal proceedings, or directing military operations.
Here, the focus of the debate shifts to defining the limits within which intelligent systems are permitted to make decisions on behalf of humans, and distinguishing between areas that must remain under human control and those that can be delegated to machines. At the heart of economic discussions, many researchers warn that artificial intelligence could accelerate the alarming trend of deepening the existing imbalance between capital owners and workers. Technology-owning companies are likely to reap greater profits, while the share of labor may decline as automation replaces certain jobs. Estimates suggest that a significant proportion of tasks in sectors such as financial services, administration, and information technology are likely to be automated within the next two decades, potentially leading to substantial changes in the job landscape and income distribution among working groups.
Reports from the OECD show that the labor market is now rewarding AI-related skills like never before, with those proficient in these skills earning significantly above-average wages. Conversely, workers lacking these skills are more likely to lose their jobs or experience a decline in career prospects. This shift has reignited political economy debates about the expanding influence of major technology companies and the emergence of economic models such as the “platform economy” and “surveillance capitalism,” where power, data, and decision-making are concentrated in the hands of a limited number of tech conglomerates.
From another perspective, artificial intelligence poses an existential challenge that touches the very essence of human experience, not merely its functions. With intelligent systems capable of producing literary texts, paintings, or even scientific research, the question arises as to what fundamentally distinguishes human existence (such as consciousness, emotions, and moral sense), defines its uniqueness, and prevents its reduction to mere computational models.
Researchers also debate three main visions for the future of the relationship between humans and artificial intelligence. The first vision is one of apprehension that intelligent systems will reach a level of autonomy that allows them to direct resources and make major decisions according to their own logic, even if this conflicts with human interests.
The second vision proposes a gradual evolutionary model based on humans becoming increasingly integrated with machines through technologies such as brain-computer interfaces and bioengineering. In this vision, a new type of human population might emerge in the future, combining human and computational capabilities in a single body—a scenario that raises profound questions about the meaning and boundaries of human identity and the rights of these new beings.

The third path is based on an optimistic vision, betting on building a functional and complementary relationship between humans and artificial intelligence, so that humans remain the decision-makers and bear moral responsibility, while intelligent systems act as advanced tools that help them analyze information, expand their ability to understand, and improve the quality of the decisions they make.
However, none of these paths will materialize on its own; the trajectory of artificial intelligence will be determined as much by political and ethical decisions as by technological advancements. This understanding is beginning to emerge internationally. In 2021, UNESCO’s 194 member states adopted the first global framework for AI ethics, aiming to ensure that humanity remains at the center of the system and to protect human rights and dignity. Initiatives have also emerged to assess governments’ readiness to adopt AI responsibly, mitigating potential risks and directing benefits toward society as broadly as possible, rather than concentrating them in the hands of a few.
However, these steps remain in their early stages compared with the rapid expansion and complexity of the artificial intelligence market, which still lacks the strong international mechanisms to regulate the development of these technologies that exist for other global issues requiring coordination and joint commitment among countries.
In light of this, the central question is not whether artificial intelligence (AI) is a threat or an opportunity, but rather who will actually benefit from these technologies and who will bear their burdens. The concentration of computing power, data monopolies, and platform ownership in the hands of a limited number of companies and countries makes AI a likely catalyst for exacerbating existing inequalities within societies and between nations. Without clear regulations to prevent technological dominance and ensure equitable access to digital and knowledge infrastructure, superiority will become the exclusive domain of technological powers possessing the tools for development, while the standing of countries unable to keep pace will decline. These disparities will not be limited to financial resources and living standards; they will extend to the production of knowledge and the ways we understand reality, including control over the production of ideas, the direction of public discourse, and collective consciousness.
Ultimately, it seems that the balance of artificial intelligence, with its promises and risks, will depend on the type of decisions made regarding its management. If countries and societies move towards establishing global rules that regulate its use and ensure that knowledge and power remain distributed equitably, these technologies can become a means for humanity to develop its intellectual tools and broaden its analytical capabilities.
If, however, this technology is left to develop according to market interests, without clear controls, the real risk is that the ability to direct these systems will remain the privilege of a limited number of countries and companies, while most societies lose that role and find themselves surrounded by systems that operate according to standards they do not control.