Artificial intelligence: between empowerment and complex risks

Reading time: 10 minutes

Abu Dhabi

Source: Mufakiru Alemarat

Prof. Dr. Ali Mohammed Al-Khouri

In a global landscape where technology intertwines with economics, politics, and society, artificial intelligence is at the forefront of the major transformations reshaping the contemporary world. This technology, which has opened unprecedented horizons in research, production, and planning, has simultaneously created an environment of complex risks that extend beyond technical errors to threaten the very essence of human values, legal systems, and sovereignty. The very capabilities algorithms offer can also disrupt the delicate balance between freedom and responsibility, and between innovation and anarchy.

Artificial intelligence has become a new knowledge and economic system with a pervasive impact on the structure of the modern state. The potential harms of its systems are numerous. Automated medical systems may err in clinical assessment due to the poor quality of the data upon which they are based. Smart financial platforms can become tools for reproducing social inequalities when built on flawed data or on databases with inherent biases in their collection or classification. Furthermore, the lack of regulatory oversight of personal data collection allows data to become an unregulated commercial resource, representing a facet of the new gray economy. Deepfake images and audio have become tools for disinformation, defamation, confusing public opinion, and undermining institutional credibility.

Specialized reports indicate that cybercrime aided by artificial intelligence will cost the global economy more than ten trillion dollars by 2025. Research data also shows that AI-powered cyberattacks are becoming a daily reality for organizations, a trend likely to intensify over the next two years. These indicators reveal the escalating risks of technological innovation and highlight the widening gap between the pace of innovation and the ability of laws and regulations to keep up.

Distributing responsibility in this context is a complex issue where legal, ethical, and economic dimensions intersect. The developer bears responsibility for design flaws or testing failures, the producing company is responsible for damage caused by its product, and the user is responsible for the misuse or unsafe use of the system. The state’s responsibility is not limited to enacting laws; it also includes building a comprehensive legislative framework that anticipates risks rather than reacting to them, and works to establish a culture of safe and responsible use of smart technologies. Interestingly, some legislative circles have proposed granting certain smart systems partial legal status, recognizing them as entities with certain rights and obligations, and assigning them a degree of responsibility limited by their programming and automated behavior. However, this idea remains a gray area in legal and intellectual circles.

Preventing the harms and risks of artificial intelligence requires a holistic approach that treats the phenomenon as an intertwined cognitive, economic, and ethical construct. It is impossible to separate technology from the ethical, legal, and social contexts within which it operates. Transparency in algorithm design is essential, allowing for the tracking of decision-making processes and the review of their rationale. Clear accountability mechanisms must be developed from the earliest stages of development, involving developers, companies, and regulatory bodies. Simultaneously, correcting algorithmic biases within intelligent systems is a fundamental step in protecting the principles of social justice and ensuring equal opportunities. Achieving this requires diversifying data sources to include broad representation of different groups, regions, and cultures, and continuously updating them to prevent the reproduction of biased societal patterns in automated decisions.
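Correcting algorithmic bias begins with measuring it. The following is a minimal sketch of how such an audit might work in practice: it compares the rate of favorable automated decisions across demographic groups and computes a disparity ratio. The data, group labels, and the 0.8 "four-fifths" review threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of an algorithmic bias audit: compare the rate of
# favorable automated decisions across demographic groups.
# All data and group labels here are hypothetical, for illustration only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    A common rule of thumb flags values below 0.8 for human review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval outcomes from an automated platform.
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(sample)
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(rates))  # ~0.33 -> flag for review
```

An audit like this is only a first diagnostic step; as the paragraph above argues, the lasting fix lies in diversifying and continuously updating the data sources themselves.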

The confrontation also requires raising public awareness of the inherent risks of artificial intelligence, from ordinary individuals to decision-makers; knowledge here represents the first line of defense. It is crucial to integrate the concepts of “digital literacy” into educational curricula to cultivate a generation that understands the meaning of responsible technology use and possesses the critical thinking skills to distinguish accurate information from digital misinformation. Civil society organizations should also be involved in building interactive spaces that help individuals understand technology and discuss its implications through awareness initiatives and open forums that contribute to transforming artificial intelligence into a constructive force, rather than a mysterious system that could create a psychological and cognitive gap between humans and machines.

Artificial intelligence is reshaping the global balance of power from a political economy perspective. Countries that control machine learning and big data technologies effectively control the keys to the knowledge economy and exert their dominance through “digital soft power.” The absence of a unified Arab strategy in this field makes the region a geo-digital testing ground where major powers trial their new innovations, without local countries being genuine partners in development or in setting legal regulations. Hence the need for regional alliances to establish shared national platforms for data exchange, research development, and skills training. This would move the region from complete dependence on foreign AI systems for managing security and the economy to possessing the tools of knowledge and economic sovereignty, shifting from a passive recipient to an active producer of knowledge.

The security aspect is no less important than the economic dimension; algorithms are now used to predict military movements, monitor social patterns, and even manage public opinion. Therefore, the focus should be on building a digital defense system capable of countering sophisticated attacks with equal intelligence through real-time monitoring and proactive analysis. Investing in cybersecurity and developing specialized teams capable of managing digital crises is a strategic option for protecting a nation’s digital infrastructure.
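The “real-time monitoring and proactive analysis” mentioned above can be illustrated with a minimal sketch: flagging traffic volumes that deviate sharply from a rolling baseline, as in a volumetric attack. The window size, threshold, and traffic figures are hypothetical assumptions for illustration, not parameters of any real defense system.

```python
# Minimal sketch of proactive anomaly detection for network monitoring:
# flag request volumes far above a rolling baseline of recent traffic.
# Window, threshold, and data are hypothetical, for illustration only.
from statistics import mean, stdev

def flag_anomalies(counts, window=5, threshold=3.0):
    """Return indices whose value exceeds the mean of the previous
    `window` points by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(counts)):
        base = counts[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and (counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady traffic followed by a sudden spike, as in a volumetric attack.
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 100, 1000]
print(flag_anomalies(traffic))  # [9] -> the spike is flagged
```

Production systems layer far richer signals on top of this idea, but the principle is the same: establish a baseline of normal behavior and act on deviations as they occur rather than after the damage is done.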

To achieve this, a set of priority recommendations emerges: First, establishing a comprehensive legislative framework that defines the responsibilities of all parties and enshrines the principle of transparency in the development of algorithmic models. Second, building national systems for secure artificial intelligence that integrate state institutions with the private sector, universities, and civil society. Third, investing in knowledge infrastructure and digital security as an investment in both political and economic stability. Fourth, developing ongoing training programs for personnel in the fields of justice, security, and scientific research to enable them to address emerging crimes and keep pace with rapid technological advancements.

Addressing the risks of artificial intelligence is not simply a matter of regulation; it also concerns societies’ ability to understand the profound transformations shaping their future. The issue extends beyond data protection or crime prevention, reaching the point of redefining the relationship between the human world and artificial intelligence, between ethical principles and technological advancement, and between human sovereignty and the power of algorithms. These crossroads will be the primary determinants of the trends of the new century.

Looking ahead, it seems that artificial intelligence will not only be a tool for progress, but also a measure of the humanity of that progress itself. Technology is neither inherently good nor inherently evil; its outcomes are determined by how it is used. If used consciously, responsibly, and within clear ethical and legal frameworks, it will contribute to serving societies and promoting peace, justice, and prosperity. However, if used haphazardly or outside of established frameworks, it may generate new forms of domination that deepen class and social inequalities among nations and individuals. The greatest danger lies in the fact that this domination may be invisible, exercised through data and algorithms, and by controlling information and consciousness.

Building a new human balance in the age of artificial intelligence requires global intellectual leadership that transcends narrow technical and economic interests, possesses a vision and mindset that sees beyond the glitter of quick solutions, and understands the importance of regulating the pace of the digital revolution within the limits of conscience and the system of values, in order to redirect the course of progress towards the human goal, not the goal of algorithms or the companies that develop them.