Article Type: Research Article
Authors
1 Master's Student in International Relations, University of Mazandaran, Babolsar, Iran
2 Assistant Professor of International Relations, Department of Political Science and International Relations, University of Mazandaran, Babolsar, Iran
Abstract
Artificial intelligence can be regarded as one of the key indicators and outcomes of the Fifth Industrial Revolution, a transformation that has prompted many countries to adopt an appropriate strategy for managing its consequences. This article aims to answer the question: why has the European Union pursued policymaking in the field of artificial intelligence? According to the hypothesis, the EU intends to establish appropriate and sustainable laws and policies for AI so that, while managing its consequences, it can create the conditions for achieving good and trustworthy AI in line with its strategic autonomy. This hypothesis is examined through the lens of public policy and regulation theory. Data were collected from library and internet sources and then analyzed qualitatively using thematic analysis. The findings show that, by enacting laws and guidelines, the EU is attempting to minimize the risk of adopting the new technology and to readjust its security and governance requirements to fit artificial intelligence.
Article Title [English]
The EU’s Strategic Independence Strategy and Good Artificial Intelligence
Authors [English]
- Mostafa Kaka 1
- Mokhtar Salehi 2
1 Master's Student, International Relations, University of Mazandaran, Babolsar, Iran
2 Assistant Professor, International Relations, Department of Political Science and International Relations, University of Mazandaran, Babolsar, Iran
Abstract [English]
Introduction
Artificial intelligence (AI) can be considered one of the key indicators and outcomes of the Fifth Industrial Revolution. This transformation has led many countries to consider adopting appropriate strategies to manage its consequences. The present study aimed to answer the following question: Why has the European Union (EU) pursued policymaking in the field of artificial intelligence? The study is based on the hypothesis that the EU seeks to establish appropriate and sustainable laws and policies for AI in order to manage its consequences, while also creating conditions to achieve reliable and trustworthy AI in line with its strategic autonomy.
Literature Review
Previous studies have addressed various aspects of AI and its impact on security and governance. For example, in the report titled Artificial Intelligence and Life in 2030, Stone et al. (2016) emphasized the role of artificial intelligence in enhancing strategic autonomy and its applications across sectors such as defense, agriculture, and healthcare. Furthermore, the 2022 report by the Centre for European Economic Policy (CEEP) analyzed the EU’s strategies in the field of AI and its impact on global competitiveness. Building on these studies, the present research employed existing theoretical frameworks and new data to provide a more detailed examination of the topic.
Materials and Methods
The current study focused on analyzing and identifying the main themes within relevant texts. The data were collected through both library and online sources, including official European Union documents, research papers, and reports related to AI and public policy, and were analyzed using thematic analysis.
Results and Discussion
The thematic analysis was applied to examine European Union documents, policies, and regulations related to AI. This resulted in the identification of key themes that reflect the EU's strategy toward achieving strategic autonomy and developing trustworthy AI. The findings indicate that the EU aims to strengthen its digital sovereignty by emphasizing factors such as reducing dependency on global actors, ensuring independent decision-making in critical areas, implementing comprehensive regulations, and protecting citizens' data and privacy. The identified themes were organized into seven major categories: 1) strategic autonomy, 2) AI governance, 3) data transparency and security, 4) risk management and technological ethics, 5) public trust, 6) human oversight, and 7) security and military applications of AI. Each theme was supported by various open codes, reflecting the EU's focus on balancing technological innovation with human rights principles. Moreover, the EU tends to classify AI systems according to risk levels and impose strict requirements on high-risk technologies, thus seeking to establish a legal, ethical, and accountable framework that ensures both societal security and public acceptance of AI. In addition, relying on comprehensive policies on transparency, accountability, and system explainability, the EU is actively working to strengthen public trust in this technology. According to the findings, in response to technological transformations and the rise of AI, the EU has assumed a proactive role in regulation and policymaking. Leveraging its legal and institutional capacities, the EU seeks to establish a framework for the safe, ethical, and trustworthy use of AI, one that addresses internal concerns related to human rights, security, and public trust, while also serving as a viable model for other countries. The analysis also focused on the concept of good AI, which translates the EU's core values into the digital domain.
By emphasizing principles such as transparency, human oversight, privacy, and accountability, the EU seeks to align technological development with human dignity and democratic values. This approach, in contrast to purely technocratic views, demonstrates that the future of AI depends not only on technical progress but also heavily on governance and policy choices. Moreover, the thematic analysis indicated that the EU, as a normative power, seeks to extend its influence in the global technology market by implementing strict regulations, thereby encouraging other international actors to align with these standards. This approach strengthens the EU’s regulatory power on the global stage and gradually positions it as a global authority in the governance of emerging technologies such as AI. Overall, the current analysis demonstrated that the EU’s AI policy is not only designed to prevent potential threats but also to embody a forward-looking and normative approach that integrates innovation with ethics, and security with fundamental human freedoms. While this path comes with challenges, it can serve as a model for responsible governance in the digital transformation era.
Conclusion
The results underscored the EU’s commitment to establishing a regulatory framework that balances technological advancement with ethical considerations and public safety. By focusing on transparency, data protection, and human oversight, the EU is setting a global example for AI governance that aligns with democratic values and human rights. However, challenges remain, particularly in the rapid evolution of AI technologies, international competition, and the need for cross-border collaboration to address global AI challenges. Future research should explore the effectiveness of the EU’s regulatory policies in practice, assess their impact on innovation, and examine how these policies can adapt to emerging technological developments.
Keywords [English]
- Good AI
- Trustworthy AI
- European Strategic Autonomy
- AI Security
- AI Governance