Will the UN Write the First Global AI Constitution?

The $632 Billion Question Reshaping International Law

AI development races ahead at breakneck speed while the world scrambles to catch up. Worldwide spending on artificial intelligence is forecast to reach $632 billion by 2028, with a compound annual growth rate of 29.0%. Yet 193 countries remain largely uncoordinated on the rules that will govern this transformation.

Picture this scenario: a single international framework that governs artificial intelligence across every nation on Earth, protecting humanity while unleashing innovation. Science fiction? The United Nations thinks differently. The Global Dialogue on AI Governance now provides an inclusive platform within the United Nations for States and stakeholders to discuss the critical AI issues facing humanity today.

The math is staggering. From 2013 to 2024, the U.S. alone raised nearly half a trillion dollars in private investment for AI, followed by China at $119 billion, the UK at $28 billion, and Canada and Israel at $15 billion each. Meanwhile, artificial intelligence has the potential to significantly support the UN by promoting inclusivity, reducing inequalities, and advancing around 80% of the Sustainable Development Goals.

The Reality Check: The UN won't publish a single, binding "AI Constitution" tomorrow. But it's already constructing something potentially more powerful—a living, adaptable governance ecosystem that could reshape how humanity manages its most transformative technology.

This deep-dive reveals why global cooperation on AI governance intensified in 2024, what's working, what's failing, and why the next 18 months could determine AI governance for generations.

The Constitutional Landscape: Beyond Traditional Governance

Traditional international law offers three pathways to global governance. Each represents a different level of commitment and enforceability that shapes how nations coordinate on complex challenges.

Soft Law: The Power of Shared Principles

These frameworks create normative pressure without legal penalties. UNESCO's 2021 Recommendation on the Ethics of AI exemplifies this approach perfectly. Adopted by all 193 UN member states, it establishes principles around human rights, transparency, accountability, and human oversight.

The impact has proved remarkable despite the recommendation's lack of enforcement teeth. Countries routinely lift language directly from UNESCO guidance when crafting national AI legislation, demonstrating how soft law exerts practical influence through moral authority and ready-made policy blueprints.

Institutional Coordination: The IPCC Model for AI

Permanent bodies that assess risks, publish evaluations, and convene experts represent the institutional backbone of global governance. The UN's High-level Advisory Body on AI has proposed creating an International Scientific Panel on AI, functioning similarly to how the Intergovernmental Panel on Climate Change operates for environmental issues.

This approach recognizes that AI governance requires ongoing adaptation rather than static rules. Technology evolves quarterly while traditional treaties evolve over years—institutional coordination bridges this temporal gap.

Binding Treaties: Legal Instruments with Enforcement

Hard law represents the strongest form of international governance. The Council of Europe's Framework Convention on AI, opened for signature in September 2024, marks a historic milestone as the first international, legally binding AI treaty. Importantly, this convention remains open to non-European countries, extending its potential global reach.

The state of coordination, in numbers:

  • 193 UN member states adopting AI ethics principles
  • 67 countries with AI governance initiatives
  • 23% of those showing meaningful coordination
  • 340% increase in cross-border AI incidents since 2022

Racing Against Exponential Growth

AI development refuses to wait for regulations. The numbers tell a story of unprecedented acceleration that outpaces traditional governance mechanisms.

78 percent of organizations now use AI in at least one business function, up from 72 percent in early 2024 and 55 percent a year earlier. This explosive adoption rate creates immediate pressure for governance frameworks that can scale with technological progress.

Global AI Investment Growth (2020-2028): $47 billion (2020), $118 billion (2022), $235 billion (2024), and a forecast $632 billion (2028).

Traditional regulatory approaches require years to develop while AI capabilities can double every few months in specific domains. This fundamental timing mismatch forces policymakers to develop entirely new approaches to international coordination.

Cross-border AI incidents have increased by 340% since 2022, highlighting the urgent need for international frameworks. Yet meaningful coordination exists in just 23% of countries with AI governance initiatives—revealing a dangerous gap between the global nature of AI systems and the fragmented response to governing them.

Current UN Architecture: Building Constitutional Scaffolding

The United Nations has made remarkable progress establishing groundwork for global AI governance. Rather than pursuing a single comprehensive treaty, the UN is creating constitutional scaffolding through multiple interconnected initiatives.

The March 2024 Breakthrough

The first consensus UN General Assembly resolution on AI represents a watershed moment. This resolution urges all member states to pursue safe, secure, and trustworthy AI for sustainable development. While not legally binding, it creates normative pressure that influences national policies worldwide.

The resolution establishes values baselines that transcend cultural and political differences. Countries from different economic and political systems found common ground on core principles—demonstrating that international AI cooperation remains possible despite broader geopolitical tensions.

"Governing AI for Humanity": The Blueprint

The UN Secretary-General's High-level Advisory Body on Artificial Intelligence released its final report in September 2024, presenting the most comprehensive attempt at international AI coordination to date. This blueprint emerged from extensive global consultations including 18 deep-dive discussions, over 50 consultation sessions across all regions, and written submissions from more than 150 organizations.

The HLAB-AI Framework Components:

  • International Scientific Panel on AI: Evidence-based assessments similar to climate science panels
  • Intergovernmental Policy Dialogue: Ongoing coordination forums for regulatory alignment
  • AI Standards Exchange: Technical harmonization across different national approaches
  • Global AI Fund: Financial support ensuring developing country participation
  • UN AI Office: Day-to-day coordination and capacity building

This approach recognizes that effective AI governance requires ongoing coordination rather than static rules. The HLAB-AI framework centers on building a shared base of facts, aligning policies, and developing capacity, while avoiding approaches that exclude developing countries.

Summit of the Future: September 2024 Momentum

Member states adopted the Pact for the Future and Global Digital Compact at the Summit of the Future. These agreements reference digital and AI cooperation extensively, opening pathways for ongoing UN workstreams rather than pursuing one comprehensive mega-treaty approach.

The Global Digital Compact specifically creates mechanisms for technology transfer, capacity building, and inclusive participation that address real-world concerns about AI governance becoming another arena where wealthy countries set rules for everyone else.

Analyzing Global Investment Patterns

The global AI governance market size was estimated at USD 227.6 million in 2024 and is projected to reach USD 1,418.3 million by 2030, growing at a CAGR of 35.7%. This explosive growth in governance spending reflects both the urgency of the challenge and the business opportunities created by regulatory clarity.

AI Governance Market Growth Projection: $227.6 million (2024), $485 million (2026), $892 million (2028), and $1.4 billion (2030).
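These growth figures are easy to sanity-check. The short Python sketch below recomputes the compound annual growth rates implied by the endpoints quoted above; the function name and rounding are mine, not drawn from any cited report:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# AI governance market, USD millions, using the 2024 and 2030 figures quoted above
print(f"Governance market CAGR 2024-2030: {cagr(227.6, 1418.3, 6):.1%}")  # ~35.7%

# Broader AI spending chart above, USD billions; broadly consistent with the ~29% forecast
print(f"AI spending CAGR 2024-2028: {cagr(235, 632, 4):.1%}")             # ~28.1%
```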

Regional Investment Disparities

India plans to invest about 103 billion rupees ($1.25 billion) in AI infrastructure, including 45 billion rupees ($543 million) for compute infrastructure and 20 billion rupees ($241 million) for startup financing. South Korea has announced similarly large investments, demonstrating how countries outside the traditional AI powerhouses are positioning themselves for the governance conversation.

These investment patterns reveal a critical insight: countries aren't just spending on AI development—they're investing heavily in governance capabilities to ensure they have seats at the international coordination table.

What Constitutional Framework Could Emerge

Drawing from existing UN texts and expert proposals, a realistic constitutional package could emerge by 2026-2030. This wouldn't be a single document but rather an interconnected system of governance instruments.

Global AI Principles (Living Document)

An updated UNESCO-plus-UN-General-Assembly baseline covering human rights protection, human dignity, non-discrimination, transparency, accountability, and mandatory human oversight. This creates business compliance standards even before formal law catches up—similar to how ISO principles inform corporate policies.

Each principle category pairs key requirements with an implementation mechanism:

  • Human Rights Protection: non-discrimination, privacy, and dignity, implemented through alignment of national legislation
  • Transparency Standards: explainable decisions and audit trails, implemented through technical standards development
  • Human Oversight: meaningful human control and intervention rights, implemented through regulatory requirements
  • Accountability Frameworks: clear responsibility chains and liability, implemented through integration with legal systems
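For organizations mapping these categories onto internal controls, the baseline can be treated as data rather than prose. Below is a minimal, hypothetical sketch of such a mapping in Python; the requirement names and the checklist function are illustrative assumptions, not drawn from any UN or UNESCO text:

```python
from dataclasses import dataclass

@dataclass
class Principle:
    category: str
    requirements: list[str]   # what the principle demands
    implementation: str       # how it is expected to be operationalized

# Hypothetical encoding of the baseline above for an internal compliance checklist.
BASELINE_PRINCIPLES = [
    Principle("Human Rights Protection",
              ["non-discrimination", "privacy", "dignity"],
              "national legislation alignment"),
    Principle("Transparency Standards",
              ["explainable decisions", "audit trails"],
              "technical standards development"),
    Principle("Human Oversight",
              ["meaningful human control", "intervention rights"],
              "regulatory requirements"),
    Principle("Accountability Frameworks",
              ["clear responsibility chains", "liability"],
              "legal system integration"),
]

def unmet_requirements(evidence: dict[str, bool]) -> list[str]:
    """Return every requirement for which no documented evidence exists."""
    return [req
            for principle in BASELINE_PRINCIPLES
            for req in principle.requirements
            if not evidence.get(req, False)]

# Example: an organization that can only evidence audit trails so far
print(unmet_requirements({"audit trails": True}))
```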

Scientific Evidence Engine

A permanent, IPCC-style Scientific Panel on AI would track capability jumps, emerging risks, safety incidents, and evaluation methodologies. This ensures policy decisions build on scientific evidence rather than hype cycles or commercial pressure.

The panel's work becomes increasingly vital as AI capabilities evolve. Recent breakthroughs in areas like multimodal AI, autonomous systems, and reasoning capabilities require constant technical assessment to inform policy responses.

Capacity Building Infrastructure

A Global AI Fund would finance compute credits, open datasets, and skills development in developing countries. This addresses the stark reality that 87% of AI research happens in developed countries despite developing nations representing 85% of global population.

Implementation Reality: I've observed that successful international governance emerges incrementally through interconnected instruments rather than comprehensive single treaties. The UN approach follows this proven pattern while adapting to AI's unique characteristics.

Regional Initiatives: Competition or Coordination?

European Union: The AI Act Pioneer

The EU's AI Act, in force since August 2024 with obligations phasing in over the following years, represents the world's most comprehensive AI regulation. Its risk-based approach creates a template that other regions study closely:

  • Unacceptable Risk: Prohibited applications like social scoring systems
  • High Risk: Strict requirements for AI in critical sectors
  • Limited Risk: Transparency obligations
  • Minimal Risk: General consumer protection

The AI Act's extraterritorial effect influences global AI development much as GDPR shaped worldwide data protection practices. Companies building AI systems anywhere must consider EU requirements if they plan to operate in European markets.
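To make the tiering concrete, here is a deliberately simplified sketch of how a company might tag internal use cases against those four levels. The example systems and the mapping are hypothetical illustrations; the Act's actual legal classification rules are far more detailed:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements (risk management, documentation, oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "general consumer protection"

# Hypothetical internal tagging of example use cases; not a legal determination.
EXAMPLE_CLASSIFICATION = {
    "social_scoring_system": RiskTier.UNACCEPTABLE,
    "resume_screening_tool": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk, {tier.value}"

for case in EXAMPLE_CLASSIFICATION:
    print(obligations(case))
```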

United States: Executive Coordination

President Biden's October 2023 Executive Order emphasizes voluntary standards and federal coordination rather than comprehensive legislation. Key elements include safety testing requirements for powerful AI systems, NIST standards development, federal AI use guidelines, and international cooperation commitments.

This approach reflects American preferences for market-based solutions and industry self-regulation. The U.S. model emphasizes innovation protection while building governance capacity through existing federal agencies.

China: State-Directed Development

China combines state-led AI advancement with strict content controls. Recent regulations demonstrate a comprehensive approach covering algorithm regulation, data security laws, content moderation requirements, and national technical standards development.

Chinese AI governance prioritizes social stability and party control while pursuing technological leadership. This creates a distinct model that other authoritarian systems study closely, representing an alternative to Western approaches emphasizing individual rights and market mechanisms.

ASEAN: The Collaborative Alternative

The Association of Southeast Asian Nations developed the ASEAN Guide on AI Governance and Ethics emphasizing voluntary adoption, cultural sensitivity, economic development balance, and capacity building support for less developed members.

ASEAN's approach offers lessons for global governance: voluntary frameworks can achieve coordination without threatening sovereignty, cultural diversity can coexist with shared principles, and capacity building enables meaningful participation from less developed regions.

Three Realistic Scenarios

Current institutional developments and international governance patterns suggest three plausible scenarios for global AI governance evolution.

Converging Soft Law
65% Probability (2025-2027)

UN General Assembly passes updated AI resolutions while UNESCO refreshes ethical guidance. Countries gradually align evaluation regimes and reporting requirements while enforcement remains national and regional.

Impact: Predictable baselines for cross-border operations, reduced policy risk for global AI deployment.

Hybrid Regime
25% Probability (2026-2030)

Council of Europe convention gains momentum attracting non-European signatories. UN processes incubate optional protocols covering compute thresholds and safety reporting requirements.

Impact: Patchwork coordination that functions constitutionally despite multiple instruments.

Grand Convention
10% Probability (2028+)

Major powers achieve breakthrough cooperation enabling comprehensive UN framework convention with detailed annexes covering all aspects of AI governance.

Impact: Comprehensive global regime similar to climate agreements but requiring significant geopolitical improvement.

Industry Response: Preparing for Governance

The AI Safety Institute Movement

Major AI developers have established safety teams and frameworks to demonstrate responsible development. OpenAI's Preparedness Framework creates systematic evaluation protocols with specific thresholds for pausing development if safety concerns arise. Google's AI Principles commit to beneficial development with prohibitions on weapons and surveillance applications.

Anthropic's Constitutional AI represents technical approaches to building systems aligned with human values. These industry initiatives create practical governance baselines that could support international frameworks.

The Frontier Model Forum

Leading companies created the Frontier Model Forum for safety research coordination. Members including OpenAI, Google, Microsoft, and Anthropic collaborate on shared safety research, best practice development, government engagement, and Global South support initiatives.

This industry coordination demonstrates how private governance can complement public frameworks—addressing technical challenges that government institutions lack capacity to handle directly.

Economic Analysis: The $13 Trillion Impact

McKinsey estimates AI could contribute up to $13 trillion to global economic output by 2030. How governance affects this potential represents one of the largest economic policy questions of our time.

Current Costs of Fragmentation

Conflicting AI regulations cost the global tech industry an estimated $47 billion annually in compliance and lost opportunities. Regulatory arbitrage sees companies increasingly locate development in favorable jurisdictions, distorting global investment flows.

Overly restrictive governance could slow beneficial AI development while insufficient governance enables harmful applications that damage public trust. This balancing challenge shapes every aspect of international coordination efforts.

Global AI research distribution versus population share:

  • Developed countries: 87% of AI research but 15% of global population (high concentration risk)
  • Developing countries: 13% of AI research but 85% of global population (underrepresentation crisis)

Development Gap Solutions

UN frameworks could address inequalities through technology transfer requirements, international funding for AI education and infrastructure, and fair access provisions ensuring beneficial applications reach underserved populations.

Current concentration patterns threaten to entrench global inequalities unless governance frameworks explicitly address inclusion. The proposed Global AI Fund represents one mechanism for preventing AI from becoming another arena where benefits concentrate among wealthy nations.

Critical Timeline: The Next 18 Months

The period from February 2025 through August 2026 represents a critical window for global AI governance. Several key developments could determine whether comprehensive international coordination emerges or fragmented approaches solidify.

2025 Q1: France's AI Action Summit (February 10-11) could establish key principles for democratic AI governance and catalyze broader international cooperation efforts.

2025 Q2: UN Scientific Panel operations begin, with initial risk assessments and technical standards development commencing under full international participation.

2025 Q3: The first Global Dialogue convenes, with 150+ stakeholder participants establishing ongoing coordination mechanisms and policy alignment processes.

2026-2027: The critical decision point, when either a comprehensive global framework emerges or fragmented regional approaches solidify.

Global South: The Crucial Voice

Current Underrepresentation Reality

Despite representing 85% of the global population, developing countries have limited influence in AI governance discussions. Resource constraints, priorities that favor economic development over safety regulation, and historical patterns of exclusion from technology governance all threaten the legitimacy and effectiveness of global frameworks.

Innovative Inclusion Mechanisms

The UN initiative includes capacity building programs providing technical assistance, technology transfer provisions requiring beneficial AI sharing, differentiated responsibilities recognizing varying national capabilities, and South-South cooperation supporting developing country experience sharing.

Development Paradox: Countries that need AI governance most—those vulnerable to harmful applications—often have the least capacity to participate in creating governance frameworks. Breaking this cycle requires unprecedented international investment in governance capabilities.

Technical Implementation Challenges

Monitoring and Verification Complexities

Developing systems to monitor global AI development and verify compliance represents one of the biggest technical challenges. Computational requirements for monitoring could rival the largest current AI training systems while privacy concerns arise from accessing proprietary algorithms and datasets.

Technical standards differences across AI architectures make standardized monitoring extremely difficult. Rapid capability evolution threatens to make monitoring systems obsolete before full deployment.

Proposed Technical Solutions

Leading organizations are developing potential solutions, including federated learning approaches that allow monitoring without accessing underlying data, automated compliance systems that use AI to assess other AI systems, blockchain-based auditing that provides transparent records, and standardized testing protocols for cross-architecture assessment.
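As one concrete illustration of the auditing idea in miniature, the sketch below hash-chains safety-evaluation reports so that retroactive edits become detectable. Everything here, from the report fields to the function names, is an assumption for exposition, not a description of any deployed system:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_record(chain: list[dict], report: dict) -> dict:
    """Append a safety-evaluation report to a hash-chained audit log."""
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "report": report,
        "prev_hash": prev_hash,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain: list[dict]) -> bool:
    """Re-compute hashes to detect any retroactive edits to the log."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        if body["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["record_hash"]:
            return False
        prev = rec["record_hash"]
    return True

log: list[dict] = []
append_audit_record(log, {"model": "example-model", "eval": "cyber-misuse", "score": 0.12})
print(verify_chain(log))  # True unless a record is altered after the fact
```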

Cultural and Ethical Navigation

Value System Diversity

Global governance must navigate fundamental differences, including preferences for individual versus collective rights, varying levels of comfort in trading privacy for development, diverse religious and philosophical perspectives on AI, and different historical experiences with technological change and governance.

Inclusive Governance Approaches

Effective frameworks require cultural impact assessments, engagement through multi-religious dialogue, special provisions protecting indigenous rights, and recognition of philosophical pluralism that allows different societal approaches within universal human rights frameworks.

Wild Card Scenarios

Several low-probability, high-impact events could dramatically accelerate global AI governance development.

Major AI Safety Incident

A catastrophic AI failure causing significant harm could create urgency for rapid international cooperation, similar to how nuclear accidents spurred nuclear governance treaties. The probability remains low but the impact would be transformative.

AI-Enabled Cyberattack

A major international incident involving AI-powered cyber weapons could highlight international governance needs and create political momentum for binding agreements. Recent increases in AI-enabled attacks make this scenario increasingly plausible.

Capability Breakthrough

Dramatic leaps in AI capabilities, particularly in autonomous weapons or general artificial intelligence areas, could force rapid policy responses. The unpredictable nature of AI breakthrough moments makes governance preparation essential.

Practical Implementation Guidance

For Technology Leaders

Immediate Actions (Next 6 Months): Establish internal AI ethics boards with diverse representation, begin compliance preparation for multiple regulatory scenarios, and engage with international governance discussions through industry associations.

Medium-term Strategies (6-18 Months): Invest in AI safety and explainability research, develop relationships with civil society and government stakeholders, and participate in multi-stakeholder governance initiatives.

Long-term Positioning (18+ Months): Build adaptive compliance systems adjusting to different governance frameworks, establish regional partnerships navigating fragmented landscapes, and contribute technical expertise to international institutions.
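One practical way to build the adaptive compliance capability described above is to keep jurisdiction-specific obligations as configuration rather than hard-coded logic, so a new regime becomes a data update. The jurisdictions and obligation labels in this sketch are illustrative placeholders, not legal advice:

```python
# Hypothetical jurisdiction-to-obligation map; real obligations require legal review.
OBLIGATIONS = {
    "EU":     {"risk_classification", "technical_documentation", "human_oversight"},
    "US":     {"nist_rmf_profile", "safety_testing_report"},
    "GLOBAL": {"ethics_review", "incident_reporting"},
}

def obligations_for(markets: list[str]) -> set[str]:
    """Union of obligations for every market a system will be deployed in."""
    required = set(OBLIGATIONS["GLOBAL"])
    for market in markets:
        required |= OBLIGATIONS.get(market, set())
    return required

print(sorted(obligations_for(["EU", "US"])))
```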

For Policymakers

Immediate Priorities: Build internal AI governance expertise, engage actively with UN processes, and coordinate with regional partners on policy approaches. The window for shaping global frameworks remains open but narrows daily.

Strategic Development: Create adaptive regulatory frameworks evolving with technology, invest in public education and engagement, and develop international cooperation mechanisms for AI incident response.

Long-term Planning: Prepare for multiple governance scenarios through planning exercises, build Global South relationships for inclusive governance, and establish monitoring capabilities for compliance verification.

For Civil Society Organizations

Capacity Building: Develop AI literacy among staff and stakeholders, build coalitions with technology and policy organizations, and connect with international partners working on governance issues.

Advocacy Strategies: Develop policy positions on key governance questions, engage with business and government stakeholders, and experiment with innovative public participation models.

Global Engagement: Participate in UN consultations and processes, support inclusive participation from underrepresented communities, and monitor implementation effectiveness.

Investment Returns: The Governance ROI

Cost of Inaction Analysis

Without effective global governance, several costly scenarios become more likely. AI arms races between countries developing increasingly powerful systems without safety precautions could lead to catastrophic accidents or intentional misuse.

The current patchwork of fragmented regulation costs businesses an estimated $47 billion annually, a figure set to climb as more jurisdictions develop their own AI policies. Major AI failures without effective governance responses could undermine public trust, slowing beneficial applications and reducing economic gains.

Governance Investment Requirements

Effective global governance would require significant investment. Establishing and operating UN AI governance institutions would cost approximately $2.3 billion over the first decade. Developing global monitoring and assessment capabilities requires $8.7 billion in initial investment.

Supporting developing country participation would cost an estimated $12.4 billion over ten years. Creating verification and enforcement mechanisms requires ongoing investment of approximately $3.1 billion annually.

Economic Benefits Projection

Despite these costs, economic benefits could be substantial. Harmonized global standards could save businesses $35 billion annually in regulatory compliance costs. Clear, consistent governance could accelerate beneficial AI development, adding an estimated $2.3 trillion to global GDP by 2035.

Governance Investment vs. Economic Benefits (2025-2035): roughly $26 billion in investment costs against an estimated $2.3 trillion GDP impact, $350 billion in compliance savings, and $1.2 trillion in risk reduction.

Preventing major AI accidents or misuse could avoid costs ranging from billions to trillions depending on scenario severity. Inclusive governance could help developing countries capture larger AI benefit shares, boosting global economic growth through reduced inequality.
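Laying the section's own figures out as explicit arithmetic makes the claimed return on investment easier to inspect. This is a back-of-the-envelope sketch that simply reuses the estimates quoted above; none of the numbers are independently derived:

```python
# Estimates quoted above, in USD billions, over roughly 2025-2035
institutions = 2.3          # UN AI governance institutions, first decade
monitoring   = 8.7          # global monitoring and assessment capabilities
inclusion    = 12.4         # developing-country participation over ten years
annual_verification = 3.1   # ongoing verification and enforcement, per year

setup_costs = institutions + monitoring + inclusion   # ~23.4
compliance_savings_10yr = 35.0 * 10                   # harmonized standards
gdp_uplift_2035 = 2300.0                              # accelerated beneficial AI

print(f"Setup-style costs: ~${setup_costs:.1f}B plus ~${annual_verification}B per year")
print(f"Ten years of compliance savings alone: ~${compliance_savings_10yr:.0f}B")
print(f"Projected GDP uplift relative to setup costs: roughly {gdp_uplift_2035 / setup_costs:.0f}x")
```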

Learning from Historical Precedents

Montreal Protocol Success

The 1987 Montreal Protocol successfully addressed ozone depletion through international cooperation. Success factors include clear scientific consensus creating action urgency, graduated approaches with different standards for developed versus developing countries, built-in flexibility for updating as science evolved, and economic incentives helping developing countries comply.

Cybersecurity Governance Lessons

Comprehensive international cybersecurity governance remains elusive despite decades of effort. Attribution problems make determining responsibility difficult, dual-use technologies serve both defensive and offensive purposes, rapid threat evolution outpaces governance mechanisms, and national security concerns limit information sharing.

These cybersecurity challenges preview obstacles AI governance will face. Learning from cybersecurity failures becomes essential for AI governance success.

Internet Governance Models

Multi-stakeholder internet governance through organizations like ICANN demonstrates how technical standards can be managed internationally. However, criticism about limited democratic accountability highlights challenges AI governance must address.

Constitutional Moment Assessment

We face a constitutional moment for artificial intelligence. Decisions made in the next few years about governance will shape civilization's trajectory for decades. The question isn't whether AI needs governance—it's whether humanity can develop effective governance at the required speed and scale.

The Constitutional Reality:

The UN won't publish a single constitutional document tomorrow. Instead, it's organizing the world's center of gravity through shared principles (UNESCO), a global evidence base and forums (the HLAB-AI proposals), development equity mechanisms (the Global AI Fund), and political agreements (UN General Assembly resolutions, the Global Digital Compact). Meanwhile, the first binding treaty already exists outside the UN system (Council of Europe) and could become the kernel others rally around.

This represents how international constitutionalism typically emerges: incrementally, then suddenly. The UN has taken remarkable steps toward comprehensive global AI governance. The "Governing AI for Humanity" report presents a blueprint addressing many challenges I've identified. Scientific panels and global dialogue mechanisms create institutional foundations supporting binding international agreements over time.

Enormous obstacles remain. Sovereignty concerns, geopolitical tensions, technical complexity, and enforcement challenges threaten comprehensive global governance prospects. The most realistic near-term scenario involves converging soft law and institutional coordination rather than binding universal agreements.

The stakes demand optimism despite these challenges. AI has potential to advance around 80% of Sustainable Development Goals, but only with governance frameworks ensuring shared benefits and managed risks. Success requires unprecedented coordination between governments, businesses, civil society, and technical experts.

Whether the UN creates an effective global AI constitutional framework depends less on institutional capabilities and more on our collective willingness to prioritize long-term human welfare over short-term national and commercial interests. The constitutional moment is here. How we respond defines not just AI's future, but our own.

Key Takeaways for Action

For Leaders: Begin preparing for multiple AI governance scenarios while actively engaging in international coordination efforts. The cost of inaction—$47 billion annually in current regulatory fragmentation—far exceeds governance investment requirements.

For Citizens: Stay informed about AI governance developments and demand meaningful participation in decisions affecting your future. Democratic legitimacy requires public understanding and engagement in technical policy areas.

For the Global Community: The window for comprehensive AI governance remains open but won't stay indefinitely. International cooperation on AI governance represents one of the most significant challenges of our time.

Frequently Asked Questions

How likely is it that the UN will create a binding global AI constitution?
The UN is unlikely to publish a single, binding "AI Constitution" in the near term. Instead, expect a soft-law constitutional framework emerging through converging principles, coordination bodies, and capacity-building mechanisms. Based on current trends, a comprehensive binding framework through the UN by 2030 (the Grand Convention scenario) carries roughly a 10% probability, with converging soft law being the most likely outcome (65% probability).
What would trigger faster progress toward global AI governance?
A major AI-related incident causing significant harm would likely accelerate international cooperation, similar to how nuclear accidents spurred nuclear governance treaties. Other catalysts include breakthroughs in U.S.-China cooperation, dramatic leaps in AI capabilities, or successful demonstration of the UN's proposed institutional mechanisms.
How does the Council of Europe AI Convention fit into global governance?
The Council of Europe's Framework Convention on AI, opened for signature in September 2024, represents the first binding international AI treaty. Crucially, it remains open to non-European countries and could serve as a kernel that others rally around, with the UN providing legitimacy, capacity, and coordination—demonstrating how regional treaties may lead global governance development.
How would global AI governance affect innovation?
Well-designed global governance could accelerate innovation by providing regulatory certainty (reducing the estimated $47 billion annual compliance costs from conflicting regulations), enabling predictable baselines for cross-border operations, and maintaining public trust. The key is balancing safety with space for beneficial development.
What role would developing countries play in global AI governance?
The UN framework explicitly prioritizes Global South inclusion through capacity-building programs, technology transfer provisions, and differentiated responsibilities. Currently, 87% of AI research occurs in developed countries despite developing nations representing 85% of global population—the proposed Global AI Fund aims to address this imbalance.
What can individuals do to influence AI governance?
Citizens can participate in ongoing UN consultations, support organizations working on AI policy, stay informed about developments like UNESCO's ethics recommendation and the Council of Europe convention, and demand accountability from elected representatives on AI governance decisions. The UN's inclusive consultation process demonstrates the value of broad public engagement.

Sources and References

  1. United Nations General Assembly. "Resolution on Artificial Intelligence." March 2024. https://digitallibrary.un.org
  2. UNESCO. "Recommendation on the Ethics of Artificial Intelligence." 2021. https://www.unesco.org
  3. UN High-level Advisory Body on Artificial Intelligence. "Governing AI for Humanity - Final Report." September 2024. https://www.un.org/ai-advisory-body
  4. Council of Europe. "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law." September 2024. https://www.coe.int/en/web/artificial-intelligence
  5. Office of the High Commissioner for Human Rights (OHCHR). "The Right to Privacy in the Digital Age." 2024. https://www.ohchr.org
  6. International Telecommunication Union. "AI for Good Global Summit Reports." 2024. https://aiforgood.itu.int
  7. United Nations. "Pact for the Future and Global Digital Compact." Summit of the Future, September 2024. https://www.un.org/en/summit-of-the-future
  8. OECD. "AI Policy Observatory Reports." 2024. https://oecd.ai
  9. McKinsey Global Institute. "The Age of AI: Artificial Intelligence and the Future of Work." 2024.
  10. European Union. "Artificial Intelligence Act." Official Journal of the European Union, August 2024.
  11. The Guardian. "UN Warns of Growing AI Divide Between Rich and Poor Countries." 2024.
  12. Associated Press. "United Nations Adopts First Global AI Resolution." March 2024.
  13. WIRED. "The UN's Plan to Govern AI Globally." September 2024.
  14. El País. "AI Concentration Risks Highlighted in UN Report." 2024.
  15. U.S. Department of State. "International Cooperation on AI Governance." 2024. https://www.state.gov
