Navigating AI Regulations Around the World

As AI continues to grow, governments around the world are stepping up to establish regulatory frameworks that ensure its ethical and responsible use. From safeguarding human rights to promoting transparency and accountability, these regulations aim to strike a balance between fostering innovation and mitigating risks. 

In this blog, we’ll explore key AI regulations from different regions, highlighting their objectives, approaches, and the impact they aim to achieve. We’ll keep this resource updated as new regulations are introduced, so bookmark this page as your go-to guide. 

United States

The United States is taking a combination of state-level and federal approaches to AI regulation, with laws focusing on transparency, accountability, and consumer protection. Here are some key regulations shaping AI governance in the U.S.: 

California’s Generative AI Training Data Transparency Act (AB 2013)

California’s AB 2013 is the first U.S. law to mandate transparency in generative AI training data. It aims to enhance accountability, protect personal information, and provide users with greater insight into how AI outputs are generated. Signed in September 2024 and scheduled to take effect January 1, 2026, this law requires developers to publicly disclose detailed information about the datasets used to train their AI systems.  

Colorado Senate Bill 24-205: Consumer Protections for AI

Colorado’s Senate Bill 24-205 aims to protect residents from algorithmic discrimination in high-risk AI systems. Passed in May 2024 and set to take effect on February 1, 2026, the law prioritizes transparency, risk management, and fairness in critical areas such as employment, housing, and finance. By assigning clear responsibilities to developers and deployers, it ensures AI systems are used ethically and without bias in decisions that significantly impact individuals’ lives. 

Texas Responsible AI Governance Act (TRAIGA)

Signed in June 2025 and scheduled to take effect on January 1, 2026, TRAIGA establishes a regulatory framework for AI systems in Texas. The legislation focuses on transparency, risk management, and consumer protection, particularly for high-risk AI systems. It also includes a regulatory sandbox to encourage innovation while ensuring responsible AI development and deployment.

US Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312)

This proposed U.S. legislation introduces compliance requirements for generative AI, high-impact, and critical-impact systems. It aims to enhance transparency, accountability, and risk management while fostering innovation. Although not enacted, the bill reflects a growing push for structured AI governance, particularly in applications that affect public trust, safety, and individual rights. 

Europe

Europe is setting the standard for AI governance worldwide with its risk-based approach. Below are some of the region’s most influential regulations:  

The EU AI Act 

The EU AI Act is the world's first comprehensive legal framework for regulating artificial intelligence. It introduces a risk-based classification system, categorizing AI applications into four levels: unacceptable, high, limited, and minimal risk. Each category is subject to specific rules and restrictions, requiring companies operating in the EU to implement compliance measures to mitigate legal and operational risks. Initially proposed in April 2021, the EU AI Act received final approval in 2024, with certain provisions becoming enforceable in 2025. 

Council of Europe Framework Convention on AI, Human Rights, Democracy, and the Rule of Law

This Framework Convention establishes a comprehensive legal framework to ensure AI activities respect fundamental human rights, democratic principles, and the rule of law. It addresses risks such as discrimination, privacy breaches, and threats to democratic processes. The Convention applies to public authorities and private entities acting on their behalf, mandating context-specific measures to manage AI-related risks effectively. Opened for signature in September 2024, its adoption is ongoing, with implementation timelines varying across member states as they finalize national commitments.

Other regions 

Across the globe, countries are introducing AI regulations that reflect their priorities and challenges. Here’s what’s happening in other regions around the world: 

Brazil’s AI Bill (PL 2338/2023) 

Brazil’s AI Bill aims to create a robust framework for the ethical and responsible development, deployment, and use of AI systems. It adopts a risk-based approach, imposing stricter regulations on high-risk systems that could impact public safety or fundamental rights. The bill emphasizes transparency, fairness, and alignment with Brazil’s General Data Protection Law (LGPD) to safeguard privacy. It also proposes the establishment of a regulatory authority to oversee compliance and promote innovation. Approved by the Senate in December 2024, the bill remains under legislative review, with its core measures projected to take effect in 2026. 

South Korea’s Basic Act on AI Advancement and Trust 

South Korea’s Basic Act on AI establishes a regulatory framework to promote responsible AI development while maintaining public trust. The Basic Act, enacted in January 2025 and scheduled to take effect on January 22, 2026, emphasizes safety, transparency, and fairness, particularly for high-impact and generative AI systems. It also introduces initiatives such as a national AI control tower, an AI safety institute, and support for R&D, data infrastructure, and talent development. 

Japan’s Act on Promotion of Research and Development and Utilization of Artificial Intelligence-Related Technologies (The “AI Bill”) 

Japan’s AI Bill is the country’s first law explicitly regulating artificial intelligence. Enacted in May 2025 and taking effect almost immediately, the law outlines core principles for AI research, development, and use, while establishing the government’s Fundamental Plan for AI. It also creates an Artificial Intelligence Strategy Center to oversee national policies and promote ethical AI practices. 

Building a future of ethical AI 

As artificial intelligence continues to reshape industries and societies, the global regulatory landscape is evolving to ensure its ethical and responsible use. By understanding and navigating these frameworks, businesses and stakeholders can not only mitigate risks but also unlock AI’s full potential to drive innovation.