Global AI Regulations: Reviewing the Landscape of AI Laws in the EU, South Korea and the US

April 10, 2025

In 2024, states introduced between 300 and 400 bills that touched AI in some way, and this year 15 AI bills have already been introduced in Congress. This sense of urgency reflects the rapid cross-industry impact of AI and the burden on policymakers to govern it. The resulting mixture of broad and ambiguous regulations has produced a trend of “principles, guidelines and best practices,” with enterprise leaders left trying to predict the future state of AI compliance requirements. In response, we will outline the landscape of enacted AI laws, specifically focusing on the European Union (EU) AI Act, the South Korean AI Framework Act and U.S. regulations, in an effort to clarify the motivations of policymakers and expose pain points that companies can address.

 

 

European Union (EU) AI Act

The EU’s AI Act is the world's first legal framework on AI to: 

  • Create a legal definition for AI
  • Define the role and requirements for AI actors
  • Outline prohibited use cases for AI
  • Outline high-risk use cases for AI
  • Outline requirements for actors of high-risk systems

 

This act is important to businesses operating both inside and outside of the EU because policymakers worldwide use it as a foundation for their own policies. When thinking about building your AI principles, the first takeaway from this act is to identify whether you are an AI provider (developer), a deployer (user of a third-party/vendor tool) or both:

 

  • Providers are entities that develop an AI system, or have one developed on their behalf, and place it on the market
  • Deployers are entities that use AI systems

Under the AI Act, many of the legal obligations are placed on providers.

 

Learn more about finding the right AI approach for your business in this Optiv blog.

 

The AI Act applies a risk-based framework, calibrating the level of governance to the potential severity of negative impacts. The next step, therefore, is recognizing the level of risk associated with your use case. Prohibited use cases include: 

  • Deploying subliminal, manipulative or deceptive techniques
  • Exploiting vulnerabilities related to protected groups
  • Biometric categorization systems that infer sensitive attributes
  • Social scoring
  • Predicting an individual’s risk of committing a crime based solely on profiling
  • Compiling facial recognition databases through untargeted scraping of facial images
  • Inferring emotions in workplaces or educational institutions
  • ‘Real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement

 

An AI system is always considered high-risk if it profiles individuals, i.e., conducts automated processing of personal data to assess aspects of a person’s life. Use cases are also high-risk if they fall under areas such as: 

  • Non-banned biometrics
  • Critical infrastructure
  • Education and vocational training
  • Employment, workers management and access to self-employment
  • Access to and enjoyment of essential public and private services
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

 

High-risk use cases require providers and deployers to implement and report on robust risk management, data governance, documentation practices and transparency. If your use case does not fall under these categories, it is considered low-risk, and your only obligation is transparency to the user.
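
To make this tiering concrete, here is a minimal sketch of how a governance team might encode the Act's risk tiers in an internal intake tool. The category lists are abbreviated for illustration (the Act's annexes are authoritative), and the tag and function names are our own assumptions, not an official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright (Article 5)
    HIGH = "high"               # Annex III use cases; strict obligations
    LIMITED = "limited"         # transparency obligations only

# Abbreviated, illustrative category tags; not the Act's full lists.
PROHIBITED_USES = {
    "social_scoring",
    "subliminal_manipulation",
    "untargeted_face_scraping",
    "emotion_inference_workplace",
}

HIGH_RISK_USES = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "justice_democratic_processes",
}

def classify_use_case(use_case: str) -> RiskTier:
    """Map an internal use-case tag to an EU AI Act risk tier.
    Defaulting to LIMITED is a simplification; real classification
    requires legal review."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    return RiskTier.LIMITED

if __name__ == "__main__":
    for case in ("employment", "spam_filter", "social_scoring"):
        print(case, "->", classify_use_case(case).value)
```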

 

Read a high-level summary of the EU AI Act here.

 

 

South Korea AI Framework Act

South Korea is the second jurisdiction in the world to enact a comprehensive law on AI, known as the AI Framework Act. The law aims to protect citizens' rights and dignity, improve their quality of life and strengthen national competitiveness by instituting fundamental regulations for developing AI. Its mandates include:

  • Implementing and revising a plan every three years to cultivate AI investment, culture, integration into industries, regulation and trust
  • Promotion measures such as AI learning, AI adaptation, support for small, medium and start-up enterprises, professional workforce development, AI infrastructure and more
  • Establishment of the National AI Committee for AI policymaking
  • Appointment of the Ministry of Science and ICT, the AI Policy Center and the AI Safety Institute to support policy implementation
  • Imposition of transparency, labeling, risk assessment, impact assessment and risk management requirements on products and services using high-impact AI

 

The main differences between the EU AI Act and South Korea’s AI Framework Act are:

  • The EU’s act approaches AI from a risk perspective, whereas South Korea’s approaches regulation from a national advancement perspective
  • The EU’s act places compliance obligations based on actor type (provider, deployer, etc.), whereas South Korea’s makes no such distinction
  • Sanctions are much more severe under the EU AI Act

 

South Korea adopted a regulatory approach designed to position itself as a global leader in AI, while maintaining flexibility to adapt and refine regulations as its AI maturity evolves. This law will take effect on January 22, 2026.

 

 

US AI Regulations

Federal

Although no federal laws currently regulate the use of AI, legislation exists that outlines the structure of AI leadership across U.S. government agencies. The National Artificial Intelligence Initiative Act assigns roles and responsibilities as follows:

 

  • The National Artificial Intelligence Initiative Office will oversee and implement the U.S. national AI strategy and will serve as the central hub for federal coordination and collaboration in AI research and policymaking across the government, as well as with private sector, academia and other stakeholders

 

  • The National Science and Technology Council coordinates science and technology policy across the federal research and development enterprise and establishes clear national goals for science and technology policy and investment

 

  • The National Artificial Intelligence Advisory Committee advises the president on matters related to the U.S. national AI initiative

 

  • The National Science Foundation (NSF) conducts studies on the current and future impact of AI on the U.S. workforce and provides grants to establish and support AI research, education and other related fields

 

  • The Government Accountability Office conducts studies of AI computer hardware and computing required to maintain U.S. leadership in AI research and development

 

  • The National Institute of Standards and Technology develops voluntary standards for artificial intelligence systems

 

It is generally agreed that comprehensive federal regulation of AI is necessary. This gap in AI policy has prompted states to take the lead in setting the standard, and policy evolution at the state level will likely influence federal-level AI developments.

 

State

The following outlines AI laws and regulations across six U.S. states.

 

Colorado

Colorado's enacted bill SB24-205 is inspired by the EU AI Act and the California Consumer Privacy Act (CCPA). It pulls from topics like consumer privacy and data governance, reinforcing the idea that AI governance is more than just building a model. The bill also adds stricter requirements for providers and deployers of high-risk AI systems, with more robust reporting and transparency expectations. Some of the additional requirements are:

 

Providers 

  • Provide the deployer with documentation on how the model should and should not be used
  • Provide high-level summaries of the data used
  • Disclose potential limitations and risks that may arise from intended use
  • Document mitigation strategies 

 

Deployers 

  • Conduct an annual review for algorithmic discrimination and an impact assessment
  • Provide the consumer with an opportunity to correct any incorrect personal data
  • Provide the consumer with an opportunity to appeal

 

The burden of responsibility falls more on the deployer than the provider to validate the tool, report to enforcing authorities and vet the third-party vendor. The bill goes into effect on February 1, 2026.
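
To illustrate, a provider-side disclosure record for a high-risk system might be modeled as below. This is a sketch assuming a simple internal schema; the field names and example values paraphrase the bill's disclosure topics and are not statutory language.

```python
from dataclasses import dataclass

@dataclass
class HighRiskModelDisclosure:
    """Illustrative provider-to-deployer disclosure under Colorado SB24-205."""
    model_name: str
    intended_uses: list[str]
    prohibited_uses: list[str]
    training_data_summary: str       # high-level summary of data used
    known_limitations: list[str]     # limitations and risks from intended use
    mitigation_strategies: list[str]

# Hypothetical example for a resume-screening tool
disclosure = HighRiskModelDisclosure(
    model_name="resume-screener-v2",
    intended_uses=["rank applicants for interview scheduling"],
    prohibited_uses=["final hiring decisions without human review"],
    training_data_summary="Anonymized resumes and hiring outcomes, 2019-2023",
    known_limitations=["lower accuracy for non-traditional career paths"],
    mitigation_strategies=["quarterly disparate-impact testing", "human review"],
)
print(disclosure.model_name, "->", disclosure.intended_uses)
```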

 

Utah

Utah's S.B. 149 has a few major highlights, including:

  • Establishes liability for uses of AI that violate consumer protection laws if not properly disclosed
  • Creates a regulatory AI agency, the Office of Artificial Intelligence Policy
  • Enables temporary mitigation of regulatory impacts during AI pilot testing
  • Requires disclosure when an individual interacts with AI

 

This bill has been in effect since May 1, 2024.
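
The disclosure requirement is straightforward to operationalize. Below is a minimal sketch of a chat handler that discloses AI involvement up front; the function names and message wording are illustrative assumptions, not mandated text.

```python
AI_DISCLOSURE = (
    "Disclosure: you are interacting with an AI assistant, "
    "not a human representative."
)

def generate_reply(message: str) -> str:
    """Stand-in for a real model call."""
    return f"Echo: {message}"

def handle_user_message(message: str, is_first_message: bool = False) -> str:
    """Prepend the AI disclosure to the first reply in a session."""
    reply = generate_reply(message)
    if is_first_message:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(handle_user_message("What are your store hours?", is_first_message=True))
```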

 

California

California has enacted two AI bills. The first, AB 2013, requires the provider of any generative AI tool released since 2022 to post documentation on the data used to train the model. The bill explicitly lists the information the documentation must contain, and the burden of responsibility falls on the provider. It goes into effect in 2026.
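
A posted disclosure might look something like the record below. This is a sketch assuming a JSON-style document; the field names and values are our reading of the bill's documentation list, not its exact statutory enumeration.

```python
import json

# Illustrative training-data disclosure; all values are hypothetical.
training_data_disclosure = {
    "model": "example-genai-v1",
    "data_sources": ["licensed news archive", "public web crawl"],
    "collection_period": "2018-01 to 2024-06",
    "approximate_datapoints": 1_200_000_000,
    "includes_personal_information": True,
    "includes_copyrighted_material": True,
    "synthetic_data_used": False,
    "cleaning_and_processing": "deduplication, toxicity filtering",
}

print(json.dumps(training_data_disclosure, indent=2))
```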

 

The second bill, the California AI Transparency Act, requires providers of generative AI systems to: 

  • Deliver tools that allow users to assess whether image, video or audio content has been created or altered by generative AI
  • Conduct data provenance operations
  • Apply watermarks, in metadata or on the content itself, to AI-generated image, video or audio content
  • Ensure their third-party licensees maintain these disclosure requirements 

 

The California AI Transparency Act goes into effect January 1, 2026, and is the nation’s most comprehensive and specific AI watermarking law.
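
For image content, one simple (if easily stripped) approach is writing a provenance disclosure into the file's metadata. The sketch below uses Pillow's PNG text chunks; the key names are illustrative, and production systems would more likely adopt an industry standard such as C2PA Content Credentials.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_ai_generated(in_path: str, out_path: str, provider: str) -> None:
    """Write an AI-provenance disclosure into PNG metadata (illustrative;
    key names are our own, not a regulatory schema)."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("provider", provider)
    image.save(out_path, pnginfo=metadata)  # out_path should end in .png

# Usage (hypothetical files): tag_ai_generated("render.png", "render_tagged.png", "ExampleAI")
```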

 

Maryland

Maryland's Labor and Employment - Use of Facial Recognition Services - Prohibition law prevents an employer from using facial recognition services during an applicant's interview for employment unless the applicant consents by signing a waiver. The bill went into effect October 1, 2020.

 

Illinois

Illinois passed the Artificial Intelligence Video Interview Act, which requires transparency and consent when an entity uses AI in video interviews. The bill went into effect January 1, 2020.

 

New York

Although this law only protects residents of New York City, it is worth noting. The Automated Employment Decision Tool legislation requires that:

  • A bias audit be conducted on an automated employment decision tool prior to its use
  • Candidates or employees who reside in the city be notified about the use of such tools in the assessment or evaluation for hire or promotion
  • Candidates and employees be notified about the job qualifications and characteristics the automated employment decision tool will use

 

The bill went into effect on July 5, 2023.
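
The required bias audit centers on selection rates and impact ratios, where each group's selection rate is divided by the highest group's rate. Below is a minimal sketch of that calculation with fabricated numbers; a real audit under this law must follow the published rules' exact categories and methodology.

```python
def impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compute each group's selection rate divided by the highest group
    selection rate (illustrative of an AEDT bias-audit calculation)."""
    rates = {group: selected / total
             for group, (selected, total) in selections.items()}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Fabricated example: (candidates selected by the tool, total candidates)
example = {"group_a": (40, 100), "group_b": (25, 100), "group_c": (32, 80)}
for group, ratio in impact_ratios(example).items():
    print(f"{group}: impact ratio {ratio:.2f}")
```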

 

 

Outlook for the Future

What should you do to prepare for new and coming AI legislation? Here are some practical steps to ensure your organization is ready for how AI will be addressed and regulated in the future.

 

Provider vs. Deployer: Identify your status as a provider, a deployer - or both! This distinction matters in the EU, Colorado and California, and it will impact the rigor of the regulations you are subject to. For compliance and documentation purposes, the EU has made it clear that the provider has more requirements than the deployer. The U.S. hasn't settled this yet; it is still the wild west, with Colorado and California being the only states with enacted AI laws centered on consumer protection and explicit compliance guidelines. States keep introducing more bills, and complying with different or competing regulatory expectations may only get more complicated.

 

High-Risk AI Systems: The governance of high-risk systems is a common theme across enacted policies. Regulators are not focused on making it difficult to use benign AI (spam filters, spell check, etc.); they are prioritizing the mitigation of negative impacts on important life opportunities. Using AI as a provider or deployer will likely subject you to impact assessments, robust documentation about the tool, transparency and appeal processes for automated decisions. A compliance hack is to use low-risk AI systems until regulations are more clearly defined. This may be challenging for certain industries due to the high-risk nature of their work, but if innovation and efficiency can still be achieved, it might save you a headache.

 

Vet the Vendor: If you are a deployer, the vendor-vetting process will be vital to your reputation and survival as a business. This means meticulously written contracts that define what constitutes an incident with respect to the model, data and human error; on-site visits to validate the vendor; robust pilot periods; continuous monitoring reports and much more. Furthermore, a vendor may add AI features to legacy services without notice. Conducting a regular inventory of all your third-party tools and mandating transparency around AI updates will help prevent regulatory violations.
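
One way to operationalize that inventory is a simple machine-readable register reviewed on a schedule. The sketch below is illustrative; the fields and the 90-day review cadence are our assumptions, not regulatory requirements.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VendorAITool:
    vendor: str
    product: str
    ai_features: list[str]   # as disclosed by the vendor
    risk_tier: str           # per your internal classification
    last_reviewed: date

def review_overdue(tool: VendorAITool, max_age_days: int = 90) -> bool:
    """Flag tools whose AI-feature review is older than the cadence allows."""
    return date.today() - tool.last_reviewed > timedelta(days=max_age_days)

# Hypothetical inventory entry
inventory = [
    VendorAITool("ExampleCRM", "LeadScorer", ["predictive lead scoring"],
                 "high", date(2025, 1, 15)),
]
for tool in inventory:
    if review_overdue(tool):
        print(f"Review overdue: {tool.vendor} / {tool.product}")
```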

 

Expectation of More Comprehensive State Bills and Transparency Bills: A comprehensive AI bill is broad in scope, coverage, rights and capacity. A bill is narrow: 

  • In scope if it applies only to specific data types or data subjects (children, health data, financial data)
  • In coverage if it only includes a single industry or only applies to a handful of companies
  • In rights if it only covers one or a couple of AI/data rights
  • In capacity if it does not account for the growth capabilities and new use cases of AI (generative AI and agentic AI)

 

Some bills, like California's SB 1047, were not signed into law in part because of their limited governing potential. Given the recent advancements in AI policy at the global and state levels, newly enacted bills will likely trend toward being more comprehensive.

 

 

How Optiv AI Security Services Can Help

Navigating the regulatory landscape can be overwhelming, all the more so for companies in industries affected by sector-specific AI policies. Explore Optiv's offerings in AI governance, risk management, readiness, strategy, security and literacy to help support you through the complex regulatory future of AI.

 

In conclusion, many risk frameworks and best practices exist as references for your company to use. However, one of the first factors you should consider in establishing your AI governance processes is a clear understanding of what is required of you with respect to your industry, use case and position as an AI actor. Contact us here to talk with our experts about your AI security needs.

Andrew Carmona
Andrew Carmona is a senior consultant within data governance, privacy and protection (DGPP) with five years of experience as an AI governance professional. Andrew has in-depth experience cooperating with cross-functional teams to lead and influence AI governance strategies – on corporate and state levels – globally. He is known for successfully translating regulations into technical requirements and developing scalable AI policy frameworks for enterprise.