Could California’s SB 1047 Impact AI Security?

September 6, 2024

Senate Bill 1047 (SB 1047), also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is on the verge of being signed into law in California. If enacted, SB 1047 would require AI developers to implement safety measures to prevent egregious AI misuse resulting in “critical harm.” The bill defines such harms as the use of covered AI models to develop weapons of mass destruction, cause mass casualty incidents or inflict other “grave harms to public safety and security.” Covered AI models are those built with colossal computing power that meet one of two requirements: models “trained on greater than 10^26 integer or floating-point operations, the cost of which exceeds...$100,000,000,” or models created by fine-tuning a covered model at a cost exceeding $10,000,000. Society has not yet reached the point where bad actors are leveraging high-powered AI models to cause mass casualties, but legislators are seeking to be proactive rather than reactive as AI innovation rapidly evolves.
To hold frontier AI model developers accountable for potential wide-scale disasters, SB 1047 proposes the following regulations:
  • Shutdowns for Safety: Developers must implement safety protocols, including a "full shutdown" of AI models and training (with consideration of the risk involved if the shutdown could severely impact critical infrastructure)
  • Annual Audits: Starting on January 1, 2026, developers are required to have annual third-party compliance audits
  • Defense of the Whistleblower: Developers and their contractors cannot retaliate against any employee who reports unsafe practices or compliance failures
  • Liability and Penalties: If their AI models are used to cause "critical harm," developers may face penalties of up to 10% of “the cost of the quantity of computing power used to train the covered model” for a first violation and up to 30% for subsequent violations
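As a rough illustration (not legal guidance), the coverage thresholds and penalty caps quoted above reduce to simple arithmetic. The figures come from the bill text cited in this article; the function names and structure are our own hypothetical sketch, not anything defined by SB 1047 itself.

```python
# Hypothetical sketch of SB 1047's covered-model thresholds and penalty caps.
# Dollar and compute figures are quoted from the bill; the code is illustrative only.

COVERED_FLOP_THRESHOLD = 1e26           # training compute: > 10^26 operations
COVERED_COST_THRESHOLD = 100_000_000    # training cost: > $100 million
FINE_TUNE_COST_THRESHOLD = 10_000_000   # fine-tuning a covered model: > $10 million


def is_covered_model(training_flops: float, training_cost: float) -> bool:
    """A model is 'covered' if trained on >10^26 operations at a cost over $100M."""
    return (training_flops > COVERED_FLOP_THRESHOLD
            and training_cost > COVERED_COST_THRESHOLD)


def is_covered_fine_tune(fine_tune_cost: float) -> bool:
    """Fine-tuning an already-covered model at a cost over $10M is also covered."""
    return fine_tune_cost > FINE_TUNE_COST_THRESHOLD


def max_penalty(training_compute_cost: float, prior_violations: int) -> float:
    """Penalty cap: 10% of training-compute cost for a first violation, 30% after."""
    rate = 0.10 if prior_violations == 0 else 0.30
    return training_compute_cost * rate
```

For example, a first violation involving a model whose training compute cost $150 million would carry a penalty cap of $15 million under this reading, rising to $45 million for subsequent violations.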
The Race to Secure AI

To better understand the significance of SB 1047 for cybersecurity and for organizations outside of California, it is important to take a step back and review how technological innovations and safety regulations have been intertwined for decades. Whether it’s regulation to prevent the spread of biases in machine learning models or to protect against data privacy violations associated with the Big Data revolution, we have often seen technology consumers and producers alike advocating for legislation to hold developers and corporations accountable for ensuring fairness, transparency and accountability. But, as usual, technology advances faster than the government can regulate its production and use (cue the idiomatic “Wild West” headlines for crypto, Web3 and now AI).
In the rush to build and deploy new AI products and features, developers may not always think about the new risks that they are introducing into the environment if proper vetting, monitoring and training are not in place. From a cybersecurity perspective, this mindset can lead to attack surface expansion and the exploitation of models to spread disinformation, disrupt critical infrastructure and launch cyberattacks faster than ever. Regulations like SB 1047 will require developers to bake security into their models from the start.
As security-by-design approaches become a part of discussions surrounding AI regulations, Optiv clients find value in addressing the concept of secure AI. Optiv defines secure AI as a set of activities and solutions that work together across the AI lifecycle to unite technology, people and process against risk. It involves embedding safety measures throughout the AI development process and ensuring transparency, accountability and ethics at every stage. This disciplined approach helps organizations proactively keep up with rapidly changing AI technology solutions, while also meeting compliance requirements and protecting users from ethical violations or “critical harm.” Secure AI relies on foundational concepts of security by design, governance, data protection, risk management and threat modeling to achieve the goal of end-to-end security.
To help organizations adopt a secure AI approach, a growing market of AI security services supports developers and security teams throughout AI policy development, governance and risk management. When exploring such services, look for offerings whose governance advising incorporates both current and future AI regulations, supporting compliance efforts while encouraging responsible innovation. Such efforts can advance an AI program’s maturation while ensuring security and governance are incorporated into AI product and policy roadmaps early. Risk management is also a key component of AI security services. Mass-casualty scenarios may seem like the obvious risks to prioritize, but mature risk management practices, including threat models, risk assessments and risk registers, are needed to effectively address both present-day and future AI use cases.
With an emphasis on security at the start, perhaps the battle between innovation and regulation does not need to be a battle at all. Both technical developers and lawmakers can focus on prioritizing ethics and safety.
What’s Next for AI?

In addition to SB 1047, Optiv governance experts reviewed several proposed and enacted AI regulations and frameworks, including the EU AI Act, the U.S. Executive Order on AI, the NIST AI Risk Management Framework and the U.S. Department of Health and Human Services (HHS) AI Rule. Common factors in these AI regulations include a prioritization of:
  • Ethical principles
  • Accountability and liability
  • Data protection and governance
  • Quality and safety
  • Transparency and clear explanations
  • Security and accuracy
  • Audits and certifications of AI systems
These central points will continue to be at the forefront of conversations surrounding AI and technology legislation. With a focus on safety protocols, annual audits and penalties for noncompliance, legislative efforts like SB 1047 are compelling businesses to think strategically about secure AI investments. California Governor Gavin Newsom could change the future of AI development with his final decision to sign or veto SB 1047. As the home of Silicon Valley, California may set a precedent for other U.S. states looking to pass AI regulations. The debate intensifies over whether SB 1047 is a boon for humanity or a Luddite dream, but ultimately a more nuanced approach is needed to carefully consider the impacts of regulating AI today for a more secure tomorrow.

Sara Faradji
Cybersecurity Technical Content Manager | Optiv
Sara Faradji is a Cybersecurity Technical Content Manager at Optiv, where she partners with leading cyber experts to produce cutting-edge, purpose-driven thought leadership. With 10 years of teaching and instructional design experience, she strives to place people at the center of cybersecurity communications. Her objective is to help emerging and established technical leaders to build their brand while aligning their technical writing with business strategies. As someone who shares the drive of security professionals to never stop learning, she earned her PhD in English from the University of Maryland, as well as her M.A. in Cultural Studies and B.A. in Global Studies from Carnegie Mellon University.
Brian Golumbeck
Director, Strategy and Risk Management | Optiv
Brian Golumbeck is a Practice Director within Optiv’s Risk Management and Transformation Advisory Services Practice. He has a history of leading challenging projects and building dynamic, high-impact teams. Mr. Golumbeck’s 25+ years working in Information Technology include 20+ years as an information security professional. Brian is a Certified Information Systems Security Professional (CISSP), Certified in Risk and Information Systems Controls (CRISC), Certified Information Security Manager (CISM), Certificate of Cloud Security Knowledge (CCSK), EXIN/itSMF ITIL Foundations, and Lean Six Sigma – Greenbelt.
Jennifer Mahoney
Manager, Data Governance, Privacy and Protection | Optiv
Jennifer Mahoney has 18 years’ regulatory compliance experience in both consulting and enterprise environments. Her experience ranges from small businesses to Fortune 50 corporations, particularly in the technology, state and local, manufacturing and pharmaceutical verticals. Areas of expertise include the General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA) / California Consumer Privacy Act (CCPA), the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), the Personal Information Protection and Electronic Documents Act (PIPEDA), and many others.
Jon Miller
Sr. Product Marketing Manager | Optiv
Jon Miller is an experienced product marketing manager with a strong ability to deliver successful cybersecurity-focused marketing campaigns. Jon is well versed in the complex landscape of cybersecurity threats, solutions and digital transformation services. He focuses on go-to-market strategy and product launches that help Optiv clients improve their security posture and build resilience. Collaborating closely with Optiv and client leaders, Jon actively listens to client challenges and ensures that Optiv services authentically incorporate the client voice and needs.

Prior to Optiv, Jon spent his early career as a Product Manager and Marketer in the healthcare IT industry, specializing in healthcare data and analytics products. Over the past 10 years in healthcare IT, he has launched and expanded analytics product lines to strengthen providers’ abilities to improve patient care and health outcomes both in U.S. and international markets.
Maddy Maletz
Product Marketing Manager | Optiv
Maddy Maletz is a Product Marketing Manager at Optiv, focused on crafting clear, impactful messaging around the complexities of cybersecurity. With over three years of experience in the industry, Maddy is passionate about addressing client challenges and demonstrating how Optiv's solutions can help keep them secure in an ever-changing cybersecurity landscape. She is dedicated to driving successful campaigns that showcase Optiv’s expertise and commitment to client security.