Could California’s SB 1047 Impact AI Security?

September 6, 2024

Senate Bill SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is on the verge of being signed into law in California. If enacted, SB 1047 would require AI developers to implement safety measures to prevent egregious AI misuse resulting in “critical harm.” Such harms are defined in the bill as the use of covered AI models to develop weapons of mass destruction and cause mass casualty incidents or other “grave harms to public safety and security.” Covered AI models are those trained with colossal computing power that meets one of two thresholds: models “trained on greater than 10^26 integer or floating-point operations, the cost of which exceeds...$100,000,000,” or models created by fine-tuning a covered model at a cost exceeding $10,000,000 (a simple sketch of these thresholds appears at the end of this section).

Society has not yet reached the point where bad actors are leveraging high-powered AI models to cause mass casualties, but legislators are seeking to be proactive rather than reactive as AI innovation rapidly evolves. To hold frontier AI model developers accountable for potential widescale disasters, SB 1047 proposes the following regulations:

- Shutdowns for Safety: Developers must implement safety protocols, including a “full shutdown” capability for AI models and training (with consideration of the risk involved if the shutdown could severely impact critical infrastructure)
- Annual Audits: Starting on January 1, 2026, developers are required to undergo annual third-party compliance audits
- Defense of the Whistleblower: Developers and their contractors cannot retaliate against any employee who reports unsafe practices or compliance failures
- Liability and Penalties: If their AI models are used to cause “critical harm,” developers may face penalties of up to 10% of “the cost of the quantity of computing power used to train the covered model” for a first violation and up to 30% for subsequent violations

The Race to Secure AI

To better understand the significance of SB 1047 for cybersecurity and for organizations outside of California, it is important to step back and review how technological innovations and safety regulations have been intertwined for decades. Whether it is regulation to prevent the spread of biases in machine learning models or to protect against data privacy violations associated with the Big Data revolution, we have often seen technology consumers and producers alike advocating for legislation that holds developers and corporations accountable for ensuring fairness, transparency and accountability. But as usual, technology advances faster than governments can regulate its production and use (cue the idiomatic “Wild West” headlines for crypto, Web3 and now AI). In the rush to build and deploy new AI products and features, developers may not always consider the new risks they are introducing if proper vetting, monitoring and training are not in place. From a cybersecurity perspective, this mindset can lead to attack surface expansion and to the exploitation of models to spread disinformation, disrupt critical infrastructure and launch cyberattacks faster than ever. Regulations like SB 1047 will require developers to bake security into their models from the start. As security-by-design approaches become part of the discussion surrounding AI regulation, Optiv clients find value in addressing the concept of secure AI.
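To make those numbers concrete, here is a minimal sketch that encodes the coverage thresholds and penalty caps described above. All names, the Python framing and the exact reading of the thresholds are our own illustration; the bill defines legal criteria, not an API, and any actual coverage determination would be a legal question.

```python
# Illustrative only: SB 1047's "covered model" thresholds and civil
# penalty caps, as described in this post. Names and structure are
# hypothetical, not anything defined by the bill.

TRAINING_FLOP_THRESHOLD = 1e26          # > 10^26 int/float operations
TRAINING_COST_THRESHOLD = 100_000_000   # > $100M compute cost to train
FINE_TUNE_COST_THRESHOLD = 10_000_000   # > $10M to fine-tune a covered model


def is_covered_model(training_flops: float,
                     training_cost_usd: float,
                     fine_tune_cost_usd: float = 0.0,
                     base_model_covered: bool = False) -> bool:
    """Rough check of whether a model would meet the 'covered model' bar."""
    trained_over = (training_flops > TRAINING_FLOP_THRESHOLD
                    and training_cost_usd > TRAINING_COST_THRESHOLD)
    fine_tuned_over = (base_model_covered
                       and fine_tune_cost_usd > FINE_TUNE_COST_THRESHOLD)
    return trained_over or fine_tuned_over


def max_penalty_usd(training_compute_cost_usd: float,
                    prior_violations: int) -> float:
    """Penalty cap: 10% of training-compute cost for a first violation,
    30% for subsequent violations."""
    rate = 0.10 if prior_violations == 0 else 0.30
    return rate * training_compute_cost_usd


if __name__ == "__main__":
    # A hypothetical frontier training run: 3e26 FLOPs at a $150M compute cost.
    print(is_covered_model(3e26, 150_000_000))  # True
    print(max_penalty_usd(150_000_000, 0))      # 15000000.0 (first violation)
    print(max_penalty_usd(150_000_000, 1))      # 45000000.0 (subsequent)
```

Even this toy version shows why the bill’s scope drew debate: the triggers are properties of the training run (compute and cost), not of how or where the model is ultimately deployed.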
Optiv defines secure AI as a set of activities and solutions that work together across the AI lifecycle to unite technology, people and process against any risks. It involves embedding safety measures throughout the AI development process and ensuring transparency, accountability and ethics at every stage. This disciplined approach helps organizations proactively keep up with rapidly changing AI technology, while also meeting compliance requirements and protecting users from ethical violations or “critical harm.” Secure AI relies on the foundational concepts of security by design, governance, data protection, risk management and threat modeling to achieve the goal of end-to-end security.

To help organizations adopt a secure AI approach, a growing market of AI security services supports developers and security teams through AI policy development, governance and risk management. When exploring such services, look for offerings whose AI governance advising incorporates both current and emerging regulations, supporting compliance efforts while encouraging responsible innovation. Such efforts can advance an AI program’s maturity while ensuring security and governance are incorporated into AI product and policy roadmaps early.

Risk management is also a key component of AI security services. Mass-casualty scenarios may seem like the obvious risks to prioritize, but mature risk management practices, including threat models, risk assessments and risk registers, are needed to effectively address present-day and future AI use cases. With an emphasis on security from the start, perhaps the battle between innovation and regulation does not need to be a battle at all. Both technical developers and lawmakers can focus on prioritizing ethics and safety.

What’s Next for AI?

In addition to SB 1047, Optiv governance experts reviewed several proposed and passed AI regulations and frameworks, including the EU AI Act, the U.S. Executive Order on AI, the NIST AI Risk Management Framework and the U.S. Department of Health and Human Services (HHS) AI rule. Common factors in these AI regulations include a prioritization of:

- Ethical principles
- Accountability and liability
- Data protection and governance
- Quality and safety
- Transparency and clear explanations
- Security and accuracy
- Audits and certifications of AI systems

These central points will continue to be at the forefront of conversations surrounding AI and technology legislation. With their focus on safety protocols, annual audits and penalties for noncompliance, legislative efforts like SB 1047 are compelling businesses to think strategically about secure AI investments.

California Governor Gavin Newsom could change the future of AI development with his decision to sign or veto SB 1047. As the home of Silicon Valley’s technical innovation, California may set a precedent for other U.S. states looking to pass AI regulations. The debate intensifies over whether SB 1047 is a boon for humanity or a Luddite dream, but ultimately a more nuanced approach is needed to carefully consider the impacts of regulating AI today for a more secure tomorrow.

Update: On September 29, 2024, it was announced that California Governor Gavin Newsom had vetoed SB 1047.
In his veto message, Governor Newsom indicated some concerns about limiting the focus of the proposed legislation to frontier models: "By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology."

Newsom also expressed concern that the proposed controls in the bill were not flexible enough, noting: "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it."

Despite the veto, Newsom expressed a continued commitment to exploring AI safety initiatives. Optiv will continue tracking important decisions surrounding AI legislation to better guide our clients on governance and compliance best practices.

By: Sara Faradji
Cybersecurity Technical Content Manager | Optiv
Sara Faradji is a Cybersecurity Technical Content Manager at Optiv, where she partners with leading cyber experts to produce cutting-edge, purpose-driven thought leadership. With 10 years of teaching and instructional design experience, she strives to place people at the center of cybersecurity communications. Her objective is to help emerging and established technical leaders build their brand while aligning their technical writing with business strategies. As someone who shares the drive of security professionals to never stop learning, she earned her PhD in English from the University of Maryland, as well as her M.A. in Cultural Studies and B.A. in Global Studies from Carnegie Mellon University.

By: Brian Golumbeck
Director, Strategy and Risk Management | Optiv
Brian Golumbeck is a Practice Director within Optiv’s Risk Management and Transformation Advisory Services practice. He has a history of leading challenging projects and building dynamic, high-impact teams. Mr. Golumbeck’s 25+ years in information technology include 20+ years as an information security professional. Brian is a Certified Information Systems Security Professional (CISSP), Certified in Risk and Information Systems Controls (CRISC) and Certified Information Security Manager (CISM), and holds the Certificate of Cloud Security Knowledge (CCSK), EXIN/itSMF ITIL Foundations and Lean Six Sigma Green Belt credentials.

By: Jennifer Mahoney
Manager, Data Governance, Privacy and Protection | Optiv
Jennifer Mahoney has 18 years of regulatory compliance experience in both consulting and enterprise environments. Her experience ranges from small businesses to Fortune 50 corporations, particularly in the technology, state and local, manufacturing and pharmaceutical verticals. Areas of expertise include the General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA) / California Consumer Privacy Act (CCPA), the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), the Personal Information Protection and Electronic Documents Act (PIPEDA) and many others.

By: Jon Miller
Sr. Product Marketing Manager | Optiv
Jon Miller is an experienced product marketing manager with a strong ability to deliver successful cybersecurity-focused marketing campaigns. Jon is well versed in the complex landscape of cybersecurity threats, solutions and digital transformation services.
He focuses on go-to-market strategy and product launches that help Optiv clients improve their security posture and build resilience. Collaborating closely with Optiv and client leaders, Jon actively listens to client challenges and ensures that Optiv services authentically incorporate the client voice and needs. Prior to Optiv, Jon spent his early career as a product manager and marketer in the healthcare IT industry, specializing in healthcare data and analytics products. Over the past 10 years in healthcare IT, he has launched and expanded analytics product lines to strengthen providers’ abilities to improve patient care and health outcomes in both U.S. and international markets.

By: Maddy Maletz
Product Marketing Manager | Optiv
Maddy Maletz is a Product Marketing Manager at Optiv, focused on crafting clear, impactful messaging around the complexities of cybersecurity. With over three years of experience in the industry, Maddy is passionate about addressing client challenges and demonstrating how Optiv's solutions can help keep them secure in an ever-changing cybersecurity landscape. She is dedicated to driving successful campaigns that showcase Optiv’s expertise and commitment to client security.