Since the inception of GenAI and the subsequent acceleration of AI tools and services, there has been an ongoing debate regarding how — and how much — to regulate this powerful, transformative technology.
One of the most important pieces of legislation surrounding the safe use of AI technologies, in the US at least, is California’s Senate Bill 1047 (SB 1047), which has passed the Assembly Appropriations Committee and advanced to the Assembly floor for a final vote.
What is SB 1047?
SB 1047 is legislation proposed by Democratic State Senator Scott Wiener. The bill’s ultimate aim is to enable the safe development of major AI systems by setting out a series of transparent safety standards.
The bill is directed at developers of large-scale AI models that cost over $100 million to train and exceed, by orders of magnitude, the capabilities of existing AI systems. It will require basic safeguards, including pre-deployment safety testing, red-teaming, cybersecurity measures, guardrails, and post-deployment monitoring.
The bill also promises to protect whistleblowers and to empower California’s Attorney General to take action against the developer of any AI model that causes “severe harm” to Californians, or whose negligence poses an “imminent” threat to public safety. Finally, a new public cloud computing cluster, CalCompute, will help startups, researchers, and community groups develop large-scale AI systems that serve the needs of California’s communities.
Anthropic’s Amendments
From inception to the Assembly floor, it hasn’t been a straight road for SB 1047. It’s faced opposition from tech firms, investors, AI researchers, and US congressmen. However, while opposition from private entities often stalls the progress of fledgling laws, Senator Wiener hasn’t just listened to those voices; he’s amended the bill in response.
One of the companies leading the charge with requests for amendments was Anthropic; some of the organization’s suggestions have been accepted.
The amendments Anthropic suggested centered on liability and, in particular, the language used to describe how, why, and when an organization could face legal action. The bill’s wording has changed from requiring developers to show “reasonable assurance” that their AI models don’t pose a catastrophic risk to requiring them to exercise “reasonable care.”
Criminal penalties have been removed, with penalties for perjury replaced by civil penalties. Significantly, AI developers can’t be sued for negligence before a disaster occurs.
“The Assembly will vote on a strong AI safety measure that has been revised in response to feedback from AI leaders in industry, academia, and the public sector,” said Senator Wiener.
“We can advance both innovation and safety; the two are not mutually exclusive. While the amendments do not reflect 100% of the changes requested by Anthropic — a world leader on both innovation and safety — we accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry.”
Closing Thoughts
Perhaps the most significant regulation to disrupt the tech industry in recent years was the EU’s GDPR; globally, companies were forced to reinvent processes and revisit standards to comply with the law’s stringent data privacy requirements. With the largest tech firms depending on users’ data to drive revenue, this presented a particularly complex challenge.
However, the challenge was met, and although there are still isolated instances where data protection measures are neglected, they are far from widespread. Furthermore, the public is now aware of its personal data rights, meaning customers expect compliance. The same will undoubtedly be true for AI, with consumers equally aware of the frameworks governing the safety of AI tools and systems.
Effective AI governance will require consensus among most, if not all, stakeholders. While some may be hesitant about AI companies playing a role in setting the rules, it’s crucial for all parties to establish them from day one. AI is too influential for governments to take a dictatorial approach to regulation; there needs to be a substantial shift toward collaborative development.
Ultimately, removing liability for tech companies in the event of an AI disaster is out of the question; let’s not forget the upheaval Section 230 has caused and continues to cause. However, as team Wiener says, SB 1047 balances AI innovation with safety, and that’s the key.