
Controversial Artificial Intelligence Safety Bill Passes California Legislature

Tech companies developing generative AI say the measure could drive AI companies from the state and hinder innovation.


A controversial bill requiring developers of advanced artificial intelligence (AI) models to adopt safety measures has passed the state Legislature and could soon become California law.

The bill, SB 1047, would require developers of future advanced AI models to create guardrails to prevent the technology from being misused to conduct cyberattacks on critical infrastructure such as power plants.

Developers would need to submit their safety plans to the attorney general, who could hold them liable if AI models they directly control were to cause harm or pose an imminent threat to public safety.

The bill, introduced by Sen. Scott Wiener (D-San Francisco), passed the state Assembly on Wednesday, with 41 votes in favor and nine opposed. On Thursday, the measure was approved by the state Senate in a concurrence vote. It now heads to Gov. Gavin Newsom’s office, though it’s unclear whether Newsom will sign or veto the bill.

“Innovation and safety can go hand in hand — and California is leading the way,” Wiener said in a statement.

To give you a clearer picture of who introduced the bill, consider the record of its author, Scott Wiener.

Given the nature of what Wiener supports, it is little wonder that tech companies warn the measure could drive AI companies from the state and hinder innovation.

Some Democrats in U.S. Congress, including Representative Nancy Pelosi, also opposed it. Proponents include Tesla CEO Elon Musk, who also runs an AI firm called xAI and has said he supports the bill.

The measure mandates safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power. Developers of AI software operating in the state also need to outline methods for turning off the AI models if they go awry, effectively a kill switch.
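To make the coverage thresholds and the "kill switch" idea more concrete, here is a minimal sketch in Python. The $100 million figure comes from the article; the specific compute number and every name and function below are illustrative assumptions, not language from the bill.

```python
# Hypothetical sketch of SB 1047-style coverage checks and a "kill switch".
# Thresholds and names are assumptions for illustration, not statutory text.
from dataclasses import dataclass

COST_THRESHOLD_USD = 100_000_000   # "$100 million to develop" (from the article)
COMPUTE_THRESHOLD_FLOP = 1e26      # placeholder for the bill's compute trigger


@dataclass
class ModelProfile:
    name: str
    training_cost_usd: float
    training_compute_flop: float
    serving: bool = True


def is_covered(model: ModelProfile) -> bool:
    """True if the model trips either coverage trigger described above."""
    return (model.training_cost_usd > COST_THRESHOLD_USD
            or model.training_compute_flop > COMPUTE_THRESHOLD_FLOP)


def emergency_shutdown(model: ModelProfile) -> None:
    """Illustrative 'kill switch': stop serving the model entirely."""
    model.serving = False
    print(f"{model.name}: serving halted by emergency shutdown")


if __name__ == "__main__":
    frontier = ModelProfile("hypothetical-frontier-model",
                            training_cost_usd=2.5e8,
                            training_compute_flop=3e26)
    if is_covered(frontier):
        # A covered developer would also need a written safety plan on file
        # with the attorney general, per the article's description.
        emergency_shutdown(frontier)
```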

The bill also gives the state attorney general the power to sue if developers are not compliant, particularly in the event of an ongoing threat, such as the AI taking over government systems like the power grid.

In addition, the bill requires developers to hire third-party auditors to assess their safety practices and provides additional protections for whistleblowers speaking out against AI abuses.

A whole host of AI-related regulations are on the docket, including measures targeting deepfakes and measures focused on performers whose images are used in AI-generated video productions. All of them now sit on Gov. Gavin Newsom’s desk.

The Democratic governor has until Sept. 30 to sign the proposals, veto them, or let them become law without his signature. Newsom signaled in July that he would sign a proposal to crack down on election deepfakes but has not weighed in on the other legislation.

He warned earlier this summer that overregulation could hurt the homegrown industry. In recent years, he often has cited the state’s budget troubles when rejecting legislation that he would otherwise support.

Here is a look at some of the AI bills lawmakers approved this year.

Citing concerns over how AI tools are increasingly being used to trick voters and generate deepfake pornography of minors, California lawmakers approved several bills this week to crack down on those practices.

Lawmakers approved legislation to ban election-related deepfakes and to require large social media platforms to remove the deceptive material starting 120 days before Election Day and continuing for 60 days after. Campaigns also would be required to publicly disclose if they’re running ads with materials altered by AI.
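As a rough illustration of how that takedown window plays out on the calendar, here is a short sketch assuming a hypothetical Nov. 5, 2024 Election Day; the date and variable names are for illustration only, not drawn from the bill.

```python
# Illustrative only: compute the removal window described above
# (120 days before Election Day through 60 days after) for an
# assumed Election Day of Nov. 5, 2024.
from datetime import date, timedelta

election_day = date(2024, 11, 5)                   # hypothetical date
window_start = election_day - timedelta(days=120)  # 2024-07-08
window_end = election_day + timedelta(days=60)     # 2025-01-04

print(f"Removal obligation runs {window_start} through {window_end}")
```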


Comments


 
rhhardin | September 3, 2024 at 7:06 pm

It’s about time the law did something about eigenvectors.


 
henrybowman | September 3, 2024 at 7:06 pm

So to prevent catastrophic damage to the state and its people, AI entrepreneurs must first seek nihil obstat from experienced politicians, every one of whom has been systematically destroying the state and impoverishing its people for decades.
Don’t bother to let me know how that works out.


 
scooterjay | September 3, 2024 at 7:53 pm

Wiener is still around? I guess he got penal and penile mixed up.


 
UnCivilServant | September 3, 2024 at 8:01 pm

Let me guess: “threat to public safety” includes providing information that doesn’t toe the party line.

Scott Wiener is a trans groomer socialist.

It seems to me that under the current legal doctrine that software is a form of speech, this bill has a massive first amendment problem.


       
      Danny in reply to thalesofmiletus. | September 4, 2024 at 4:43 am

      Blackmail is not protected speech.

      What is under discussion is

      “The bill, SB 1047, would require developers of future advanced AI models to create guardrails to prevent the technology from being misused to conduct cyberattacks on critical infrastructure such as power plants.”

      Do you intend to attack a power plant?


         
        The Gentle Grizzly in reply to Danny. | September 4, 2024 at 5:32 am

        And, the bill will have all the power of firearms restrictions. Bad guys disregard the law.


         
        Milhouse in reply to Danny. | September 4, 2024 at 6:55 am

        The AI models are not blackmail. They’re pure speech, and as such are protected, so California can’t regulate or license them. The fact that it is conceivable that some hypothetical person may or may not one day use them for an illegal purpose doesn’t change that.

        I mean that was exactly the government’s argument about cryptography. Someone may use your speech for an illegal purpose, so we can prevent you from speaking it now. And the courts correctly said, no you can’t.


 
ThePrimordialOrderedPair | September 3, 2024 at 11:44 pm

Tech companies developing generative AI say the measure could drive AI companies from the state

That would actually be a very good thing.


     
    wagnert in atlanta in reply to ThePrimordialOrderedPair. | September 4, 2024 at 10:01 am

    California seems intent on driving technology out of the state, to the benefit of Texas, Florida, and hopefully Georgia. When they have accomplished this and the state is vineyards and spinach fields from the ocean to the mountains, will they step back and admire their handiwork? No, they will bitterly lament the injustice of it all and raise gasoline taxes.

What exactly is stifling about generative AI offering no help with hacking?

Do you really want anyone to be capable of hacking anybody?

To back up my point of view:

“Some Democrats in U.S. Congress, including Representative Nancy Pelosi, also opposed it. Proponents include Tesla CEO Elon Musk, who also runs an AI firm called xAI and has said he supports the bill.”

I trust Elon Musk’s judgment that generative AI aiding hackers is a problem a million times more than I trust Pelosi’s.


 
iconotastic | September 4, 2024 at 9:57 am

So the Butlerian Jihad begins in California?


 
wagnert in atlanta | September 4, 2024 at 10:04 am

“Proponents include Tesla, opens new tab CEO Elon Musk…”

……..?

Progressives always think passing a law that requires something like “safety” will make everyone safe. Because they’re not in touch with reality – particularly the part of it we call “human nature.”

For a view of our future (perhaps our present), watch the “Person of Interest” series. It’s on Peacock.

Or read the Book of Revelation.

Technology and human nature cannot be stopped – except by God.
