Source: POLITICO
A California assemblymember sees an opportunity to make the state healthier with a new artificial intelligence report commissioned by Democratic Gov. Gavin Newsom’s office.
Assembly Health Chair Mia Bonta told our POLITICO colleagues in California that the report’s call for greater AI model transparency highlights one of her biggest concerns: that companies are pushing AI as replacements for health workers and stand-ins for kids seeking therapy.
Bonta introduced a bill last month that would ban companies from marketing AI chatbots as licensed health professionals like nurses and psychologists. Her committee might soon take up another bill that would bar chatbots from luring kids with addictive reward structures.
Here’s more of Bonta’s conversation with POLITICO.
This interview has been edited for length and clarity.
Are you pursuing a broad approach or a more pointed approach to regulating AI in health care?
No pun intended, but I think our regulatory framework needs to be pretty surgical. … I’m particularly focused right now on making sure that health care professionals are not misrepresented to vulnerable communities.
Given the mental health care crisis that is happening for children right now, having them think that they are getting counsel and advice from a human being … and having, in actuality, that be an AI-generated avatar, that’s a deep concern to me.
Do you think the state needs to create clearer rules for how AI chatbots are allowed to market themselves, particularly to children?
I do, and I think it’s very complicated. … We have to get very skilled in the Legislature to be able to make sure that we’re providing very clear language that doesn’t have unintended consequences around what we’re trying to regulate.
I think we were very close in the U.S. to adopting a regulatory framework that would have a robust application [to kids’ safety], and now we are not in a position to be able to rely on the federal government. That context is causing California to need to step up.
Is that because of President Donald Trump and the Republican-controlled Congress?
It is definitely because of an attitude that does not protect humanity, doesn’t protect data and privacy and doesn’t protect the basis of allowing us to use science and data and research to drive our decisions.
Do you see any gaps or missing perspectives in pieces of legislation dealing with kids’ safety and AI?
I think we always run the risk of not taking the time to hear the voices that don’t have the ability to be in the room.
If you take these broad conversations around AI regulation, and you are somehow not acting [on] the human components of how we need to shape this and making sure that we’re focused on traditionally disenfranchised communities — like youth and low-income people and people with disabilities and people of color and BIPOC [Black, Indigenous and people of color] communities — you are always going to come up with the wrong answer.