By Reason Magazine
Why is Texas investigating Meta’s AI Studio for offering unlicensed therapy?



Texas Attorney General Ken Paxton has opened an investigation into Meta’s Artificial Intelligence (AI) Studio to determine whether its chatbot platform misleads children by allowing role-playing bots to pose as actual therapists. Meta has responded that the probe misrepresents its product: It provides disclaimers that its bots are not licensed professionals, but it cannot ultimately control whether a user decides to use its tools to break the law. The flexibility of AI applications highlights the need for clear regulatory frameworks that distinguish between platforms providing foundational tools and those providing services on top of general-use technologies.

Meta’s AI Studio, launched in 2024, was designed as an entertainment and productivity tool for users to generate lighthearted, fictional characters and to experiment with chatbot technology without needing computer science skills. The platform lets users design a bot’s name, personality, tone, and avatar. As Meta’s marketing highlights, “anyone can create their own AI designed to make you laugh, generate memes, give travel advice and so much more.” Creators can even build an AI as “an extension of themselves to answer common DM [direct message] questions and story replies, helping them reach more people.” In other words, it is designed and marketed as an interactive search tool, not as a therapy product.

However, Paxton asserts that Meta’s platform could mislead users and offer services similar to therapy, but without a license. In the press release, Paxton’s office states the investigation will “determine if they have violated Texas consumer protection laws, including those prohibiting fraudulent claims, privacy misrepresentations, and the concealment of material data usage.”

Practicing therapy without a license is a violation of state law; even those offering very similar treatment modalities, such as “stress reduction,” must be careful not to advertise as providing therapy, counseling, or any services that could be construed as treatment of a mental illness by a licensed provider. Courts have discretion to determine whether the language of a service provider is substantially similar to that of a licensed mental health practitioner.

Indeed, some bots pose as therapists or engage in conversations that are substantially similar to therapy. Meta, in its defense, warns users that bots are the products of their creators. The Times found screenshots of a chatbot labeled as a “psychologist” that warns users the bot “is not a real person or licensed professional.”


Screenshot originally appeared in The Times.

Regardless of the warning, applying typical legal standards to service providers in relation to chatbots becomes trickier, both because chatbots can veer off into conversational topics for which they were not originally designed and because individual developers can use generic AI technology in ways that violate the law. Nevertheless, Paxton’s investigation targets not these individual developers, but Meta.

Many platforms that allow user-generated content see users push boundaries in ways platforms cannot always anticipate, and Meta’s AI Studio is no exception. This does not present a problem for most users, but a small percentage take things in a direction that can be questionable or outright harmful. Though the platform was designed as a creative playground, some users turn its chatbots into emotional companions because they are available around the clock and cost far less than professional therapy. Mental health professionals warn about a new phenomenon called “AI psychosis,” in which people under distress form delusional beliefs about chatbot sentience or receive responses that reinforce unhealthy thoughts. These cases demonstrate that even without explicit design intent, generative chatbots can assume emotional roles they were never intended for, sometimes with tragic consequences. OpenAI, the company that created ChatGPT, has acknowledged that guardrails around AI “break down” in very long conversations. The technology was not designed to engage mentally distressed users.

Meta’s AI Studio is not exempt from these issues. A search for “therapist” within the tool yields a range of characters, some of whom have thousands of users. These bots were not created by Meta but by individual users, and they tend to mimic the familiar patterns of a therapist: listening, reflecting back, and asking open-ended questions. In some cases, creators add avatars or images styled to look like therapists and script responses in the same voice, even if the bot never explicitly claims to be a licensed professional. This makes the case against Meta more challenging because it is difficult to broadly police “therapeutic” talk. It’s unclear how Meta could crack down on illicit therapy chatbots.

“We’d first have to be able to define therapy in a way that isn’t so overbroad that it also encompasses discussions with your priest, bartender or best friend—which is to say effectively impossible—or would at least make the chatbot useless,” Andrew Mayne, a prompt engineer and science communicator who consulted on OpenAI’s GPT-4 model, writes to Reason Foundation in an email. “You could have the LLM [large language model] remind you that it’s not a therapist in certain discussions—but even then there would be debate on what that line is. It would also be annoying and redundant.”
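Mayne’s line-drawing problem can be made concrete with a deliberately naive sketch. The function name and keyword list below are assumptions for illustration only, not anything Meta or OpenAI actually runs; the point is that a simple filter for “therapy-like” talk is simultaneously overbroad and easy to evade:

```python
# Hypothetical sketch of a naive keyword filter for "therapy-like" talk.
# Any real moderation system would be far more sophisticated; this only
# illustrates the overbreadth problem Mayne describes.

THERAPY_TERMS = {"anxiety", "depressed", "trauma", "coping", "grief"}

def looks_like_therapy(message: str) -> bool:
    """Flag a message if it contains any therapy-associated keyword."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & THERAPY_TERMS)

# The same terms appear in everyday conversation, so the filter flags a
# chat with a friend as readily as a session with a faux therapist...
assert looks_like_therapy("My anxiety about the job interview is bad")
# ...while a genuinely therapy-styled exchange can avoid every keyword.
assert not looks_like_therapy("Tell me more about how that made you feel")
```

Tightening the keyword list misses therapy-styled exchanges; loosening it sweeps in conversations with, as Mayne puts it, your priest, bartender, or best friend.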

It may be easier for a court to determine when an unlicensed human provider is advertising services similar to those of a therapist. At the platform’s scale, however, thousands of chatbots can be engaged in thousands of conversations at once, and it is not technologically feasible for Meta to clearly label which bots or conversations cross a legal line.
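The scale problem can be put in rough numbers. The figures below are assumptions for the sketch, not Meta’s actual counts; they show how even a highly accurate automated screen would mislabel conversations by the tens of thousands:

```python
# Hypothetical back-of-envelope figures (assumptions, not Meta data).
bots = 10_000                          # user-created chatbots
conversations_per_bot = 1_000          # conversations each
total = bots * conversations_per_bot   # 10,000,000 conversations

# Even a screen that errs only 1% of the time misfires at scale:
false_positive_rate = 0.01
wrongly_flagged = int(total * false_positive_rate)
print(wrongly_flagged)  # 100000 conversations mislabeled
```

Whatever the true figures, the arithmetic cuts the same way: any per-conversation error rate, multiplied across a platform of this size, produces a large absolute number of misjudged cases.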

Some violations might be easier to spot if Meta manually investigated each conversation and chatbot. However, even if Texas attempted to force Meta to do so, Section 230 of the Communications Decency Act provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This federal law is foundational to modern platforms because it grants them immunity from lawsuits arising out of user-generated content.

In this case, Meta did not create the “therapist” bots, nor did it market AI Studio as a mental health service. It merely provided a creative tool. Holding it liable for user misuse might conflict with Section 230’s provision that platforms are not treated as publishers of user-generated content.

This is not to dismiss the problem. More incidents are emerging of people, especially individuals with mental health issues, being deceived by chatbots, an unexpected challenge created by artificial intelligence. States could collaborate with developers, who have no vested interest in the harmful uses of their products, to develop more effective safety standards and guidelines. The scope of the issue is still unclear; it needs to be studied, and both governments and companies share a strong interest in keeping users safe.

In addition, people susceptible to using chatbots in harmful ways are likely also prone to being deceived by individuals in online chat groups, by online advertising and scams, and by confusing parody with reality. Our policy goal should be to find ways to support individuals struggling with mental health or digital literacy in an increasingly digital landscape. Cooperative efforts to test solutions and adopt safeguards make sense for Texas agencies. It does not, however, make sense for the attorney general to claim that Meta violated some clearly established law and should be punished when no clear legal guidelines exist for an emerging problem of this kind.

The post Why is Texas investigating Meta’s AI Studio for offering unlicensed therapy? appeared first on Reason Foundation.


Source: https://reason.org/commentary/why-is-texas-investigating-metas-ai-studio-for-offering-unlicensed-therapy/
