Should AI-Enabled Mental Health Chatbots be Mandated Reporters? 

Bonney, Lura | November 6, 2023
  1. Introduction 

Amid a shortage of mental health support, AI-enabled chatbots offer a promising way to help children and teenagers address mental health issues. Users often view these chatbots as mirroring human therapists. Unlike their human counterparts, however, mental health chatbots are not obligated to report suspected child abuse. That legal landscape could shift, requiring technology companies that offer these chatbots to comply with mandated reporting laws. 

  2. Minors face unprecedented mental health challenges 

Many teenagers and children experience mental health difficulties, including trouble coping with stress at school, depression, and anxiety. Although there is broad consensus on the harms of untreated mental illness, there is a shortage of care. An estimated 4 million children and teenagers do not receive the mental health treatment and psychiatric care they need. Despite the high demand for mental health support, there are only about 8,300 child psychiatrists in the United States. Mental health professionals are overburdened and unable to provide enough support. These gaps in care create an opportunity for technology to intervene and support the mental health of minors.   

  3. AI-enabled chatbots offer a mental health solution 

AI-enabled chatbots offer mental health services to minors and may help alleviate gaps in care. Technology companies design these chatbots to facilitate realistic text-based dialogue with users and to offer support and guidance. More than forty mental health chatbots exist, and users can download them from app stores on mobile devices. Proponents contend the chatbots are effective, easy to use, accessible, and inexpensive. Research suggests some individuals prefer working with chatbots over a human therapist because they feel less stigma asking a robot for help. Numerous other studies assess the effectiveness of chatbots for mental health; although many report positive results, research on the usefulness of mental health chatbots remains in its nascent stages. 
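To make the interaction model concrete, the sketch below shows a toy version of a text-based support chatbot. It is purely illustrative: the keyword rules, prompts, and function names are assumptions of this sketch, and real products typically pair large language models with clinically reviewed scripts rather than keyword matching.

```python
# A toy, rule-based support chatbot. The prompt table, keywords, and
# function names here are illustrative assumptions, not any real
# product's design.

SUPPORT_PROMPTS = {
    "stress": "School stress can feel overwhelming. What part feels hardest right now?",
    "anxious": "Thanks for sharing that. When does the anxiety tend to show up?",
    "sad": "I'm sorry you're feeling this way. Would a short grounding exercise help?",
}

DEFAULT_REPLY = "I'm here to listen. Can you tell me a bit more about what's going on?"

def respond(user_message: str) -> str:
    """Return a supportive reply based on simple keyword matching."""
    text = user_message.lower()
    for keyword, reply in SUPPORT_PROMPTS.items():
        if keyword in text:
            return reply
    return DEFAULT_REPLY

if __name__ == "__main__":
    print("Chatbot:", respond("I've been really anxious about exams."))
```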

Mental health chatbots are simple for children and teenagers to use and are easily accessible. Unlike mental health professionals, who may have limited availability, chatbots can interact with users at all times. They are especially beneficial for younger individuals who are accustomed to texting and mobile applications. Because of the underlying technology, chatbots are also scalable and can reach millions of individuals. Finally, most mental health chatbots are inexpensive or free. Affordability matters because cost is one of the most significant barriers to accessing mental health support. Despite these benefits, most supporters agree that AI tools should be part of a holistic approach to addressing mental health. 

Critics of mental health chatbots point to the limited research on their effectiveness and to instances of harmful responses by chatbots. Research indicates that users sometimes rely on chatbots in times of mental health crises, and in rare cases some users received harmful responses to their requests for support. Critics are also concerned about the impact on teenagers who have never tried traditional therapy: the worry is that teenagers may disregard mental health treatment entirely if they do not find chatbots helpful. It is likely too early to know whether the benefits of mental health chatbots outweigh the risks critics raise. Even so, the chatbots offer a promising way to ease the shortage of mental health professionals.  

  4. The law may evolve to mandate reporting for AI-enabled therapy chatbots  

AI is a critical enabler of growth worldwide and is developing rapidly. As a result of this growth, the United States is prioritizing safeguards against potentially harmful impacts of AI, such as privacy violations, job displacement, and a lack of transparency. Currently, the law does not require technology companies offering mental health chatbots to report suspected child abuse. However, the public policy rationale underlying mandatory reporting laws, combined with the humanization of mental health chatbots, suggests the law may come to require companies to report child abuse.  

Mandatory reporting laws help protect children from further harm, save other children from abuse, and increase community safety. In the United States, states impose mandatory reporting requirements on members of specific occupations or organizations, typically those whose members often interact with minors, such as child therapists. Technology companies that produce AI-enabled mental health chatbots are not classified as mandated reporters. Yet, like teachers and mental health counselors, these companies are in a unique position to detect child abuse.  
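To illustrate why these companies are well positioned to detect abuse, the hypothetical sketch below shows how a chatbot pipeline might flag possible disclosures for human review. The cue list, the matching logic, and the escalation step are all assumptions of this sketch, not a description of any actual product or of what a legal standard would require.

```python
# A hypothetical screening step for a chatbot pipeline. The cue list,
# threshold-free matching, and escalation policy are assumptions of
# this sketch, not any real product's method or a legal standard.

from dataclasses import dataclass
from typing import Optional

DISCLOSURE_CUES = ("hurts me", "hits me", "afraid to go home", "touches me")

@dataclass
class ScreeningResult:
    flagged: bool
    matched_cue: Optional[str] = None

def screen_message(message: str) -> ScreeningResult:
    """Flag messages containing cues that may indicate abuse."""
    text = message.lower()
    for cue in DISCLOSURE_CUES:
        if cue in text:
            return ScreeningResult(flagged=True, matched_cue=cue)
    return ScreeningResult(flagged=False)

def route(message: str) -> str:
    """Send flagged messages to a trained human reviewer, who decides
    whether a report is warranted; the model never decides alone."""
    result = screen_message(message)
    if result.flagged:
        return f"ESCALATE to human review (cue: {result.matched_cue!r})"
    return "CONTINUE normal conversation"

if __name__ == "__main__":
    print(route("My stepdad hits me when he drinks."))
```

In practice, any such screening would have to balance sensitivity against the over-reporting problem discussed later in this section.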

Mandatory reporting is a responsibility generally associated with humans, and the law does not require companies that design AI-enabled mental health chatbots to report suspected child abuse. However, research suggests that users may develop connections to chatbots much as they do with humans. Companies creating mental health chatbot applications often take steps to make the chatbots look and feel like a human therapist: the more users regard a chatbot as a person, the more effective it is likely to be. Consequently, technology companies invest in designing chatbots to be as human-like as possible. As users come to view mental health chatbots as possessing human characteristics, mandatory reporting laws may adapt and require technology companies to report suspected child abuse.  

The pandemic compounded users' tendency to connect with mental health chatbots in a human-like way. Research from the University of California found that college students began bonding with chatbots to cope with social isolation during the pandemic. Many mental health chatbots communicated like humans, which elicited a social reaction in users. Because many people could not socialize outside their homes or access in-person therapy, some formed bonds with chatbots. The research included statements from users about trusting the chatbots and not feeling judged, qualities commonly associated with human relationships.  

As chatbot technology becomes more effective and human-like, society may come to expect that legal duties will extend to mental health chatbots. The chatbots' growing popularity is motivating researchers to examine their impact on humans. As research on personal relationships between users and chatbots becomes more robust, the law will likely adapt, recognizing the human-like connection between users and chatbots and requiring technology companies to abide by the legal duties typically imposed on human therapists.  

While the law may evolve to require technology companies to comply with the duty to report suspected child abuse, several obstacles and open questions remain. Requiring companies that offer AI-enabled mental health chatbots to report suspected child abuse may overburden the child welfare system. Mandatory reporting laws include penalties for failure to report, which has led to unfounded reporting. In 2021, mandatory reporting laws prompted individuals to file over 3 million reports in the United States, yet Child Protective Services substantiated fewer than one-third of them, indicating that the majority, more than 2 million reports, were unfounded. False reports can be distressing for the families and children involved. 

Many states have expanded the list of mandated reporters, with unintended consequences. Some states extended mandatory reporting laws to include drug counselors, technicians, and clergy. Requiring companies that offer mental health chatbots to follow mandatory reporting laws would be another such expansion and may carry unintended consequences of its own. For example, companies may over-report suspected child abuse out of fear of liability. Large volumes of unsubstantiated reports could become unmanageable and distract Child Protective Services from investigating more pressing child abuse cases.  

Another open question the legal system will have to contend with is confidentiality. Users of mental health chatbots may worry that a mandatory disclosure requirement would jeopardize their data. They may also feel the chatbots are not trustworthy and that their statements could be misinterpreted. Such fears may deter those who need help from using mental health chatbots, and a decline in trust could inadvertently discourage individuals from adopting AI technology altogether. The legal system will need to consider how to manage both confidentiality and unfounded-report concerns. Nevertheless, as chatbots become more human-like, the law may require technology companies to comply with duties previously reserved for humans.  
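One way companies might address the confidentiality concern is data minimization: stripping obvious identifiers from any conversation excerpt before it is escalated for review. The sketch below is a simplified illustration; the regex patterns and function names are assumptions of this sketch, and real de-identification is considerably harder than this.

```python
# An illustrative data-minimization step: replace obvious identifiers
# in a conversation excerpt with placeholder tags before escalation.
# The patterns here are simple assumptions, not a complete solution.

import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace emails and phone numbers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

if __name__ == "__main__":
    excerpt = "You can reach my mom at 555-123-4567 or mom@example.com."
    print(redact(excerpt))
```

Of course, minimization of this sort sits in tension with the purpose of a mandated report, which must identify the child to be useful, so it can only partially address the confidentiality concern.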

  5. Conclusion  

AI-enabled mental health chatbots provide a promising new avenue for minors to access support for mental health challenges. Although these chatbots offer numerous potential benefits, the legal framework has not kept pace with their rapid development. While the future remains uncertain, the law may come to require technology companies to comply with mandatory reporting requirements for suspected child abuse. Technology companies should therefore begin contemplating measures to ensure compliance.