Accessing mental healthcare in the United States can be challenging due to inconsistent insurance coverage and a shortage of mental health specialists, which results in expensive care and lengthy wait times. Artificial intelligence (AI) has been proposed as one solution, but a recent commentary in The Journal of Pediatrics argues that its use with children demands careful ethical scrutiny.

AI-powered mental health applications are widely available on the market, ranging from chatbots that simulate real therapists to mood trackers. While they may offer an affordable and accessible way to address gaps in the current system, excessive reliance on AI for mental healthcare, particularly for children, raises ethical questions.
Although the majority of AI mental health apps are unregulated and intended for adults, there is increasing discussion about their potential use with children. Bryanna Moore, PhD, assistant professor of Health Humanities and Bioethics at the University of Rochester Medical Center (URMC), wants to ensure that ethical concerns are part of these conversations.
No one is talking about what is different about kids: how their minds work, how they are embedded within their family unit, how their decision-making is different. Children are particularly vulnerable. Their social, emotional, and cognitive development is just at a different stage than adults.
Bryanna Moore, Assistant Professor, Health Humanities and Bioethics, University of Rochester
AI chatbots for mental health may also hinder children's social development. Research indicates that children believe robots have "moral standing and mental life," raising concerns that young children may grow dependent on chatbots at the expense of developing healthy human relationships.
A child's social context—their interactions with peers and family—significantly shapes their mental health, which is why pediatric therapists do not treat children in isolation. They monitor a child's social and familial relationships to safeguard the child and to involve family members in the therapeutic process. Because AI chatbots lack access to this crucial contextual information, they may fail to intervene when a child is in danger.
AI chatbots and AI systems in general tend to exacerbate current health disparities.
AI is only as good as the data it is trained on. To build a system that works for everyone, you need to use data that represents everyone. Unfortunately, without really careful efforts to build representative datasets, these AI chatbots would not be able to serve everyone.
Jonathan Herington, Study Co-Author and Assistant Professor, Departments of Philosophy and Health Humanities and Bioethics, University of Rochester
A child's gender, race, ethnicity, where they live, and their family's relative affluence all influence their likelihood of experiencing adverse childhood events such as abuse, neglect, the incarceration of a loved one, or witnessing violence, substance misuse, or mental illness in the home or community. Children who have experienced these events are more likely to need specialized mental health care and less likely to have access to it.
Herington added, “Children from lower-income families may be unable to afford human-to-human therapy and thus come to rely on these AI chatbots in place of human-to-human therapy. AI chatbots may become valuable tools, but should never replace human therapy.”
Most AI therapy chatbots remain unregulated. The United States Food and Drug Administration has approved only one AI-based mental health app, for individuals with serious depression. Without regulation, there is no safeguard against misuse, inadequate reporting, or inequities in training data and user access.
Moore added, “There are so many open questions that have not been answered or clearly articulated. We are not advocating for this technology to be nixed. We are not saying get rid of AI or therapy bots. We are saying we need to be thoughtful in how we use them, particularly when it comes to a population like children and their mental health.”
Şerife Tekin, PhD, an associate professor at SUNY Upstate Medical’s Center for Bioethics and Humanities, collaborated with Moore and Herington on this commentary. Tekin examines the ethics of applying AI in medicine and the philosophy of cognitive science and psychiatry.
Going forward, the team hopes to collaborate with developers to better understand how they create AI-powered therapeutic chatbots. They are particularly interested in learning whether and how developers incorporate ethical or safety considerations into the development process and how much research and engagement with children, adolescents, parents, pediatricians, or therapists inform their AI models.
Journal Reference:
Moore, B., et al. (2025) The Integration of Artificial Intelligence-Powered Psychotherapy Chatbots in Pediatric Care: Scaffold or Substitute? The Journal of Pediatrics. doi.org/10.1016/j.jpeds.2025.114509.