March 9, 2023
Mental Health | Tea Leaves
  • The rise of AI in mental health care has providers and researchers increasingly concerned that glitchy algorithms, privacy gaps, and other perils could outweigh the technology’s promise and lead to dangerous patient outcomes. AI-enabled chatbots like Wysa and FDA-approved apps are helping ease a shortage of mental health and substance use counselors, but the American Psychiatric Association estimates that more than 10,000 mental health apps are circulating on app stores, nearly all of them unapproved. Koko, a mental health nonprofit, recently used ChatGPT as a mental health counselor for about 4,000 people who weren’t aware the answers were generated by AI, drawing criticism from ethicists. The fear now centers on whether the technology is beginning to cross a line into making clinical decisions, and on what the Food and Drug Administration is doing to prevent safety risks to patients. (Article here)