Generative AI in Psychiatry

Journal Article Annotations
2025, 1st Quarter

Generative AI in Psychiatry

Annotations by Liliya Gershengoren, MD
March 2025

  1. A Pilot Analysis Investigating the Use of AI in Malingering.
  2. Is Artificial Intelligence the Next Co-Pilot for Primary Care in Diagnosing and Recommending Treatments for Depression?

PUBLICATION #1 — Generative AI in Psychiatry

A Pilot Analysis Investigating the Use of AI in Malingering.
Scott A Gershan, Esther Schoenfeld, Declan J Grabb.

Annotation

The finding:
The study demonstrates that AI (artificial intelligence), specifically ChatGPT-3.5, can generate highly accurate and nuanced information about psychotic symptoms and malingering tactics. AI models can provide information on psychiatric symptoms, simulate forensic evaluations, and potentially be used by individuals who are malingering to craft convincing false narratives. While current limitations exist, the study highlights the growing risk of AI being leveraged in psychiatry for secondary gain, including evading legal consequences.

Strengths and weaknesses:
The study is the first of its kind to investigate the use of AI in malingering and provides a structured evaluation of AI-generated responses. The research highlights the potential risks of AI in forensic psychiatry and emphasizes the importance of adapting assessment techniques accordingly. The study was limited to ChatGPT-3.5, whereas more advanced models may perform differently. Additionally, the evaluation of AI responses was subjective, relying on three forensic psychiatrists without a standardized rubric. The study did not explore AI’s potential to manipulate structured forensic assessments, such as psychometric testing.

Relevance:
For consultation-liaison (C-L) psychiatrists, this study underscores the evolving challenges posed by AI in clinical and forensic settings. C-L psychiatrists, who often evaluate patients with suspected malingering in medical and legal contexts, should be aware that AI can be used to fabricate symptoms. This has implications for malingering assessments in the medical setting. As AI capabilities advance, psychiatrists will need to refine their diagnostic tools and consider integrating AI detection strategies into clinical assessments.


PUBLICATION #2 — Generative AI in Psychiatry

Is Artificial Intelligence the Next Co-Pilot for Primary Care in Diagnosing and Recommending Treatments for Depression?
Inbar Levkovich.

Annotation

The finding:
The study explores the potential of AI in primary care for diagnosing and treating depression. AI can enhance diagnostic accuracy by detecting early signs of depression through extensive data analysis, including medical records and behavioral patterns. AI can also personalize treatment by predicting the most effective therapeutic interventions based on an individual’s characteristics. The paper discusses both the benefits and challenges of AI in mental health care, including concerns about biases and ethical considerations.

Strengths and weaknesses:
The paper provides a thorough review of AI applications in depression care, highlighting its potential to improve diagnostic accuracy and personalize treatment. It synthesizes evidence from multiple sources and discusses the advantages of AI-driven screening, prognosis assessment, and treatment recommendations. The study is theoretical and does not include empirical testing of AI models in real-world clinical settings. It also acknowledges limitations related to AI bias, ethical concerns, and the variability in AI performance across different demographic groups. Additionally, the discussion of AI’s effectiveness does not fully address potential risks such as over-reliance on technology in clinical decision-making.

Relevance:
For C-L psychiatrists, this study highlights the growing role of AI in mental health care, particularly in primary care settings where depression often goes underdiagnosed. AI tools could assist C-L psychiatrists in refining diagnostic assessments, improving treatment recommendations, and integrating digital health solutions into collaborative care models. However, the paper also underscores the importance of maintaining clinical oversight, ensuring ethical AI deployment, and addressing disparities in AI performance across diverse populations.