IN THIS ISSUE: Somatic Symptom and Related Disorders | Posters | Visiting Professors | Distinguished Service | A&E Abstracts
Authors: Michael MacIntyre, MD, et al.
Abstract: The rapid advancement of artificial intelligence (AI) and machine learning is providing new tools to clinicians. AI tools have the potential to process vast amounts of data in a short time, providing new insights and changing how we approach complicated health care problems.
AI has the potential to assist clinicians in medical decision-making capacity assessments by providing additional insights to an evaluation process that currently lacks universal objective standards.
However, say the authors, significant concerns remain, making it unlikely that AI will replace human evaluators anytime soon.
“AI remains highly susceptible to biased inputs and thus biased decisions, raises questions about autonomy, and creates uncertainty for who is accountable for the ultimate decision of capacity,” they say.
In this paper, the authors explore the ethical considerations of using AI for capacity assessments and conclude: “While we acknowledge AI may not be ready to replace physicians in determining patient medical decision-making capacity, these new technologies have significant near-term potential as a tool to screen patients, uncover physician biases, and guide next steps after a capacity determination has been made.”
Several fields may be especially primed for significant changes due to AI. In radiology, AI tools have already been shown to outperform human radiologists, in both speed and accuracy, on specific tasks such as identifying malignancies on breast imaging.
But while much research has focused on AI’s utility for providing objective answers to complex questions, as the technology progresses it will be used for highly nuanced assessments involving both emotional states and human behavior.
For example, a recent survey found that an online chatbot not only provided higher-quality responses to patient questions than physicians did, but was also seen as more empathetic. In another study, a deep learning algorithm was trained to interpret facial expressions to categorize pain among a cohort of post-operative surgical patients; the algorithm dramatically outperformed human nurses performing the same task.
In the assessment of capacity, a patient must be able to understand, appreciate, and weigh the risks and benefits of any treatment to which they consent. Assuring concordance between the patient’s underlying values and current preferences is also part of the process. “Without an appropriate assessment of capacity, medicine becomes paternalistic, with those most vulnerable at the greatest risk of losing autonomy in their own health care,” say the authors.
They add: “The lack of universal objective standards, and the highly personal nature of decision-making, may initially seem poorly suited to computers. However, the ‘large language models’ in use today are massive, with hundreds of layers of artificial neurons. They can consider billions of variables simultaneously and are constantly improving.
“It is also worth considering that ‘messy’ problems like assessing capacity are notoriously difficult for even the best-trained human practitioners,” say the authors. “In such areas, the bar for an AI tool to be ‘good enough’ is much lower than in more straightforward tasks like interpreting chest radiographs.”
The adoption of any AI tool for decision-making assessments will have challenges. AI must be trained, and output is only as good as the data used to train the model. As a result, AI is highly susceptible to perpetuating or even magnifying biases already present in capacity assessments, say the authors. AI algorithms are neither transparent nor explainable, and clinicians using these tools might be unaware of how specific biased or incorrect information affected outputs, depriving the physician of an opportunity to intervene.
Beyond that, ethical challenges may arise regarding whether and when a physician should intervene and ‘override’ an AI model, if that model has previously been demonstrated to be more consistently accurate than usual-care clinical assessments.
Concerns also arise about AI removing an essential interpersonal element from capacity assessments due to an overreliance on technology. “Ideally, the informed consent process should be used as a tool to facilitate patient autonomy and self-respect,” say the authors.
They add: “Caution must be exerted, particularly to minimize harm from using this new technology in ways for which it was not intended or capable of doing accurately.”
Importance: This paper details the ethical issues clinicians and researchers must weigh when considering the use of AI as a medical decision-making tool, and provides links to the latest substantive research on the subject.
“When high-stakes decisions are made about an individual’s life, such as whether to withdraw care or move forward with a high-risk procedure, traditionally a responsible party is accountable for the ultimate decision,” say the authors. “In modern medical systems, when patient autonomy is overridden due to lack of capacity, the decision is made by a treating physician, often with consultation and input from Psychiatry consultants specializing in capacity assessments, service chiefs, and hospital ethics committees.
“However, if a decision were to be made by an AI algorithm, it is less clear how to improve the system should a bad outcome occur. If AI algorithms are ever to be involved, it is essential that guidelines be established regarding when and how a physician may ‘overrule’ an algorithm.
“While one might be tempted to afford the treating physician final authority to overrule an AI algorithm output, this approach carries its own challenges. If AI is demonstrated to be consistently valid, performing a task more efficiently and accurately than a human doctor, physician input runs the risk of being a disadvantage to patients. Given the known errors in human judgment and the limitations of purely clinical decision-making, to ignore a robust AI model’s verdict may be akin to a primary physician ignoring the read of a radiologist or a resident dismissing the direction of the attending physician.”
AI holds tremendous potential for Psychiatry as the technology rapidly improves and develops the ability to understand human cognitive and emotional states. AI may prove to be a vital tool in the essential task of ensuring that patients have medical decision-making capacity. It offers the possibility of increasing the reliability of capacity assessments, which are currently subject to the nuances of any given evaluator’s personal experiences and style. AI can consider orders of magnitude more data, far more quickly, than any human practitioner, allowing AI systems to draw on far more information when making important clinical decisions, with the potential to minimize the impact of human biases.
However, medical systems remain far from ready to safely incorporate AI tools. Numerous issues must be addressed, including how this novel tool might be effectively implemented, what checks and balances a treatment provider could employ with such a model, and how to ensure that the model is not producing biased results derived from biased data.
Availability: Published by Psychiatry Research.
Authors: Patrick McGorry, MD, et al.
Abstract: The research team set out to answer the question: What are the optimal type, timing, and sequence of interventions for individuals at ultra-high risk of psychosis?
In this randomized trial involving 342 participants aged 12 to 25, a specialized psychological intervention (cognitive-behavioral case management [CBCM]) and a psychopharmacological intervention (CBCM plus antidepressant medication) were no more efficacious than control conditions in improving remission and functional recovery.
The Staged Treatment in Early Psychosis (STEP) trial took place within the clinical program at Orygen, Melbourne. Participants seeking treatment and meeting criteria for ultra-high risk of psychosis were recruited between April 2016 and January 2019.
“Enhancing the intensity of treatment with psychological interventions or medications was challenging to implement with fidelity and adherence in this largely primary care-based sample but, nevertheless, could not be demonstrated to produce any benefit over and above continuing a simpler form of care,” say the authors. “Low remission and high relapse rates confirm the sustained vulnerability and substantial morbidity of the ultra-high risk population and highlight the need to conduct further adaptive trials, develop new treatments, provide sustained specialist care, and identify subgroups for whom treatments can be tailored.”
Importance: To date, clinical trials have not established the optimal type, sequence, and duration of interventions for people at ultra-high risk of psychosis. These findings show that the addition of sequentially more specialized psychosocial and antidepressant treatment for individuals who did not remit did not lead to superior outcomes, underscoring the need for further adaptive trials, treatment innovation, and an extended duration of care for relapse prevention.
Availability: Published by JAMA Psychiatry.
Authors: Barna Konkolÿ Thege, PhD, CPsych, et al.
Abstract: Assessing addictive behaviors comprehensively and efficiently is a challenge in both research and clinical practice. Consequently, the authors tested the psychometric properties of the Generalized Screener for Substance and Behavioral Addictions (SSBA-G), a novel, brief screening tool measuring functional impairment resulting from both substance and behavioral addictions.
The SSBA-G was developed from the Screener for Substance and Behavioral Addictions and tested in four samples including university students in Canada and the US, as well as community adults in Canada and Hungary.
Results indicated good-to-excellent sensitivity and moderate-to-good specificity, suggesting that the SSBA-G is a psychometrically sound and efficient measure of addiction-related impairment across substances and excessive behaviors, say the authors.
While such a general tool cannot provide a detailed account of individuals’ involvement in different addictive behaviors, say the authors, it is an efficient general screening measure.
Importance: Addictive disorders are a major public health concern, both for their prevalence and for their detrimental impact; their proper assessment is therefore of utmost importance in both health promotion and the clinical setting. While the traditional approach to this assessment has been specific to the target addiction, the authors argue that the rapidly changing landscape of addictions warrants a more generalist, economical approach.
Availability: Published by Psychiatry Research.
Authors: Ole Köhler-Forsberg, MD, PhD, DMSc, et al.
Abstract: Between one in three and one in six patients with medical diseases receives antidepressants, but regulatory trials typically exclude patients with comorbid medical diseases. Meta-analyses of antidepressants have shown small-to-medium effect sizes, but their generalizability to clinical settings, where medical comorbidity is highly prevalent, is unclear.
So, what is the evidence for the use of antidepressants to treat or prevent comorbid depression in patients with medical diseases?
The authors identified 176 individual systematic reviews of randomized clinical trials in 43 medical diseases. They quantitatively summarized and meta-analyzed the results of 52 meta-analyses of antidepressant effects in 27 medical diseases.
Results indicated sufficient quality of the individual meta-analyses but rather low quality of the meta-analyzed clinical trials. Compared with placebo, antidepressants showed better efficacy and worse tolerability and acceptability, and were more likely to prevent depression.
“Antidepressants are effective and safe for the treatment and prevention of depression in patients with medical diseases, but too few large, high-quality trials exist,” say the authors.
“Trials that have examined antidepressant treatment for depression comorbid with medical disease tend to be more heterogeneous than pivotal trials for regulatory purposes, ranging from small academic trials to larger consortia in a wide range of different medical diseases in different clinical or geographical settings. This situation results in a complex evidence landscape for the treatment of comorbid depression that requires careful examination.
“Currently, to our knowledge, there is no direct quantitative comparison of the evidence across all individual pharmacological strategies in comorbid depression. Moreover, the quality of the meta-analyses and the included RCTs has not been evaluated, which is an indispensable step before treatment recommendations can confidently be made.”
Importance: The authors add: “Despite the paucity of large and well-conducted RCTs studying antidepressants for comorbid depression in medical diseases, this review and meta-analysis demonstrates that antidepressants are efficacious and safe for comorbid depression in medical diseases, with effect sizes that are similar to those reported for antidepressants in MDD without medical comorbidity. Furthermore, antidepressants can prevent the development of depression in some medical diseases, but this finding should be weighed against potential adverse effects.
“It is important to screen for and manage comorbid depression in patients with medical diseases, and clinicians should choose treatments based on patient preferences and the antidepressant’s risk-benefit ratio.
“Future large, high-quality RCTs should include head-to-head comparisons between antidepressants to expand the knowledge on potential differences in efficacy and safety between antidepressants for depression comorbid with medical diseases, and to allow more specific treatment recommendations for distinct medical diseases.”
Availability: Published by JAMA Psychiatry.