
Glossary of Terms

Hi, everyone. Since this issue of ACLP News focuses on business, I’ve been asked to provide a glossary of business- and quality-related vocabulary words to help you decode quality dashboards and payment contracts. I hope it’s helpful. I received a little help from Carol Alter, MD, who leads the American Psychiatric Association’s Committee on Quality and Performance Measurement, on this one—I think you’ll see why I needed it by the time you get to MIPS.

Quality measurement (synonyms: performance measurement, metrics): any measurement used to assess the quality or performance of a product or service. Anyone can design a quality or performance metric for use on a personal or local level.

Quality Measure: This term usually describes a quality measurement that has been approved by CMS (Centers for Medicare and Medicaid Services) for use in a federal payment program (i.e., money is at stake). As a prerequisite for approval of a Quality Measure, validation research must be conducted in a range of clinical and administrative settings to ensure that it accurately and consistently captures the clinical process or outcome being targeted. Ideally, the use of a valid quality measure should not place undue burden on clinicians and should drive performance improvement by drawing more attention to the targeted process or outcome. Complete information can be found at https://qpp.cms.gov/.

National Academy of Medicine quality aims: Effectiveness, efficiency, timeliness, safety, equity, and patient-centeredness. These have been promoted by the National Academy of Medicine (formerly the Institute of Medicine) as appropriate targets for the development of Quality Measures. It is a good idea to consider them when conducting a quality improvement project at any level.

MACRA: The Medicare Access and CHIP Reauthorization Act of 2015. This legislation advanced the use of Quality Measures to determine reimbursement rates by CMS. One program created by MACRA was MIPS, and this is the part of it you are most likely to see if you are running a clinical service. On quality dashboards MIPS measures may be labeled as “MIPS/MACRA.”

MIPS: Merit-based Incentive Payment System. MIPS applies Quality Measures in four domains (quality, promoting interoperability [formerly known as “meaningful use”], improvement activities, and cost) to determine whether providers or health systems receive a payment adjustment (i.e., a penalty or bonus). If you are working in a general hospital system (as C-L psychiatrists often are), your hospital system or physicians’ organization should provide you with a list of the MIPS measures your service is responsible for. Several MIPS measures are applicable to psychiatric care in medical settings, and many more are applicable to primary care practices. Collaborative care is an example of a way in which C-L psychiatrists become involved with efforts to improve performance on MIPS measures. However, the list of MIPS measures you receive from your hospital system may still contain zero items, because “ownership” of these measures typically gets assigned to the primary team. The APA maintains a list of which MIPS measures pertain to behavioral health, and it can be found at https://www.psychiatry.org/psychiatrists/practice/quality-improvement/quality-measures-for-mips-quality-category.

Outcomes measure: a measurement of a product or service’s end result. On a service that provides psychiatric care, examples of outcomes measures might include depression symptoms after treatment, or suicide rates.

Process measure: a measurement of adherence to a process that is thought to be consistent with producing a high-quality result. One might choose to measure a process rather than an outcome because processes can be measured in real time, whereas the time lapse between process and outcome may be prohibitively long. Some outcomes are also multifactorial, and in those cases measuring only the outcome may not yield easily interpretable information about the service or product one is trying to evaluate.

Patient-centered outcome: an outcome that patients actually care about.

Measurement-based care (MBC): This term describes the use of symptom measurement tools (such as the PHQ-9) at regular intervals to inform treatment steps. The theory behind MBC is that it overcomes “therapeutic inertia”: prescribers are often slow to advance or discontinue treatment plans because, based on how the patient subjectively reports feeling that day, it is not clear at the time of the encounter whether the patient is improving. By measuring symptoms systematically, MBC facilitates reaching optimal treatment (or stopping an unhelpful treatment) in a more timely and decisive manner than usual care. A compelling body of research indicates that treatment strategies that use MBC (including collaborative care) lead to improved outcomes.

Lean: This is an approach to quality improvement that focuses on eliminating waste within a system. Waste is all around you. Anything that is paid for and is not used is waste, including your time. Any time that patients spend in the healthcare system not recovering from their illnesses is also waste—for example, appointment lag times, or boarding in the emergency department for a psychiatric bed. Lean interventions improve efficiency, and because errors are usually the greatest source of waste, there is obsessive attention to eliminating errors wherever possible.

PDSA cycle: Plan-do-study-act cycle. It provides a structure for making rapid, iterative changes to a product or service in the context of a quality improvement project. The underlying idea is that no matter how well you plan, you will always encounter unforeseen problems when you try something new, and therefore it is necessary to make adjustments as you go. “Plan” refers to making a protocol. “Do” refers to trying out the protocol you came up with in the real world. “Study” refers to measuring how successful the plan was and what problems you encountered. “Act” refers to adjusting the plan for the next round in response to what you’ve learned from the first round. Repeat as many times as necessary until you have a perfect product. This method contrasts with a research trial, in which the protocol must be specified in advance and typically doesn’t change in response to real-time data.