FDA to regulate more AI & software tools as devices, guidance indicates

FDA also seeks new digital health regulatory paradigm in Pre-Cert Program report

In the waning days of its fiscal year, the U.S. Food and Drug Administration (FDA) issued the much-anticipated final guidance “Clinical Decision Support Software,” which aims to clarify the scope of FDA’s oversight of clinical decision support (CDS) software intended for use by health care professionals. Compared to the September 2019 draft version, the final guidance notably eliminates FDA’s prior approach of leveraging risk factors to guide the agency’s willingness to exercise enforcement discretion over some categories of products that qualify as medical devices. This updated approach appears to position more software products and AI tools within the realm of FDA regulatory authority than was the case under the September 2019 draft guidance.

At the same time, FDA made minor conforming revisions to the Medical Device Data Systems, Medical Image Storage Devices, and Medical Image Communications Devices final guidance, as well as the Policy for Device Software Functions and Mobile Medical Applications final guidance, and FDA indicated that those guidances will continue to apply to CDS software that supports or provides recommendations to patients or caregivers.

FDA also recently issued a report on its Digital Health Software Precertification (Pre-Cert) Pilot Program concluding that the approach described in its working model is not practical to implement under current statutory and regulatory authorities. We analyze these developments below.

Device vs. non-device CDS

The 21st Century Cures Act amended Section 520 of the Federal Food, Drug, and Cosmetic Act (FDCA) to exclude certain software functions, including some CDS, from classification as a medical device and, consequently, FDA regulation. To be considered non-device CDS under the Cures Act, software functions must:

  1. Not acquire, process, or analyze medical images, signals, or patterns (Criterion 1),

  2. Display, analyze, or print medical information about a patient or other medical information (e.g., clinical practice guidelines) (Criterion 2),

  3. Support or provide recommendations to a healthcare professional (HCP) about prevention, diagnosis, or treatment of a disease or condition (Criterion 3), and

  4. Enable independent review of its recommendations so that the HCP need not rely primarily on the software’s recommendations to make a clinical decision about a patient (Criterion 4).

If a CDS product does not meet all four of these Cures Act criteria, then, unless some other guidance or policy applies, it would be considered “device CDS” that is regulated by FDA.
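As a purely hypothetical illustration (the class, field names, and boolean simplification below are ours, not FDA's, and real determinations turn on nuanced facts), the all-or-nothing character of the four-criteria test can be sketched as a simple check: failing any single criterion makes the software function device CDS absent another applicable policy.

```python
from dataclasses import dataclass

@dataclass
class SoftwareFunction:
    """Hypothetical summary of a software function measured against the
    four Cures Act criteria (names are illustrative only)."""
    analyzes_images_signals_patterns: bool  # Criterion 1 (must be False)
    displays_medical_information: bool      # Criterion 2 (must be True)
    recommends_to_hcp: bool                 # Criterion 3 (must be True)
    enables_independent_review: bool        # Criterion 4 (must be True)

def is_non_device_cds(fn: SoftwareFunction) -> bool:
    """True only if ALL four criteria are met; otherwise the function
    would be 'device CDS' unless some other guidance or policy applies."""
    return (not fn.analyzes_images_signals_patterns
            and fn.displays_medical_information
            and fn.recommends_to_hcp
            and fn.enables_independent_review)

# Example: a function that analyzes an ECG signal fails Criterion 1,
# so it is device CDS even though the other three criteria are met.
ecg_analyzer = SoftwareFunction(True, True, True, True)
print(is_non_device_cds(ecg_analyzer))  # False
```

The sketch captures only the conjunctive structure of the statute: there is no partial credit, and a single failed criterion is dispositive.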

Notably, the September 2019 draft guidance “Clinical Decision Support Software” updated the framework for FDA's oversight of CDS products and, among other things, emphasized clarifying how Criterion 4 above applies to machine learning-based and proprietary algorithms. The final guidance goes even further in illustrating how each criterion should be interpreted, in particular providing more helpful explanations of which functions do and do not meet each specific criterion – something industry and stakeholders had complained was lacking in the draft guidance.

Specifically, as it relates to Criterion 1, FDA provides greater granularity as to what is considered “medical images, signals, or patterns”:

  • A “medical image” includes not only images generated by medical imaging systems (e.g., computed tomography (CT), x-ray, ultrasound, magnetic resonance imaging (MRI)) to view any part(s) of the body, or images acquired for a medical purpose (e.g., pathology, dermatology), but also images that were not originally acquired for a medical purpose but are being processed or analyzed for a medical purpose.

  • A “signal” includes those signals that typically require use of either an IVD or a signal acquisition system that “measures a parameter from within, attached to, or external to the body for a medical purpose and often includes but is not limited to the use of sensors (e.g., electrocardiogram (ECG) leads) along with electronics and a software function that is used for signal generation (e.g., ECG); Collections of samples or specimens such as tissue, blood, or other fluids (e.g., conducting a pathological study using software such as digital pathology); or use of radiological imaging systems (e.g., computed tomography (CT)) and a software function for image generation.”

  • A “pattern” refers to multiple, sequential, or repeated measurements of a signal or from a signal acquisition system.

Where a software function assesses or interprets the clinical implications or clinical relevance of a signal, pattern, or medical image, it does not meet Criterion 1 because it “acquire[s], process[es], or analyze[s]” that data and is therefore considered a device function.

Criterion 2 includes software functions that display, analyze, or print patient-specific information, such as demographic information, symptoms, certain test results, patient discharge summaries, and/or other medical information (such as clinical practice guidelines, peer-reviewed clinical studies, textbooks, approved drug or medical device labeling, and government agency recommendations). Here, FDA elaborated on what it considers “medical information” for purposes of determining non-device CDS, characterizing it as the type of information that “normally is, and generally can be, communicated between HCPs in a clinical conversation or between HCPs and patients in the context of a clinical decision,” which generally includes data or results from devices (including IVD test(s)) that are provided as a single discrete test or measurement result. The same test or measurement result, however, if provided through more continuous sampling, would be considered a pattern or signal, and the CDS function would remain a device function. By way of example, consider the distinction between providing the result of a blood glucose lab test and a continuous glucose monitor reading. Displaying the former might be viewed as displaying medical information (a non-device function), while displaying the latter would be a device function because the continuous sampling transforms the information into a pattern or signal.

With respect to Criterion 3, FDA states that in determining whether a software function supports or provides recommendations to an HCP, it will consider (1) the level of software automation, and (2) the time-critical nature of the decision the HCP will be making. FDA explains that these factors impact whether a software function enhances, informs, or influences an HCP’s decision-making (satisfying Criterion 3), or instead substitutes for, replaces, or directs the HCP’s decision-making (failing Criterion 3). FDA also introduced the concept of “automation bias” as part of Criterion 3: the more singular or specific the software output, the more likely an HCP is to over-rely on it, positioning the software beyond merely informing the HCP’s decision-making. FDA further explains that it considers software that provides a specific preventive, diagnostic, or treatment output or directive as not satisfying Criterion 3, because such software is intended to direct the HCP to take a specific action. This includes software that provides a specific preventive, diagnostic, or treatment course, treatment plan, or follow-up directive, and additionally contemplates software that merely informs the HCP that a specific patient “may exhibit signs” of, or has a certain risk probability or risk score for, a specific disease or condition. What’s more, the final guidance explains that the more time-critical the ensuing clinical decision, the less likely the HCP is to independently review the software’s recommendation – which also makes it fail Criterion 4.

FDA also further clarifies the expectations surrounding Criterion 4, where the agency appears to permit greater flexibility in how the software can enable independent review of its recommendations so that the HCP need not rely primarily on the software’s outputs to make a clinical decision. FDA recommends that the software or its labeling:

  • Include the purpose or intended use of the product, along with the intended HCP user and patient population;

  • Not be intended for use in critical, time-sensitive tasks or decisions, because in such circumstances an HCP is unlikely to have sufficient time to independently review the basis for the recommendations;

  • Identify the required input medical information, with plain-language instructions on how the inputs should be obtained, their relevance, and data quality requirements; and

  • Provide a plain-language description of the underlying algorithm development and validation that forms the basis for the CDS implementation.

In addition, the software output should provide the HCP user with relevant patient-specific information and other knowns/unknowns for consideration (e.g., missing, corrupted, or unexpected input data values), including how the algorithm logic was applied for the patient, so the HCP can independently review the basis for the recommendations and apply his or her own judgment when making the final decision.

Other notable changes between the draft and final guidance

In the earlier draft guidance, FDA stated its intention to leverage factors developed by the International Medical Device Regulators Forum (IMDRF) to apply a risk-based policy for defining CDS software functions it considered to be a device and determining whether CDS devices would be actively regulated. Notably, the final guidance removes nearly all mentions of this risk-based policy, along with the draft guidance’s lengthy discussion of CDS software functions that would be considered subject to FDA enforcement discretion (i.e., technically a medical device, but one FDA chooses not to actively regulate) “based on [the agency’s] current understanding of the risks of these devices.” In the final guidance, FDA merely references the IMDRF framework as a potential input into its assessment of CDS risk and associated regulatory implications but does not use it to create a category of software that it believes meets the definition of a medical device but for which it plans to exercise enforcement discretion. With only this brief mention, it is unclear how, or whether, FDA intends to utilize the IMDRF framework or how it should be considered by software developers. The final guidance further modifies FDA’s position from the draft guidance by eliminating the potential for agency enforcement discretion for software that does not provide the HCP with a means to evaluate the underlying basis of its output recommendation and software that is intended for use by patients or caregivers. Software developers are therefore left to rely on FDA’s Policy for Device Software Functions and Mobile Medical Applications and General Wellness guidance documents to potentially avoid active FDA regulation in such instances.

The final version of the guidance also significantly departs from the draft in the “Examples” section, where FDA details specific types of CDS functions that it will and will not regulate as medical devices, asserting with greater clarity the kinds of software products and uses over which FDA intends to exercise its authority. Largely consistent with the draft guidance, FDA indicates that software functions which apply reviewable reference information or clinical guidelines to patient symptoms/data are not considered medical devices. These include:

  • Software that provides diagnostic or treatment options for a particular disease/condition (e.g., pneumonia) based on clinical guidelines or other established evidence

  • Software that notifies clinicians of drug-allergy contraindications or interactions between drugs that may cause adverse reactions

  • Software that analyzes patient demographic data and clinical notes in order to provide an HCP with a list of follow-up options for consideration

Alternatively, products that FDA says in the final guidance it does intend to regulate include:

  • AI tools designed to analyze images, vital sign patterns, and other physiological information to detect abnormalities or identify patients that may develop a certain condition

  • Tools designed to warn caregivers of sepsis

  • Software that analyzes a provider’s report to identify whether the HCP should initiate a particular type of therapy based upon a scoring algorithm

Additional resources

In conjunction with the issuance of the final guidance, FDA also published a graphic to provide a visual overview of certain policies described in the guidance and examples of non-device and device CDS software functions.

FDA also simultaneously issued a “Digital Health Policy Navigator” aiming to help stakeholders better understand how to interpret the agency’s various digital health policies. This tool guides users through a series of questions based on the published digital health policies, to help a user assess whether a particular software function meets the device definition and, if so, whether it is likely to be actively regulated by FDA as a device. The tool directs users to the appropriate policies (guidance documents) to learn more.
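As a rough sketch of how such a question-driven tool works (the questions, branching, and outcomes below are simplified placeholders of our own devising, not the Navigator's actual decision logic), the tool can be modeled as a small yes/no decision tree that walks a user from questions to an outcome:

```python
from typing import Optional

class Node:
    """One step in a hypothetical navigator: either a yes/no question
    with two branches, or a terminal outcome."""
    def __init__(self, question: str = "",
                 yes: Optional["Node"] = None, no: Optional["Node"] = None,
                 outcome: Optional[str] = None):
        self.question, self.yes, self.no, self.outcome = question, yes, no, outcome

def navigate(node: Node, answers: dict) -> str:
    """Follow the user's yes/no answers down the tree until an outcome."""
    while node.outcome is None:
        node = node.yes if answers[node.question] else node.no
    return node.outcome

# Placeholder tree (illustrative only -- not FDA's actual questions)
tree = Node(
    "Is the software intended for a medical purpose?",
    yes=Node("Does it meet all four non-device CDS criteria?",
             yes=Node(outcome="Likely not a device (non-device CDS)"),
             no=Node(outcome="Likely a device; consult the relevant guidance")),
    no=Node(outcome="Likely outside the device definition"),
)

print(navigate(tree, {
    "Is the software intended for a medical purpose?": True,
    "Does it meet all four non-device CDS criteria?": False,
}))  # Likely a device; consult the relevant guidance
```

The actual Navigator asks a longer series of policy-specific questions and, as noted above, ultimately directs users to the applicable guidance documents rather than rendering a definitive classification.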

In accordance with the finalized CDS software guidance, FDA also issued minor conforming updates to the following related guidances:

  • Medical Device Data Systems, Medical Image Storage Devices, and Medical Image Communications Devices

  • Policy for Device Software Functions and Mobile Medical Applications

Digital health needs new regulatory paradigm, Pre-Cert report says

FDA has also just published a report on its 2017 Digital Health Software Precertification (Pre-Cert) Pilot Program, an effort by FDA to fast-track digital health products by reducing regulatory hurdles for developers of Software as a Medical Device (SaMD) – i.e., software-only products that meet the FDCA definition of a medical device – through up-front evaluation of the company’s processes and capabilities. In April 2018, as we discussed here, FDA released updates to the Pre-Cert Pilot Program, including a working model (v0.1) reflecting the agency's vision of the pilot and outlining its most critical components. Subsequently, in January 2019, this working model was updated (v1.0) and a corresponding Test Plan released to guide the pilot program.

Then, in September 2020, FDA released an eight-page update on the status of Pre-Cert, which ran in 2019 with nine company participants. The update, which we summarized online here, highlighted what FDA learned from activities conducted to test the program as presently envisioned, and how it would use this information for the next iteration of building and testing. FDA’s update indicated that “more work is needed” to understand how information collected to address the product’s total life cycle – namely during Excellence Appraisals, Review Determination, and Real-World Performance monitoring – can be leveraged to support the Streamlined Review for introduction of new devices to market.

In its latest report on Pre-Cert, FDA similarly spotlighted the limitations of the pilot program, citing how the agency was unable to use a broad sample of devices and could not limit the scope of any resulting device classifications. FDA further opined that it was constrained in that it could collect only information that program participants submitted voluntarily.

Nevertheless, the pilot program showed that the “rapidly evolving” digital health technologies “in the modern medical device landscape could benefit from a new regulatory paradigm, which would require a legislative change,” according to the report. FDA said that “excellence appraisals” in such a new paradigm could benefit from these attributes:

  • The ability to keep pace with the speed of technology innovation, leveraging information that exists across the total product lifecycle to provide timely assurance of safety and effectiveness of devices, including modified devices, for public health.

  • The ability to objectively and continually assess an organization’s ability to deliver devices with a commitment to a culture of quality and organizational excellence.

  • Ongoing visibility into Key Performance Indicators (KPIs), Real-World Performance (RWP) metrics, and other data that are transparent and objective, enabling timely and targeted actions to resolve issues, creating opportunities to prevent adverse events, and increasing regulatory compliance.

  • Regulatory decision support tools that clearly and consistently communicate FDA regulatory policies, which support frameworks for transparent organizational appraisals and communication of device performance by manufacturers to advance safe and effective use of devices by users.

FDA concluded that the approach described in the Pre-Cert working model “is not practical to implement under our current statutory and regulatory authorities.”

Next steps

On October 18, FDA will host a webinar for industry, health care providers, and others interested in learning more about the CDS final guidance. While the final CDS guidance provides more clarity, it also brings a significant change in the agency’s treatment of health-related software and, as a result, brings many more software functions into the realm of FDA regulation. Because the CDS guidance appears to expand the universe of products regulated by the agency, medical device and software developers will want to carefully review it to ascertain whether additional actions may be necessary to ensure compliance with FDA regulations.

If you have any questions about clinical decision support software, the Pre-Cert program, or FDA’s regulation of digital health products more generally, please contact the Hogan Lovells attorney with whom you regularly work or any of the authors of this alert.

 

Authored by Jodi K. Scott, Suzanne Levy Friedman, and Wil Henderson

 

This website is operated by Hogan Lovells International LLP, whose registered office is at Atlantic House, Holborn Viaduct, London, EC1A 2FG. For further details of Hogan Lovells International LLP and the international legal practice that comprises Hogan Lovells International LLP, Hogan Lovells US LLP and their affiliated businesses ("Hogan Lovells"), please see our Legal Notices page. © 2024 Hogan Lovells.

Attorney advertising. Prior results do not guarantee a similar outcome.