In 2019, we saw increased interest in and adoption of machine learning (ML) and artificial intelligence (AI) technology in healthcare. Organizations have been piloting solutions that range from helping diagnose patients to ensuring the privacy of their data. While the industry is beginning to see some benefits from these tools, many end users are starting to ask important questions: How does the tool work? Where are my data stored?
Similarly, in the last year, we have also seen organizations increasingly send and store their data with third-party vendors instead of on-premises systems. The combination of these two trends has raised concerns about data protection and vendors’ appropriate use of data.
These conversations are driving the three biggest topics in 2020 for machine learning and AI in healthcare: accountability, interpretability, and transparency.
Accountability of machine learning systems allows organizations to trust that a system is performing the task it was designed for, track which data sets were used to train its algorithms, and identify data quality issues. In the hospital setting, these ML systems direct care decisions, so care must be taken to detect bias and other data issues.
Interpretability in machine learning ensures that organizations can understand why a system makes a particular decision. For example, if a system predicts patient discharge, it is important to understand which features drove the prediction. Interpretability is essential to building trust in machine learning systems, especially in the complex environments of clinical care.
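To make the idea concrete, here is a minimal sketch of what feature-level interpretability can look like for a linear model. The feature names, weights, and patient record below are entirely illustrative assumptions, not taken from any real discharge-prediction system; the point is only that a model's score can be decomposed into per-feature contributions that a clinician could inspect.

```python
import numpy as np

# Hypothetical discharge-prediction model: a simple linear model whose
# learned weights make each feature's contribution directly inspectable.
# Feature names and weights are illustrative, not from any real system.
features = ["days_in_ward", "lab_abnormal_count", "age", "mobility_score"]
weights = np.array([0.8, -1.2, -0.05, 0.6])
bias = -0.5

def explain(x):
    """Return the model score and each feature's signed contribution."""
    contributions = weights * x
    score = contributions.sum() + bias
    return score, dict(zip(features, contributions))

# One hypothetical patient record (same order as `features`).
patient = np.array([3.0, 1.0, 70.0, 2.0])
score, contrib = explain(patient)

# Rank features by absolute contribution to see what drove the score.
ranked = sorted(contrib.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

Real clinical models are rarely this simple, but the same question applies to complex ones: model-agnostic techniques such as permutation importance or SHAP values aim to produce an analogous per-feature breakdown for models that are not directly inspectable.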
Transparency of data usage in machine learning systems allows organizations to know where their data are stored, how their data are used in machine learning models, and whether their data are combined with other data sets. Currently, once data are sent to a third-party vendor, healthcare organizations do not have visibility into what is done with them. Better transparency ensures that healthcare data are protected and used only as intended.
The adoption of machine learning solutions in healthcare will continue in 2020, along with new policy guidelines for AI/ML in healthcare. In April 2019, the FDA released a discussion paper titled Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD),1 which identifies the tension between AI/ML software and regulatory agencies: AI/ML software continually learns, evolves, and improves at a rapid pace, while regulatory agencies seek to control the environment and understand the implications of the technology before it affects patient care.
The FDA’s discussion paper indicates that a shift in the regulatory framework is coming. Accountability, interpretability, and transparency will be the focal points of that discussion, ensuring that these technologies can be used to improve patient care while the risks to healthcare organizations and patient data are understood.
1 Dept. of Health & Human Servs., U.S. Food & Drug Admin., Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD), Discussion Paper and Request for Feedback (Apr. 2019), https://www.fda.gov/media/122535/download.