What’s ahead for AI and Machine Learning in healthcare?

In 2019, we saw increased interest in and adoption of machine learning (ML) and artificial intelligence (AI) technology in healthcare. Organizations have been piloting solutions that range from helping diagnose patients to protecting the privacy of their data. While the industry is beginning to see some benefits from these tools, many end users are starting to ask important questions: How does the tool work? Where are my data stored?

Similarly, in the last year, we have also seen organizations increasingly send their data to third-party vendors for storage instead of keeping it on premises. The combination of these two trends has raised concerns about data protection and whether vendors use the data appropriately.

These conversations are driving the three biggest topics in 2020 for machine learning and AI in healthcare: accountability, interpretability, and transparency.

Accountability of machine learning systems allows organizations to trust that a system is performing the task it was designed for, track which data sets were used to train its algorithms, and identify data quality issues. In the hospital setting, these ML systems direct care decisions, so care must be taken to detect bias and other data issues.
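
To make accountability concrete, the sketch below shows one way an organization might fingerprint a training set and capture basic quality metrics at training time. It is a minimal illustration under stated assumptions, not a standard: the pandas DataFrame input and the "label" column name are invented for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

import pandas as pd

def record_training_provenance(df: pd.DataFrame, model_name: str) -> dict:
    # Fingerprint the exact rows used for training so the data set can be
    # identified later if questions about bias or quality arise.
    fingerprint = hashlib.sha256(
        pd.util.hash_pandas_object(df, index=True).values.tobytes()
    ).hexdigest()
    record = {
        "model": model_name,
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "dataset_sha256": fingerprint,
        "row_count": len(df),
        "missing_values": int(df.isna().sum().sum()),
        # Class balance is a coarse first check for sampling bias.
        "label_balance": {
            str(k): float(v)
            for k, v in df["label"].value_counts(normalize=True).items()
        },
    }
    print(json.dumps(record, indent=2))
    return record

# Illustrative usage with a toy data set and a hypothetical model name.
record_training_provenance(
    pd.DataFrame({"age": [54, 61, 47], "label": [1, 0, 1]}),
    model_name="readmission-risk-v1",
)
```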

Interpretability in machine learning ensures that organizations can understand why a system makes a decision. For example, if a system predicts patient discharge, it is important to understand which features led to its decision. Interpretability is essential to build trust in machine learning systems, especially in the complex environments of clinical care.
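
For intuition, here is a minimal sketch of interpretability for a linear model on synthetic data. With a logistic regression, the product of each coefficient and feature value is that feature's contribution to the predicted log-odds, so a reviewer can see which features drove a given discharge prediction. The feature names are illustrative, not drawn from any real system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic cohort with illustrative features a discharge model might use.
feature_names = ["length_of_stay_days", "abnormal_lab_count", "age", "prior_admissions"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Synthetic ground truth: shorter stays and fewer abnormal labs favor discharge.
y = ((-1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=500)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, coefficient * feature value is that feature's
# contribution to the log-odds of discharge for this patient.
patient = X[0]
for name, coef, value in zip(feature_names, model.coef_[0], patient):
    print(f"{name:>20}: contribution {coef * value:+.3f}")
```

Richer models need dedicated attribution methods, but the goal is the same: a per-decision account a clinician or officer can inspect.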

Transparency of data usage in machine learning systems allows organizations to know where their data are stored, how their data are used in machine learning models, and whether their data are combined with other data sets. Currently, once data are sent to a third-party vendor, healthcare organizations have no visibility into what is done with them. Better transparency ensures that healthcare data are protected and used only as intended.

The adoption of machine learning solutions in healthcare will continue in 2020, along with new policy guidelines for AI/ML in healthcare. In April 2019, the FDA released a discussion paper titled Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD),[1] which identifies the tension between AI/ML software and regulatory agencies: AI/ML software continually learns, evolves, and improves at a rapid pace, while regulatory agencies seek to control the environment and understand the implications of the technology before it impacts patient care.

The FDA’s discussion paper indicates that a shift in the regulatory framework is coming. Accountability, interpretability, and transparency will be the focal points of the discussion, to ensure that these technologies can be used to improve patient care while the risks to healthcare organizations and patient data are understood.


[1] Dept. of Health & Human Servs., U.S. Food & Drug Admin., Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD), Discussion Paper and Request for Feedback (Apr. 2019), https://www.fda.gov/media/122535/download.

Risks of ‘Black Box’ Machine Learning in Compliance and Privacy Programs

Recent machine learning advances have the potential to revolutionize patient care through better clinical risk prediction and precision medicine. Rightfully so, the compliance and privacy communities are adapting these machine learning methods to help protect patient data. While these technologies will likely help detect and prevent future breaches, careful consideration must be given to the risks these methods carry when applied to compliance and privacy programs.

Healthcare providers access electronic medical record systems millions of times per day, and each access is recorded in an audit log. Manual processes for reviewing these audit logs for inappropriate behavior do not scale. Machine learning algorithms have the potential to automate the detection of snooping, identity theft, and other threats by learning the characteristics of good, bad, and anomalous access patterns. However, many types of modern machine learning models are uninterpretable to humans. As a result, these ‘black box’ models leave compliance and privacy officers unable to tell which ‘privacy policies’ the system is applying, or whether those policies are correct.
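
The sketch below illustrates the problem with a standard off-the-shelf anomaly detector (scikit-learn's IsolationForest) trained on synthetic access features. It can flag a suspicious access, but it cannot articulate the policy behind the flag; the feature choices here are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic per-access features derived from an audit log; the choice of
# features (hour of access, records touched, care-team relationship) is an
# assumption made for this example.
rng = np.random.default_rng(1)
normal_accesses = np.column_stack([
    rng.normal(13, 2, size=2000),          # typical daytime access hour
    rng.poisson(3, size=2000),             # records touched in the session
    rng.binomial(1, 0.95, size=2000),      # usually on the patient's care team
])
snooping = np.array([[2.0, 40.0, 0.0]])    # 2 a.m., 40 records, no relationship

model = IsolationForest(random_state=0).fit(normal_accesses)
print(f"anomaly score: {model.decision_function(snooping)[0]:.3f}")
# Negative scores mark outliers. Here lies the black-box problem: the model
# flags the access but states no policy, so an officer cannot say which rule
# was applied or whether it is correct.
```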

The interpretability of machine learning models is an active area of research. While some machine learning problems can be addressed adequately by predictions alone, without explanations of why each prediction was made, this paradigm is risky for compliance and privacy problems. An informal adage from the HHS Office for Civil Rights (OCR) is: “What is your policy, and can you demonstrate to regulators that you are following it?” If you cannot state what the machine learning algorithm is doing, how can you define your policy, let alone defend it to regulators?

The lack of interpretability also raises concerns about incorrectly learned privacy policies. Consider a training data set in which most accesses to hypertension patients are appropriate. Would the machine learning algorithm learn a policy stating that “all accesses to hypertension patients are appropriate”? Obviously, a diligent compliance officer would not want to deploy such a broad and arbitrary policy. Unfortunately, the compliance officer may have no means to identify or remedy these issues.
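
A toy experiment makes the risk visible. In the sketch below, a shallow decision tree is trained on a synthetic audit log in which every access to a hypertension patient, including accesses by staff off the care team, was labeled appropriate; the tree duly learns the over-broad rule. Column names and counts are invented for illustration.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic, biased training data: every access to a hypertension patient was
# labeled appropriate, even the 50 off-care-team (snooping-like) accesses.
rows = (
    [(1, 1, 1)] * 50 +   # hypertension, on care team, appropriate
    [(1, 0, 1)] * 50 +   # hypertension, off care team, still labeled appropriate
    [(0, 1, 1)] * 40 +   # other diagnosis, on care team, appropriate
    [(0, 1, 0)] * 20 +   # other diagnosis, on care team, inappropriate
    [(0, 0, 0)] * 40     # other diagnosis, off care team, inappropriate
)
df = pd.DataFrame(rows, columns=["diagnosis_hypertension", "on_care_team", "appropriate"])

tree = DecisionTreeClassifier(max_depth=1, random_state=0)
tree.fit(df[["diagnosis_hypertension", "on_care_team"]], df["appropriate"])

# The single learned split is exactly the over-broad policy the article warns
# about: "all accesses to hypertension patients are appropriate," regardless
# of whether the employee has any care relationship with the patient.
print(export_text(tree, feature_names=["diagnosis_hypertension", "on_care_team"]))
```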

Machine learning algorithms for compliance and privacy may be better applied if they keep the compliance and privacy officer “in the loop.” Officer-in-the-loop machine learning algorithms leverage large-scale data analytics to identify trends and patterns in access data, but then recommend the policy (or the reason an access is appropriate or inappropriate) to the compliance officer. The compliance officer then has the opportunity to accept or reject the policy. As such, the compliance officer is setting the policy. The auditing system can then apply the learned policy going forward. This supervision allows compliance officers not only to defend their policies if audited by the OCR, but also to take advantage of a broad class of machine learning algorithms available today.
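
One minimal way to structure such a workflow is sketched below: the mining step proposes human-readable candidate rules, the officer accepts or rejects each one interactively, and only accepted rules are applied to future accesses. The rule structure and example predicates are assumptions for illustration, not a description of any particular product.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CandidateRule:
    description: str                      # human-readable policy statement
    support: int                          # accesses in the log the rule explains
    predicate: Callable[[dict], bool]     # does a given access satisfy the rule?

# Step 1: large-scale analytics proposes candidate policies (stubbed here).
candidates = [
    CandidateRule(
        "Access is appropriate if the employee has an appointment with the "
        "patient within 7 days",
        support=412_000,
        predicate=lambda a: a.get("days_to_appointment", 999) <= 7,
    ),
    CandidateRule(
        "Access is appropriate if the patient has hypertension",
        support=35_000,
        predicate=lambda a: a.get("diagnosis") == "hypertension",
    ),
]

# Step 2: the officer, not the algorithm, decides what becomes policy.
approved = [
    rule for rule in candidates
    if input(f"Accept? [{rule.support:,} matches] {rule.description} (y/n): ")
    .strip().lower() == "y"
]

# Step 3: only officer-approved rules filter future accesses automatically;
# everything else is routed to manual review.
access = {"days_to_appointment": 2, "diagnosis": "hypertension"}
explained = any(rule.predicate(access) for rule in approved)
print("explained by approved policy" if explained else "route to manual review")
```

Because every deployed rule was stated in plain language and explicitly approved, the officer can answer OCR's question directly: this is our policy, and here is the record of our following it.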

Machine learning and artificial intelligence are extremely useful tools for helping compliance officers audit at scale. However, when left unchecked, policies can be learned incorrectly, leaving the hospital at risk. Be sure you can explain and defend to the OCR exactly what decisions your tool makes and why.

References:

Fabbri D, Frisse M, Malin B. The Need for Better Data Breach Statistics. JAMA Internal Medicine. 2017.

Fabbri D, LeFevre K. Explaining accesses to electronic medical records using diagnosis information. Journal of the American Medical Informatics Association. 2013.

Fabbri D, LeFevre K. Explanation-based auditing. Proceedings of the VLDB Endowment. 2012.

Empowering Compliance Officers With Technology

Big Data and Artificial Intelligence technologies are improving medical data privacy and security every day. New technologies promise increased efficiency, improved accuracy, and better risk management. But to fully realize the potential of these technologies and maximize outcomes, we need tools that also empower the compliance officer to succeed.

See the rest of the article on The Compliance and Ethics Blog

Leveraging DeepMind’s Blockchain EMR Access Log

Machine learning (ML) offers incredible promise for advancing medical diagnosis and treatment. Whether it is IBM’s Watson or Google’s DeepMind Health, it seems like many of the world’s biggest technology companies are getting involved in innovative approaches to improving patient care. One area gaining more ML healthcare interest is data privacy and security. For example, DeepMind has started to take important steps to enhance the security of clinical data by creating tamper-proof logs of access using blockchains.
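
The general idea behind such tamper-proof logs fits in a few lines: each log entry commits to a hash of its predecessor, so any retroactive edit breaks the chain and is detectable on verification. The code below is a minimal sketch of that hash-chaining concept, not DeepMind's actual implementation.

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    # Each entry's hash covers both its own content and the previous hash.
    body = json.dumps(payload, sort_keys=True)
    return hashlib.sha256((prev_hash + body).encode()).hexdigest()

class HashChainedLog:
    # Append-only log in which each entry commits to its predecessor's hash.
    def __init__(self):
        self.entries = []                   # list of (hash, payload) pairs

    def append(self, payload: dict) -> None:
        prev = self.entries[-1][0] if self.entries else "GENESIS"
        self.entries.append((entry_hash(prev, payload), payload))

    def verify(self) -> bool:
        prev = "GENESIS"
        for h, payload in self.entries:
            if h != entry_hash(prev, payload):
                return False                # chain breaks at the tampered entry
            prev = h
        return True

log = HashChainedLog()
log.append({"who": "dr_smith", "what": "record 123", "when": "2019-12-01T09:30"})
log.append({"who": "nurse_lee", "what": "record 456", "when": "2019-12-01T09:31"})
print(log.verify())                         # True
log.entries[0][1]["who"] = "someone_else"   # a retroactive edit...
print(log.verify())                         # False: tampering is detectable
```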

At Maize Analytics, we think that machine learning has a role to play not only in improving patients’ health, but also in improving data privacy and security. Just as ML systems can help doctors and nurses better evaluate and treat patients, Maize’s technology can empower compliance officers to better protect the privacy of patients.

Maize’s technology takes the symptoms provided by access logs – the “who,” “what,” “where,” and “when” of a record’s access – and uses novel ML techniques to determine the diagnosis of “why” the access took place. Our peer-reviewed and published work has shown that Maize can filter 95-99% of all accesses, allowing privacy officers to focus on the real threats.
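
In simplified form, the explanation-based idea looks like the sketch below: accesses that can be explained by clinical context, here an appointment between the employee and the patient on the same day, are filtered out, and only unexplained accesses are routed to the privacy officer. The data layout is an assumption for this example, and production systems chain together many explanation types rather than this single check.

```python
import pandas as pd

# Illustrative inputs: the "who/what/when" from the audit log, plus clinical
# context (appointments) that can supply the "why". Column names are assumed.
accesses = pd.DataFrame({
    "employee": ["dr_smith", "dr_smith", "clerk_jones"],
    "patient":  ["p1",       "p2",       "p1"],
    "date":     pd.to_datetime(["2019-12-01", "2019-12-01", "2019-12-01"]),
})
appointments = pd.DataFrame({
    "employee": ["dr_smith"],
    "patient":  ["p1"],
    "date":     pd.to_datetime(["2019-12-01"]),
})

# An access is "explained" if the employee had an appointment with the
# patient that day; explained accesses are filtered from the review queue.
merged = accesses.merge(appointments, on=["employee", "patient", "date"],
                        how="left", indicator=True)
unexplained = merged[merged["_merge"] == "left_only"].drop(columns="_merge")
print(unexplained)   # only these accesses need a privacy officer's attention
```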

We know that the work of a compliance officer can be just as stressful and high-stakes as that of a doctor or nurse, and that’s why we are committed to putting the same high-powered machine learning technology to work to improve outcomes.

Read more about the technology in Compliance Today magazine