The First Step to Protect Against IT Hacking

IT hacking is an ever-growing issue in the healthcare industry, with 2020 being one of the biggest years for cybersecurity attacks to date. COVID-19 played a large role in the uptick in attacks, as many threats targeted remote workers and the fears surrounding the ongoing pandemic. The largest known breach of 2020, Blackbaud, was estimated to have affected at least two dozen providers and over 10 million patients.1 This breach was caused by unauthorized access to a system that allowed hackers to extract PII.

A common theme in cyberattacks is attackers gaining access to systems through an authorized user’s account via email phishing. The access then goes undetected, allowing intruders to extract information for months at a time. If unauthorized access is one of the main methods hackers use to extract information, the first line of defense is to review user access on a consistent basis.

Organizations with user access review processes in place have better visibility into whether an employee’s system access is appropriate or unnecessary, limiting the number of avenues cyberattackers can utilize.

There are four employee types to be reviewed during user access reviews to ensure all areas of the organization are protected: current employees, new employees, non-employees (e.g. consultants), and terminated employees. 

For current employees, reviewing system access on a regular basis ensures they have access to the systems they need. A key element of user access reviews is ensuring that the minimal amount of access is given to an employee in order to perform their job function. Limiting access to only the necessary systems provides stronger protection to data. 

The idea of minimal necessary access carries over to new employees and the process for provisioning their access. Outlining the necessary systems required for their role in the beginning, and only provisioning access to those systems, is optimal to minimize risk; if more access is needed, it can be given when that time comes. 

Non-employees are people who do some sort of work within an organization but are not employed. These can be researchers, contractors, consultants, freelancers, subcontractors, etc. While they are providing services for an organization, non-employees require access to systems and information, but their access privileges need to be monitored to ensure they only have access to systems for the period of time they are working. 

Lastly, there is a rising issue of terminated employees inappropriately accessing sensitive systems because their access rights were never deprovisioned. Processes should be configured to remove terminated employees’ access across all systems, rather than limiting deprovisioning to the main Active Directory system.

In 2020, a former employee of Cisco accessed a protected computer and deleted 456 virtual machines, costing the company $2.4 million to rectify.2 Reviewing all terminated employees’ access to ensure it has been revoked correctly is an important final step in protecting against cyberattacks.

Unfortunately, and perhaps surprisingly, correctly revoking all of an employee’s access remains difficult in modern organizations for several reasons: human error, the large number of systems deployed within organizations, and a lack of visibility into all accounts within those systems. Moreover, manual processes that rely on managers remembering to request deprovisioning of each account are likely to leave gaps and potential vulnerabilities.

The review of all current and former users’ access rights is an ongoing process as people come and go, get promoted or move departments. To minimize organizational risk, IT teams should conduct quarterly access reviews (at least) so only the necessary system access is granted to employees, and terminated users have had all their access privileges revoked. Moreover, automated processes and tools should be configured to provide managers visibility to all accounts in an organization so they can easily be deprovisioned, without having to remember each step.
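The automated reconciliation described above can be sketched as a periodic comparison between the HR roster and each system’s account export. This is only an illustration, assuming each system can export its account list; the data sources and field names here are hypothetical:

```python
# Hypothetical sketch of an automated access review: flag accounts whose owner
# no longer appears on the HR roster of active employees.
from dataclasses import dataclass

@dataclass(frozen=True)
class Account:
    system: str
    username: str

def find_orphaned_accounts(active_employees, accounts_by_system):
    """Return accounts belonging to users absent from the HR roster."""
    active = {name.lower() for name in active_employees}
    orphaned = []
    for system, usernames in accounts_by_system.items():
        for username in usernames:
            if username.lower() not in active:
                orphaned.append(Account(system, username))
    return orphaned

if __name__ == "__main__":
    hr_roster = ["alice", "bob"]
    accounts = {
        "EMR": ["alice", "bob", "carol"],   # carol was terminated last month
        "Billing": ["alice", "carol"],
    }
    for acct in find_orphaned_accounts(hr_roster, accounts):
        print(f"Review and deprovision: {acct.username} on {acct.system}")
```

Running a reconciliation like this on a schedule surfaces deprovisioning gaps without requiring any manager to remember each account.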

IT hackers continue to go after healthcare organizations. Data protection starts with user access reviews to quickly identify and remove any unnecessary access, which will limit the avenues for attackers to exploit. Implementing processes to conduct regular and complete user access reviews can put healthcare organizations in a better position to catch and mitigate risk from cyberattacks. 


Protecting Patient Privacy during COVID-19

With the rapid spread of COVID-19 across the country, and increasing numbers of infected patients at hospitals, compliance and privacy teams are taking extra precautions to protect sensitive patient information. Here are some tips to ensure your organization is protecting patient privacy during the COVID-19 outbreak:

1. Stay up-to-date on all announcements from the Department of Health and Human Services (HHS) Office for Civil Rights (OCR). While rules and regulations under the HIPAA Privacy Rules are still operable and enforceable, the OCR has released several waivers for the disclosure of Protected Health Information (PHI) during the COVID-19 crisis. Some of these include Enforcement Discretion for community-based testing sites, business associates, and telehealth services. These announcements are critical for compliance and privacy teams to ensure they are staying compliant during this time, so it is important to continually check the OCR website for any new information.

Maize also has a page of these resources for quick access.

2. Daily tracking of COVID-19 patients. It is important to monitor accesses for all COVID-19 patients on a daily basis to ensure inappropriate accesses are found and mitigated in a timely manner.

3. Notify all employees to stay vigilant. During this pandemic, there has been an increase in cyberattacks on healthcare organizations. It is important for compliance and privacy teams to inform all employees of these risks and communicate procedures for reporting suspicious activities. Scams have included calls from people claiming to work for the OCR, baiting healthcare employees into divulging PHI, as well as phishing and malware emails.
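The daily tracking in tip 2 can be sketched as a simple cohort filter over the access log. The record fields and the source of the cohort list are assumptions for illustration:

```python
# Minimal sketch of daily COVID-19 patient access monitoring: pull yesterday's
# accesses to a monitored patient cohort for manual review.
from datetime import date, timedelta

def accesses_to_monitor(access_log, cohort_mrns, day):
    """Return accesses to cohort patients on a given day."""
    return [a for a in access_log
            if a["mrn"] in cohort_mrns and a["date"] == day]

yesterday = date.today() - timedelta(days=1)
log = [
    {"user": "jsmith", "mrn": "100", "date": yesterday},
    {"user": "jdoe",  "mrn": "200", "date": yesterday},
]
covid_cohort = {"100"}  # MRNs of COVID-19 patients, from the hypothetical cohort list
for access in accesses_to_monitor(log, covid_cohort, yesterday):
    print(f"Flag for review: {access['user']} accessed MRN {access['mrn']}")
```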

Protecting patient information is always important, but during a pandemic, the significance of compliance and privacy teams within healthcare organizations becomes heightened. We hope these tips will help, and we thank you and all the employees at your organization for the work you have been doing to help during this time.

Engagement with Executive Management: How to Arm Compliance with Specific Data That Informs Decision Making

I was recently listening to a webinar when someone asked a question that I often ask: “how do I get business executives to care as much about compliance as I do?” I expected the answer to be the same one I have heard a hundred times: “you have to make them understand the risks… you have to make sure they understand the potential for personal liability… you have to explain the government’s expectations… etc.” The answer the speaker gave was more insightful; she said, “you can’t.” She went on to explain that if you, as the compliance officer, are not the individual in your company who cares most about compliance, who is the most excited about your compliance program, then you are probably in the wrong position.

I think rather than asking how to get business executives excited about compliance, we should ask how we can frame our compliance metrics in a way that supports the things that make the business executives really excited about the work compliance does.

Metrics: What’s Important to Executives?

Many of us in the compliance field produce benchmarking data for the board and executive management teams. A small sampling of typical metrics include:

  • Number of hotline calls received (by location, business unit, anonymous/identified, allegation, etc.)
  • Length of time to respond to hotline call
  • Source of hotline awareness
  • Number and type of privacy violations
  • Number of active compliance investigations (by type, location, allegation, etc.)
  • Length of time to close investigation
  • Number of training programs delivered
  • Training completion rates
  • Policy dissemination acknowledgements

While these metrics can give important information about the performance of the compliance program, they don’t really convey to the executive team what that performance means for the business. I would argue that it is often difficult to engage executive management in your compliance program because you are not providing them with information framed in a manner that helps them manage their critical strategic and operational priorities.

So, let’s think about some of those business priorities. In my experience, healthcare executives are focused on quality; revenue; costs; growth; patient, employee, and physician satisfaction; and reputational, financial, and operational risk. How do you use these priorities to effectively show executives what is going well in your compliance program and what requires their attention? How can your metrics help executives understand their risk position? How do you help executives establish a meaningful risk tolerance level?

To answer these questions, you first need to determine the types of data you will provide. Generally, there are two types of metrics: process metrics and outcome metrics. 


Process Metrics and Outcome Metrics

Process metrics are those data that show program effectiveness (hotline reports received, number responded to timely, trainings completed, policies distributed, etc.). Process metrics should include an indication of how the measure is trending over time and some indication of criticality to help your executives understand those data that require their attention, those that don’t, and those that should be celebrated.

Outcome metrics are those data that show the results of your auditing, monitoring, and investigation programs which address specific risk areas (new physician coding audit, focused claim coding audits, employee access audit, etc.). Outcome measures should be tied to your risk assessment priorities and are often easier to align with strategic priorities.


Gather and Connect Metrics

Your metrics should derive from the seven elements of an effective compliance program, your risk assessment priorities, and specific risk areas. It is important, however, that you don’t try to use data to develop metrics for every aspect of your compliance program. Remember, your executives are getting data from various departments across the organization, and data fatigue is a very real problem. Copious amounts of data will cause your executives’ eyes to glaze over, and the messaging you are trying to convey will be lost.

Consider aggregating some of your department data into a few key metrics that can drive a story aligned with the organizational strategy. For example, you may want to take all of your compliance program effectiveness measures and provide a single effectiveness score which can be trended over time. Similarly, you can take specific risk area measures that affect one of the key strategic priorities and aggregate them into a single strategy score (e.g., provide a Readiness for Growth measure that combines your auditing results that affect Growth).
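A single effectiveness score like the one described above can be as simple as a weighted average of normalized measures. The measure names and weights below are purely illustrative:

```python
# Hypothetical sketch: roll individual program measures (each normalized to
# 0-1) into one effectiveness score that can be trended quarter over quarter.
def effectiveness_score(measures, weights):
    """Weighted average of normalized program measures."""
    total_weight = sum(weights[name] for name in measures)
    weighted_sum = sum(measures[name] * weights[name] for name in measures)
    return weighted_sum / total_weight

quarter = {
    "training_completion": 0.96,      # fraction of staff trained
    "hotline_response_timely": 0.88,  # fraction of calls answered on time
    "policy_acknowledgement": 0.92,   # fraction of policies acknowledged
}
weights = {
    "training_completion": 2,
    "hotline_response_timely": 3,
    "policy_acknowledgement": 1,
}
print(f"Program effectiveness: {effectiveness_score(quarter, weights):.0%}")
```

The weights encode which measures matter most to the organization; trending the single score over time gives executives one number to watch, with the underlying detail available on request.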

Keep in mind that not every audience requires the same data. Your Compliance Committee may need significantly more information about specific reporting elements than other members of your executive management team. Know what actions, decisions, or discussion you want to elicit from the group and tailor your data and metrics to the audience charter.

Finally, consider how to connect your information with other information gathered by the organization. For example, if Quality is collecting information specific to patient satisfaction, think about how your data may inform the quality data. Are you seeing more hotline calls coming in from units that are reporting poor patient satisfaction? Are you seeing more data breaches from units that are reporting poor patient satisfaction? When you can integrate your data with other data collected in the organization, executives can better understand what the data mean on an overall scale.


Engage Executives

To engage your executive management in your compliance program, you need to provide them with information that can help inform their strategic priorities. This approach requires a different mindset from compliance officers. Most compliance data provided to the executive team is designed to express potential compliance risk without being tied to the organization’s strategic priorities. However, the data you provide should include both process measures and outcome measures, and should be tailored to the audience you are presenting to. By aligning your compliance metrics with the organization’s strategic priorities, you are seen as a partner in achieving organizational goals rather than someone managing goals separate from the rest of the organization.

As your organization’s compliance professional, you have a lot of data available to you. Your challenge is taking all that data and leveraging it into meaningful and actionable information for your executives that aligns with the organization’s strategic, financial, and operational objectives. This engagement will form the partnership you need to minimize risk and grow your program’s visibility.



Margaret has over twenty years of experience in healthcare compliance, including roles as Chief Compliance Officer for large integrated health systems providing services in multi-state geographies. She is recognized as an industry thought leader and speaker, including addressing the US Senate Finance Committee and other government agencies. Margaret is also the past President and current member of the Board of Directors of the Society of Corporate Compliance and Ethics (SCCE) and the Health Care Compliance Association (HCCA), supporting and promoting integrity programs nationally and internationally.

4 Tips for Building a Successful Access Monitoring Process

Monitoring is the fifth of the seven elements of an effective compliance program. It is a continuous task that compliance and privacy teams must perform to ensure any inappropriate accesses are detected and resolved in a timely manner. When discussing how your team should go about monitoring, it is important to design a process in line with your team and healthcare facility’s priorities.

When looking to build a successful monitoring process, 4 things need to be considered:

1. The Subjects

The first thing to determine is the subjects of your monitoring effort. Some questions that your team should consider:

– What/who are you monitoring? — Lay out the parameters of what and who your team will be monitoring. Know what information needs to be monitored (patient accesses, VIPs, newborns, employees who have made previous inappropriate accesses, etc.) and which systems to pull data from.

– What are you looking for? — Map out what is appropriate and inappropriate for your facility. For example, it should be noted whether self-accesses are appropriate or not at your healthcare facility.
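One concrete example of mapping out what counts as inappropriate is a self-access check, where an employee opens their own chart. The log format and the employee-to-MRN mapping here are assumptions for illustration:

```python
# Sketch of a single monitoring rule: flag self-accesses, i.e. an employee
# viewing their own medical record (inappropriate at many facilities).
def flag_self_accesses(access_log, employee_mrns):
    """employee_mrns maps an employee user ID to that employee's own MRN."""
    return [a for a in access_log
            if employee_mrns.get(a["user"]) == a["mrn"]]

log = [
    {"user": "jdoe", "mrn": "555"},  # jdoe viewing their own chart
    {"user": "jdoe", "mrn": "777"},  # jdoe viewing a different patient
]
flags = flag_self_accesses(log, {"jdoe": "555"})
for access in flags:
    print(f"Possible self-access: {access['user']} opened MRN {access['mrn']}")
```

Whether self-accesses are actually inappropriate depends on your facility’s policy, as noted above; the point is to encode each such decision as an explicit, reviewable rule.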

2. Methods

As stated by the HIPAA Security Rule provision on Audit Controls (45 C.F.R. § 164.312(b)), covered entities are required to implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information (ePHI).  Therefore, your team must have a method in place to monitor ePHI.

Lay out the method of your team’s monitoring processes and the tools used to monitor EMR accesses. It’s best for that method to be documented so new team members can be easily onboarded. Moreover, leverage software systems to help automate the process, so you can focus on suspicious behavior rather than time-consuming false positives.

3. Frequency

The monitoring frequency is an important parameter of your process because it determines how often accesses are reviewed. Put a formal process in place that notes how often your team will monitor subjects. The frequency may change depending on the subjects and the size of the facility. A schedule that lays out the amount of time and which team members are assigned work can help everyone stay on track to meet reporting requirements. 

4. Reporting

Discuss what information needs to be reported, how those reports will be presented, and who the reports will need to be sent to. It is important to define the specific metrics you present to compliance managers, department heads, and executive committees. Remember that different people in organizations like to consume data in different ways: some prefer raw data, while others want aggregate results.

When building this process, ensure all rules and regulations are being adhered to. Over time, this process and schedule may change as your team gets into its flow. If changes are needed, discuss them with your compliance or privacy team and roll them out over time. Always update your documentation to reflect the updated process (it is helpful to keep versions of the documentation in case your team wants to look back on what changed).

A monitoring process is required per HIPAA, but building out a successful program will also help your team better manage day-to-day tasks and ensure the proper data are being monitored. Going through each of these 4 points will help define and build your monitoring program, so your team can implement it and better protect your patients’ privacy.


Maize Analytics Patient Privacy Monitoring Solution can help streamline this process by reviewing up to 99% of the access log for your compliance team.



OCR Cyber Newsletter January 2017 

Managing Healthcare Insider Security Threats

Often when discussing hospital security threats, external breaches are the main focus. However, recent evidence shows those breaches are not the biggest concern to hospitals – they’re more concerned with breaches that can happen within their own halls, by their own staff.

HIMSS Media recently conducted a study on behalf of SailPoint, and the general consensus was that healthcare provider organizations are highly concerned about threats posed by insiders. 43% of healthcare provider respondents said they were more concerned about insider threats to data than external breaches. Given this concern, one would assume these organizations have technology in place to help them audit internal accesses, but this is currently not the case. Instead, the top tactics to thwart insider threats are training and awareness programs for users. While these are both important, training can only do so much, and it cannot be the complete process for preventing and detecting internal threats.

The best way to combat insider threats is by combining a training and awareness program with technology. With machine learning, user-based analytics, and artificial intelligence programs that monitor ePHI access, hospitals can catch inappropriate access to patient data. Although these programs have recently been on the rise in the healthcare industry, only 48% of healthcare provider organizations use access behavior monitoring and analytics as part of their approach to detecting insider threats. Many compliance officers are still using manual solutions for their internal auditing, which is time-consuming and cannot scale with millions of accesses per day.

There’s always a level of uncertainty when adding a tool to an auditing process. Users wonder if it will actually help, or if it will add more work to their day, making their job more difficult. When exploring potential tools, users should search for a system that is easy to use, ensures smooth integration into their current process, and will allow them to review and approve auditing policies so they can explain what the machine algorithm is doing and define their policy to regulators if needed.

It’s clear that insider threats are a high-priority concern, yet healthcare provider organizations are only beginning to leverage the powerful technology available to monitor these accesses. A proper training and awareness program combined with an auditing system that can detect and report on unauthorized access is vital to all of these organizations.

Contact us for more information on how Maize can help you manage insider threats to your healthcare institution.


Risks of ‘Black Box’ Machine Learning in Compliance and Privacy Programs

Recent machine learning advances have the potential to revolutionize patient care through better clinical risk prediction and precision medicine. Rightfully so, the compliance and privacy communities are adapting these machine learning methods to help protect patient data. While these technologies will likely help detect and prevent future breaches, careful consideration must be taken to understand the risks of these machine learning methods when applied to compliance and privacy programs.

Healthcare providers access electronic medical records systems millions of times per day, and these accesses are recorded in audit logs. Manual processes to review these audit logs for inappropriate behavior do not scale. Machine learning algorithms have the potential to automate the detection of snooping, identity theft, and other threats by learning characteristics of good, bad, and anomalous access patterns. However, many types of modern machine learning models are uninterpretable to humans. With these ‘black box’ models, compliance and privacy officers do not know which ‘privacy policies’ the system is applying, nor whether they are correct.

The interpretability of machine learning models is an active area of research. While some types of machine learning problems can be sufficiently addressed with predictions without explanations describing why the prediction is made, this paradigm is risky for compliance and privacy problems. An informal adage from the HHS Office for Civil Rights (OCR) is: “What is your policy and can you demonstrate you are following your policy to regulators?” If you cannot state what the machine learning algorithm is doing, how can you define what your policy is or even defend it to regulators?

The lack of interpretability also raises concerns about incorrectly learned privacy policies. Consider a training data set in which most accesses to hypertension patients are appropriate. Would the machine learning algorithm learn a policy that states that “all accesses to hypertension patients are appropriate?” Obviously, a diligent compliance officer would not want to deploy such a broad and arbitrary policy. Unfortunately, the compliance officer may have no means to identify or remedy these issues.

Machine learning algorithms for compliance and privacy may be better applied if they keep the compliance and privacy officer “in the loop.” Officer-in-the-loop algorithms leverage large-scale data analytics to identify trends and patterns in access data, but then recommend the policy (or reason for appropriate or inappropriate access) to the compliance officer. The compliance officer then has the opportunity to accept or reject the policy; as such, the compliance officer is setting the policy. The auditing system can then apply the learned policy going forward. This supervision allows compliance officers not only to defend their policies if audited by the OCR, but also to take advantage of a broad class of machine learning algorithms available today.
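The hypertension example above suggests what such a workflow could look like: the system mines candidate policies from labeled access data, and only officer-approved policies are applied going forward. This is a simplified sketch; the diagnosis-based policy form, thresholds, and data layout are assumptions:

```python
# Sketch of an officer-in-the-loop auditing workflow: propose interpretable
# candidate policies, let the officer accept or reject them, then apply only
# the approved ones.
from collections import Counter

def propose_policies(labeled_accesses, min_support=0.9):
    """Recommend 'accesses to patients with diagnosis D are appropriate'
    whenever at least min_support of observed accesses for D were labeled
    appropriate. The officer sees (and can reject) each candidate."""
    totals, appropriate = Counter(), Counter()
    for access in labeled_accesses:
        totals[access["diagnosis"]] += 1
        if access["appropriate"]:
            appropriate[access["diagnosis"]] += 1
    return [d for d in totals if appropriate[d] / totals[d] >= min_support]

def apply_policies(accesses, approved_diagnoses):
    """Auto-clear accesses covered by officer-approved policies."""
    return [a for a in accesses if a["diagnosis"] in approved_diagnoses]

history = [{"diagnosis": "hypertension", "appropriate": True}] * 9 + \
          [{"diagnosis": "hypertension", "appropriate": False}]
candidates = propose_policies(history)   # system recommends the broad rule
approved = []                            # ...and the diligent officer rejects it
new_accesses = [{"diagnosis": "hypertension"}]
auto_cleared = apply_policies(new_accesses, approved)  # nothing auto-cleared
```

Because every applied policy passed through the officer, the answer to “what is your policy?” is always the officer’s own approved rule list, not an opaque model.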

Machine learning and artificial intelligence are extremely useful tools to help compliance officers audit at scale. However, when left unchecked, policies can be incorrectly learned, leaving the hospital at risk. Be sure you can explain and defend exactly what and why your tool makes decisions to the OCR.


Fabbri D, Frisse M, Malin B. The Need for Better Data Breach Statistics. JAMA Internal Medicine. 2017.

Fabbri D, LeFevre K. Explaining accesses to electronic medical records using diagnosis information. Journal of the American Medical Informatics Association. 2013.

Fabbri D, LeFevre K. Explanation-based auditing. Proc. VLDB. 2012.

Often Forgotten Risk to Patient Data

Academic medical centers and health systems with large research groups present unique challenges for EMR access monitoring programs, according to a recent talk by Dr. Daniel Fabbri, Founder and CEO of Maize Analytics. Dr. Fabbri, also an Assistant Professor of Bioinformatics and Computer Sciences at Vanderbilt University, spoke on balancing the need to provide electronic medical record (EMR) access to researchers, while also ensuring patient privacy. The talk was part of the Health Care Compliance Association’s recent Research Compliance Conference held in Austin, Texas.

In his presentation, “Strategies to Effectively Monitor Researchers’ Access to the EMR,” Dr. Fabbri dove into the risk posed by researchers with access to electronic health records, and what makes monitoring researchers’ accesses so difficult. Approaches used to monitor clinician accesses are not directly transferable to detect researcher misuse, he says.

“Over the past several years we’ve seen the threat of breaches from insiders growing,” says Dr. Fabbri. “Researchers are a special class of insider whose work can involve seemingly erratic access patterns, making them difficult to monitor. As a result, standard methods, like rules-based auditing and anomaly detection, are not sufficient for monitoring researchers, creating a significant risk to patient data.”

The talk spurred a discussion about procedures, processes and tools covered entities can employ to ensure researchers’ EMR accesses comply with HIPAA and institutional policies. According to Dr. Fabbri, this starts with researchers’ applications to the Institutional Review Board (IRB).

“Including structured diagnosis and procedure codes in the IRB application provides a guide for compliance officers to understand what constitutes appropriate access for each researcher,” he says.

Covered entities can integrate IRB submission data within their access monitoring tools to more effectively detect inappropriate behavior, continues Dr. Fabbri. “These research-aware monitoring tools can identify when a project goes beyond the listed research scope and alert the compliance department.”

The presentation—attended by researchers, compliance teams, and HIPAA officers alike—concluded with Dr. Fabbri providing monitoring recommendations and guidance so that healthcare organizations can monitor the various types of access to patient data.

Three Essential Elements of the Compliance Toolkit

Compliance officers regularly navigate one of the most complex systems in our country—health care. To protect patient privacy, they are charged with creating (and enforcing!) policies that align with changing regulations, while juggling practical limitations at their own facility.

There are many resources available to help compliance teams develop effective programs. The Office of Inspector General provides online education, and private companies offer products that address everything from technical needs to emotional stressors associated with the job.

Our team recently attended the Health Care Compliance Association’s regional conference in Dallas, Texas, where we had the opportunity to listen and learn from compliance experts about other ways to support compliance teams. We heard from Bret Bissey, MBA, FACHE, CHC, CMPE, a healthcare compliance executive with over 30 years’ experience. He spoke on “What Every Compliance Officer Needs in Their Toolkit.” Three themes emerged:

1) Support. Compliance teams deserve access to the board (or hospital executives), an appropriate budget, and a respectable level of authority. Without these elements, it is hard for compliance teams to implement changes that steer staff toward a culture of compliance.

2) Independence. By acting independently from clinical operations, compliance officers can remain objective. What if a senior-level physician, or board member, violates a policy? Compliance officers must be empowered to make proper decisions without fear of retaliation. Independence allows this—but it must be clear who, or what policy, validates this independence.

3) Metrics. Certifications, analytics, audits, and documentation are essential elements of any compliance program. Quantitative data are not only important to measure success, but they can also help “sell” compliance programs to staff. Data can support compliance teams in showing why policies are needed.

As compliance teams work to oversee all aspects of healthcare operations, it’s easy to see why so many products have emerged to support their day-to-day activities. Compliance teams can choose resources and tools that integrate with their workflow. Tools designed to help teams attain organizational goals—that also keep compliance officers feeling supported and motivated—are most likely to lead to success.

SIEM to PIEM: Privacy Information and Event Management Systems

Some in the privacy community have looked to their security counterparts to adapt SIEM tools to the challenges of protecting patient data. However, there are stark differences between network monitoring and EMR access auditing. Privacy Information and Event Management (PIEM) systems are an emerging class of privacy monitoring system geared for medical record protection.

Is Accuracy a Fair Metric to Evaluate EMR Auditing Systems?

There has been a lot of talk about new EMR access monitoring systems. These systems leverage various types of machine learning and artificial intelligence algorithms to identify and rank suspicious behavior. However, parsing their claims is often difficult for two primary reasons: (i) there is no shared data set to evaluate these methods, and (ii) claims are made using different evaluation metrics.

Putting aside the issue of a shared data set for now, let’s consider some of the different metrics used today (e.g., false positive rates, false negative rates, true positive rates, true negative rates, recall, precision, and accuracy, among others), and whether they tell the entire story about a system’s quality.

To do that, let’s consider the following example and an auditing system that uses a Boolean model, in which the system marks each access as suspicious or not (i.e., not a probabilistic model).

  • The system audits 100 accesses in a day.
  • The system marks 10 as suspicious.
  • Of the 10 marked suspicious, 5 are actually inappropriate and 5 are actually appropriate.
  • Of the 90 not marked as suspicious, 7 are actually inappropriate (and not detected).
Given this example, the system would have the following metric values:

  • True Positives: 5
  • True Negatives: 83
  • False Positives: 5
  • False Negatives: 7

Accuracy is defined as the fraction of accesses correctly classified as appropriate or inappropriate: (5 true positives + 83 true negatives) / 100 = 88%

Recall is defined as the number of inappropriate accesses detected over all inappropriate accesses that occurred: 5/(5 + 7) = 42%

Precision is defined as the number of inappropriate accesses detected over all accesses the system thought are suspicious: 5/10 = 50%.

So how did the system do? Let’s compare it to a simple auditing system that never thinks any access is suspicious. It would have the following metric values:

  • True Positives: 0
  • True Negatives: 88
  • False Positives: 0
  • False Negatives: 12
  • Accuracy: (0 + 88)/100 = 88%
  • Recall: 0/12 = 0%
  • Precision: 0/0, or undefined
As this example shows, the simple auditing system has the same accuracy as the more advanced auditing system – even though it did not find any inappropriate activity. This result occurs because the prior distributions of the appropriate and inappropriate classes are not equal; there are many more appropriate accesses than inappropriate. This distribution skew can make simple (and bad) auditing systems look good. In the real world, the distributions are likely skewed even more (i.e. 99% to 1%), compounding this problem.

If accuracy is not a fair metric, what metrics should you consider? The F-1 score, the harmonic mean of precision and recall, is one good alternative. F-1 scores closer to 1 mean the system is able to find most inappropriate behavior with good precision. In our example, the first auditing system has a better F-1 score than the simple system.
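The arithmetic above can be checked with a short script that computes all four metrics for both systems, confirming that they share the same accuracy but have very different F-1 scores:

```python
# Worked check of the two example auditing systems: identical accuracy,
# very different F-1 (the naive system finds nothing).
def metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    # F-1 is the harmonic mean of precision and recall (0 when both are 0).
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, recall, precision, f1

advanced = metrics(tp=5, tn=83, fp=5, fn=7)   # the Boolean auditing system
naive    = metrics(tp=0, tn=88, fp=0, fn=12)  # never flags any access
print(f"advanced: accuracy={advanced[0]:.0%}, F-1={advanced[3]:.2f}")
print(f"naive:    accuracy={naive[0]:.0%}, F-1={naive[3]:.2f}")
```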

In the next post, we will discuss how to evaluate systems that use a probabilistic model to identify suspicious behavior (i.e., an access can be 70% suspicious and 30% not), and how the area under the receiver-operating characteristic (or AUC ROC) is a better metric and is robust to data skew.