
Are Your AI Solutions Secure? 5 Considerations + 10 Questions

Analysis  |  By Mandy Roth  
   January 29, 2019

With the growing popularity of AI solutions, health systems need to dig deeper to manage security risk.

When evaluating artificial intelligence (AI) solutions for a health system, cybersecurity may not be one of the first considerations. Perhaps it should be. In an age of increasing security threats, data management by outside parties opens the door to additional risk.

How can health system executives ensure that their AI and machine learning endeavors protect patient data, meet HIPAA compliance, and minimize security threats?  

HealthLeaders spoke to representatives at Clearwater and Digital Reasoning about this issue. The two Nashville-based companies announced a three-year cyber risk partnership today, which fortifies Digital Reasoning's AI-powered solutions with Clearwater's cybersecurity and HIPAA compliance program.

Clearwater CEO Steve Cagle and Kenny Pyatt, senior director of engineering at Digital Reasoning, provide an overview of the issues related to providing secure AI solutions, as well as questions healthcare executives should ask vendors to ensure their AI solutions are secure.

5 Considerations
 

1.  An explosion of data is part of the challenge.

"The amount of data is increasing rapidly; I've heard reports of a 48% increase per year," says Cagle. At the same time, more points of access amplify the risk while "cyber criminals and cyber-attacks are becoming much more sophisticated in nature," the Clearwater CEO says. "In simple terms, it's really a matter of trying to keep up with all that, and that has proven extremely difficult to do."

2. Healthcare organizations are late to the game.

"Unfortunately, healthcare has been catching up a bit when it comes to cybersecurity," says Cagle. "The industry has made great strides over the last several years, but, compared to other industries, they're just not there yet." He cites limited resources and budgets as primary factors in this dynamic. "The technology has outpaced the security, and security, oftentimes, has not been designed into the solution," he says. In addition, some organizations still struggle with HIPAA compliance.

3. Can you pry open the "black box"?

When partnering with AI technology companies, says Pyatt, "healthcare executives must insist upon transparency, communication, and a willingness to open the 'black box.' Without an ethical, secure, and transparent partnership, healthcare executives won't even know the potential risks." 

4. The alphabet soup factor complicates matters.

Health systems should ensure that outside parties are compliant with mandates and guidelines issued by multiple organizations. People may not be familiar with the Office for Civil Rights (OCR), which is the HIPAA "enforcement arm" of the U.S. Department of Health and Human Services, says Cagle. Pyatt says that companies like his should also follow National Institute of Standards and Technology (NIST) guidelines. "It is basically the government organization that defines what security means for the government," he says, "but it's a great set of standards across all technology."

5. If you haven't conducted an enterprise-wide security risk assessment, you should.

"You always want to begin with a risk analysis," Cagle recommends. This involves a system-by-system assessment of the vulnerabilities and threats that can exploit those vulnerabilities in an organization. It also reviews the controls in place for each of those systems and the impact to the organization if a breach occurs.

"By doing that, you can identify where you have the most risk," says Cagle. Once you "understand where those exposures are, you [can] risk-rate them and … identify the best way to go about reducing risk to a level that's acceptable to [your] organization."
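The risk-rating step Cagle describes is often implemented as a simple likelihood-times-impact score per system, ranked to show where exposure is greatest. A minimal sketch of that idea follows; the system inventory, 1–5 scales, and acceptance threshold are illustrative assumptions, not Clearwater's actual methodology:

```python
# Risk-rating sketch: score each system by likelihood x impact, then rank
# to see where the most risk sits. Inventory and scales are hypothetical.

# (system, likelihood 1-5, impact 1-5)
systems = [
    ("EHR database", 3, 5),
    ("Patient portal", 4, 4),
    ("Legacy fax gateway", 5, 2),
    ("Research data lake", 2, 5),
]

ACCEPTABLE = 9  # hypothetical risk-acceptance threshold for this organization

def risk_rate(inventory):
    """Return (system, score) pairs sorted from highest to lowest risk."""
    scored = [(name, likelihood * impact) for name, likelihood, impact in inventory]
    return sorted(scored, key=lambda item: item[1], reverse=True)

for name, score in risk_rate(systems):
    status = "mitigate" if score > ACCEPTABLE else "accept"
    print(f"{name}: risk={score} -> {status}")
```

Real assessments weigh many more factors (existing controls, threat intelligence, breach impact in dollars), but the ranking-against-a-threshold pattern is the core of "reducing risk to a level that's acceptable to your organization."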

10 Questions to Ask When Deploying AI Solutions
 

Cagle and Pyatt suggest the following questions as a starting point to help minimize risk, ensure cybersecurity, and validate AI solutions:

  1. How will you process and store our data?
     
  2. Describe the environment where data is stored; who will have access to it?
     
  3. Are you willing to share the results of all experiments conducted and monitor model performance over time?
     
  4. Can you demonstrate results with validated data and methods supported by peer-reviewed research?
     
  5. What encryptions will you use to protect the data? "It's not good enough to be compliant," says Pyatt. "You also have to implement high-grade encryption."
     
  6. What deidentification measures do you employ to protect patients' identities? Pyatt explains that based on the work Digital Reasoning does, the company often removes patient names, phone numbers, dates of birth, and similar identifying characteristics.
     
  7. What measures do you have in place to keep up with new compliance standards as they change? The guidelines are not static, Cagle says. For example, after OCR investigates data breaches, the organization issues corrective action plans. It's essential to stay on top of these updates to ensure your organization meets the latest compliance criteria.
     
  8. How quickly can you assess our security risk at any given point in time during our engagement?
     
  9. Is there a business associate agreement in place with any third parties involved that governs how data will be protected?
     
  10. How will our companies work together if we experience a breach?
     
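The deidentification Pyatt describes in question 6 can be pictured as a redaction pass over free text. The sketch below masks two identifier types with regular expressions; the patterns and placeholder tags are illustrative assumptions, and production deidentification must cover all 18 HIPAA Safe Harbor identifier categories (names in particular usually require NLP-based entity recognition, not regex):

```python
import re

# Illustrative deidentification pass: masks U.S.-style phone numbers and
# dates in free text. This covers only two of the 18 HIPAA Safe Harbor
# identifier types; it is a sketch, not a compliant implementation.

PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace each matched identifier with its placeholder tag."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(tag, text)
    return text

note = "Pt called from 615-555-0142; DOB 04/12/1987."
print(deidentify(note))  # -> Pt called from [PHONE]; DOB [DATE].
```

Asking a vendor question 6 is partly about which identifier categories their pipeline handles and how they validate that nothing leaks through.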

About the Experts
 

Clearwater provides enterprise-class cyber risk management solutions to hundreds of healthcare providers and their partners through its IRM|Pro platform and an experienced professional services team that provides insights and actions to address compliance, cyber, and patient safety risks. The company was the 2018 Best in KLAS winner for Cybersecurity Advisory Services, and was the 2017 and 2018 Black Book Marketing Research winner in Compliance and Risk Management Solutions. Clearwater is endorsed by the American Hospital Association.

Digital Reasoning deploys AI-powered care management software to health systems to augment the care team and accelerate the entire care process. The company was recognized by PitchBook in 2018 as Tennessee’s most valuable startup, and began its foray into healthcare through a partnership with HCA Healthcare.

Mandy Roth is the innovations editor at HealthLeaders.


