Data privacy and artificial intelligence in health care

March 17, 2022 – The use of artificial intelligence (AI) in the delivery of health care continues to grow rapidly, and access to patient medical data is often central to these applications. As the exchange of medical information among patients, physicians, and the care team through AI products increases, protecting an individual's information and privacy becomes even more important.

The expanded use of AI in health care has generated significant focus on the risks to the privacy and security of the underlying data and on the safeguards for that data, leading to increased scrutiny and enforcement. Entities using or selling AI-based health care products must account for the federal and state laws and regulations that govern the protection and use of the patient information they collect, as well as other practical issues common to AI-based health care products.

Below, we outline several important data privacy and security issues that should be considered when creating AI-based products or deciding to use such products in health care delivery.

Data use through de-identification

The collection and use of patient health information in AI products will often implicate the Health Insurance Portability and Accountability Act (HIPAA) and various state privacy and security laws and regulations. It is important for AI health care companies, as well as institutions using AI health care products, to understand whether HIPAA or state laws apply to the data. One avenue to potentially avoid these regulations is de-identifying the data before it is uploaded into an AI database.

What it means to be de-identified varies depending on which laws and regulations apply to the data being used. For example, if the patient information is covered by HIPAA, de-identification of protected health information (PHI) requires either the removal of 18 specified categories of identifiers (the "safe harbor" method) or a qualified expert's determination that the risk of re-identification is very small. Even if the data is initially de-identified in compliance with the applicable standard, AI products present some unique challenges to the de-identification process.
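To make the safe-harbor route concrete, the sketch below strips a handful of the enumerated identifiers from a simple dictionary-based patient record. It is a minimal illustration only: the field names are hypothetical, the full safe-harbor list contains 18 categories, and any real de-identification effort should be validated against the regulation itself.

```python
from copy import deepcopy
from datetime import date

# A few of the 18 Safe Harbor identifier categories (45 C.F.R. 164.514(b)(2)).
# Field names here are hypothetical, not a standard schema.
DIRECT_IDENTIFIERS = {
    "name", "street_address", "email", "phone", "ssn",
    "medical_record_number", "health_plan_number", "device_id",
}

def deidentify(record: dict) -> dict:
    """Strip direct identifiers and generalize quasi-identifiers in one record."""
    clean = deepcopy(record)
    for field in DIRECT_IDENTIFIERS:
        clean.pop(field, None)
    # Dates more specific than the year must be removed; keep only birth year.
    if "birth_date" in clean:                          # e.g. "1953-07-21"
        clean["birth_year"] = int(clean.pop("birth_date")[:4])
    # Safe Harbor requires ages 90 and over to be aggregated.
    if "birth_year" in clean and date.today().year - clean["birth_year"] >= 90:
        del clean["birth_year"]
        clean["age_band"] = "90+"
    # Geographic units smaller than a state are restricted; a common approach
    # is truncating ZIP codes to three digits (sparse ZIP3s must be zeroed).
    if "zip" in clean:
        clean["zip"] = clean["zip"][:3] + "00"
    return clean
```

Note that this illustrates only the identifier-removal route; the alternative expert-determination route is a statistical analysis, and neither replaces legal review of which regime actually applies to the data.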

As an AI product develops and expands, new data elements are often added to the AI system, or the volume of data within a particular element grows, creating potential privacy issues. In some cases, the additional data is collected to address potential algorithmic bias in the AI system, as a marketable AI product should be seen as trustworthy, effective, and fair.

AI-based products create additional privacy challenges, especially when de-identified data is used to try to address potential bias issues. As more data is added to AI systems, the potential to create identifiable data also increases, particularly because the increased sophistication of AI systems has made it easier to create data linkages where such links did not previously exist. As the amount and number of data elements increase, it is important to continually assess the risk that the AI systems are generating identifiable patient data where the data was once de-identified.
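One way to operationalize that continual assessment is a k-anonymity check: group the records by their quasi-identifying fields and verify that the smallest group is still large enough that no record stands out. The sketch below is an illustration under stated assumptions; the choice of quasi-identifiers and the threshold value are internal policy decisions, not regulatory requirements.

```python
from collections import Counter

def k_anonymity(records: list[dict], quasi_identifiers: tuple[str, ...]) -> int:
    """Size of the smallest group of records sharing the same quasi-identifier
    values; a low k means those records are easier to single out and re-link."""
    groups = Counter(tuple(r.get(q) for q in quasi_identifiers) for r in records)
    return min(groups.values())

K_THRESHOLD = 5  # illustrative internal policy value, not a legal standard

def check_after_adding_element(records: list[dict],
                               quasi_identifiers: tuple[str, ...]) -> None:
    """Re-run whenever a new data element is added to the AI system."""
    k = k_anonymity(records, quasi_identifiers)
    if k < K_THRESHOLD:
        # Generalize further, suppress the rare rows, or escalate for review.
        raise ValueError(f"k-anonymity fell to {k}; dataset may be re-identifiable")
```

A single new element, such as a rare diagnosis code, can silently drop k from hundreds to one, which is exactly the kind of linkage risk described above.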

Vendor due diligence — data access, data storage, and ransomware

Performing sufficient vendor due diligence is key before entrusting any third party with patient data, including PHI. How the data is collected (e.g., directly from patient records) and where it is ultimately stored are two significant due diligence points. In both cases, failure to conduct appropriate due diligence can lead to legal and monetary consequences.

In the case of collection, entities allowing system access to collect data face not only legal requirements but also potentially significant liability if the data is not properly protected. AI technology is as vulnerable to manipulation as any other technology, and networks connecting patient data with patient care must be secured. In this era of increased ransomware attacks, with attackers focusing particularly on health care, any external access points need to be thoroughly vetted and monitored to limit these threats.

In particular, any due diligence effort related to AI products should examine how an entity handles access management and whether it institutes strong data governance to protect and manage the processing of patient data. One critical piece that is often overlooked is a high-level risk assessment, paired with potential mitigation measures, to determine whether the product's vulnerabilities outweigh the benefits of granting it access.

Health care entities need to critically consider, without assuming, whether direct access is the only way the AI product can function or provide value, or whether there are cost-effective alternatives, such as a separate database that the entity itself pulls from and populates, without granting direct access to the main system.

In the case of storage, AI health care companies should consider these same diligence questions, as any obligation of the company will generally need to be passed down to its vendors. These companies often do not view such vulnerabilities as high risk and would prefer to spend their limited funds elsewhere. But if a vendor fails to perform as needed, the resulting reputational damage can render a product unmarketable, making the cost of diligence the better investment.

Security safeguards to protect health care data

Privacy cannot be had without security. Appropriate security safeguards must be adopted to maintain privacy and to build trust in the technology. Security safeguards that AI companies should consider include:

Enhanced compliance monitoring: Information systems and data should be audited and monitored routinely to determine whether any data compromises are occurring. Many affordable third-party products are available today to help with such monitoring, and they should be considered as part of any information security program.

Access controls: It is essential to understand who will have access to the data and algorithms, and to enforce strict controls appropriate to each level of access (a minimal sketch follows this list).

Training: Personnel and vendors must be educated on their access limitations, the limitations on data use, and their security obligations with regard to the data. In particular, this should include any limitations found in patient consents or authorizations.
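As a rough illustration of the access-control and monitoring points above, the sketch below gates data access by role and writes an audit entry for every attempt. The roles, permissions, and logger configuration are all hypothetical, not drawn from any particular product.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("phi_access")

# Hypothetical role-to-permission mapping; a real system would pull this
# from an identity provider rather than hard-coding it.
ROLE_PERMISSIONS = {
    "clinician":      {"read_phi"},
    "data_scientist": {"read_deidentified"},
    "vendor":         {"read_deidentified"},
}

def access_data(user: str, role: str, permission: str, record_id: str) -> dict:
    """Check the caller's role before returning data, and audit every attempt."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    audit_log.info("user=%s role=%s perm=%s record=%s allowed=%s",
                   user, role, permission, record_id, allowed)
    if not allowed:
        raise PermissionError(f"role {role!r} may not {permission}")
    return fetch(record_id)

def fetch(record_id: str) -> dict:
    return {"id": record_id}  # stub standing in for the real data store
```

Routinely reviewing the resulting audit log (the compliance-monitoring item above) is what turns the access check from a simple gate into an early-warning system.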

Conclusion

There is much excitement about the benefits that AI technologies can bring to health care. Protecting data privacy is an important component of ensuring the long-term use and success of these AI products. Without patient and physician trust that these AI-based products respect and maintain patient data privacy, such advances may be short-lived.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.

Jason Johnson is a partner at Moses & Singer LLP in the Healthcare, Privacy & Cybersecurity and Intellectual Property practice groups. He focuses his practice on the legal aspects of innovations in digital health, the privacy and security of data under U.S. and European laws, and the complex regulatory and compliance issues related to clinical research and business matters. His clients include academic medical centers, health care technology companies, emerging to late-stage biotechnology companies, pharmaceutical and medical device companies, and other health care and research related organizations. He can be reached at JJohnson@mosessinger.com.

Pralika Jain is an associate in the firm's health care practice. She counsels companies at all stages of the life cycle, from formation to investment to exit, building and protecting IP, and providing strategic advice to founders and investors in connection with complex transactions. As counsel to companies and institutions in the health care and technology industries, she regularly advises on matters related to privacy and data laws across jurisdictions. She can be reached at PJain@mosessinger.com.

Linda A. Malek is a partner at Moses & Singer LLP and chair of the firm’s Healthcare and Privacy & Cybersecurity practices. Her practice concentrates on regulatory, technology and business matters in the health care industry. She can be reached at LMalek@mosessinger.com.


