Now that artificial intelligence (AI) is being adopted across healthcare systems, how safe is your personal health information?

AI is changing the way doctors diagnose health conditions and treat patients. There’s a lot of optimism about its potential for improving health outcomes.

However, because AI requires massive amounts of information to learn, you may wonder who will be able to see your private medical records and how safe they are.

This article explores the benefits of AI in healthcare, the privacy risks, and the complex tools engineers are using to keep your information safe. It also provides practical steps to take control of your digital health information.

The benefits of AI in healthcare

If you find the healthcare system confusing, slow, and difficult to navigate, AI might improve your experience by simplifying processes and improving how things work.

AI systems can analyze medical images, like X-rays and CT scans, and spot signs of disease that a human eye might miss. For example, AI has proven to be as effective or more effective than human experts at detecting pneumonia, skin cancer, and heart conditions.

AI also allows for personalized care. Instead of a “one-size-fits-all” approach, AI can analyze your specific genetic makeup, medical history, prescribed meds, allergies, and lab results to predict which treatments will work best for you. It can help doctors create individualized plans for managing chronic conditions like diabetes by monitoring glucose levels and daily activities.

AI also helps with time-consuming administrative tasks, such as composing letters to patients and transcribing a healthcare professional’s verbal input into a patient’s medical notes.

Ultimately, AI can allow healthcare professionals to spend more time with the people in their care.

While the benefits are clear, people are concerned that their data is being used to train these AI models.

For example, to learn how to spot a tumor or predict a heart attack, AI models study millions of patient records. What are the risks of having your data included in this data set? Is there a way you could be identified and your health information seen widely by others?

One major concern is re-identification. Hospitals often strip names and Social Security numbers from data before sharing it with researchers, a process called de-identification. However, this might not be enough.

By using “linkage attacks,” hackers can combine an anonymous medical record with public information, such as voter registration lists or social media profiles, to identify you. Studies suggest that with just 15 pieces of information, nearly any American could be re-identified from an “anonymous” dataset.
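To make the idea concrete, here is a minimal sketch of a linkage attack in Python. All names and records below are invented for illustration; real attacks follow the same logic, just across millions of rows:

```python
# Hypothetical illustration of a linkage attack: matching an
# "anonymized" medical dataset against a public voter list using
# quasi-identifiers (ZIP code, birth date, sex). All data is invented.

anonymized_records = [
    {"zip": "02138", "birth_date": "1945-07-29", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "60601", "birth_date": "1988-03-14", "sex": "M", "diagnosis": "asthma"},
]

public_voter_list = [
    {"name": "Jane Doe", "zip": "02138", "birth_date": "1945-07-29", "sex": "F"},
    {"name": "John Roe", "zip": "60601", "birth_date": "1990-01-02", "sex": "M"},
]

def link(records, voters):
    """Pair each medical record with any voter sharing its quasi-identifiers."""
    matches = []
    for r in records:
        for v in voters:
            if all(r[k] == v[k] for k in ("zip", "birth_date", "sex")):
                matches.append((v["name"], r["diagnosis"]))
    return matches

print(link(anonymized_records, public_voter_list))
# Each match re-identifies an "anonymous" record by name.
```

Notice that no names were ever stored in the medical dataset; the combination of a few ordinary attributes was enough to single someone out.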

There is also a gap in the law. You might have heard of HIPAA (Health Insurance Portability and Accountability Act), the law that protects your medical records at the doctor’s office. However, applying HIPAA to AI is tricky.

HIPAA applies to doctors and hospitals, but may not cover health apps or wearable tech sold by commercial companies. This means that while your hospital protects your data, a private company might legally sell the health data you voluntarily give them to advertisers.

Because of these risks, computer scientists have developed ways to train AI without exposing your data. They include:

Federated learning

Traditionally, to teach an AI, all data had to be moved to a central server. This created a “honeypot” of data that hackers could target. Federated learning (FL) is changing this.

In FL, the AI model “travels” to the hospital’s secure server, learns from the data there, and then leaves. Instead of leaving with sensitive patient data, it leaves with model updates, which are the mathematical, abstract patterns it learns from the data. This means your data stays on the hospital’s local server.
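The round-trip described above can be sketched with a toy version of federated averaging. Everything here is invented for illustration, including the hospital names, the data, and the single-weight model y = w * x; real systems use the same pattern with full neural networks:

```python
# Toy federated averaging (FedAvg) for a single-weight linear model.
# Each "hospital" trains locally; only the updated weight leaves,
# never the (x, y) patient data itself.

hospital_data = {
    "hospital_a": [(1.0, 2.1), (2.0, 3.9)],   # stays on hospital A's server
    "hospital_b": [(1.5, 3.0), (3.0, 6.2)],   # stays on hospital B's server
}

def local_update(w, data, lr=0.05, steps=20):
    """Run gradient descent on local data; return only the new weight."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w):
    """The server averages the weights sent back by each hospital."""
    updates = [local_update(w, data) for data in hospital_data.values()]
    return sum(updates) / len(updates)

w = 0.0
for _ in range(10):
    w = federated_round(w)
print(round(w, 2))  # the learned slope ends up close to 2
```

The key point is visible in `federated_round`: the server only ever sees numbers describing the model, not a single patient record.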

Differential privacy

Differential privacy involves adding random information, or “noise,” to a dataset so that the general patterns remain clear to the AI while the specific details of individuals are hidden. This provides a mathematical guarantee that an observer cannot confidently determine whether your specific medical record was included in the AI’s training.
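Here is a minimal sketch of the idea using the Laplace mechanism, one standard way to add calibrated noise. The patient data and the privacy parameter (epsilon) are invented for illustration:

```python
# Minimal differential-privacy sketch: answer a counting query
# ("how many patients have diabetes?") with Laplace noise added.
import math
import random

def private_count(records, predicate, epsilon=0.5):
    """Return the true count plus Laplace noise of scale 1/epsilon.
    Adding or removing any one patient changes the true count by at
    most 1, so the noisy answer hides any individual's presence."""
    true_count = sum(1 for r in records if predicate(r))
    u = random.random() - 0.5           # uniform on (-0.5, 0.5)
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Invented dataset: 100 patients, every 4th one flagged with diabetes.
patients = [{"id": i, "diabetes": i % 4 == 0} for i in range(100)]
print(private_count(patients, lambda r: r["diabetes"]))
# The answer lands near the true count (25) without exposing anyone.
```

Researchers still learn the overall pattern (roughly a quarter of patients), but the noise makes it impossible to pin the result on any single record.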

As technology companies expand into healthcare, the ethics of AI use are coming under scrutiny.

Companies like OpenAI are addressing this by creating dedicated health experiences that keep personal health data separate from general chat data to prevent it from being used in public AI training. They use “purpose-built encryption,” which securely scrambles data so only authorized users can access it.

However, ethics also involves fairness. If an AI system is trained mostly on data from one group, such as young white males, it may not perform well for older adults or people of different backgrounds. This problem, known as algorithmic bias, highlights the need to use diverse data to ensure AI in medicine benefits everyone equally.

Steps for staying in control of your medical data

You can take these steps to protect your medical data privacy:

  1. Check who you’re giving your data to: Before downloading a health app, check if it’s connected to a hospital or insurance company. If it is, HIPAA likely protects it. If it’s a commercial app, read the privacy policy to see if the company sells data to third parties.
  2. Use privacy settings: If you use tools like ChatGPT for health questions, try using ChatGPT Health instead. Alternatively, look for settings that prevent your conversations from being used to train the company’s AI models.
  3. Ask questions: If you’re concerned about what will happen to your data once a health professional records it, ask them if they use federated learning or differential privacy technologies to protect it.
  4. Enable security features: Turn on multi-factor authentication for any account containing health data. This adds an extra layer of protection to your digital health information.

AI is revolutionizing healthcare, offering a multitude of benefits, from faster diagnosis and treatment to reduced administrative burden. However, it learns from sensitive personal information that requires protection.

New technologies such as federated learning and differential privacy are helping secure patient information.

To stay in control, review how the apps you use handle your data, adjust your privacy settings, and ask healthcare professionals how your data will be used.