
Securing the Future of Healthcare Data with AI: My Journey to Stop Cyber Threats Before They Harm Lives


By Dr. Yasmin Makki Mohialden, with contributions from Saba Abdulbaqi Salman, Maad M. Mijwil, Nadia Mahmood Hussien, Mohammad Aljanabi, Mostafa Abotaleb, Klodian Dhoska, and Pradeep Mishra


Illustration of a secure AI healthcare system using federated learning, encryption, and anomaly detection to prevent data poisoning attacks.

A World Where Healthcare and Technology Are One

Not long ago, medical records were kept in locked cabinets. Today, they exist in cloud servers, hospital databases, and AI-driven applications. Artificial Intelligence (AI) is transforming healthcare — from predicting diseases to helping doctors make life-saving decisions faster.

But with great power comes great risk.
If the data that trains these AI systems is corrupted, the consequences can be disastrous:

  • Wrong diagnoses given to patients.
  • Delayed treatments that cost lives.
  • Loss of trust in the healthcare system itself.

One of the most dangerous threats we face is a data poisoning attack — a cyberattack where fake or harmful data is secretly inserted into the information that AI learns from. This “poison” can cause AI systems to make deadly mistakes.

 

Why I Took on This Challenge

When I began this research with my colleagues, we asked ourselves:
How can we ensure that AI in healthcare remains safe, trustworthy, and resilient — even when someone is trying to attack it?

We knew the answer would require more than just one solution. It would need a new kind of defense — one that protects data privacy, stops cyber threats early, and works across different countries and hospitals without ever moving sensitive patient data.

That vision became the foundation for our research: Enhancing Security and Privacy in Healthcare with Generative AI-Based Detection and Mitigation of Data Poisoning Attacks.

 

A Global Effort

This was not a journey I took alone.
Our team brought together expertise from Iraq, Russia, Albania, and India — blending skills in artificial intelligence, cybersecurity, healthcare systems, and data science.

We came from different cultures and disciplines, but we shared one mission: protect the integrity of healthcare AI so it can save lives, not endanger them.

 

The Core of Our Solution

We combined three powerful technologies to create a defense framework:

1. Federated Learning — Training Without Sharing Data

In traditional AI training, all the data is collected in one central location. In healthcare, this is risky because patient data can be exposed or stolen during transfer.

With federated learning, the AI model learns directly from data stored in each hospital or clinic — without that data ever leaving the building. Only the “knowledge” gained (in the form of model updates) is sent to a central system, keeping private information safe.

Figure 1 - Concept Diagram of Federated Learning Workflow
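
To make this concrete, here is a minimal sketch of federated averaging (FedAvg) in Python with NumPy. The three simulated "hospitals", the linear model, and the training settings are illustrative assumptions for this post, not the exact configuration from our study.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One hospital trains on its own records; only weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Three simulated "hospitals", each holding private data that never moves.
true_w = np.array([0.5, -1.0, 2.0])
hospitals = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    hospitals.append((X, y))

global_w = np.zeros(3)
for _ in range(20):
    # Each site computes an update locally; the server sees only weights.
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = np.mean(local_ws, axis=0)  # FedAvg: average the updates

print(np.round(global_w, 2))  # converges toward [0.5, -1.0, 2.0]
```

Even these weight updates can leak hints about the underlying data, which is exactly why the next ingredient matters.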


2. Homomorphic Encryption — Calculations Without Unlocking Data

Even when AI needs to process data, we keep it encrypted the entire time using homomorphic encryption. This means the AI can make calculations without ever “seeing” the actual data.

It’s like having a chef who can cook a meal without opening the sealed ingredients — the food is prepared without exposing what’s inside.
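
To show what cooking with sealed ingredients looks like in code, here is a toy Paillier cryptosystem in pure Python. Paillier is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. This is an illustrative stand-in rather than the exact scheme from our paper, and the tiny primes are for demonstration only.

```python
import math
import random

# Toy Paillier keys (real deployments use ~2048-bit primes and vetted libraries).
p, q = 61, 53
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)        # private exponent
mu = pow(lam, -1, n)                # modular inverse; valid because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(2, n)      # fresh randomness for every ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n  # L(x) = (x - 1) / n
    return (L * mu) % n

# Additive homomorphism: multiply ciphertexts, and the plaintexts add.
a, b = 42, 100
c_sum = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c_sum))               # 142, computed without ever seeing 42 or 100
```

The same principle lets a central server aggregate encrypted model updates without decrypting any individual hospital's contribution.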

 

3. Autoencoder-Based Anomaly Detection — The AI That Spots Trouble

We also needed a way to spot poisoned or unusual data quickly. For this, we used an autoencoder, a type of AI that learns what “normal” looks like.

Once trained, it reconstructs normal records accurately but struggles with strange or corrupted ones, so the gap between input and reconstruction becomes obvious. We measure this gap using Mean Squared Error (MSE): if the error is too high, we flag the record as suspicious.


Figure 2 - Architecture of the Autoencoder Model
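
Here is a minimal sketch of such a detector, assuming PyTorch. The layer sizes, the stand-in data, and the 99th-percentile threshold are illustrative choices, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for clean patient records: 5 features with hidden 2-D structure.
W = torch.randn(2, 5)
normal = torch.randn(1000, 2) @ W + 0.05 * torch.randn(1000, 5)

# Small autoencoder: compress 5 features down to 2, then reconstruct them.
model = nn.Sequential(
    nn.Linear(5, 4), nn.ReLU(), nn.Linear(4, 2),  # encoder
    nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 5),  # decoder
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(300):  # learn what "normal" looks like
    opt.zero_grad()
    loss = ((model(normal) - normal) ** 2).mean()
    loss.backward()
    opt.step()

def mse_scores(x):
    """Per-record reconstruction error; poisoned rows reconstruct badly."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

threshold = mse_scores(normal).quantile(0.99)  # tolerate ~1% false alarms
poisoned = normal[:50] + torch.randn(50, 5)    # Gaussian-noise "attack"
print((mse_scores(poisoned) > threshold).float().mean())  # fraction flagged
```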


How We Tested Our Framework

We began with synthetic healthcare datasets — realistic but fictional patient data including:

  • Age
  • Gender
  • BMI (Body Mass Index)
  • Blood Pressure
  • Cholesterol Levels
  • Health status label


Figure 3 - Example of the Generated Dataset

We then deliberately “poisoned” parts of the data by adding Gaussian noise — mimicking the type of subtle attacks hackers might use.
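
As an illustration, here is how such a dataset and poisoning step might look in Python with pandas. The column ranges, the toy labeling rule, and the 10% poisoning rate are assumptions for this sketch, not the settings used in the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1000

# Synthetic (fictional) patient records with the fields listed above.
df = pd.DataFrame({
    "age":         rng.integers(18, 90, n),
    "gender":      rng.choice(["F", "M"], n),
    "bmi":         rng.normal(26, 4, n).round(1),
    "systolic_bp": rng.normal(120, 15, n).round(0),
    "cholesterol": rng.normal(190, 35, n).round(0),
})
# Toy rule standing in for the health-status label.
df["healthy"] = ((df.bmi < 30) & (df.systolic_bp < 140)).astype(int)

# Poison 10% of the rows: add Gaussian noise to the numeric features,
# mimicking a subtle attack that shifts values without obvious outliers.
poison_idx = rng.choice(n, size=n // 10, replace=False)
for col in ["bmi", "systolic_bp", "cholesterol"]:
    df.loc[poison_idx, col] += rng.normal(0, df[col].std(), len(poison_idx))
```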

Using federated learning, the model trained across multiple simulated “hospitals” without centralizing the data. The data stayed encrypted, and the autoencoder kept watch for anomalies.


The Results

The results were clear:

  • The framework successfully detected poisoned data in multiple scenarios.
  • Privacy was never compromised — no raw patient data was shared.
  • The system adapted well to different dataset sizes and variations.

 

What We Learned

Our method is promising, but there are challenges ahead:

  1. Performance Speed — Homomorphic encryption is secure but computationally heavy, which can be a hurdle for real-time hospital environments.
  2. Threshold Precision — Setting the right anomaly detection sensitivity is key. Too strict, and harmless data gets flagged; too loose, and attacks can slip through. One calibration recipe is sketched below.
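
A common way to navigate that trade-off is to calibrate the cutoff on reconstruction errors from data known to be clean. The sketch below assumes a 1% false-alarm budget; the paper does not prescribe this exact rule.

```python
import numpy as np

def calibrate_threshold(clean_errors, fp_rate=0.01):
    """Choose the MSE cutoff so about `fp_rate` of clean records are flagged.

    Raising fp_rate lowers the cutoff (stricter: more harmless data flagged);
    lowering it raises the cutoff (looser: more attacks can slip through).
    """
    return float(np.quantile(clean_errors, 1.0 - fp_rate))

# Example with simulated reconstruction errors from clean validation data.
clean_errors = np.random.default_rng(1).gamma(shape=2.0, scale=0.05, size=5000)
print(calibrate_threshold(clean_errors, fp_rate=0.01))
```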

 

Why This Research Matters

Healthcare is a prime target for cyberattacks:

  • Medical data is valuable on the black market.
  • Disruption in hospitals can cost lives.
  • AI is becoming mission-critical in diagnosis, treatment, and logistics.

By combining privacy-first training with proactive anomaly detection, we created a defense that can be adapted beyond healthcare — into finance, transportation, and national infrastructure.

 

The Human Side

This research is not just about AI and encryption — it’s about trust.

When a patient shares their medical history, they trust that it will be used only to help them. Doctors trust that the AI tools they use will be accurate.

Our mission was to ensure that this trust is never broken by invisible cyber threats.

 

Acknowledging the Team

While I present this under my Nordic R&D Bridge profile, the work belongs to our entire team:

  • Saba Abdulbaqi Salman
  • Maad M. Mijwil
  • Nadia Mahmood Hussien
  • Mohammad Aljanabi
  • Mostafa Abotaleb
  • Klodian Dhoska
  • Pradeep Mishra

Their combined expertise made this project possible.

 

The Road Ahead

Next steps for us include:

  • Deploying this system in real hospital networks.
  • Optimizing encryption speed for real-time operations.
  • Extending this framework to other critical sectors.

Cyber threats evolve every day — our defenses must evolve even faster.

 

Final Thoughts

When people talk about AI in healthcare, they imagine futuristic robots diagnosing diseases. But the real breakthrough is in protecting the data and systems behind those robots.

The future of healthcare AI depends not just on what it can do — but on whether we can trust it completely.

Our work is one step toward making that trust a reality.

 

Reference

Published in Jordan Medical Journal, Supplement 1, 2024 — “Enhancing Security and Privacy in Healthcare with Generative Artificial Intelligence-Based Detection and Mitigation of Data Poisoning Attacks.” DOI: 10.35516/jmj.v58i3.2712

 

