An imaging company donated X-rays and CT scans of its patients to an artificial intelligence company. How did it happen?

Australia’s largest provider of radiology services, I-MED, has provided anonymized patient data to an artificial intelligence company without patients’ explicit consent, Crikey recently reported. The data included images such as X-rays and CT scans, which were used to train artificial intelligence.

This has prompted an investigation by the Office of the Australian Information Commissioner. It follows an I-MED data breach involving patient records dating back to 2006.

Reports suggest some angry patients are now avoiding I-MED.

I-MED’s privacy policy mentions data being shared with “research bodies authorized by Australian law”. But only 20% of Australians read and understand privacy policies, so it is understandable that these revelations shocked some patients.

So how did I-MED share patient data with another company? How can we ensure that patients have choices about how their health information is used in the future?

Who are the key players?

Many of us have had tests done at I-MED: it is a private company with over 200 radiology clinics in Australia. These clinics perform imaging tests, such as X-rays and CT scans, to help diagnose disease and guide treatment.

I-MED established a partnership with the AI start-up Harrison.ai in 2019. Annalise.ai is their joint venture to develop artificial intelligence for radiology, and I-MED clinics were the first users of Annalise.ai systems.

I-MED has been buying out other companies, and is itself up for sale, reportedly for A$4 billion.

There are enormous commercial interests at stake and many patients potentially affected.

Why would an AI company want your medical images?

Artificial intelligence companies want your X-rays and CT scans because they need to “train” their models on enormous amounts of data.

In the context of radiology, “training” an AI system means exposing it to lots of images so that it can “learn” to identify patterns and suggest what might be wrong.

This makes data extremely valuable to both AI startups and big tech companies, because AI is, to some extent, made of data.
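To make the idea of “training” concrete, here is a toy sketch in Python using PyTorch. The data is random noise standing in for real scans, and the tiny network is purely illustrative; it is an assumption for explanation only, not anything the companies involved actually use.

```python
# Toy illustration of "training": the model repeatedly sees labelled images,
# makes a guess, and adjusts its parameters to reduce its error.
# Assumes PyTorch is installed; the "scans" here are random noise.
import torch
import torch.nn as nn

# Stand-in dataset: 32 fake greyscale "scans" of 64x64 pixels,
# each labelled 0 (no finding) or 1 (finding present).
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 2, (32,))

# A deliberately tiny convolutional network.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # two outputs: "no finding" vs "finding"
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# "Training": show the images, measure the error, nudge the parameters.
for epoch in range(5):
    optimizer.zero_grad()
    predictions = model(images)
    loss = loss_fn(predictions, labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Real radiology AI is trained on millions of genuine, labelled images rather than noise, which is exactly why patient data is so sought after.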

You may think it’s the wild west out there, but it’s not. In Australia, there are many mechanisms that control the use of your health information. One layer is Australian privacy legislation.

What do privacy laws say?

It is likely that the I-MED images were “sensitive information” under the Australian Privacy Act. This is because they can enable a particular person to be identified.

The law limits the situations in which organizations can disclose this information beyond its original purpose (in this case, providing you with health care).

One is when a person has given consent, which does not appear to be the case here.

Another is when a person would “reasonably expect” the disclosure, and the purpose of the disclosure is directly related to the purpose of collection. Based on the facts available, this also does not appear to apply here.

This leaves open the possibility that I-MED relied on the exception for disclosure of information “necessary for research, or the compilation or analysis of statistics, relevant to public health or public safety”, where obtaining consent from individuals is impracticable.

Artificial intelligence needs data to learn. Victor Ochando/Shutterstock

The companies have repeatedly stated publicly that the scans were de-identified.

Most de-identified information falls outside the scope of the Privacy Act. If the risk of re-identification is very low, de-identified information can be used with little legal risk.

But de-identification is complex, and context matters. At least one expert has suggested these scans were not sufficiently de-identified to place them beyond the protection of the Privacy Act.
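As a rough illustration of why context matters, here is a minimal sketch of naive metadata scrubbing using the open-source pydicom library. This is an assumption for illustration only, not the workflow I-MED or Harrison.ai actually used, and it shows how limited tag removal can be: it says nothing about text burned into the image pixels, rare conditions, or linkage with other datasets, which is where re-identification risk often lives.

```python
# Naive DICOM de-identification sketch (illustrative only).
# Removes obvious identifying metadata tags; does NOT guarantee the
# scan can no longer be linked back to a person.
import pydicom

IDENTIFYING_TAGS = [
    "PatientName",
    "PatientID",
    "PatientBirthDate",
    "PatientAddress",
    "ReferringPhysicianName",
    "InstitutionName",
]

def scrub(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            delattr(ds, tag)      # drop the obvious identifiers
    ds.remove_private_tags()      # drop vendor-specific private tags
    ds.save_as(path_out)

# Usage (hypothetical file names):
# scrub("chest_ct_original.dcm", "chest_ct_scrubbed.dcm")
```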

Recent changes to the Privacy Act have increased penalties for interfering with people’s privacy, although the Office of the Australian Information Commissioner is underfunded and enforcement remains a challenge.

How else is our data protected?

In Australia, there are many more layers governing health data. We will consider only two.

Organizations should have a data governance framework that sets out who is responsible for data and how it should be managed.

Some large public institutions have very mature frameworks, but this is not the case everywhere. In 2023, researchers argued that Australia urgently needs a national system to ensure greater consistency in these activities.

There are also hundreds of human research ethics committees (HRECs) in Australia. Any research should be approved by such a committee before it begins. These committees apply the National Statement on Ethical Conduct in Human Research when evaluating applications for research quality, potential benefits and harms, fairness and respect for participants.

But the National Health and Medical Research Council has recognized that human research ethics committees need more support – particularly to assess whether AI research is of good quality, low risk and likely to deliver benefits.

How do ethics committees work?

Human research ethics committees determine, among other things, what type of consent is required for research.

Annalise.ai’s published research has been approved by human research ethics committees, sometimes by more than one, including with a “waiver of consent”. What does this mean?

Traditionally, research involves opt-in consent: individual participants agree or decline to participate before the study begins.

However, in artificial intelligence research, researchers generally want permission to use part of the vast pool of existing data already created through routine healthcare.

Researchers conducting this type of research typically ask for a “waiver of consent”: permission to use data without explicit consent. In Australia, this can only be approved by a human research ethics committee, and only under certain conditions, including that the risks are low, the benefits outweigh the harms, privacy and confidentiality are protected, obtaining consent is “impracticable”, and “there is no known or likely reason to think that participants would not have consented”. These issues are not always straightforward to determine.

Waiving consent may seem disrespectful, but it involves a difficult trade-off. If researchers ask 200,000 people for permission to use their old medical records for research, most will not respond. The final sample will be small and biased, and the research will be of lower quality and potentially useless.

For this reason, people are working on alternative models. One example is “consent to governance”, where governance structures are established in partnership with communities, and individuals are asked to consent to future uses of their data for any purpose approved within those structures.

Listen to consumers

We are at a crossroads when it comes to the ethics of artificial intelligence research. Both policymakers and Australians agree that we must use high-quality Australian data to build sovereign health AI capability and health AI systems that work for all Australians.

But the I-MED case shows two things. It is critical to engage with Australian communities on when and how health data should be used to create AI. And Australia must rapidly strengthen and support our existing infrastructure to govern AI research in a way Australians can trust.
