Imagine an artificial intelligence model that can use your heart scans to guess which racial category you would most likely be assigned to – even though it was never told what race is or what to look for. It sounds like science fiction, but it is real.
In my recent study, conducted with colleagues, we found that an AI model could guess whether a patient identified as black or white from heart scan images with up to 96% accuracy, despite being given no explicit information about racial categories.
This is a striking finding that challenges assumptions about the objectivity of AI and highlights a deeper problem: AI systems do not just reflect the world, they absorb and reproduce the biases built into it.
First, it is important to be clear: race is not a biological category. Modern genetics shows that there is more variation within supposed racial groups than between them.
Race is a social construct: a set of categories invented by societies to classify people based on perceived physical traits and ancestry. These classifications do not map neatly onto biology, yet they shape everything from lived experience to access to care.
Despite this, many AI systems are now learning to detect, and potentially act on, these social labels, because they are built using data shaped by a world that treats race as if it were a biological fact.
AI systems are already transforming healthcare. They can analyse chest X-rays and read heart scans, flagging potential problems faster than human doctors can – in some cases within seconds rather than minutes. Hospitals are adopting these tools to improve efficiency, reduce costs and standardise care.
Bias is not a bug – it is built in
But no matter how sophisticated they are, AI systems are not neutral. They are trained on real-world data, and that data reflects real-world inequalities, including those based on race, gender, age and socioeconomic status. These systems can learn to treat patients differently based on these characteristics, even when no one explicitly programs them to do so.
One major source of bias is unbalanced training data. If a model learns mostly from lighter-skinned patients, for example, it may struggle to detect conditions in people with darker skin.
Research in dermatology has already demonstrated this problem.
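As a rough illustration of how such an imbalance can hide behind a single headline number, here is a minimal sketch in Python – entirely synthetic data and illustrative group labels, not real patient records – that compares a model's accuracy across two groups when one group dominates the training set.

```python
# Minimal sketch: a headline accuracy figure can mask poor performance on a
# minority group. Data and group labels are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Simulate an imbalanced dataset: 90% of examples come from group A.
n_a, n_b = 900, 100
X_a = rng.normal(0.0, 1.0, size=(n_a, 5))
X_b = rng.normal(0.5, 1.2, size=(n_b, 5))    # group B images look slightly different
y_a = (X_a[:, 0] > 0).astype(int)            # the signal for group A sits in feature 0
y_b = (X_b[:, 1] > 0.5).astype(int)          # ...but for group B it sits in feature 1

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * n_a + ["B"] * n_b)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# The overall figure looks respectable; the per-group breakdown exposes the gap.
print("overall accuracy:", accuracy_score(y, pred))
for g in ("A", "B"):
    mask = group == g
    print(f"group {g} accuracy:", accuracy_score(y[mask], pred[mask]))
```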
Even language models such as ChatGPT are not immune: one study found evidence that some models still reproduce outdated and false medical beliefs, such as the myth that black patients have thicker skin than white patients.
Sometimes AI models appear to get things right, but for the wrong reasons – a phenomenon known as shortcut learning. Instead of learning the complex features of a disease, a model may rely on irrelevant but easier-to-detect cues in the data.
Imagine two hospital departments: one uses scanner A to treat severely ill Covid-19 patients, the other uses scanner B for milder cases. An AI model can learn to associate scanner A with serious illness – not because it understands the disease better, but because it picks up on imaging artefacts specific to scanner A.
Now imagine a seriously ill patient is scanned with scanner B. The model may misclassify them as less unwell – not because of any medical error, but because it has learned the wrong shortcut.
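To make the scanner example concrete, here is a minimal sketch in Python – synthetic data and illustrative numbers, not the study's model – of a classifier that latches onto the scanner identity during training and then falters when that shortcut no longer holds.

```python
# Minimal sketch of shortcut learning: the scanner flag is a spurious cue that
# tracks severity in the training data but not once the model is deployed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 2000

# A weak genuine disease signal, plus noise, determines who is severely ill.
disease_signal = rng.normal(0, 1, n)
severe = (disease_signal + rng.normal(0, 1.5, n) > 0).astype(int)

# In training, severe cases happen to be imaged on scanner A: a perfect shortcut.
scanner_a = severe.copy()
X_train = np.column_stack([disease_signal, scanner_a])
model = LogisticRegression().fit(X_train, severe)
print("training accuracy:", accuracy_score(severe, model.predict(X_train)))

# At deployment, scanner assignment no longer tracks severity, so a model that
# leaned on the shortcut loses most of its apparent skill.
scanner_random = rng.integers(0, 2, n)
X_deploy = np.column_stack([disease_signal, scanner_random])
print("deployment accuracy:", accuracy_score(severe, model.predict(X_deploy)))
```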
The same kind of flawed reasoning can apply to race. If disease prevalence differs between racial groups, an AI model can learn to identify race rather than the disease itself – with dangerous consequences.
In the heart scan study, we found that the AI model was not really focusing on the heart, where there were few observable differences linked to racial categories. Instead, it drew information from areas outside the heart, such as subcutaneous fat, and from image artefacts – unwanted distortions such as motion blur, noise or compression that degrade image quality. These artefacts often come from the scanner and can influence how the AI interprets a scan.
In this study, black participants had a higher-than-average BMI, which may mean they had more subcutaneous fat, although this was not examined directly. Some studies have shown that, at a given BMI, black people tend to have less visceral fat and a smaller waist circumference but more subcutaneous fat. This suggests the AI may have been picking up on these indirect racial signals rather than anything meaningful about the heart itself.
This matters because when AI models learn race – or rather, the social patterns that reflect racial inequality – without understanding the context, the risk is that they will reinforce or worsen existing disparities.
This is not just about fairness – it is about safety.
Solutions
But there are solutions:
Diversify training data: studies have shown that making datasets more representative improves AI performance across groups, without harming accuracy for anyone else.
Build in transparency: many AI systems are regarded as "black boxes" because we do not understand how they reach their conclusions. In the heart scan study, heat maps were used to show which parts of an image influenced the AI's decision, creating a form of explainable AI that helps doctors and patients trust (or question) the results – and catch the model when it relies on the wrong shortcuts (a rough sketch of such a heat map follows this list).
Treat race with care: researchers and developers must recognise that race in data is a social signal, not a biological truth. It requires thoughtful handling to avoid entrenching harm.
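For readers curious about how such heat maps can be produced, here is a minimal sketch of a gradient-based saliency map in PyTorch. The tiny network and random image are placeholders rather than the study's model, and published work typically uses more robust methods such as Grad-CAM; the idea is simply to highlight which pixels most influenced a prediction.

```python
# Minimal sketch of a gradient-based saliency map; the network and input are
# placeholders, not the model from the heart scan study.
import torch
import torch.nn as nn

# A tiny stand-in classifier for single-channel scan images.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                 # two classes, e.g. "finding" vs "no finding"
)
model.eval()

image = torch.rand(1, 1, 128, 128, requires_grad=True)   # dummy scan

# Forward pass, then backpropagate the score of the predicted class.
logits = model(image)
predicted_class = logits.argmax(dim=1)
logits[0, predicted_class].backward()

# The magnitude of the input gradient acts as a crude heat map: large values mark
# pixels that most influenced the decision, which is where shortcuts show up.
saliency = image.grad.abs().squeeze()
print(saliency.shape)                # torch.Size([128, 128])
```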
AI models can detect patterns that even the most highly trained human eyes might miss. That is what makes them so powerful – and potentially so dangerous. They learn from the same flawed world that we do, including the way we treat race: not as a scientific reality, but as a social lens through which health, opportunity and risk are unevenly distributed.
If AI systems learn our shortcuts, they will repeat our mistakes – faster, at scale and with less accountability. And when lives are on the line, that is a risk we cannot afford.