False and misleading health information online and on social media is growing rapidly, thanks to advances in deepfake and generative artificial intelligence (AI) technology.
This technology makes it possible to manipulate videos, photos and audio of respected health professionals so they appear to endorse fake health care products, or to solicit confidential health information from Australians.
So how do these health scams work? And what can you do to spot them?
Access to online health information
In 2021, three in four Australians aged over 18 said they accessed health services – such as telehealth consultations with doctors – online. One 2023 study showed 82% of Australian parents consulted social media about health-related issues, alongside consultations with doctors.
But worldwide growth in health-related misinformation (that is, factually incorrect material) and disinformation (where people are deliberately misled) has been exponential.
From Medicare email and text message scams to the sale of fake pharmaceuticals, Australians risk losing money – and damaging their health – by acting on false advice.
What is deepfake technology?
One emerging area of health fraud involves the use of generative AI tools to create deepfake videos, photos and audio recordings. These deepfakes are used to promote fake health care products, or to lure consumers into sharing confidential health information with people they believe can be trusted.
A deepfake is a photo or video of a real person, or a recording of their voice, that has been altered so the person appears to do or say something they didn't do or say.
Until now, people have used photo- or video-editing software to create fake images, such as superimposing someone's face on another person's body. Adobe Photoshop even advertises its software's ability to "face swap" to "make sure everyone looks their absolute best" in family photos.
While creating deepfakes isn't new, health practitioners and organisations are sounding alarm bells about the speed and hyper-realism that can now be achieved with generative AI tools. When these deepfakes are shared via social media platforms, which greatly extend the reach of misinformation, the potential for harm also increases.
How is it used in health fraud?
For example, in December 2024, Diabetes Victoria drew attention to deepfake videos showing experts from the Baker Heart and Diabetes Institute in Melbourne promoting a diabetes supplement.
A media release from Diabetes Australia explained these videos were not real and had been made using AI technology.
Neither organisation had endorsed the supplement or approved the fake advertising, and the doctor featured in the videos had to warn consumers about the scam.
This is not the first time images of (fake) doctors have been used to sell products. In April 2024, scammers used deepfake images of Dr Karl Kruszelnicki to sell pills to Australians via Facebook. While some users reported the posts to the platform, they were told the ads did not violate its standards.
In 2023, TikTok Shop came under scrutiny, with sellers manipulating legitimate TikTok videos of doctors to (falsely) endorse products. These deepfakes received more than 10 million views.
What should I watch for?
A 2024 review of more than 80 scientific studies found several ways to combat online misinformation. These include social media platforms alerting readers to unverified information, and teaching digital literacy skills to older adults.
Unfortunately, many of these strategies focus on written materials or require access to accurate information for verifying content. Identifying deepfakes requires different skills.
Australia's eSafety Commissioner provides helpful resources to guide people in identifying deepfakes.
Importantly, they recommend considering the context itself. Ask yourself – is this something I would expect this person to say? Is this where I would expect this person to be?
The Commissioner also recommends people look and listen carefully, checking for:

- blurring, cropped effects or pixelation
- skin inconsistencies or discolouration
- video inconsistencies, such as glitches, and lighting or background changes
- audio problems, such as badly synced sound
- irregular blinking or movement that seems unnatural
- gaps in the storyline or speech.
How else can I stay safe?
If you discover your own images or voice have been altered, you can contact the eSafety Commissioner directly for help having the material removed.
The British Medical Journal has also published specific advice about dealing with health-related deepfakes, advising people to:

- contact the person who is supposedly endorsing the product to confirm whether the image, video or audio is legitimate
- leave a public comment on the page questioning whether the claims are true (this may also prompt others to think critically about the content they see and hear)
- use the online platform's reporting tools to flag fake products and report accounts sharing misinformation
- encourage others to question what they see and hear, and to seek advice from health care providers.
This last point is crucial. As with all health-related information, consumers must make informed decisions in consultation with doctors, pharmacists and other qualified health professionals.
As generative AI technologies become increasingly sophisticated, government also has a key role in keeping Australians safe. The release in February 2025 of the long-awaited Online Safety Review made this clear.
The review recommended Australia adopt duty of care legislation covering "harms to mental and physical wellbeing" and serious harms arising from the "instruction or promotion of harmful practices".
Given the potentially harmful consequences of following deepfake health advice, duty of care legislation is needed to protect Australians and support them to make appropriate health decisions.