New Technique Could Protect Online Images from AI Misuse

Australian researchers have developed a method to prevent artificial intelligence models from learning from images used without permission. The technique subtly alters photos, artwork, and other visual content so that AI systems cannot learn from them, while the images look unchanged to human viewers.
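
As a rough illustration of the imperceptibility constraint (a toy sketch, not CSIRO's published method, which optimises the perturbation so that models cannot learn from the result), the change to each pixel is kept within a strict budget:

```python
import numpy as np

def protect_image(image: np.ndarray, budget: float = 8.0, seed: int = 0) -> np.ndarray:
    """Add a small, norm-bounded perturbation to an 8-bit RGB image.

    Toy stand-in for a learned protective perturbation: real methods
    optimise the noise so models cannot learn from the output; this
    sketch only demonstrates the imperceptibility constraint.
    """
    rng = np.random.default_rng(seed)
    # Keep every per-pixel change within +/- budget (an L-infinity bound),
    # far below what a human viewer notices on a 0-255 intensity scale.
    delta = rng.uniform(-budget, budget, size=image.shape)
    protected = np.clip(image.astype(np.float64) + delta, 0, 255)
    return protected.astype(np.uint8)

# Usage: the shielded copy differs from the original by at most `budget`.
original = (np.random.default_rng(1).random((224, 224, 3)) * 255).astype(np.uint8)
shielded = protect_image(original)
assert np.abs(shielded.astype(int) - original.astype(int)).max() <= 8
```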

The project is led by CSIRO in collaboration with the Cyber Security Cooperative Research Centre and the University of Chicago. The approach aims to help artists, organisations, and social media users keep their content from being used to train AI models or to create deepfakes.

The method places a provable ceiling on what AI models can learn from protected images: a mathematical guarantee that an AI system cannot extract meaningful information beyond a set threshold, even if the system is retrained or modified.
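
In schematic terms, such a guarantee can be read as a certified bound that holds for every training procedure applied to the protected data. The notation below is illustrative only, assumed for this article rather than taken from the paper:

```latex
% Schematic sketch; symbols are assumptions, not the paper's notation.
% D'      : the protected (perturbed) dataset
% F       : the family of training procedures, including retraining
%           and modified pipelines
% Learn() : a measure of what a trained model extracts from D'
% tau     : the user-set threshold Dr Wang refers to
\[
  \forall\, \mathcal{A} \in \mathcal{F} : \quad
  \mathrm{Learn}\left(\mathcal{A}(D')\right) \;\le\; \tau
\]
```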

Dr Derui Wang, a CSIRO scientist, said the technique gives users more control over their content. “Existing methods rely on guesswork or assumptions about AI behaviour,” Dr Wang said. “Our approach provides a clear guarantee that unauthorised AI cannot learn from the content beyond a set threshold.”

The method could be applied automatically and at scale. Social media platforms, for example, could embed the protective layer in every uploaded image, reducing deepfake risks, deterring intellectual property theft, and giving users stronger control over their data.
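
A platform-side pipeline might look like the sketch below. The hook and function names are assumptions for illustration, not a published API; a real deployment would call the researchers' released implementation rather than the toy transform above:

```python
from typing import Callable
import numpy as np

def handle_upload(image: np.ndarray,
                  protect: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
    """Hypothetical upload hook: shield every image before it is stored."""
    shielded = protect(image)
    # A real platform would persist `shielded` and discard or vault the
    # original, so crawlers and training pipelines only ever see the
    # protected copy. Returning it here stands in for the storage step.
    return shielded

# e.g. handle_upload(photo, protect_image), reusing the sketch above.
```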

The technique currently works on images, but the researchers plan to extend it to text, music, and video. So far it has been validated only in laboratory settings. The code is publicly available on GitHub for academic research, and the team is inviting collaboration from sectors including AI safety, defence, cybersecurity, and ethics.

The research paper, Provably Unlearnable Data Examples, was presented at the Network and Distributed System Security (NDSS) Symposium 2025, where it received the Distinguished Paper Award.
