CSIRO is helping Australia adopt artificial intelligence (AI) safely and responsibly. AI is already used in medicine, energy, agriculture, and science. More than half of Australian organisations use AI. Nearly half of Australians have tried generative AI, surpassing adoption rates in the US and UK.
Responsible design is crucial. Poorly designed AI can lead to biased decisions, privacy breaches, financial losses, and harm to people who rely on these systems. CSIRO’s Responsible AI (RAI) team focuses on safety, security, privacy, and reliability in AI systems. Human oversight is built into all stages, allowing people to check and question AI decisions.
This year, CSIRO published Engineering AI Systems, a practical guide to building and operating AI safely. It explains principles used by CSIRO researchers and engineers to make AI transparent and reliable. Acting Research Director Dr Qinghua Lu said people must be able to verify AI outputs, especially in healthcare, finance, and national security.
CSIRO is working with the Audit Office of NSW to explore AI for government auditing. The project combines AI tools with established audit practices to ensure supervision and responsible use. Early results are encouraging, with predictive analytics helping auditors manage large volumes of documents while keeping humans in control.
The agency also partnered with the National AI Centre to release Guidance for AI Adoption. The guide presents six core practices for organisations using AI. It covers clear labelling, watermarking, and metadata to show when content is AI-generated.
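The disclosure practice above can be illustrated with a minimal sketch: attaching provenance metadata to a piece of AI-generated text so downstream readers and tools can tell it was machine-produced. The field names and function here are purely illustrative assumptions, not a CSIRO or National AI Centre format.

```python
import json
from datetime import datetime, timezone

def label_ai_content(text, model_name):
    """Wrap AI-generated text with disclosure metadata.

    All field names are illustrative, not a published standard.
    """
    record = {
        "content": text,
        "metadata": {
            "ai_generated": True,          # explicit disclosure label
            "generator": model_name,       # which system produced the text
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

labelled = label_ai_content("Quarterly summary draft.", "example-model-v1")
parsed = json.loads(labelled)
print(parsed["metadata"]["ai_generated"])  # True
```

In practice, schemes like this are paired with watermarking of the content itself, since metadata can be stripped when text is copied out of its container.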
Dr Lu said trust is key to AI adoption. By using scientific methods, CSIRO helps ensure AI in Australia is safe, fair, and useful. This approach supports innovation and delivers real benefits for individuals, businesses, and communities.
