Albanese Government should draw on human rights principles to mitigate risks of AI

The Albanese Government should ground artificial intelligence laws in human rights principles to protect people from exploitation and ensure that advances in, and uses of, AI benefit the wider community, the Human Rights Law Centre said today.

As people in Australia increasingly adopt AI across a range of sectors, awareness is growing of the threats the technology poses if left unregulated. AI has already been misused to propagate harmful misinformation campaigns,[1] create and distribute sexually abusive deepfake content,[2] and entrench discrimination in policing and healthcare.[3]

In a submission to the Senate Select Committee on Adopting Artificial Intelligence, the Human Rights Law Centre recommended that Australia adopt a risk-based approach to AI regulation, grounded in international human rights law and principles.

Human rights law provides a robust framework for accountability and oversight that complements ethical approaches to the governance of AI. Europe, Brazil and Canada have all incorporated, or proposed incorporating, human rights into their approaches to AI regulation.

Quotes attributed to David Mejia-Canales, Senior Lawyer at the Human Rights Law Centre:
“Advances in technology should serve our communities, not put people at risk of harm. We need laws to ensure that artificial intelligence technology and the corporations driving it forward are transparent and accountable to people.

“The Albanese Government should follow the world-leading examples set by Europe, Brazil and Canada to ensure that Australia’s regulation of artificial intelligence is grounded in human rights laws and principles.”

Read the Human Rights Law Centre’s submission here

[1] Ali Swenson and Kelvin Chan, ‘Election disinformation takes a big leap with AI being used to deceive worldwide’, Associated Press (Online, 14 March 2024) <https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd>.

[2] Cassandra Morgan and Holly Hales, ‘Student AI deepfake images reflective of porn crisis’, Australian Associated Press (Online, 12 June 2024) <https://www.aap.com.au/news/student-deepfakes-reflective-of-school-porn-crisis/>.

[3] Will Douglas Heaven, ‘Predictive policing algorithms are racist. They need to be dismantled’, MIT Technology Review (Online, 17 July 2020) <https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/>.

Media contacts:
Thomas Feng
Acting Engagement Director
Human Rights Law Centre
0431 285 275
thomas.feng@hrlc.org.au