Fighting disinformation and hate speech

KEY PROJECT | Democratic Freedoms

Disinformation is used to create division and to polarise our communities for political or financial gain. The Human Rights Law Centre advocates for legal reforms to prevent its spread and penalties for politicians who deliberately mislead the public.


Disinformation targets people’s fears and anxieties to create division and polarise our communities for political or financial gain. Once unleashed, it travels seamlessly across social media, newspapers, talkback radio and messaging apps. Digital platforms and media companies profit from it, and some politicians build a platform on it.

Big tech companies like Meta and Google have the power and influence to amplify the information – and disinformation – that forms the basis of people’s decisions and beliefs. These companies make record profits tracking our behaviour and selling our attention, while spreading toxic disinformation and hate speech through our democracy at an unprecedented scale.

The proliferation of online disinformation and hate speech through digital platforms represents a serious and growing threat to Australia’s democracy. Australia’s regulation of digital platforms, including their mass collection of personal data, lags behind that of other jurisdictions. Without adequate human rights protections, the problem will only get worse.

To date, the Australian Government has pursued a model that allows this powerful industry to play by its own rules. In October 2024, we launched Rights-First: Principles for Digital Platform Regulation, calling on the Australian Government to step in and stop big tech from regulating itself while harmful misinformation and disinformation poison Australia’s democracy.

In November 2024, the Australian Government committed to introducing a Digital Duty of Care, a key recommendation from our report. A digital duty of care will require social media platforms such as Facebook, Instagram and X to take reasonable steps to protect users from foreseeable harm – an important first step in addressing the growing risks of online harm.