Law Reform For Addictive Tech Platforms

A major US court case against a social media giant is pushing lawmakers to look beyond online content and confront how addictive platform design is putting users, especially children, at risk.

Lawmakers are starting to treat the design of big tech platforms as a dangerous product in its own right, as legal action in the United States over allegedly addictive social media features aims to protect children but could also reshape how digital businesses operate worldwide.

Right now, social media sits at the centre of everyday life, especially for young people, with Australian teenagers spending roughly 14 hours a week scrolling, posting and watching online. Over the past decade, regulators have mostly tried to tackle online harms by policing content, relying on complex rules about what should be allowed or removed and often treating platforms as passive hosts of other people’s speech rather than active designers of attention‑grabbing systems.

The latest US case takes a different path by focusing on how platforms are built: infinite feeds, reward‑driven notifications and algorithms tuned to maximise time on site, backed by internal targets to lift user engagement by double‑digit percentages. Several major platforms have already chosen to settle rather than argue in front of a jury that these mechanics are harmless. This suggests the industry recognises that design choices, not just toxic posts or videos, sit at the heart of problems like compulsive use, exposure to harmful material and scam‑driven advertising.

This shift in focus matters for Australia because current laws such as the Online Safety Act mainly target the most extreme categories of harmful content and give limited authority to challenge the systems that keep people trapped on platforms. After almost a decade of content‑led policy, children are no safer online and bans on underage use are creeping in, which appears to be a sign that the original strategy has failed to keep pace with evolving digital risks.

A stronger duty of care for digital products, built into upcoming reforms to the Online Safety Act, seems to offer a more direct way to reduce harm for the many Australian users who resemble the young people at the centre of overseas lawsuits. This would mean clearer obligations on platform design, firmer liability if companies ignore foreseeable risks and more practical remedies for families rather than expecting individuals to run expensive product‑liability cases against global tech firms.

Similar lessons appear relevant for Australia’s emerging AI regulation, which currently leans towards a light‑touch model that may repeat the same mistakes made with social media by waiting years before acting on obvious design‑driven dangers. If AI‑powered products embed the same engagement‑at‑all‑costs logic into search, education or work tools, Australia could face another round of bans and patch‑up fixes instead of setting strong safety rules from the start.

Australia once led in setting up specialist internet regulators, but those frameworks seem tuned to yesterday’s problems rather than to the always‑on, algorithmic systems that now shape attention, behaviour and financial risk. Updating the law to focus on product safety, create clearer standards for platform and AI design and tie corporate liability directly to preventable harms looks like the next necessary step if Australians are to have meaningful protection in a digital environment built to keep them hooked.
