Australia’s eSafety Commissioner has proposed new mandatory standards to combat the spread of deepfake child abuse material and pro-terror content, with the aim of holding technology giants accountable for their role in disseminating harmful online content. This article critically examines Australia’s efforts and the potential impact of the proposed standards on companies such as Meta, Apple, and Google.
The eSafety Commissioner acknowledges that the technology industry was given a two-year window to develop its own codes regarding harmful content. However, this approach has proven ineffective, as the codes failed to provide sufficient safeguards and lacked a strong commitment to identifying and removing known child sexual abuse material. In response to this failure, the Commissioner has taken the initiative to develop new, more stringent standards.
The proposed standards, currently open for consultation and pending parliamentary approval, aim to address the worst-of-the-worst online content, including synthetic child sexual abuse material and pro-terror content. The eSafety Commissioner, Julie Inman Grant, emphasizes that the focus is on ensuring the industry takes meaningful steps to prevent the spread of seriously harmful content, particularly child sexual abuse material. These standards would apply to a wide range of platforms, including websites, photo storage services, and messaging apps.
Australia’s previous attempts to hold tech giants accountable for user-generated content have met with limited success. The Online Safety Act, passed in 2021, was groundbreaking in its ambition to regulate tech companies’ content-moderation responsibilities, but attempts to exercise the extensive powers it grants have been met with indifference from some technology companies.
A recent example of the difficulties in enforcing accountability is the case involving Elon Musk’s X. The eSafety Commissioner fined X Aus$610,500 (US$388,000) for its failure to demonstrate effective removal of child sexual abuse content from its platform. X missed the deadline to pay the fine and has since launched legal action to challenge the penalty.
Australia’s internet watchdog acknowledges the limits of industry self-regulation and presents the proposed standards as a necessary response to the proliferation of harmful content online. While the initiative is commendable, its effectiveness in practice remains to be seen: meaningfully tackling deepfake child abuse material and pro-terror content will require regulation with enough enforcement power to compel technology giants to moderate content proactively and responsibly.
The proposed industry-wide standards represent a significant step towards curbing the dissemination of deepfake child abuse material and pro-terror content. By requiring technology giants to take greater responsibility for harmful content on their platforms, Australia aims to safeguard its online space. Challenges in enforcement remain, however, and the standards will protect vulnerable individuals only if they are implemented and enforced effectively.