

Let’s create more inclusive and safer online platforms


About the campaign
The rise of the internet has opened up incredible possibilities, allowing people to connect instantly and globally. However, without proper regulations, we've witnessed the growth of digital platforms that can create harmful online environments. Issues like online bullying, racism, sexism, doxxing, death threats, revenge porn, live-streamed terrorism, deep fakes, and complex financial scams are all too familiar to us.
But it doesn't have to remain this way. We urge the Government to implement basic regulations that ensure social media platforms and search engines are designed with user safety in mind. This can be achieved through new transparency and safety measures overseen by an independent media regulator.
Crucially, this regulation focuses on tech companies, because what needs to change is how their systems are designed. We need clear requirements for transparency and accountability to ensure that these platforms are safe, just as we expect of other products we use.
The time to act is now, especially as Minister Erica Stanford is considering potential actions in this area.
This isn’t the end. But it will be a powerful start. It will be the beginning of the transformation of online platforms into a positive force in our lives, which lives up to the potential we saw when the platforms first came into being. Places where people can meaningfully connect, where we can share, where we can deepen the bonds between us, where we can care for others and be cared for, where we can flourish.


The action
So get amongst it. Share information with your friends and whānau, meet with your MP, talk to your teachers, students, parents, grandparents, workmates – this is an issue that everyone can get behind. Let’s be the people power movement that shows them we won’t be silent until our digital experiences are healthy, safe and inclusive.
What you can do:
- Meet with your local MP to discuss your views on online harm and why we need legislative change to protect people.
- Write to Minister Stanford about your concerns with our current online technology space, reinforcing why we need legislative change.
- Chat with those around you about the impacts of online harm and what can be done about it!
- Download our campaign images to spread the word even further.


Key messages
You are welcome to use any of this content when taking action. Our top messages include:
- Online harm is a serious issue that affects both individuals and society.
- We need to change how online platforms are designed and how they manage online spaces to better prevent and address this harm.
- When the internet first emerged, it held great potential. However, without proper safeguards, we’ve seen digital platforms create toxic environments that harm people both online and offline.
- Search engines and social media platforms have been designed to promote content that drives engagement, regardless of its harmful effects. As a result, harmful material can easily spread into the feeds of many people, leading to broader societal issues.
- We need to tackle this fundamental design flaw, which is the main focus of our call to action. Aotearoa New Zealand has the opportunity to enhance digital spaces by implementing sensible regulations.
- Importantly, the Government must uphold its obligations under Te Tiriti o Waitangi and work with Māori to develop regulation.
- We want to see law that creates a thorough and consistent framework for ensuring online safety. This framework should include requirements of transparency and accountability, a duty of care, independent oversight, and penalties for non-compliance.
- This campaign doesn’t include a call to ban social media for young people, as it’s primarily focused on tech company accountability.
- Australia, the UK and the European Union already have online safety laws. We have fallen behind, but we can do this too.
- In developing regulation, it is important to hear from people most harmed to ensure the resulting regulation and policy is fit for purpose.
- Engaging with online spaces has become a huge part of everyday life. We want people to be able to thrive and have their human rights respected everywhere, including online.
The Call
We want to see law that creates a thorough and consistent framework for ensuring online safety.


This framework should include requirements of transparency and accountability, a duty of care, independent oversight, and penalties for non-compliance.
Importantly, the Government must uphold its obligations under Te Tiriti o Waitangi and work with Māori to develop regulation.
The intent is to hold tech companies more accountable for the harm occurring on their platforms.
We think the main parts should include:
- Transparency: Tech companies should clearly show how their algorithms work, like what content they recommend, what they remove, and how complaints are handled.
- Duty of care: Tech companies must actively make sure their products and services are safe by design. This means having strong checks to find risks and ways to reduce them.
- Independent oversight: There should be outside monitoring with the power to penalise companies that don't follow the rules. Tech companies must also provide reports to show if they are following these rules.
In developing regulation, it is important to hear from people most harmed to ensure the resulting regulation and policy is fit for purpose.
The call does not include a ban on social media for people under 16 as the focus here is on holding tech companies to account.
We want rules that focus directly on tech companies in order to fix the root causes of online harm. And the time is now, as Minister Erica Stanford is currently thinking about what actions we can take in this area.
In fact, other countries around the world, including Australia, the European Union and the United Kingdom, have already taken similar government action.
FAQs
Social media platforms are a critical space for people to exercise the right to freedom of expression. Online violence and abuse are a direct threat to this freedom of expression. Online harm can also fuel broader human rights violations by amplifying violent extremism. Further below we discuss Amnesty’s research, The Social Atrocity: Meta and the right to remedy for the Rohingya, which illustrates the gravity this harm can have.
A number of organisations and individuals have been consulted in the lead up to this campaign.
Theory of change:
- In short, our theory of change is based on a people power model – building a groundswell of support that compels decision-makers to act. There is also a window of opportunity here, as we know the Government is considering action to address online harm. This makes it important to get in front of the Government right now. There appears to be political appetite for action, so we want to harness this moment.
- The first step in this campaign focuses on encouraging people to write or talk with MPs, especially Government MPs and the Minister whose portfolio includes this issue (Minister Erica Stanford). This is important because the issue is under active consideration.
- At the end of the year we’ll reassess to consider what we should do next.
We’re seeing online platforms designed to promote high-engagement content through algorithmic amplification, regardless of the harm. This is driven by the business model these companies have adopted, in which harmful content drives engagement and therefore revenue.
In practice this means highly harmful material finds its way into the social media feeds of many different people, driving broader societal harm. We need a solution that addresses this core design problem, which is the focus of our call.
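To make the design problem concrete, here is a minimal, purely illustrative sketch, not any platform’s actual code, and with all names and numbers hypothetical, of how a feed that ranks solely on predicted engagement surfaces the most provocative material first:

```python
# Purely illustrative sketch: a toy engagement-ranked feed.
# Not any platform's actual algorithm; all names and numbers are hypothetical.

posts = [
    {"topic": "community fundraiser", "predicted_engagement": 0.30},
    {"topic": "inflammatory rumour", "predicted_engagement": 0.92},
    {"topic": "family photo update", "predicted_engagement": 0.45},
]

# Ranking only by predicted engagement gives no weight to potential harm,
# so the most provocative item rises to the top of the feed.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(post["topic"])
```

A duty of care and safety-by-design requirements would oblige companies to assess and mitigate exactly this kind of risk before content is amplified.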
Amnesty’s research The Social Atrocity: Meta and the right to remedy for the Rohingya illustrates the gravity this harm can have.
Facebook owner Meta’s dangerous algorithms and reckless pursuit of profit substantially contributed to the atrocities perpetrated by the Myanmar military against the Rohingya people in 2017...
The Social Atrocity: Meta and the right to remedy for the Rohingya, details how Meta knew or should have known that Facebook’s algorithmic systems were supercharging the spread of harmful anti-Rohingya content in Myanmar, but the company still failed to act.
“In 2017, the Rohingya were killed, tortured, raped, and displaced in the thousands as part of the Myanmar security forces’ campaign of ethnic cleansing. In the months and years leading up to the atrocities, Facebook’s algorithms were intensifying a storm of hatred against the Rohingya which contributed to real-world violence,” said Agnès Callamard, Amnesty International’s Secretary General.
“While the Myanmar military was committing crimes against humanity against the Rohingya, Meta was profiting from the echo chamber of hatred created by its hate-spiralling algorithms....”
Meta uses engagement-based algorithmic systems to power Facebook’s news feed, ranking, recommendation and groups features, shaping what is seen on the platform. Meta profits when Facebook users stay on the platform as long as possible, by selling more targeted advertising. The display of inflammatory content – including that which advocates hatred, constituting incitement to violence, hostility and discrimination – is an effective way of keeping people on the platform longer. As such, the promotion and amplification of this type of content is key to the surveillance-based business model of Facebook.
In the months and years prior to the crackdown, Facebook in Myanmar had become an echo chamber of anti-Rohingya content. Actors linked to the Myanmar military and radical Buddhist nationalist groups flooded the platform with anti-Muslim content, posting disinformation claiming there was going to be an impending Muslim takeover, and portraying the Rohingya as “invaders”.
The 2019 Amnesty International report, Surveillance Giants, stated:
However, the use of algorithms to curate social media content and encourage people to remain on the platform can result in Google and Facebook actively promoting or amplifying abusive, discriminatory or hateful content. The platforms recommend and promote new content based on opaque algorithmic processes to determine what will best engage users.
Because people are more likely to click on sensationalist or incendiary material, the so-called ‘recommendation engines’ of these platforms can send their users down what some have called a ‘rabbit hole’ of toxic content.
Sensationalism in mass media is, of course, not a new phenomenon, and is not limited to the internet. But the recommendation engines of social media go well beyond the adage “if it bleeds, it leads”: they can systematically privilege extreme content including conspiracy theories, misogyny, and racism to keep people on their platforms for as long as possible.
For example, one academic study into the spread of anti-refugee sentiment on Facebook found that “anti-refugee hate crimes increase disproportionally in areas with higher Facebook usage during periods of high anti-refugee sentiment online”.
Similarly, the algorithms behind Google’s YouTube platform have been shown to have various harmful consequences .... As well as privileging harmful content, the platforms’ algorithms can also undermine freedom of expression or lead to discrimination by suppressing certain forms of content.
For example, LGBTI communities have alleged that YouTube’s algorithm blocks or suppresses videos containing LGBTI content by automatically enforcing age restrictions and by “demonetising” the videos – meaning that they deny the producers ad revenue. YouTube denies this, saying the company does “not automatically demonetize LGBTQ content.”
Huge numbers of people in Aotearoa are experiencing online harm. The 2023 Internal Affairs discussion document, Safer Online Services and Media Platforms, stated:
“Everyone consumes or uses content, like books, films, and radio to social media, blogs, and everything in between. However, our rapidly evolving and growing environment means that New Zealand’s existing regulatory systems for content are no longer as responsive or effective as we would like them to be. Because of this, New Zealanders are being exposed to harmful content and its wider impacts more than ever before.”1
Online harm is having a very real impact, and unfortunately it isn’t new. In 2021 the Manaaki Collective was launched with advisory resources and a declaration against digital harassment. The Manaaki Collective Declaration states2:
“We are very concerned with the recent increase in digital hate, online harassment and death threats, and the crossing over into real-life harassment in Aotearoa. Threats of violence and physical intimidation of human rights defenders constitute a human rights crisis in New Zealand.
We call for a more responsive system that protects the rights of human rights, environmental rights and minority rights defenders.
The tactics of digital harassment, threats, doxxing and smear campaigns are deliberate tactics to cause terror and inhibit important progress towards the ideals of treaty justice and human rights for marginalised groups in Aotearoa. They are an extension of a colonial, patriarchal white supremacist system that has sought to oppress through fear, violence and division for centuries.”
Released in 2023, a Netsafe survey found that 46% of Māori had experienced harmful digital communications in the past year.3
Research in 2017 documented the alarming impact that abuse and harassment on social media are having on women, with women around the world reporting stress, anxiety, or panic attacks as a result of these harmful online experiences. In Aotearoa New Zealand, around a third of women surveyed said they had experienced online abuse and harassment. Of those women who experienced abuse, 75% said they had not been able to sleep well, 49% feared for their physical safety and 32% feared for the physical safety of their families as a result.
Check out Netsafe annual reports for statistics; for example, Netsafe stated “there were 28,468 online harm reports in FY24... 6,272 of these were categorised as harmful digital communication complaints”.4
Current law is not enough to address online harm. For example, the 2023 Internal Affairs discussion document, Safer Online Services and Media Platforms, stated:
“Our main pieces of legislation are over 30 years old: the Films, Videos, and Publications Classification Act 1993 and the Broadcasting Act 1989. Many parts of those laws are still relevant, for example codes of broadcasting practice and tools to protect children from age-inappropriate content on television. But they do not have the reach and tools to deal with the online world.
The current system is difficult to navigate and has big gaps. New Zealanders must figure out which of five industry complaint bodies to go to if they feel content is unsafe or breaches the conditions of the platform it is on. On top of that, not all forms of content are covered by those bodies. The system is also very reactive because it relies mainly on complaints about individual pieces of content. For most forms of content, we do not have the tools and powers to ensure that platforms are doing what they should to manage the risks of harmful content.
It is important that our laws reflect our digitalised environment, including clear avenues where consumers can influence the content they see and respond to content they feel is harmful. While the development of this legislation rests with government, the implementation and practice sit with platforms. These safety practices need clear oversight to ensure effective and appropriate implementation.”5
In 2022, Netsafe and NZTech launched the voluntary Aotearoa New Zealand Code of Practice for Online Safety and Harms. Many concerns have been raised about the adequacy of the Code.
See this comment from the NZTech Chief Executive in a recent article:
NZ Tech chief executive Graeme Muller also wanted more government involvement.
"I personally think that online safety is not improving. I'm proud to be involved in one little initiative that's trying to do something to improve it."
He said the social media platforms themselves wanted government intervention.
"What we are missing in New Zealand is we keep sitting back and hoping it'll fix itself," he said. "The platforms would much prefer clarity. That's what I've heard them say. They get hammered if they take something down, and they get hammered if they don't."
In sum, we’re concerned that the Code is inadequate. Given the significant impact the online world has, and how important online activity is to daily life, we believe it warrants regulation to ensure appropriate standards are met.
See this article for a discussion of the Code and the need for regulation. See also this article highlighting support for regulation.
Yes, there is already similar legislation in places like Australia, the UK and the EU. The more countries that take part, the more pressure there is on tech companies to change. In addition, as described above, tech companies themselves have talked about the need for more intervention.
This campaign doesn’t include a call to ban social media for young people, as it’s primarily focused on tech company accountability.