How can we protect refugees from growing digital threats?

Online misinformation, disinformation and hate speech targeting refugees and asylum-seekers are causing real-world harm. Finding ways to tackle them is critical.

15 July 2025
An illustration showing lots of phone screens and text saying "misinformation", "false news" and "confusion" with question marks.

In August 2017, widespread violence targeting the Rohingya minority in Myanmar’s Rakhine State led to thousands of deaths and more than 750,000 Rohingya fleeing to neighbouring Bangladesh. A year later, a UN fact-finding mission concluded that the use of social media platforms to spread misinformation and hate speech against the Rohingya had played a significant role in escalating the violence.   

Eight years on, misinformation, disinformation and hate speech directed at refugees and stateless people have proliferated on digital platforms around the world and become increasingly linked to real-world incidents of xenophobia, violence and even forced displacement.

“There’s a growing normalization of harmful rhetoric about refugees. We need more guardrails and safety policies by digital platforms,” observes Gisella Lomax, senior advisor on information integrity at UNHCR, the UN Refugee Agency.  

She adds that the rise of generative artificial intelligence (AI) has turbo-charged the problem, allowing people to inexpensively flood digital platforms with false, manipulative or exploitative content such as deepfakes. Anonymously, and with the click of a mouse, toxic narratives characterizing refugees as opportunistic, dangerous or criminal have spread through social media channels, reaching and influencing millions of people and causing harm both to refugees and to the organizations trying to protect and support them.

The deluge of misinformation on digital platforms is leading to a lack of trust in the vital protection information humanitarians need to share during a crisis. “It has an impact on determining what information you can trust,” says Lomax. “And access to reliable information is fundamental for refugees. In emergencies, it can be lifesaving.”

  

Elections multiply risks

The risks are particularly high ahead of elections. In late 2023, the run-up to national elections in Indonesia coincided with the arrival of more than 1,700 Rohingya refugees by boat to Aceh Province and a sharp uptick in online hate targeting the Rohingya, along with a proliferation of fake UNHCR social media accounts spreading disinformation.


Rohingya refugees arrive in Ulee Madon in Indonesia's Aceh Province in November 2023 after making the dangerous crossing from Bangladesh.

Previously, Rohingya refugees fleeing persecution in Myanmar, or overcrowded camps in Bangladesh, had been treated with compassion when coming ashore in Aceh after treacherous boat journeys. But in the last two months of 2023, members of the local community gathered to protest boat landings and even turned some boats away. And in December of that year, hundreds of students stormed a building in Indonesia’s Banda Aceh city that was sheltering Rohingya refugees. They forced 137 refugees onto trucks and demanded they be removed.  

In a statement, UNHCR described the mob attack as “not an isolated act, but the result of a coordinated online campaign of misinformation, disinformation and hate speech”.  

Developing digital protections for refugees

Humanitarian organizations have struggled with how best to protect refugees and other vulnerable people from the harms caused by hate speech and false information. 

In 2023, with support from ECHO, the European Commission’s humanitarian aid department, UNHCR began developing a global response to misinformation, disinformation and hate speech on digital platforms. The strategy draws on lessons from pilot projects in multiple countries and partnerships across a range of sectors.

In South Africa, where online anti-foreigner sentiment has fed into an offline vigilante movement that has targeted refugee businesses and homes, and even schools attended by refugee children, one of the pilots is testing a proactive “pre-bunking” approach, with support from Innovation Norway. As many South African schools lack digital devices or internet connectivity, UNHCR partnered with the Department of Basic Education and the private sector to develop an analogue board game for school learners called “Mzansi Life” (South African Life). Players are encouraged to put themselves in the shoes of refugees and asylum-seekers, and to question common anti-foreigner narratives.   
 


Children in rural South Africa cluster around a phone screen. 

“You build mental antibodies to misinformation, disinformation and hate speech, and you’re able to detect when you’re being manipulated online,” explains Katie Drew, a consultant with UNHCR who is leading the pilot.

Initial results have been promising, with most participants reporting a significant shift in their perceptions of non-South Africans and their awareness of online manipulation. The next stage of the project involves developing a digital version of the game and launching a competition on TikTok that encourages users to create and post their own pre-bunking content promoting solidarity with refugees.

Challenging toxic narratives

Another pilot project has focused on the challenge of online hate speech targeting Rohingya refugees in six Asian countries. Besides monitoring social media platforms, the project involves partnering with local civil society organizations, governments, refugee-led organizations, and media to strengthen awareness of the harmful impacts of online hate speech and to develop creative content that counters toxic narratives.  

Late last year, the Association for Progressive Communications (APC) – a UNHCR partner organization – worked with Yasmin Ullah and Hafsar Tameesuddin, former Rohingya refugees and co-founders of the Rohingya Maiyafuinor Collaborative Network, to produce a video in which the two women use a humorous approach to confront falsehoods about their community that circulate widely on social media.

“Without a lot of resources to do this work, we’ve had to be selective in the work we do,” explains Ullah. “Sometimes we’ve had to become cheeky about it and reply to the hate speech ourselves. We try to balance fun and light-heartedness with hard-hitting messages.”

Partnerships are key

Drawing on findings from the pilots and from consultations with partners, UNHCR recently launched an Information Integrity Toolkit, funded by ECHO, that provides practical tools and guidance for understanding online risks and addressing them. “The tools in it are really cross-cutting and can be applied by other humanitarian organizations,” says Lomax, who sees UNHCR’s work as part of the wider UN system’s efforts to make digital spaces safer for refugees and other vulnerable groups, as outlined in the Global Digital Compact adopted by Member States last September.

With sweeping funding cuts leaving humanitarians more stretched than ever, partnerships are key to UNHCR’s ability to make an impact on the huge volume of online misinformation and hate speech targeting refugees. Collaborating with local civil society organizations, digital rights groups and governments is also about recognizing that different contexts demand different solutions. “On the one hand, hate speech and misinformation can be global and transnational, but it’s also very localized, often with deep cultural roots, and it’s local actors that have the contextual, historic knowledge and languages to make sense of that,” says Lomax.

A new project funded by the Government of Switzerland is supporting UNHCR’s efforts to create and strengthen these “digital protection” partnerships, along with new research and policy development.

Partnerships with tech and AI companies are also crucial, not only for advocating with those companies to address hate speech and other information risks on their own platforms, but for the skills and knowledge they can bring. In preparation for the South African pilot, UNHCR drew on training and insights from Google, which has used pre-bunking videos in various contexts, including to counter false narratives about Ukrainian refugees in Central and Eastern Europe.

In total, 23 private sector companies, governments, UN partners, NGOs and refugee-led organizations committed to scaling up actions to prevent the harmful impacts of hate speech, misinformation and disinformation on displaced people through a multistakeholder pledge announced at the Global Refugee Forum in December 2023.

Since then, while there have been many developments in the sector, practical engagement with these critical actors continues.  

“One of our objectives is to encourage tech companies to pay more attention to humanitarian contexts,” says Lomax.