Summary Report: “The Weaponization of Social Media: How social media can spark violence and what can be done about it.”

A summary of the November 2019 Mercy Corps report, “The Weaponization of Social Media: How social media can spark violence and what can be done about it.”

Introduction

“The Weaponization of Social Media” identifies the methods used by bad actors who exploit social media to pursue their goals and ideologies, often by sowing division among groups of people and, in some cases, inciting violence. It describes how these actors exploit cognitive processes that reduce people’s capacity to evaluate messaging, prompting emotional responses they would not necessarily have in person. The report also offers solutions and considerations for groups facing the challenges of weaponized social media.

Background and Considerations About the Report

“The Weaponization of Social Media” was written primarily to give peacekeeping organizations information about how social media can be used as a weapon and what organizations can do to mitigate the damage of such campaigns. Any organization interested in understanding why social media is so powerful, how it can be misused, and how to mitigate the effects will find many insights in the report. Throughout, the authors highlight the complex nature of these attacks, acknowledge the need for further study to develop best practices, and urge groups to consider the uniqueness of the area in which they work.

Two words the authors define for the sake of clarity are “misinformation” and “disinformation”: 

  • Misinformation: incorrect information spread by people without the intent to deceive.
  • Disinformation: incorrect information spread to intentionally deceive or manipulate others, including deliberately false news stories, manufactured protests, doctored content (such as photos or videos), and tampering with private communications before release.

What Makes Social Media a Powerful Weapon

Social media lets those who use it for messaging draw on user data to create messages that are highly targeted and personalized, exploiting cognitive processes to influence an audience without its knowledge. Social media companies categorize users into groups with similar preferences and ideas, which restricts the flow of information between groups and gives a bad actor the ability to efficiently target the groups they want to reach with specific messaging.
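As a purely illustrative sketch (this summary’s assumption, not any platform’s actual system), the grouping the report describes can be pictured as indexing users by shared interests so that a message is shown only to the segment most likely to engage with it:

```python
# Illustrative sketch only: grouping users by declared interests so a
# message can be aimed at one segment. All names and fields are invented.
from collections import defaultdict

users = {
    "alice": {"local_politics", "gardening"},
    "bob": {"local_politics", "football"},
    "carol": {"cooking", "gardening"},
}

def group_by_interest(user_interests):
    """Index users under every interest they list."""
    groups = defaultdict(set)
    for name, interests in user_interests.items():
        for interest in interests:
            groups[interest].add(name)
    return groups

groups = group_by_interest(users)
# A tailored message goes only to the segment most likely to engage:
print(sorted(groups["local_politics"]))  # ['alice', 'bob']
```

Real platforms infer these segments from behavioral data at enormous scale, but the targeting principle is the same.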

Social media platforms offer far greater communicative power than traditional media such as newspapers, radio, and television. First, social media is not limited by location as legacy media is, expanding the geographic reach of its audience. Because of the low cost and mobility of the devices used to access it, social media is accessible to more people, more of the time. This expanded reach, mobility, and growing number of users also allow ideas to spread quickly.

Applying principles from cognitive science, the report shows that one of the most powerful effects of social media is its exploitation of four cognitive processes.

  • The primacy effect: people are most likely to retain the points they learn first. Once a person has been exposed to a perspective on a topic, they tend to hold on to that information even when new, possibly higher-quality information is presented. This perpetuates misinformation or disinformation when it is the first version of a story a social media user encounters.
  • The illusory truth effect: a person presented with information several times is more likely to believe it with each repetition, even if they did not believe it at first.
  • The availability heuristic and confirmation bias: people notice and accept ideas they already believe more readily than ideas they do not. Social media algorithms are tuned to serve users material they are likely to consume, so weaponized messaging is more likely to reach the bad actor’s target audience.

The exploitation of these four cognitive processes reduces the capacity of social media users to evaluate the messaging of the bad actor. 
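To make the illusory truth effect concrete, here is a toy model invented for this summary (not a formula from the report) in which each repeated exposure nudges belief a fraction of the way toward full acceptance:

```python
# Toy model of the illusory truth effect (invented for this summary, not
# taken from the report): each exposure moves belief a fixed fraction of
# the remaining distance toward full acceptance (1.0).

def belief_after_exposures(initial_belief, exposures, step=0.15):
    """Nudge belief toward 1.0 by `step` of the remaining gap per exposure."""
    belief = initial_belief
    for _ in range(exposures):
        belief += step * (1.0 - belief)
    return belief

# A claim the user initially finds implausible (belief 0.1) gains
# credibility through repetition alone: roughly 0.23 after 1 exposure,
# 0.45 after 3, and 0.82 after 10.
for n in (1, 3, 10):
    print(n, round(belief_after_exposures(0.1, n), 2))
```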

Technology removes the human component, allowing people to write or do things they would not do if the other person were in front of them. Coupled with algorithms designed to keep people engaged on a platform, this helps create “echo chambers” and “social media bubbles.” The distance a screen puts between people lets a user unfriend or unfollow someone they disagree with without much thought. The people they remain connected to increasingly share the same views, preventing exposure to challenging ideas they might have agreed with had they had the opportunity to read them.
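A minimal sketch of how engagement-optimized ranking produces this narrowing, assuming a simplified tag-overlap score in place of any real platform’s algorithm:

```python
# Simplified stand-in for an engagement-optimized feed (not any platform's
# real algorithm): posts are scored by overlap with what the user already
# follows, so challenging content sinks out of view.

def affinity(user_interests, post_tags):
    """Fraction of a post's tags the user already follows (0.0 to 1.0)."""
    return len(user_interests & post_tags) / len(post_tags)

user_interests = {"party_a", "rural_issues"}
posts = [
    {"id": 1, "tags": {"party_a", "rural_issues"}},  # agrees with the user
    {"id": 2, "tags": {"party_b", "urban_issues"}},  # challenges the user
    {"id": 3, "tags": {"party_a", "urban_issues"}},  # partially agrees
]

# Ranking purely by predicted engagement buries the challenging post:
feed = sorted(posts, key=lambda p: affinity(user_interests, p["tags"]),
              reverse=True)
print([p["id"] for p in feed])  # [1, 3, 2]
```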

The decentralized nature of social media and its widespread use make it difficult to quickly identify misuse and to track down those responsible. There are billions of social media users, and content is created and shared constantly. No reliable algorithms have yet been developed for identifying when social media is being weaponized, and even when misuse is discovered, the people behind it are difficult to locate or prosecute. These features make social media platforms an ideal place for bad actors to operate.

Four Ways Social Media Can Be Weaponized

The report describes four ways social media can be weaponized. Each looks different depending on the context in which the tactics are employed, but together they offer a good sense of the patterns to look for when identifying or analyzing weaponized social media campaigns.

  • Information Operations (IO): in this report, a disinformation campaign coordinated by one nation-state against another.
  • Political Manipulation (PM): a disinformation campaign used to manipulate political discourse within a nation-state.
  • Digital Hate Speech (DHS): a strategy used to amplify existing false narratives about a person or group of people.
  • Radicalization and Recruitment (RR): a strategy used to recruit across large distances, then manipulate the recruits and coordinate their efforts through the platform.


Case Studies

The following case studies show each of the four tactics in use. First, Russia uses IO to destroy confidence in a Syrian humanitarian group that held footage of Russian forces participating in possible war crimes. Second, Rodrigo Duterte’s campaign uses PM to win the 2016 presidential election in the Philippines. Third, DHS is used to amplify existing grievances against the Rohingya, a predominantly Muslim group, leading to killings and mass flight from the country. Fourth, ISIS uses RR to draw in disenfranchised young people to carry out its missions.

Case Study 1 – Russia’s targeting of the White Helmets in Syria – Information Operations (IO)

In this case study, IO was one of the means Russia used to incite violence toward a group of Syrians. Russia targeted the Syrian White Helmets first by collecting intelligence on them through open-source channels and analysis from digital advertising agencies. It then created emotionally charged multimedia material and disseminated it through a variety of online and offline channels.

The following impacts and implications of Russia’s IO campaign are provided in the report:

  • Campaigns reached an estimated 56 million people on Twitter with posts related to the White Helmets during ten key news moments of 2016 and 2017.
  • Over 210 White Helmet volunteers have been killed since 2013.
  • There is a net effect of distracting from or covering up activities by Syrian and Russian forces on the ground, including potential war crimes.


Case Study 2 – Elections in the Philippines – Political Manipulation (PM)

In this case study, Rodrigo Duterte’s team used PM to promote his 2016 presidential campaign while discrediting his opponents.

The campaign happened in three stages:

  • Design: The campaign employed PR companies to build out its marketing techniques and branding, hiring local writers who used language and phrases popular among locals. This helped establish its branding and narratives.
  • Mobilization: During this stage the campaign identified and hired the people who would disseminate disinformation and misinformation: social media influencers with 50,000 to 2 million followers, managers for the trolls and bots, and the operators of pro-Duterte news media, who were readied for the next stage. It also brought on board fan-page moderators, unpaid volunteers, and supportive members of the political party willing to spread the disinformation the campaign created.
  • Implementation: In this stage, the coordinated disinformation and misinformation campaign began and continued until its objectives were met, in this case Duterte winning the election.

The following impacts and implications of the PM campaign employed by Duterte’s team are provided in the report:

  • The general public is confused about government policies and practices.
  • Fake news is rampant, and digital disinformation is becoming part of the norm for political agents.
  • The campaign “harms not only journalists and political opponents but also local workers used as active agents of disinformation.”


Case Study 3 – Intercommunal Violence in Myanmar – Digital Hate Speech (DHS)

In this case study, DHS is the strategy Buddhist nationalist groups used to amplify hatred of the Rohingya, a predominantly Muslim group in a predominantly Buddhist country. It is in this case that the speed and breadth of social media use, the algorithms, and the manipulation of cognitive processes can be seen operating together at scale. Of the four case studies, this one appears to have done the greatest damage: U.N. officials have described the aftermath as the ethnic cleansing of the Rohingya.

Social media is used to spread “extra-factual sources of information,” meaning information that is contentious and originates outside of social media. The following types of extra-factual information appear in this case study:

  • Rumor: Unverified information that is transmitted from one person to others. Rumors can be true, false, or a mixture. At their core, mis- and disinformation are rumors.
  • Hate speech: Any form of expression (speech, text, images) that demeans or attacks a person or people as members of a group with shared characteristics such as race, gender, religion, sexual orientation, or disability.
  • Dangerous speech: Speech that has a special capacity to catalyze or amplify violence by one group against another.


The following impacts and implications of the DHS campaign employed by Buddhist nationalist groups are provided in the report:

  • Amplifying grievances and triggering violence between groups of differing ethnic and religious identities. 
  • Anti-Muslim sentiment and intercommunal violence against Muslim identity groups have been the most visible examples and are linked to the country’s deep Buddhist nationalist project. 
  • Virulent rumors and online hate speech triggered the Mandalay riot of July 2014, in which approximately 20 people were injured and 2 were killed.
  • Buddhist nationalists such as the 969 movement and Ma Ba Tha have exploited social media (particularly Facebook) “to stoke fear, normalize hateful views and facilitate acts of violence” against identity groups (particularly Muslims or the ethnic Rohingya) who are perceived and promoted as enemies of Buddhism or of the State.
  • More recently, the Myanmar military carried out systematic clearance operations in 2017 against the Rohingya people in response to the Arakan Rohingya Salvation Army attacks, another situation that was largely amplified by digital hate speech and the propagation of unverified rumors.
  • Hundreds of thousands of Rohingya have fled, there has been systematic rape by security forces and affiliated militia groups, and there have been over 6,000 civilian deaths. UN officials have characterized the Rakhine State security operations as ethnic cleansing.


Case Study 4 – ISIS Media Jihad – Radicalization and Recruitment (RR)

In this case study, ISIS weaponized social media by using it for RR. They targeted young, tech-savvy millennials who were frustrated with some aspect of their circumstances and were looking for connection and a sense of purpose. ISIS used social media to identify and draw in these targets and then used it for sharing information and coordinating actions. 

The following impacts and implications of the RR campaign ISIS employed are provided in the report:

  • The plan is to move recruits toward radicalization one step at a time; exposure to one set of ideas can open the door for other, more radical thinking to take root.

  • The focus on propaganda elevates the status, importance, and role of social media users.
  • ISIS is adaptive and persistent, despite coordinated efforts among technology companies, governments, and civil society to counter them. When suspected accounts are de-platformed, blocked users come back online using alternative handles. Operatives have become skilled at undermining detection efforts and ensuring the group’s digital survival.


Response Framework

The responses in this framework offer a general approach to reducing the impact of social media weaponization at the different stages in which it occurs. The framework is not meant to be comprehensive or static. The authors are explicit that a variety of groups must continue working on responses as threat actors evolve, and must deal with each situation in the context in which it surfaces.

The following are the stages with brief descriptions of each. Examples can be found in the full report:

  • Prevention. Regulation by governments, industry associations, and companies.
  • Monitoring, detection, and assessment of threats. “Activities under this category include information and threat mapping, the development of open-source rumor monitoring and management systems, identification and analysis of online hate speech, and social network monitoring, analysis and reporting.” (A minimal illustrative sketch follows this list.)
  • Building resilience to threats. This includes online and offline measures such as teaching digital media literacy and running awareness campaigns about particular topics or adversarial tactics.
  • Mitigation of impacts. This seeks to reduce the harm caused by weaponized information once it has surfaced, ideally during the crisis itself.
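As a minimal sketch of the monitoring and detection stage mentioned above, assuming a simple keyword watchlist (the terms and posts are invented; real rumor-monitoring systems combine network analysis, local-language expertise, and human review):

```python
# Minimal sketch of keyword-based rumor monitoring (the watchlist and posts
# are invented for illustration; real systems are far more sophisticated).

WATCHLIST = {"outsiders", "poisoned the well"}

def flag_for_review(post_text):
    """Return the watchlist terms found in a post, for human assessment."""
    text = post_text.lower()
    return {term for term in WATCHLIST if term in text}

posts = [
    "Market prices are up again this week.",
    "They say the outsiders poisoned the well last night!",
]

for post in posts:
    hits = flag_for_review(post)
    if hits:
        print(f"REVIEW: {post!r} matched {sorted(hits)}")
```

Flagged posts would then feed the human assessment and management steps the report describes, rather than triggering automatic action.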


Considerations When Developing Solutions to Mitigate the Weaponization of Social Media

The following are suggestions for improving the quality of the systematic response to the weaponization of social media:

  • Environmental factors: The authors identify three environmental factors that must be considered to prevent, mitigate, or counter the weaponization of social media: foundations for digital harm, pathways to digital harm, and signals of digital harm.
  • Build communities’ digital resilience: Teach communities how to defend themselves and build the conflict-resolution capacities needed to recover from attacks.
  • Build best practices: Find and publish evidence-based best practices for protecting against, responding to, and recovering from the weaponization of social media.
  • Use bad-actor tactics for good: Some of the same tactics bad actors use can be adopted by those trying to prevent the harm bad actors cause.
  • Apply best practices from offline behavior: Consider ways in which best practices for protecting people in real life can be applied to online protection.


Conclusion

Organizations and people alike are at risk of experiencing the impact of social media weaponization, whether from another country running disinformation and misinformation campaigns that confuse voters about issues or candidates, or from threats closer to home. There is much work to do to defend against, respond to, and recover from bad actors weaponizing social media. It is necessary work, as the effects can cause damage in the physical world comparable to that of physical attacks. As best practices emerge, communities and organizations large and small can share them with similar groups and allow the tools of the internet to be used as originally imagined: as a place where information is democratized. When people’s thinking is not constrained by the manipulation of their cognitive processes, they can use those processes to make informed choices that improve their lives and strengthen their communities. This work would also leave organizations better equipped to protect against, respond to, and mitigate these attacks, ultimately strengthening the nation.
