This is a transcript of Dr. Jonathan Corpus Ong's webinar for Civil Society Capacity Building to Counter Disinformation.
Many of us attending this webinar have been coping with a very difficult information environment that has normalized and amplified hateful speech, harassment, and red-tagging. Over the past years, we've seen politicians strategically circulate narratives that aim for political divisiveness; instead of being punished or vilified for this, they have often gained a passionate base of very angry populist voters who are consistent and reliable. For the communication masterminds behind disinformation campaigns, "fake news" projects have yielded additional income and created shadow economies that hide in plain sight -- if they bother to hide at all. Many happily take credit for disinformation campaigns, as it leads to new projects and clients.
Our different sectors of academia, journalism, law, community organizing, and frontline activism have launched diverse and disparate well-meaning efforts. I'm curious to know how many people in the room think we're in a better or worse place than we were four years ago, when the terms "fake news" and "troll armies" first entered the global lexicon.
So on one hand, we have more financial support and strategic investments in investigative reporting, academic research, and capacity-building for civil society actors. These have yielded new jobs, reporter beats, fields of study, "anti-fake-news" laws, and more aggressive deplatforming, such as in the case of Donald Trump and his legion of QAnon conspiracy theorists.
On the other hand, the arduous tasks of disinformation busting and fact-checking always seem like cutting off the heads of the hydra one at a time: when one head is severed, several grow and evolve in its place. There is also the challenge of culture and context: one solution proposed in one part of the world may not be culturally appropriate in another; or to put it another way, to what extent should the cultural norms dictated in Silicon Valley apply to national contexts elsewhere? How can we also empower tech regulation "from below" -- in the form of supporting local reform in election laws, taxation laws, protections for journalists and civil society?
This webinar today is an invitation for us to work together toward a whole-of-society approach to mitigating disinformation.
Key principles behind a whole-of-society approach to mitigating disinformation would be:
- multi-stakeholderism: this means refusing siloed approaches and working collaboratively rather than competitively. It means diversity in terms of social backgrounds and disciplinary expertise, in order to anticipate new disinformation trends as well as unintended outcomes of well-meaning interventions.
- structural: this means addressing the production of disinformation at its roots, rather than offering corrections, fixes, and labels. This requires examining the political economy of disinformation: how the business is run, how workers are arranged, and how "respectable" industries are complicit and collude with digital shadow economies.
- local: this means being especially sensitive to how targeted disinformation, along with hate speech and conspiracy theory, inflicts particular social harms on ethnic minorities, women, LGBTQ+ people, immigrants, and refugees. It also requires empowering workers on the front lines of disinformation mitigation to protect themselves.
For today's webinar, I'm drawing from my previous research.
This includes the 2018 report Architects of Networked Disinformation, which was based on interviews with strategists and the political trolls they enlisted in the 2016 Philippine elections.
The 2019 report Tracking Digital Disinformation focused on the 2019 elections and documented the shift from mega-influencers to micro-influencers and closed groups.
The 2020 study placed Southeast Asian elections in comparative perspective, applying a cross-regional approach: what can we learn from the Thai, Philippine, and Indonesian elections in terms of common narratives, common producers, and new election and social media laws?
My most recent study (2021) is about human rights workers in the Philippines and their (lack of) investment in strategic communications over the last four years, as they coped with attacks and conspiracy theories from state actors as well as online trolls. This report asks: to what extent are human rights organizations empowering the communication personnel and digital workers on their own teams to fight fake news? It is part of a larger Harvard project on "the true costs of disinformation," where we will account for not just the financial costs of disinfo to individual organizations but also the human costs and emotional labor of the workers tasked to do this as their job.
I'll be summarizing some of the relevant findings for you today.
Multistakeholder collaborations are important in order to identify the right kind of reporting and research needed to mitigate the impact of disinformation narratives. It's worthwhile to invest in researcher-journalist collaborations focused on "strategic silence" and de-escalation. Strategic silence refers to editorial decisions to minimize publicity and reporting on potentially extreme ideas and ideologies that could lead to targeted violence against certain communities.
The inherent problem with initiatives that focus only on fact-checking is that they might end up popularizing influencers and their extreme ideas, including COVID-19 conspiracies or racially targeted disinformation.
We should find ways to engage funders to expand the narrow terms of current fact-checking initiatives and facilitate spaces for critical dialogue, audit, and creative collaboration among fact-checkers, journalists, and academics. We should find ways to collaborate directly rather than work in siloed spaces. Current business-as-usual interventions, where we each do our own thing, have not proven effective enough.
Researchers in the region can be more active in cautioning journalists to practice "strategic silence" and quarantine extremist ideas and hateful expressions. Incentivizing critical collaborative spaces bridging journalists with academics might help guide the reporting of crisis events and mitigate hate speech.
We need to be more careful when reporting on vaccine hesitancy.
We need to be more mindful when naming influencers or strategists behind disinformation campaigns, as coverage might lead to them gaining more attention, publicity, and clients!
Structural approaches are also necessary for exposing disinformation shadow economies in Asia. In the US, diverse segments of the far right have real ideological investment behind the xenophobic and/or misogynistic online speech that aligns with their political agenda. In many Asian countries, however, particularly in Southeast Asia, many disinformation producers are financially motivated, with little ideological investment. This requires harnessing the array of tools of taxation and auditing, industry self-regulatory councils, and media monitoring to understand disinformation as an industry. We also need to investigate more deeply how related fields of practice -- such as search engine optimization, hacking, data analytics companies, meme page operators, and digital influencer agencies -- are responsible and/or complicit.
Empowering regional researchers can also help decenter the focus on Silicon Valley "big tech" and advance debate on the norms that should regulate highly popular Asian-owned platforms such as Line and Viber. Network-building initiatives that facilitate transnational ties between South/Southeast Asian diasporic researchers in the US and local researchers and civil society can also help unify accountability initiatives at various scales; discussions here can add nuance to how top-down legislative frameworks might be inconsistent with local cultural norms.
We should also discuss politics-profit work models.
We need to examine the social harms of disinformation, its porous boundaries with hate speech and conspiracy theory, and understand how it affects the frontline workers tasked to mitigate "fake news".
In the wake of COVID-19, anti-China racist speech and conspiracy theory surged in a global context, and Southeast Asian countries were unfortunately no exception. Rather than fact-checking these statements or calling people out, some journalists reproduced this hateful rhetoric on their own personal pages or republished conspiracy theory in national newspapers.
At times, online discourse slipped into racist expressions against Chinese people, posing threats to multicultural social relations.
Unfortunately, some journalists in Asian newsrooms have only doubled down on their decision not to fact-check these disinformation narratives, with some claiming that doing so would be a "false equivalence" or that "hate speech is not disinformation".
Hate speech and disinformation have porous boundaries and can lead to violence. Local journalists, activists, and academics need to develop a more sustained research agenda around hate speech and racism in Asian countries, attuned to the specific racial hierarchies and power dynamics in deep and recent historical context. Anti-racism efforts in newsrooms, civil society, and academic spaces are important investments in Asian countries, which have our own blind spots around race and its intersections with class, gender, and sexuality.
We also need to resist national security frames from funders that misleadingly cast 'foreign interference' as a Russia or China issue, as this is often a strategy to advance geopolitical agendas rather than attend to local realities.
What can organizations do to build capacity from within?
For one, organizations can invest in the communications personnel who are actually burdened with the task of responding to disinformation.
Our audit of Philippine human rights organizations reveals that human rights advocates recognized the monumental importance of rebuilding the relevance of human rights in the eyes of a public bombarded with strategically coordinated "fake news" and anti-establishment narratives; however, very little material and human resource support has directly followed this realization.
Human rights actors continue to treat communication as a set of disparate tools, activities or platforms, rather than a long-term and cohesive strategy to take back control of broad political narratives and reshape publics’ engagement with and perception of human rights. While most organizations improvised and implemented “tweaks” in their public-facing communication, their lack of appreciation of and material investment in skills and people resulted in cosmetic, “one-off” efforts that handicap their ability to address the crisis effectively.
Communications and technology workers continue to play peripheral roles in human rights organizations and were seldom enlisted in overall strategic planning. Almost half (46%) of the organizations we interviewed had no staff member dedicated to communications or branding; some enlisted communications personnel on a per-project basis, while others only commissioned interns or volunteers without pay.
Organizations need to be strategic when deciding between frontlining and backchanneling in standing up to populist leaders. They should be prepared to pivot based on project objectives and lend support to allies who are under attack at a given moment.
Questions to end the discussion:
- How can pro-democracy coalitions be more diverse and inclusive?
- How can academics and journalists work together? Journalists may themselves be reluctant to antagonize those who control the corporate advertising money that their news agencies depend on.
- How can foreign donors lend support to local civil society and researchers to advocate for tech regulation "from below"?