
Using AI As A Weapon Against Disinformation


Given the stress that democracy is under today, technology is often made a scapegoat for the rising information disorder that is seen as a cause of this stress worldwide. Governments, as well as academics, non-profits and journalists, turn their attention to technologies such as bots and social media in order to analyze the causes better. At the same time, given the proliferation of new technologies such as machine learning and artificial intelligence (AI) in our daily lives, technology is also seen as a potential solution for alleviating the trust deficit in today’s media ecosystem.

Placing technology at the core of today’s media ecosystem implies that technology is the root cause of the disinformation problem. Yet if that were the case, there would be no asymmetry in media practices across the globe. Today’s media and information problems therefore cannot be explained by technology alone. Before jumping to conclusions, we would do better to accept technology as a double-edged sword.

With the rise of AI, several major threats to the quality of information should be taken into account:

  • Fake news: The ease of manipulating audio and video files damages the reputation of influential actors, as seemingly authentic videos of them can be fabricated and used to spread fake news.
  • Social media marketing: Given the vast amounts of user data available on social media, algorithms can steer the content flowing to individual accounts, making users’ opinions and behaviors easy to manipulate.
  • Algorithmic curation: Platform algorithms, continually refined through dynamic reinforcement learning, can deepen the gaps between audiences, so that users eventually find themselves exposed to ever more extreme versions of their own beliefs and opinions.
  • Bots: Because bots automate activity at scale, the real organizers of some social media campaigns can go unnoticed despite their large-scale impact.
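The curation dynamic above can be illustrated with a toy feedback loop. The engagement model, the drift rule, and all numbers below are illustrative assumptions for the sake of the sketch, not a description of any real platform’s algorithm:

```python
def curate(preference, catalog, n=5):
    """Toy curator: rank items by predicted engagement and return the top n.
    Items are just extremity scores in [0, 1]; engagement is modeled as
    peaking slightly above the user's current preference (a mild novelty
    bias -- an assumption made purely for illustration)."""
    def predicted_engagement(item):
        return -abs(item - (preference + 0.1))
    return sorted(catalog, key=predicted_engagement, reverse=True)[:n]

def simulate(steps=10):
    """Run the feedback loop: the feed nudges the user, the user's new
    preference shapes the next feed, and the feed drifts toward extremes."""
    catalog = [i / 100 for i in range(101)]  # extremity scores 0.00 .. 1.00
    preference = 0.2                          # user starts near the middle
    history = [preference]
    for _ in range(steps):
        feed = curate(preference, catalog)
        # Assume the user's preference drifts to the mean of what was shown.
        preference = sum(feed) / len(feed)
        history.append(round(preference, 3))
    return history
```

Running `simulate()` shows the preference ratcheting upward step by step even though each individual recommendation looks only mildly more extreme than the last — the essence of the reinforcement dynamic described above.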

These points should not be read as implying that algorithms do not matter. Needless to say, algorithms have an impact on disinformation and fake news, since they act as gatekeepers for content. To give a specific example, disinformation is far more extensive on Facebook than on Twitter or the open web, and despite Facebook’s efforts to improve its algorithms, new offenders have quickly replaced old ones.

Given this persistent manipulation of the media, individuals can barely resist online trolls, propaganda, or the interference of foreign powers within the digital ecosystem. One of the main challenges in countering such instances is the technical difficulty of identifying them in the first place. In addition, the lack of a baseline against which to rank the importance of a particular phenomenon makes it all the more challenging to design effective interventions. Given the millions of users affected by the flow of misleading articles and content, it should come as no surprise that these misleading stories also change user behavior.

One common mistake when tackling such complicated information issues is to confuse the emergence of a new phenomenon with its real impact upon the world; newness does not necessarily imply impact. Discovering a phenomenon is crucial, but discovery alone is not evidence of that phenomenon’s impact.

Given the rapid proliferation of manipulative technologies into the digital ecosystem, users’ anxiety levels are also on the rise. It remains to be seen whether synthetic media, such as fabricated audio and video, will further exacerbate today’s information crisis. Unless fabricated media spreads differently from other deceptive media, there is no reason it should not be subject to the same influential media, political and social forces. After all, social identity forms the basis of beliefs about political matters.

As every cloud has a silver lining, we can also choose to look at the current issues from the bright side and focus on the following ways in which AI can help resolve these quality issues:

  • Given AI’s automated detection capabilities, algorithmic tools can guide users towards fact-checks and corrections.
  • Algorithmic tools can defend against coordinated attacks by online propagandists.
  • Users can be “nudged” by computational aids that expose them to content streams outside their echo chambers, opening their eyes to other perspectives.
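The “nudging” idea in the last bullet can be sketched as a simple feed re-ranker. The stance labels, the slot-reservation rule, and the function itself are hypothetical simplifications; real systems weigh far more signals:

```python
from collections import Counter

def nudge_feed(ranked, n=5, diversity_slots=2):
    """Build an n-item feed from an engagement-ranked list of
    (title, stance) pairs, reserving `diversity_slots` positions for
    stances missing from the top picks -- a toy version of nudging users
    outside their echo chamber."""
    feed = list(ranked[: n - diversity_slots])
    shown = Counter(stance for _, stance in feed)
    # First pass: fill reserved slots with stances not yet represented.
    for item in ranked[n - diversity_slots:]:
        if len(feed) == n:
            break
        if shown[item[1]] == 0:
            feed.append(item)
            shown[item[1]] += 1
    # Backfill with the next best items if too few novel stances existed.
    for item in ranked[n - diversity_slots:]:
        if len(feed) == n:
            break
        if item not in feed:
            feed.append(item)
    return feed
```

For example, given a ranked list dominated by one stance, the reserved slots surface the first “right” and “center” items even though pure engagement ranking would have buried them.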

It would be naive to expect all these tools and technologies to solve the problem of disinformation and misinformation simply by surfacing the truth. One point worth emphasizing repeatedly is that the meaning of stories emerges within a network of frames and other stories; misinterpretation is rooted in socio-political narratives and structures. Moreover, stories interpreted in materially misleading ways matter more than stories built on obviously falsifiable claims. Despite AI’s ability to channel individuals to the right resources, there is so far no evidence that AI can dismantle disinformation campaigns or overcome their negative effects.

So, where do we go from here? As with every complicated issue, we should not expect a silver bullet. Instead, we can continue to seek systemic responses to the systemic issues underpinning today’s mediascape. Focusing our attention on empowering civic and political institutions is a place to start. We should also bear in mind that an analysis of US-based media systems will not help us understand the dynamics and vulnerabilities of other cultural contexts.

Today’s democracy requires us to fight disinformation by strengthening media accountability. It is equally important to specify clearly what we require from social media companies in tackling disinformation.

Last, but not least, we must not only understand the nature of disinformation in depth, but also closely monitor the targeted efforts that expose users to it, in order to prevent its spread. Given that research capability and political reach are concentrated in the hands of a few private enterprises, there should be an ongoing effort to raise awareness among individuals and to enable motivated researchers to access key data sources so that robust research standards can be ensured.

Ayse Kok
Ayse completed her masters and doctorate degrees at both University of Oxford (UK) and University of Cambridge (UK). She participated in various projects in partnership with international organizations such as UN, NATO, and the EU. She also served as an adjunct faculty member at Bosphorus University in her home town Turkey. Furthermore, she is the editor of several international journals, including those for Springer, Wiley and Elsevier Science. She attended various international conferences as a speaker and published over 100 articles in both peer-reviewed journals and academic books. Having published 3 books in the field of technology & policy, Ayse is a member of the IEEE Communications Society, member of the IEEE Technical Committee on Security & Privacy, member of the IEEE IoT Community and member of the IEEE Cybersecurity Community. She also acts as a policy analyst for Global Foundation for Cyber Studies and Research. Currently, she lives with her family in Silicon Valley where she worked as a researcher for companies like Facebook and Google.

