AI-Driven Disinformation Campaigns: Detecting Synthetic Propaganda in Encrypted Messaging via Graph Neural Networks

Authors

  • Anil Kumar Pakina Independent Researcher
  • Ashwin Sharma Independent Researcher
  • Deepak Kejriwal Independent Researcher

DOI:

https://doi.org/10.56127/ijst.v4i1.1960

Keywords:

False Information, Constructed Propaganda, Secure Message Transmission, Graph Neural Networks, Influence Operations, Bot Networks, Coordinated Inauthentic Behavior, AI Ethics, Network Anomaly Detection, Social Cybersecurity

Abstract

The rapid rise of generative AI has fueled increasingly sophisticated disinformation campaigns, particularly on encrypted messaging platforms such as WhatsApp, Signal, and Telegram. While end-to-end encryption on these platforms protects user privacy, it also poses significant challenges to traditional content moderation. Adversaries exploit this privacy to disseminate synthetic propaganda that is difficult to detect, influencing public opinion and destabilizing democratic processes while leaving little observable trace.

This research proposes a privacy-preserving detection framework using Graph Neural Networks (GNNs) that focuses on non-content-based signals—such as user interactions, message propagation patterns, temporal behavior, and metadata. GNNs effectively capture relational and structural patterns in encrypted environments, allowing for the detection of coordinated inauthentic behavior without breaching user privacy.
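The behavioral approach described above can be illustrated with a minimal message-passing sketch. This is not the authors' implementation: the graph, the metadata features, and the single mean-aggregation GNN layer below are hypothetical choices in plain NumPy, shown only to make the idea of propagating non-content signals over a user-interaction graph concrete.

```python
import numpy as np

def gnn_layer(A, X, W_self, W_neigh):
    """One round of mean-aggregation message passing with a ReLU:
    each node combines its own metadata features with the average
    features of the users it interacts with."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # avoid divide-by-zero for isolated users
    neigh = (A @ X) / deg                    # mean of neighbors' features
    return np.maximum(0.0, X @ W_self + neigh @ W_neigh)

# Toy interaction graph: 4 users; an edge means one user forwarded
# messages to another (no message content is used anywhere).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)

# Hypothetical per-user metadata features: [messages per hour, forward ratio].
X = np.array([[5.0, 0.90],
              [6.0, 0.80],
              [5.0, 0.95],
              [0.5, 0.10]])

rng = np.random.default_rng(0)
W_self = rng.normal(size=(2, 4))
W_neigh = rng.normal(size=(2, 4))

H = gnn_layer(A, X, W_self, W_neigh)         # learned node embeddings, shape (4, 4)
```

In a full system, embeddings like `H` would feed a classifier that flags densely connected clusters of users with synchronized, high-volume forwarding behavior as candidate coordinated campaigns.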

Experiments on a large-scale simulated dataset of encrypted messaging scenarios showed that the GNN-based framework achieved 94.2% accuracy and a 92.8% F1-score, outperforming traditional methods like random forests and LSTMs. It was particularly effective in identifying stealthy, low-frequency disinformation campaigns typically missed by conventional anomaly detectors.

Positioned at the intersection of AI security, privacy, and disinformation detection, this study introduces a scalable and ethical solution for safeguarding digital spaces. It also initiates dialogue on the legal and ethical implications of behavioral surveillance in encrypted platforms and aligns with broader conversations on responsible AI, digital rights, and democratic resilience.

Published

2025-03-17

How to Cite

Anil Kumar Pakina, Ashwin Sharma, & Deepak Kejriwal. (2025). AI-Driven Disinformation Campaigns: Detecting Synthetic Propaganda in Encrypted Messaging via Graph Neural Networks. International Journal Science and Technology, 4(1), 12–24. https://doi.org/10.56127/ijst.v4i1.1960
