AI-Driven Disinformation Campaigns: Detecting Synthetic Propaganda in Encrypted Messaging via Graph Neural Networks

Authors

  • Anil Kumar Pakina, Independent Researcher
  • Ashwin Sharma, Independent Researcher
  • Deepak Kejriwal, Independent Researcher

DOI:

https://doi.org/10.56127/ijst.v4i1.1960

Keywords:

Disinformation, Synthetic Propaganda, Encrypted Messaging, Graph Neural Networks, Influence Operations, Bot Networks, Coordinated Inauthentic Behavior, AI Ethics, Network Anomaly Detection, Social Cybersecurity

Abstract

The rapid rise of generative AI has fueled increasingly sophisticated disinformation campaigns, particularly on encrypted messaging platforms such as WhatsApp, Signal, and Telegram. While end-to-end encryption protects user privacy, it also places message content beyond the reach of traditional moderation. Adversaries exploit this opacity to disseminate synthetic propaganda that evades detection, influencing public opinion and destabilizing democratic processes while leaving little observable trace.

This research proposes a privacy-preserving detection framework using Graph Neural Networks (GNNs) that focuses on non-content-based signals—such as user interactions, message propagation patterns, temporal behavior, and metadata. GNNs effectively capture relational and structural patterns in encrypted environments, allowing for the detection of coordinated inauthentic behavior without breaching user privacy.
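
To make the shape of such a detector concrete, the following is a minimal sketch of a two-layer graph convolutional network that classifies accounts from metadata-only features propagated over an interaction graph. It assumes a PyTorch + PyTorch Geometric stack; the class name, feature set, dimensions, and labels are illustrative placeholders, not the authors' implementation.

    # Minimal sketch (assumed stack: PyTorch + PyTorch Geometric), not the
    # paper's implementation. Nodes are accounts; edges are observed
    # interactions (e.g., "forwarded a message to"); features are
    # metadata-only signals (message rate, burstiness, ...) -- placeholders.
    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv

    class BehaviorGNN(torch.nn.Module):
        def __init__(self, in_dim=8, hidden_dim=32, num_classes=2):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden_dim)   # mix each node's features with its neighbors'
            self.conv2 = GCNConv(hidden_dim, num_classes)

        def forward(self, x, edge_index):
            h = F.relu(self.conv1(x, edge_index))
            h = F.dropout(h, p=0.5, training=self.training)
            return self.conv2(h, edge_index)           # per-account logits: authentic vs. coordinated

    # Toy interaction graph: 4 accounts in a forwarding ring.
    x = torch.randn(4, 8)                              # metadata-only feature vectors
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 0]])
    y = torch.tensor([0, 0, 1, 1])                     # labels from a simulated campaign

    model = BehaviorGNN()
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(100):                               # standard supervised training loop
        opt.zero_grad()
        loss = F.cross_entropy(model(x, edge_index), y)
        loss.backward()
        opt.step()

Because only interaction structure and metadata enter the model, message bodies never need to be decrypted, which is the privacy-preserving property the framework depends on.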

Experiments on a large-scale simulated dataset of encrypted messaging scenarios showed that the GNN-based framework achieved 94.2% accuracy and a 92.8% F1-score, outperforming traditional methods like random forests and LSTMs. It was particularly effective in identifying stealthy, low-frequency disinformation campaigns typically missed by conventional anomaly detectors.
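
The reported figures correspond to standard classification metrics computed over held-out nodes. For readers reproducing the comparison, a hedged illustration of the evaluation step is shown below; the labels and predictions are placeholders, not the paper's simulated data.

    # Assumed evaluation recipe using scikit-learn; y_true/y_pred are
    # placeholders standing in for held-out ground truth and model output.
    from sklearn.metrics import accuracy_score, f1_score

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth campaign labels
    y_pred = [1, 0, 1, 0, 0, 1, 0, 0]   # classifier outputs on held-out accounts
    print("accuracy:", accuracy_score(y_true, y_pred))
    print("F1:", f1_score(y_true, y_pred))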

Positioned at the intersection of AI security, privacy, and disinformation detection, this study offers a scalable, privacy-respecting approach to safeguarding digital spaces. It also opens a dialogue on the legal and ethical implications of behavioral surveillance on encrypted platforms, connecting to broader conversations on responsible AI, digital rights, and democratic resilience.


Published

2025-03-17

How to Cite

Anil Kumar Pakina, Ashwin Sharma, & Deepak Kejriwal. (2025). AI-Driven Disinformation Campaigns: Detecting Synthetic Propaganda in Encrypted Messaging via Graph Neural Networks. International Journal Science and Technology, 4(1), 12–24. https://doi.org/10.56127/ijst.v4i1.1960
