The Role of De-identification in AI-Powered Zero Trust Architectures for Data Privacy Compliance

Authors

  • Mukul Mangla, Independent Researcher

DOI:

https://doi.org/10.56127/ijst.v2i2.2310

Keywords:

Data protection, GDPR, Differential privacy, Anonymization, Data privacy compliance, Zero Trust, de-identification

Abstract

The rapid adoption of artificial intelligence (AI) in enterprise settings has transformed the way organizations handle, process, and protect sensitive information. This acceleration, however, has introduced new risks related to privacy, compliance, and cybersecurity. Established perimeter-based security models have become less effective against advanced cyber threats and insider risks, driving the rise of Zero Trust Architectures (ZTA) as a security paradigm. Meanwhile, strict regulatory regimes such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the California Consumer Privacy Act (CCPA) emphasize de-identification as a key safeguard for sensitive data. De-identification, which encompasses anonymization, pseudonymization, and differential privacy, is a crucial element in supporting secure data processing without sacrificing analytical utility. This paper analyzes how de-identification can be used within AI-based Zero Trust systems to achieve compliance with international data privacy laws. Based on a review of the literature, industry publications, and regulatory standards, the paper presents a conceptual framework for incorporating de-identification methods into ZTA environments to reduce the risks of data leakage, adversarial attacks, and non-compliance. The results show that de-identification not only strengthens compliance but also improves AI-based monitoring and detection capabilities in Zero Trust ecosystems. By merging privacy engineering with AI-enabled ZTA, this work offers a new perspective on building resilient, compliance-oriented, and ethically grounded data security architectures.
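The abstract names pseudonymization and differential privacy among the core de-identification techniques. The sketch below is not taken from the paper; it is a minimal, self-contained illustration of two standard building blocks the abstract refers to: keyed pseudonymization via HMAC-SHA256 (a common choice, since only the key holder can link pseudonyms back to identities) and the classic Laplace mechanism for ε-differentially-private numeric releases. Function names and parameters are illustrative assumptions, not part of the framework proposed in the paper.

```python
import hashlib
import hmac
import math
import random

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, linkage back to the original identifier
    requires the secret key, which supports GDPR-style pseudonymization.
    """
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a numeric query answer with Laplace noise.

    Noise scale = sensitivity / epsilon, the standard calibration for
    epsilon-differential privacy on a query with the given L1 sensitivity.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse-CDF method from a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: pseudonymize a user ID, then publish a noisy count.
key = b"enterprise-kms-key"          # in practice, held by a key-management service
token = pseudonymize("alice@example.com", key)
noisy_count = laplace_mechanism(true_value=1024.0, sensitivity=1.0, epsilon=0.5)
```

In a Zero Trust pipeline, transformations like these would typically sit at the policy-enforcement point, so downstream AI monitoring services only ever see pseudonymized or noised records.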

Published

2023-05-28

How to Cite

Mangla, M. (2023). The Role of De-identification in AI-Powered Zero Trust Architectures for Data Privacy Compliance. International Journal Science and Technology, 2(2), 95–106. https://doi.org/10.56127/ijst.v2i2.2310
