
1-Diffractor: Efficient and Utility-Preserving Text Obfuscation Leveraging Word-Level Metric Differential Privacy


The study of privacy-preserving Natural Language Processing (NLP) has gained increasing attention in recent years. One promising avenue studies the integration of Differential Privacy in NLP, which has brought about innovative methods in a variety of application settings. Of particular note are word-level Metric Local Differential Privacy (MLDP) mechanisms, which work to obfuscate potentially sensitive input text by performing word-by-word perturbations. Although these methods have shown promising results in empirical tests, there are two major drawbacks: (1) the inevitable loss of utility due to the addition of noise, and (2) the computational expense of running these mechanisms on high-dimensional word embeddings. In this work, we aim to address these challenges by proposing 1-Diffractor, a new mechanism that achieves high speedups in comparison to previous mechanisms, while still demonstrating strong utility- and privacy-preserving capabilities. We evaluate 1-Diffractor for utility on several NLP tasks, for theoretical and task-based privacy, and for efficiency in terms of speed and memory. 1-Diffractor shows significant improvements in efficiency, while still maintaining competitive utility and privacy scores across all conducted comparative tests against previous MLDP mechanisms. Our code is made available at: https://github.com/sjmeis/Diffractor.
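To make the word-by-word perturbation setting concrete, below is a minimal sketch of a generic word-level metric-DP mechanism of the kind this work builds on: add calibrated noise to a word's embedding, then release the nearest vocabulary word. This is not the authors' 1-Diffractor implementation, and all names are illustrative; it assumes a plain dictionary of word embeddings.

import numpy as np

def sample_noise(dim: int, epsilon: float, rng: np.random.Generator) -> np.ndarray:
    """Multivariate-Laplace-style noise: a uniformly random direction
    scaled by a Gamma(dim, 1/epsilon)-distributed magnitude."""
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=dim, scale=1.0 / epsilon)
    return direction * magnitude

def perturb_word(word: str,
                 embeddings: dict[str, np.ndarray],
                 epsilon: float,
                 rng: np.random.Generator) -> str:
    """Obfuscate one word: perturb its embedding, then return the
    nearest neighbor in the vocabulary (possibly the word itself)."""
    vec = embeddings[word]
    noisy = vec + sample_noise(len(vec), epsilon, rng)
    vocab = list(embeddings)
    dists = [np.linalg.norm(noisy - embeddings[w]) for w in vocab]
    return vocab[int(np.argmin(dists))]

def obfuscate(text: str, embeddings: dict[str, np.ndarray],
              epsilon: float, seed: int = 0) -> str:
    """Word-by-word perturbation of an input text; out-of-vocabulary
    tokens are passed through unchanged."""
    rng = np.random.default_rng(seed)
    return " ".join(
        perturb_word(tok, embeddings, epsilon, rng) if tok in embeddings else tok
        for tok in text.split()
    )

Note that the exhaustive nearest-neighbor search over high-dimensional embeddings in perturb_word, repeated for every word, is precisely the computational expense (drawback 2 above) that 1-Diffractor targets.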

Files and Subpages

Name                 Size     Last Modification
3643651.3659896.pdf  2.29 MB  25.06.2024