Browser fingerprinting: how to protect machine learning models and data with differential privacy?

As modern communication networks grow more and more complex, manually maintaining an overview of the deployed software and hardware is challenging. Mechanisms such as fingerprinting are used to automatically extract information from ongoing network traffic and map it to a specific device or application, e.g., a browser. Active approaches interfere directly with the traffic and pose security risks, or are simply infeasible. Passive approaches, which only monitor traffic, are therefore employed, but they require a well-designed feature set since less information is available. Even these passive approaches, however, pose privacy risks: browser identification from encrypted traffic may leak data such as users' browsing histories. We propose a passive browser fingerprinting method based on explainable features and evaluate two privacy protection mechanisms, namely differentially private classifiers and differentially private data generation. With a differentially private Random Decision Forest, we achieve an accuracy of 0.877. If we instead train a non-private Random Forest on differentially private synthetic data, we reach an accuracy of up to 0.887, showing a reasonable trade-off between utility and privacy.
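The second mechanism, training a non-private Random Forest on differentially private synthetic data, can be illustrated with a minimal sketch: add Laplace noise to a joint histogram of binned features and labels, then sample synthetic records from the noisy histogram. This is an assumed, generic DP synthesizer for illustration only, not the generator evaluated in the paper; the function name, bin count, and ε are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dp_synthesize(X, y, epsilon, n_synth, bins=4, rng=None):
    """Differentially private synthetic data via the Laplace mechanism:
    perturb a joint histogram of binned features and labels, then sample
    synthetic records from it. Illustrative sketch, not the paper's method."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    labels = np.unique(y)
    # Per-feature bin edges (treated as public knowledge in this sketch).
    edges = [np.quantile(X[:, j], np.linspace(0, 1, bins + 1)) for j in range(d)]
    codes = np.stack(
        [np.searchsorted(edges[j][1:-1], X[:, j]) for j in range(d)], axis=1
    )
    # Joint counts over (feature bins..., label); each record touches
    # exactly one cell, so the L1 sensitivity of the histogram is 1.
    shape = (bins,) * d + (len(labels),)
    counts = np.zeros(shape)
    for i in range(n):
        counts[tuple(codes[i]) + (int(np.searchsorted(labels, y[i])),)] += 1
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=shape)
    probs = np.clip(noisy, 0, None).ravel()
    probs /= probs.sum()
    # Sample cells, then map each cell back to bin midpoints and a label.
    cells = rng.choice(probs.size, size=n_synth, p=probs)
    multi = np.array(np.unravel_index(cells, shape)).T
    X_synth = np.column_stack(
        [(edges[j][multi[:, j]] + edges[j][multi[:, j] + 1]) / 2 for j in range(d)]
    )
    return X_synth, labels[multi[:, -1]]

# Toy usage: two well-separated classes; the non-private Random Forest
# is trained only on the differentially private synthetic sample.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.repeat([0, 1], 200)
X_s, y_s = dp_synthesize(X, y, epsilon=1.0, n_synth=400, rng=rng)
clf = RandomForestClassifier(random_state=0).fit(X_s, y_s)
```

Because the classifier only ever sees the noisy synthetic sample, the post-processing property of differential privacy means the trained model inherits the ε-DP guarantee of the histogram release.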

Metadata
Author:Katharina Dietz, Michael Mühlhauser, Michael Seufert, Nicholas Gray, Tobias Hoßfeld, Dominik Herrmann
URN:urn:nbn:de:bvb:384-opus4-1073461
Frontdoor URL:https://opus.bibliothek.uni-augsburg.de/opus4/107346
ISSN:1863-2122
Parent Title (English):1st International Workshop on Machine Learning in Networking (MaLeNe), part of the Conference on Networked Systems 2021 (NetSys 2021), September 13-16, 2021, Lübeck, Germany
Publisher:Universitätsbibliothek TU Berlin
Place of publication:Berlin
Editor:Mathias Fischer, Winfried Lamersdorf
Type:Conference Proceeding
Language:English
Year of first Publication:2021
Publishing Institution:Universität Augsburg
Release Date:2023/10/11
Series:Electronic Communications of the EASST ; 80
DOI:https://doi.org/10.14279/tuj.eceasst.80.1179
Institutes:Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für vernetzte eingebettete Systeme und Kommunikationssysteme
Dewey Decimal Classification:0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science
Licence (German):Deutsches Urheberrecht (German copyright law)