Ahmed Salem
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
A Salem, Y Zhang, M Humbert, P Berrang, M Fritz, M Backes
Annual Network and Distributed System Security Symposium (NDSS), 2019
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
J Jia, A Salem, M Backes, Y Zhang, NZ Gong
ACM SIGSAC Conference on Computer and Communications Security (CCS), 2019
BadNL: Backdoor Attacks Against NLP Models with Semantic-Preserving Improvements
X Chen, A Salem, D Chen, M Backes, S Ma, Q Shen, Z Wu, Y Zhang
Annual Computer Security Applications Conference (ACSAC), 554-569, 2021
Dynamic Backdoor Attacks Against Machine Learning Models
A Salem, R Wen, M Backes, S Ma, Y Zhang
2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P), 703-718, 2022
Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
A Salem, A Bhattacharya, M Backes, M Fritz, Y Zhang
USENIX Security Symposium, 2019
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
L Hanzlik, Y Zhang, K Grosse, A Salem, M Augustin, M Backes, M Fritz
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Y Liu, R Wen, X He, A Salem, Z Zhang, M Backes, E De Cristofaro, M Fritz, ...
31st USENIX Security Symposium (USENIX Security 22), 4525-4542, 2022
Analyzing Leakage of Personally Identifiable Information in Language Models
N Lukas, A Salem, R Sim, S Tople, L Wutschitz, S Zanella-Béguelin
arXiv preprint arXiv:2302.00539, 2023
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models
A Salem, Y Sautter, M Backes, M Humbert, Y Zhang
arXiv preprint arXiv:2010.03007, 2020
Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks
A Salem, M Backes, Y Zhang
arXiv preprint arXiv:2010.03282, 2020
Privacy-Preserving Similar Patient Queries for Combined Biomedical Data.
A Salem, P Berrang, M Humbert, M Backes
Proceedings on Privacy Enhancing Technologies (PoPETs) 2019 (1), 47-67, 2019
Get a Model! Model Hijacking Attack Against Machine Learning Models
A Salem, M Backes, Y Zhang
arXiv preprint arXiv:2111.04394, 2021
Bayesian Estimation of Differential Privacy
S Zanella-Béguelin, L Wutschitz, S Tople, A Salem, V Rühle, A Paverd, ...
International Conference on Machine Learning (ICML), 40624-40636, 2023
UnGANable: Defending Against GAN-based Face Manipulation
Z Li, N Yu, A Salem, M Backes, M Fritz, Y Zhang
32nd USENIX Security Symposium (USENIX Security 23), 7213-7230, 2023
SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning
A Salem, G Cherubin, D Evans, B Köpf, A Paverd, A Suri, S Tople, ...
2023 IEEE Symposium on Security and Privacy (SP), 327-345, 2023
Two-in-One: A Model Hijacking Attack Against Text Generation Models
WM Si, M Backes, Y Zhang, A Salem
arXiv preprint arXiv:2305.07406, 2023
Dynamic Backdoor Attacks Against Deep Neural Networks
A Salem, R Wen, M Backes, S Ma, Y Zhang
Rethinking Privacy in Machine Learning Pipelines from an Information Flow Control Perspective
L Wutschitz, B Köpf, A Paverd, S Rajmohan, A Salem, S Tople, ...
arXiv preprint arXiv:2311.15792, 2023
Comprehensive Assessment of Toxicity in ChatGPT
B Zhang, X Shen, WM Si, Z Sha, Z Chen, A Salem, Y Shen, M Backes, ...
arXiv preprint arXiv:2311.14685, 2023
Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning
R Wen, T Wang, M Backes, Y Zhang, A Salem
arXiv preprint arXiv:2310.11397, 2023