Blog
-
How to prompt LLMs with private data?
by Haonan Duan, Adam Dziedzic, Nicolas Papernot, Franziska Boenisch, and Jovana Jankovic
-
Can stochastic pre-processing defenses protect your models?
by Yue Gao, Ilia Shumailov, Kassem Fawaz, and Nicolas Papernot
-
Are adversarial examples against proof-of-learning adversarial?
by Congyu Fang, Hengrui Jia, Varun Chandrasekaran, and Nicolas Papernot
-
How to Keep a Model Stealing Adversary Busy?
by Adam Dziedzic, Muhammad Ahmad Kaleem, and Nicolas Papernot
-
All You Need Is Matplotlib, or Federated Learning with Untrusted Servers is Not Private
-
Arbitrating the integrity of stochastic gradient descent with proof-of-learning
by Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, and Nicolas Papernot
-
Beyond federation: collaborating in ML with confidentiality and privacy
by Adam Dziedzic, Christopher A. Choquette-Choo, Natalie Dullerud, and Nicolas Papernot
-
Is this model mine?
by Pratyush Maini, Mohammad Yaghini, and Nicolas Papernot
-
To guarantee privacy, focus on the algorithms, not the data
by Aleksandar Nikolov and Nicolas Papernot
-
Teaching Machines to Unlearn
by Christopher A. Choquette-Choo, Varun Chandrasekaran, and Nicolas Papernot
-
In Model Extraction, Don’t Just Ask ‘How?’: Ask ‘Why?’
by Matthew Jagielski and Nicolas Papernot
-
How to steal modern NLP systems with gibberish?
by Kalpesh Krishna and Nicolas Papernot
-
How to know when machine learning does not know
by Nicolas Papernot and Nicholas Frosst
-
Machine Learning with Differential Privacy in TensorFlow
by Nicolas Papernot
-
Privacy and machine learning: two unexpected allies?
by Nicolas Papernot and Ian Goodfellow
-
The challenge of verification and testing of machine learning
by Ian Goodfellow and Nicolas Papernot
-
Is attacking machine learning easier than defending it?
by Ian Goodfellow and Nicolas Papernot
-
Breaking things is easy
by Nicolas Papernot and Ian Goodfellow