Welcome to the cleverhans blog
This is a blog by Ian Goodfellow and Nicolas Papernot about security and privacy in machine learning.

If you came here looking for the open-source cleverhans library for benchmarking the vulnerability of machine learning models to adversarial examples, here is its GitHub repository.
If you were looking for the technical report associated with the cleverhans library, it is available here, and the BibTeX entry for it is:
@article{papernot2016cleverhans,
  title={cleverhans v1.0.0: an adversarial machine learning library},
  author={Papernot, Nicolas and Goodfellow, Ian and Sheatsley, Ryan and Feinman, Reuben and McDaniel, Patrick},
  journal={arXiv preprint arXiv:1610.00768},
  year={2016}
}
Here is a list of all entries in our blog.

How to prompt LLMs with private data?

Can stochastic preprocessing defenses protect your models?

Are adversarial examples against proof-of-learning adversarial?

How to Keep a Model Stealing Adversary Busy?

All You Need Is Matplotlib

Arbitrating the integrity of stochastic gradient descent with proof-of-learning

Beyond federation: collaborating in ML with confidentiality and privacy

Is this model mine?

To guarantee privacy, focus on the algorithms, not the data

Teaching Machines to Unlearn

In Model Extraction, Don’t Just Ask ‘How?’: Ask ‘Why?’

How to steal modern NLP systems with gibberish?

How to know when machine learning does not know

Machine Learning with Differential Privacy in TensorFlow

Privacy and machine learning: two unexpected allies?

The challenge of verification and testing of machine learning

Is attacking machine learning easier than defending it?

Breaking things is easy
subscribe via RSS