attackbench/attackbench.github.io

About

The AttackBench framework fairly compares gradient-based attacks against machine learning models. Its goal is to identify the most reliable attack for assessing a model's robustness.
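
For illustration, the sketch below shows the kind of gradient-based robustness evaluation such a comparison is built on: a single-step FGSM attack and the resulting robust accuracy, written in PyTorch. This is not AttackBench's API; the model, data, and epsilon value are placeholders chosen only to make the example self-contained.

```python
# Illustrative only: a minimal FGSM-style gradient-based attack in PyTorch,
# not AttackBench's actual API. The model and data below are toy placeholders.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft adversarial examples with one signed-gradient step (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Step in the direction that increases the loss, then clip to a valid image range.
    return (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

def robust_accuracy(model, x, y, epsilon=8 / 255):
    """Fraction of inputs still classified correctly after the attack."""
    model.eval()
    preds = model(fgsm_attack(model, x, y, epsilon)).argmax(dim=1)
    return (preds == y).float().mean().item()

if __name__ == "__main__":
    # Toy stand-ins for a real classifier and dataset.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(16, 3, 32, 32)    # batch of "images" in [0, 1]
    y = torch.randint(0, 10, (16,))  # random labels
    print(f"robust accuracy under FGSM: {robust_accuracy(model, x, y):.2%}")
```

A stronger attack drives robust accuracy lower on the same model, which is why comparing attacks fairly matters when using them to estimate robustness.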
