Workshop on Adversarial Machine
Learning and Beyond

Workshop at AAAI 2022

February 28th, 2022

Overview

Though machine learning (ML) approaches have demonstrated impressive performance on various applications and driven significant progress in artificial intelligence (AI), the potential vulnerability of ML models to malicious attacks (e.g., adversarial/poisoning attacks) has raised severe concerns in safety-critical applications. For example, adding a small amount of noise to an image can cause an ML model to misclassify it as another category.
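
As a concrete illustration, the sketch below crafts such a perturbation with the fast gradient sign method (FGSM), a standard attack; the model, image, label, and perturbation budget epsilon are placeholders, not part of this workshop's materials.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, image, label, epsilon=8/255):
        # image: (N, 3, H, W) with pixels in [0, 1]; label: (N,) class indices.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step along the sign of the loss gradient so the loss on the true
        # label increases, then clip back to the valid pixel range.
        adv = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
        return adv.detach()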

Such adversarial examples can be generated in numerous domains, including image classification, object detection, speech recognition, natural language processing, graph representation learning, and physical-world systems such as self-driving cars. Without being alarmist, researchers in ML and AI have a responsibility to preempt attacks and build safeguards, especially when a task is critical to information security or human lives.

Adversarial ML techniques can also raise data privacy and ethical issues when ML is deployed in real-world applications. Counter-intuitive behaviors of ML models erode public trust in AI, and a rethinking of machine learning/deep learning methods may be urgently needed. This workshop aims to discuss important topics in adversarial ML and to bridge academia with industry, algorithm design with policy making, and short-term approaches with long-term solutions, deepening our understanding of ML models in adversarial environments and helping build reliable ML systems in the real world.

Paper Submissions
Paper submission deadline
Notification to authors
Camera-ready deadline
Call for Papers

Topics

We welcome submissions on all aspects of adversarial ML, including but not limited to:
Malicious attacks on ML models that reveal their vulnerabilities in black-box/real-world scenarios.
The positive/negative social impacts and ethical issues related to adversarial ML.
Novel algorithms and theories to improve model robustness.
The consideration and experience of adversarial ML from industry and policy making.
Benchmarks to reliably evaluate attack and defense methods and measure the real progress of the field.
Positive applications of adversarial ML, i.e., adversarial for good.
Theoretical understanding of adversarial ML and its connection to other areas.
We consider submissions that have not been published in any peer-reviewed venue; submissions concurrently under review elsewhere are allowed. Accepted papers will be allocated either a contributed talk or a poster presentation. Submissions, including full papers (6-8 pages) and short papers (2-4 pages), should be anonymized and follow the AAAI-22 Formatting Instructions (two-column format) at https://www.aaai.org/Publications/Templates/AuthorKit22.zip.
Paper Awards: We will offer a total of $3,000 for the two to three best paper awards.
Best Paper Awards
We offer $1,000 for each of the following papers.
Attention for Adversarial Attacks: Learning from your Mistakes. Florian Jaeckle, Aleksandr Agadzhanov, Jingyue Lu, M. Pawan Kumar.
Demystifying the Adversarial Robustness of Random Transformation Defenses. Chawin Sitawarin, Zachary Golan-Strieb, David Wagner.
Revisiting Adversarial Robustness of Classifiers With a Reject Option. Jiefeng Chen, Jayaram Raghuram, Jihye Choi, Xi Wu, Yingyu Liang, Somesh Jha.
Competition

Data-Centric Robust Learning on ML Models

Introduction

Current machine learning competitions mostly seek a high-performance model given a fixed dataset, while the recent Data-Centric AI Competition (https://https-deeplearning-ai.github.io/data-centric-comp/) inverts the traditional format and aims to improve a dataset given a fixed model. Similarly, in robust learning, many defenses have been proposed for deep learning models to mitigate the threat of adversarial examples, but most of them strive for a high-performance model under fixed constraints and datasets. How to construct a dataset that is universally effective for training robust models has therefore not been extensively explored. To accelerate research on data-centric techniques for adversarial robustness in image classification, we organize this competition with the purpose of developing novel data-centric algorithms, such as data augmentation, label refinement, crafting adversarial data, or even designing knowledge-fusion algorithms that draw on other datasets. Participants are encouraged to freely develop novel data-centric techniques for training robust ML models.
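
As a minimal sketch of what such a data-centric technique might look like (not part of the official competition pipeline; the augmentations and smoothing value are illustrative assumptions), one could combine standard augmentation with label refinement on CIFAR-10:

    import torch
    import torchvision
    import torchvision.transforms as T

    # Keep the fixed data budget, but store augmented copies of each image
    # and soften each label into a full label vector.
    augment = T.Compose([
        T.RandomCrop(32, padding=4),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
    ])
    train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                             download=True, transform=augment)

    def soft_label(y, num_classes=10, smoothing=0.1):
        # Label refinement via smoothing: label-vector submissions
        # need not be one-hot.
        vec = torch.full((num_classes,), smoothing / (num_classes - 1))
        vec[y] = 1.0 - smoothing
        return vec

    pairs = [(x, soft_label(y)) for x, y in train_set]
    data = torch.stack([x for x, _ in pairs])           # (50000, 3, 32, 32)
    label_vectors = torch.stack([v for _, v in pairs])  # (50000, 10)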

Models and Datasets

This competition consists of two stages.

Stage I: we choose two baseline networks trained on CIFAR-10:

1) ResNet-50: https://arxiv.org/abs/1512.03385

2) DenseNet-121: https://arxiv.org/abs/1608.06993

We will use the data points and corresponding label vectors from each submission to train the models; meanwhile, participants can specify certain training settings, including the optimizer, weight decay, learning rate, and number of training epochs. After the training phase, we evaluate the submissions for the public leaderboard using a private test set based on CIFAR-10.
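
A rough sketch of how such a training phase could look is given below; the settings keys ("optimizer", "learning_rate", "weight_decay", "epochs") are our assumptions about the submission schema, and the organizers' actual pipeline may differ:

    import torch

    def train_on_submission(model, data, label_vectors, settings):
        # Build the optimizer class from the participant-specified name
        # (e.g., "SGD" or "Adam") with the submitted hyperparameters.
        opt_cls = getattr(torch.optim, settings.get("optimizer", "SGD"))
        opt = opt_cls(model.parameters(),
                      lr=settings["learning_rate"],
                      weight_decay=settings["weight_decay"])
        loader = torch.utils.data.DataLoader(
            torch.utils.data.TensorDataset(data, label_vectors),
            batch_size=128, shuffle=True)
        for _ in range(settings["epochs"]):
            for x, y in loader:
                opt.zero_grad()
                # Cross-entropy against (possibly soft) label vectors.
                loss = -(y * torch.log_softmax(model(x), dim=1)).sum(1).mean()
                loss.backward()
                opt.step()
        return model

Training against full label vectors rather than class indices is what lets submitted label refinements (e.g., smoothing) take effect.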

Stage II: the top 50 participants from Stage I will enter Stage II. In this stage, we will evaluate submissions on another private test set based on CIFAR-10, and the chosen models will also differ from those in Stage I. Moreover, in Stage II participants are only allowed to adjust the training parameters; the data points and corresponding label vectors are fixed after Stage I.

Note that in Stage I, a portion of the data points in our private test set will come from the standard CIFAR-10 test set, but we do not recommend that participants attempt to incorporate the CIFAR-10 test set into their submissions or to probe the contents of the private test set. We will change the test set in Stage II, and we will also inspect the winners' final programs. Participants are encouraged to design general and effective data-centric techniques that improve the models' performance.

We train the models based on the submissions and obtain the classification rate R (higher is better), which is computed by the following formula:

\[ R = \frac{1}{|M|} \sum_{m \in M} \frac{1}{|X|} \sum_{(x, y) \in X} \mathbb{1}\left[ m(x) = y \right], \]

where M is the set of all trained models and X is the evaluation dataset.
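
In code, this score is simply the mean classification accuracy over all trained models; a minimal sketch (batching omitted for brevity):

    import torch

    def score(models, eval_data, eval_labels):
        # Average classification rate over all trained models
        # (higher is better).
        accs = []
        with torch.no_grad():
            for m in models:
                preds = m(eval_data).argmax(dim=1)
                accs.append((preds == eval_labels).float().mean().item())
        return sum(accs) / len(accs)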

Whenever multiple submissions obtain the same score, they will be ranked by the number of data points used (fewer is better).

Submission Format

Each submission is a zip archive of a dataset, containing no more than 50,000 data points (the same size as the CIFAR-10 training set) with their corresponding label vectors, plus the training settings for every model, including the optimizer, weight decay, learning rate, and training epochs. More details will be announced soon.
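
Since the official format is still pending, the following sketch is purely hypothetical; every file name and settings key below is an illustrative assumption:

    import json
    import zipfile
    import numpy as np

    # Placeholder contents; a real submission would hold the crafted dataset.
    data = np.zeros((50000, 3, 32, 32), dtype=np.float32)    # data points
    label_vectors = np.zeros((50000, 10), dtype=np.float32)  # label vectors
    settings = {
        "ResNet50":    {"optimizer": "SGD", "learning_rate": 0.1,
                        "weight_decay": 5e-4, "epochs": 100},
        "DenseNet121": {"optimizer": "SGD", "learning_rate": 0.1,
                        "weight_decay": 5e-4, "epochs": 100},
    }
    np.save("data.npy", data)
    np.save("labels.npy", label_vectors)
    with open("settings.json", "w") as f:
        json.dump(settings, f)
    with zipfile.ZipFile("submission.zip", "w") as zf:
        for name in ("data.npy", "labels.npy", "settings.json"):
            zf.write(name)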

Competition Site
Invited Speakers
Florian Tramèr
Stanford
Jingfeng Zhang
RIKEN
Maksym Andriushchenko
EPFL
Hadi Salman
MIT
Alex Lamb
MILA
Chaowei Xiao
Arizona State University
Xiaofeng Mao
Alibaba
Schedule

All times are in Pacific Time (PT).
02:50pm - 03:00pm
Opening Remarks: Yinpeng Dong
Session 1
03:00pm - 03:30pm
Invited Talk #1: Maksym Andriushchenko: “Adversarially robust image attribution against fake images”
03:30pm - 04:00pm
Invited Talk #2: Florian Tramèr
04:00pm - 04:10pm
Contributed Talk #1: Florian Jaeckle: “Attention for Adversarial Attacks: Learning from your Mistakes”
04:10pm - 04:20pm
Contributed Talk #2: Zihao Wang: “Goal-Oriented Data-Centric Robust Learning”
04:20pm – 04:30pm
Contributed Talk #3: Wenkai Zheng: “Data Enhancement with Multiple Adversarial Perturbation Constraints”
Session 2
04:30pm - 05:00pm
Invited Talk #3: Alex Lamb
05:00pm - 05:30pm
Invited Talk #4: Hadi Salman
05:30pm - 05:40pm
ARES-Bench Release: Xiaofeng Mao
05:40pm - 05:50pm
Contributed Talk #4: Jayaram Raghuram: “Revisiting Adversarial Robustness of Classifiers With a Reject Option”
05:50pm - 06:00pm
Contributed Talk #5: Yiqi Zhong: “Exploiting the Potential of Datasets: A Data-Centric Approach for Model Robustness”
06:00pm – 07:00pm
Poster Session & Dinner
Session 3
07:00pm - 07:30pm
Invited Talk #5: Jingfeng Zhang: “Adversarial Robustness: From Basic Science to Some Applications”
07:30pm – 08:00pm
Invited Talk #6: Chaowei Xiao
08:00pm – 08:30pm
Invited Talk #7: Xiaofeng Mao
08:30pm – 08:40pm
Contributed Talk #6: Chawin Sitawarin: “Demystifying the Adversarial Robustness of Random Transformation Defenses”
08:40pm – 08:50pm
Contributed Talk #7: Qiwei Tian: “Improving Adversarial Robustness with Data-Centric Learning”
08:50pm – 09:00pm
Contributed Talk #8: Jian Zhao: “Mix Saturation Attack Data like Cocktail for Robust Learning”
Organizers
Yinpeng Dong
Tsinghua University
Tianyu Pang
Tsinghua University
Xiao Yang
Tsinghua University
Dingcheng Yang
Tsinghua University
Xiaofeng Mao
Alibaba
Yuefeng Chen
Alibaba
Eric Wong
MIT
Zico Kolter
CMU
Yuan He
Alibaba
Accepted Papers
Oral
Attention for Adversarial Attacks: Learning from your Mistakes
Demystifying the Adversarial Robustness of Random Transformation Defenses
Revisiting Adversarial Robustness of Classifiers With a Reject Option
Long Paper
Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional Variational AutoEncoders for Adversary Detection in the Presence of Noisy Images
Provable Defense Against Clustering Attacks on 3D Point Clouds
Traversing the Local Polytopes of ReLU Neural Networks
Tensor Normalization and Full Distribution Training
Saliency Diversified Deep Ensemble for Robustness to Adversaries
Training Universal Adversarial Perturbations with Alternating Loss Functions
Robust Out-of-distribution Detection for Neural Networks
Estimating the Robustness of Classification Models by the Structure of the Learned Feature-Space
Improving Perceptual Quality of Adversarial Images Using Perceptual Distance Minimization and Normalized Variance Weighting
Broad Adversarial Training with Data Augmentation in the Output Space
Ensemble-in-One: Learning Ensemble within Random Gated Networks for Enhanced Adversarial Robustness
BiGrad: Differentiating through Bilevel Optimization Programming
Metamorphic Adversarial Detection Pipeline for Face Recognition Systems
Measuring the Contribution of Multiple Model Representations in Detecting Adversarial Instances
Optimal Robust Classification Trees
A Practical and Stealthy Adversarial Attack for Cyber-Physical Applications
Robust No-Regret Learning in Min-Max Stackelberg Games
Short Paper
Aliasing coincides with CNNs vulnerability towards adversarial attacks
An Adversarial Benchmark for Fake News Detection Models
Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness?
The Diversity Metrics of Sub-models based on SVD of Jacobians for Ensembles Adversarial Robustness
Heterogeneous Architecture Search Approach within Adversarial Dynamic Defense Framework
Mitigation of Adversarial Policy Imitation via Constrained Randomization of Policy (CRoP)
Patch Vestiges in the Adversarial Examples Against Vision Transformer Can Be Leveraged for Adversarial Detection
Meta Adversarial Perturbations
Competition Papers
#1: Goal-Oriented Data-Centric Robust Learning
#2: Data Enhancement with Multiple Adversarial Perturbation Constraints
#3: Mix Saturation Attack Data like Cocktail for Robust Learning
#4: Exploiting the Potential of Datasets: A Data-Centric Approach for Model Robustness
#5: Improving Adversarial Robustness with Data-Centric Learning
#6: Improve Data Robustness With Multiple Data Processing
#7: Data-Centric Techniques To Robust ML Models
#8: Towards Efficient Data-Centric Robust Machine Learning with Noise-based Augmentation
#10: Mining limited data for more robust and generalized ML models
Previous Workshops

A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning, ICML 2020

Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (AML-CV), CVPR 2021

Adversarial Robustness in the Real World, ICCV 2021

Sponsors
Contact
Please contact Yinpeng Dong (dyp17@mails.tsinghua.edu.cn), Tianyu Pang (pty17@mails.tsinghua.edu.cn), Xiao Yang (yangxiao19@mails.tsinghua.edu.cn) if you have any questions.