A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning


Workshop at ICML 2021




Overview

Adversarial machine learning is an emerging set of techniques that study the vulnerabilities of ML approaches and detect malicious behaviors in adversarial settings. Adversarial agents can deceive an ML classifier, significantly altering its predictions through perturbations to the inputs that are imperceptible to humans. Without being alarmist, researchers in machine learning have a responsibility to preempt attacks and build safeguards, especially when the task is critical for information security and human lives. We need to deepen our understanding of machine learning in adversarial environments.
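To make the threat concrete, below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch; the classifier model, inputs x, and labels y are placeholders, and real attacks (and defenses) are considerably more sophisticated.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # x: input batch scaled to [0, 1]; y: ground-truth labels;
        # epsilon: L-infinity perturbation budget (placeholder value).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that most increases the loss, then clip
        # back to the valid input range.
        return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

A single gradient step under a small L-infinity budget is often enough to flip an undefended classifier's prediction while the change remains invisible to a human.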


While the negative implications of this nascent technology have been widely discussed, researchers in machine learning have yet to explore its positive opportunities in numerous respects. The positive impacts of adversarial machine learning are not limited to boosting the robustness of ML models; they cut across several other domains, including privacy protection, reliability and safety testing, model understanding, and improving generalization performance on different tasks.
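Conversely, the same machinery can be turned into a defense. A minimal sketch of adversarial training, one of the best-known robustness-boosting uses, reusing the fgsm_attack helper above with placeholder model and optimizer objects:

    def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
        # Train on attacked inputs so the model learns to resist
        # small worst-case perturbations.
        x_adv = fgsm_attack(model, x, y, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()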


Since adversarial machine learning has both positive and negative applications, steering the field in the right direction requires a framework that embraces the positives. This workshop aims to bring together researchers and practitioners from a variety of communities (e.g., machine learning, computer security, data privacy, and ethics) in an effort to synthesize promising ideas and research directions, and to foster and strengthen cross-community collaborations on both theoretical studies and practical applications. Unlike previous workshops on adversarial machine learning, ours seeks to explore the prospects of the field in addition to reducing the unintended risks of sophisticated ML models.


Call for Papers

We welcome submissions on all aspects of adversarial ML, including but not limited to:

  • Adversarial / poisoning attacks against ML models
  • Adversarial defenses to improve decision robustness
  • Methods of detecting / rejecting adversarial examples
  • Model verification and certified training / inference
  • Benchmarks to reliably evaluate existing defenses
  • Theoretical understanding of adversarial ML
  • Empirical studies that help to construct practically robust systems
  • Adversarial ML in the real world
  • Robust model architectures, data augmentations, and dataset biases
  • Positive applications of techniques from adversarial ML (e.g., privacy protection, generalization improvement, interpretable ML, transfer learning, reinforcement learning, traditional CV and NLP tasks)

We only consider submissions that have not been published in any peer-reviewed venue, including the ICML 2021 main conference. The workshop is non-archival and will not have any official proceedings. Based on the program committee's recommendations, accepted papers will be allocated either a contributed talk or a poster presentation.


Submission format:   Submissions should be anonymized and follow the template. They may be up to 4 pages long, plus unlimited space for references and appendices.


Submission server: https://openreview.net/group?id=ICML.cc/2021/Workshop/AML.   


We will offer 1-2 Best Paper Awards ($3,000 in total) and 1-2 Adversarial for Good Awards ($3,000 in total).

Important Dates

 

    Submission deadline: June 5th, 2021

    Notification to authors: June 20th, 2021

    Video recordings of contributed talks deadline: June 27th, 2021

    Camera-ready deadline: July 1st, 2021

Schedule

This is the tentative schedule of the workshop. All times are in Eastern Time (ET).

Morning Session


7:45 - 8:00 Opening Remarks
8:00 - 8:30 Invited Talk #1
8:30 - 9:00 Invited Talk #2
9:00 - 9:10 Contributed Talk #1
9:10 - 9:40 Invited Talk #3
9:40 - 10:10 Invited Talk #4
10:10 - 10:20 Contributed Talk #2
10:20 - 11:00 Panel Discussion #1
11:00 - 12:00 Poster Session #1

Afternoon Session


12:00 - 12:30 Invited Talk #5
12:30 - 13:00 Invited Talk #6
13:00 - 13:30 Invited Talk #7
13:30 - 13:40 Contributed Talk #3
13:40 - 14:10 Invited Talk #8
14:10 - 14:40 Invited Talk #9
14:40 - 15:10 Invited Talk #10
15:10 - 15:20 Contributed Talk #4
15:20 - 16:00 Panel Discussion #2
16:00 - 17:00 Poster Session #2
 

Invited Speakers




Liwei Wang

Peking University

Sven Gowal

DeepMind

Jan Hendrik Metzen

Bosch Center for Artificial Intelligence



Will Xiao

Harvard Medical School

Cihang Xie

UC Santa Cruz

Matthias Hein

University of Tübingen

Workshop Organizers




Hang Su

Tsinghua University

Yinpeng Dong

Tsinghua University

Tianyu Pang

Tsinghua University




Shuo Feng

University of Michigan

Henry Liu

University of Michigan

Dan Hendrycks

UC Berkeley

Francesco Croce

University of Tübingen

Contact

Please contact Hang Su, Yinpeng Dong, or Tianyu Pang if you have any questions.


Sponsored by