ICLR 2019 Workshop on Debugging ML Models


New Orleans, LA - May 6, 2019

Machine learning (ML) models are increasingly being employed to make highly consequential decisions pertaining to employment, bail, parole, and lending. While such models can learn from large amounts of data and are often very scalable, their applicability is limited by certain safety challenges. A key challenge is identifying and correcting systematic patterns of mistakes made by ML models before deploying them in the real world.

The goal of this workshop, held at the International Conference on Learning Representations (ICLR), is to bring together researchers and practitioners with different perspectives on debugging ML models. Topics of interest are listed below, although we also welcome submissions that do not directly fit into these topics.

- Debugging via interpretability: How can interpretable models and techniques aid us in effectively debugging ML models?
- Program verification as a tool for model debugging: Are existing program verification frameworks readily applicable to ML models? If not, what are the gaps that exist and how do we bridge them?
- Visualization tools for debugging ML models: What kind of visualization techniques would be most effective in exposing vulnerabilities of ML models?
- Human-in-the-loop techniques for model debugging: What are some of the effective strategies for using human input and expertise for debugging ML models?
- Novel adversarial attacks for highlighting errors in model behavior: How do we design adversarial attacks that highlight vulnerabilities in the functionality of ML models?
- Theoretical correctness of model debugging techniques: How do we provide guarantees on the correctness of proposed debugging approaches? Can we take cues from statistical considerations such as multiple testing and uncertainty to ensure that debugging methodologies and tools actually detect ‘true’ errors?
- Theoretical guarantees on the robustness of ML models: Given an ML model or system, how do we bound the probability of its failures?
- Insights into errors or biases of real-world ML systems: What can we learn from the failures of widely deployed ML systems?
- Best practices for debugging large scale ML systems: What are standardized best practices for debugging large-scale ML systems?

Important Dates:
- Submission deadline: March 1, 2019, 11:59pm Pacific Time
- Acceptance notification: March 18, 2019 (before ICLR early registration deadline)
- Camera-ready deadline for accepted papers: April 6, 2019
- Workshop: Monday, May 6, 2019
If you are a student or postdoc, we encourage you to apply for ICLR’s volunteer and travel grants before their March 13 deadline. If you need a visa to travel to the US, consider submitting your paper well before the submission deadline, then contact us so that we can fast-track the review of your paper.

Submission Instructions:
- Submission page: https://easychair.org/conferences/?conf=debugml19
- Submit anonymized papers of up to 4 pages (not including references) using the ICLR template. The 4-page limit is strict, but references may take as many additional pages as needed.
- The reviewing process is double-blind. Please ensure that the PDF does not contain any information that could identify the authors or their institutions, and do not put author names or institutions in the filename of the PDF.
- Concurrent submission to other venues (journal, conference, workshop) is allowed. Work already published in a journal, conference, or workshop should be extended in a meaningful way.

Accepted papers will be presented as posters at the workshop, and some will also be invited to give spotlight or oral talks. Selected papers will be recognized with Best Research Paper, Best Applied Paper, and Best Student Paper awards. Camera-ready versions of accepted papers will be uploaded to the conference website (unless requested otherwise), but there will be no formal published proceedings.

Please check the workshop website https://debug-ml-iclr2019.github.io/ for updated information, and email any questions to debugging.ml@gmail.com.

Date: Monday, May 6, 2019, 7:30am to 7:30pm