Causal Inference & Machine Learning: Why now?
Virtual - December 13th, 2021
Machine Learning has received enormous attention from the scientific community due to the successful application of deep neural networks in computer vision, natural language processing, and game-playing (most notably through reinforcement learning). However, a growing segment of the machine learning community recognizes that there are still fundamental pieces missing from the AI puzzle, among them causal inference. This recognition comes from the observation that even though causality is a central component found throughout the sciences, engineering, and many other aspects of human cognition, explicit reference to causal relationships is largely missing in current learning systems.
This entails a new goal of integrating causal inference and machine learning capabilities into the next generation of intelligent systems, thus paving the way towards higher levels of intelligence and human-centric AI. The synergy runs in both directions: causal inference can benefit from machine learning, and vice versa.
- Current machine learning systems lack the ability to leverage the invariances imprinted by the underlying causal mechanisms to support reasoning about generalizability, explainability, interpretability, and robustness.
- Current causal inference methods, on the other hand, lack the ability to scale up to high-dimensional settings, where current machine learning systems excel.
All indications are that such a marriage can be extremely fruitful. For instance, initial results indicate that understanding and leveraging causal invariances is a crucial ingredient in achieving out-of-distribution generalization (transportability), something that humans do much better than state-of-the-art ML systems. Also, causal inference methodology offers a systematic way of combining passive observations and active experimentation, allowing more robust and stable construction of models of the environment. In the other direction, there is growing evidence that embedding causal and counterfactual inductive biases into deep learning systems can lead to the high-dimensional inferences that are needed in realistic scenarios.
This 2nd edition of the WHY workshop (1st edition: WHY-19) focuses on bringing together researchers from both camps to initiate principled discussions about the integration of causal reasoning and machine learning perspectives to help tackle the challenging AI tasks of the coming decades. We welcome researchers from all relevant disciplines, including but not limited to computer science, cognitive science, robotics, mathematics, statistics, physics, and philosophy.
We invite papers that describe methods for answering causal questions with the help of ML machinery, or methods for enhancing ML robustness and generalizability with the help of causal models (i.e., carriers of transparent structural assumptions). Authors are encouraged to identify the specific task the paper aims to solve and where on the causal hierarchy their contributions reside (i.e., associational, interventional, or counterfactual). Topics of interest include but are not limited to the following:
- Algorithms for causal inference and mechanism discovery.
- Causal analysis of biases in data science & fairness analysis.
- Causal and counterfactual explanations.
- Generalizability, transportability, and out-of-distribution generalization.
- Causal reinforcement learning, planning, and imitation.
- Causal representation learning and invariant representations.
- Intersection of causal inference and neural networks.
- Fundamental limits of learning and inference, the causal hierarchy.
- Applications of the 3-layer hierarchy (Pearl Causal Hierarchy).
- Evaluation of causal ML methods (accuracy, scalability, etc.).
- Causal reasoning and discovery in child development.
- Other connections between ML, cognition, and causality.
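To make the distinction between the first two levels of the causal hierarchy concrete, the following is a minimal, hypothetical sketch (not from any submission): a toy structural causal model in which Z confounds X and Y, so the associational quantity P(Y=1 | X=1) estimated from passive observation differs from the interventional quantity P(Y=1 | do(X=1)) estimated from experimentation. The model and all parameter values are illustrative assumptions.

```python
import random

random.seed(0)

def sample(do_x=None):
    """One draw from a toy structural causal model where Z confounds X and Y."""
    z = random.random() < 0.5                      # Z ~ Bernoulli(0.5)
    if do_x is None:
        x = z if random.random() < 0.9 else not z  # X := noisy copy of Z
    else:
        x = do_x                                   # do(X): sever the Z -> X edge
    y = random.random() < 0.2 + 0.3 * x + 0.5 * z  # Y := f(X, Z, noise)
    return x, y

N = 200_000

# Level 1 (associational): P(Y=1 | X=1), estimated from passive observation.
obs = [sample() for _ in range(N)]
p_obs = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

# Level 2 (interventional): P(Y=1 | do(X=1)), estimated by experimentation.
p_do = sum(y for _, y in (sample(do_x=True) for _ in range(N))) / N

print(f"P(Y=1 | X=1)     ~ {p_obs:.2f}")  # confounded estimate, near 0.95
print(f"P(Y=1 | do(X=1)) ~ {p_do:.2f}")   # causal effect, near 0.75
```

The gap between the two estimates is exactly the kind of quantity that associational methods alone cannot recover, and counterfactual (level-3) queries would require yet more structure.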