On Pearl’s Hierarchy and the Foundations of Causal Inference
DSA ADS Course, 2021
Discuss causal reasoning, causal inference, the Pearl Causal Hierarchy, structural causal models, the Causal Hierarchy Theorem, and applied probability.
March, 2021
Cause and effect relationships play a central role in how we perceive and make sense of the world around us, how we act upon it, and ultimately, how we understand ourselves. Almost two decades ago, computer scientist Judea Pearl made a breakthrough in understanding causality by discovering and systematically studying the “Ladder of Causation” [Pearl and Mackenzie 2018], a framework that highlights the distinct roles of seeing, doing, and imagining. In honor of this landmark discovery, we name this the Pearl Causal Hierarchy (PCH). In this chapter, we develop a novel and comprehensive treatment of the PCH through two complementary lenses: one logical-probabilistic and another inferential-graphical. Following Pearl’s own presentation of the hierarchy, we begin by showing how the PCH organically emerges from a well-specified collection of causal mechanisms (a structural causal model, or SCM). We then turn to the logical lens. Our first result, the Causal Hierarchy Theorem (CHT), demonstrates that the three layers of the hierarchy almost always separate in a measure-theoretic sense. Roughly speaking, the CHT says that data at one layer virtually always underdetermines information at higher layers.

Since in most practical settings the scientist does not have access to the precise form of the underlying causal mechanisms, only to data generated by them with respect to some of the PCH’s layers, this motivates us to study inferences within the PCH through the graphical lens. Specifically, we explore a set of methods known as causal inference that enable inferences bridging the PCH’s layers given a partial specification of the SCM. For instance, one may want to infer what would happen had an intervention been performed in the environment (a second-layer statement) when only passive observations (first-layer data) are available.
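To make the layer separation concrete, here is a minimal simulation of a toy SCM of our own devising (it is not from the chapter): an unobserved variable U determines both X and Y, so the observational (seeing) quantity P(Y=1 | X=1) and the interventional (doing) quantity P(Y=1 | do(X=1)) come apart, exactly the kind of underdetermination the CHT describes.

```python
import random

random.seed(0)

def sample(do_x=None):
    """Draw one unit from a toy SCM; `do_x` overrides X's mechanism (an intervention).

    Mechanisms (illustrative only):
        U ~ Bernoulli(0.5)   unobserved confounder
        X := U               do(X=x) replaces this assignment with x
        Y := X AND U
    """
    u = random.random() < 0.5
    x = u if do_x is None else do_x
    y = x and u
    return x, y

n = 100_000

# Layer 1 (seeing): estimate P(Y=1 | X=1) from passive observations.
obs = [sample() for _ in range(n)]
p_obs = sum(y for x, y in obs if x) / sum(1 for x, _ in obs if x)

# Layer 2 (doing): estimate P(Y=1 | do(X=1)) by actually intervening.
intv = [sample(do_x=True) for _ in range(n)]
p_do = sum(y for _, y in intv) / n

# Observationally, X=1 forces U=1, so p_obs is exactly 1.0;
# under do(X=1), Y still depends on the coin flip U, so p_do is about 0.5.
```

The gap between the two estimates is driven entirely by the unobserved U; no amount of first-layer data alone would reveal the interventional value of 0.5.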
We introduce a family of graphical models that allows the scientist to represent such a partial specification of the SCM in a cognitively meaningful and parsimonious way. Finally, we investigate an inferential system known as the do-calculus, showing how it can be sufficient, and in many cases necessary, to allow inferences across the PCH’s layers. We believe that connecting with the essential dimensions of human experience as delineated by the PCH is a critical step towards creating the next generation of AI systems that will be safe, robust, human-compatible, and aligned with the social good.
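One instance of the kind of cross-layer inference the do-calculus licenses is backdoor adjustment: when the confounder is observed, P(Y=1 | do(X=1)) can be computed from purely observational data. The sketch below uses a hypothetical confounded model of our own (the parameters and variable names are assumptions, not the chapter's) and compares the biased naive estimate against the adjusted one.

```python
import random

random.seed(1)

# Hypothetical model (our illustration):
#   Z ~ Bernoulli(0.5)                observed confounder
#   P(X=1 | Z=z)    = 0.2 + 0.6*z     treatment influenced by Z
#   P(Y=1 | X=x, Z=z) = 0.1 + 0.5*x + 0.3*z
def draw():
    z = random.random() < 0.5
    x = random.random() < 0.2 + 0.6 * z
    y = random.random() < 0.1 + 0.5 * x + 0.3 * z
    return z, x, y

data = [draw() for _ in range(200_000)]

# Naive Layer-1 estimate P(Y=1 | X=1): biased by the backdoor path X <- Z -> Y.
x1 = [(z, y) for z, x, y in data if x]
naive = sum(y for _, y in x1) / len(x1)

# Backdoor adjustment, derivable with the do-calculus for this graph:
#   P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z)
adjusted = 0.0
for z_val in (False, True):
    stratum = [y for z, y in x1 if z == z_val]
    p_y_given_xz = sum(stratum) / len(stratum)
    p_z = sum(1 for z, _, _ in data if z == z_val) / len(data)
    adjusted += p_y_given_xz * p_z

# Ground truth from the mechanisms: 0.1 + 0.5 + 0.3*0.5 = 0.75,
# while the naive estimate converges to 0.84.
```

The adjusted estimate recovers the second-layer quantity from first-layer data alone, which is precisely the bridging of layers described above; it succeeds here only because the partial specification (the graph with Z observed) rules out other explanations.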