Beyond Fairness: Towards a Just, Equitable, and Accountable Computer Vision CVPR 2021 Workshop

Online - June 25, 2021

Register here.

Computer vision technologies are being developed and deployed in a variety of socially consequential domains including policing, employment, healthcare and more. In this sense, computer vision has ceased to be a purely academic endeavor and has become one that impacts people around the globe on a daily basis. Yet, despite the rapid development of new algorithmic methods, advancing state-of-the-art numbers on standard benchmarks, and, more generally, the narratives of progress that surround public discourse about computer vision, the reality of how computer vision tools are impacting people paints a far darker picture.

In practice, computer vision systems are being weaponized against already over-surveilled and over-policed communities (Garvie et al. 2016; Joseph and Lipp, 2018; Levy, 2018), perpetuating discriminatory employment practices (Ajunwa, 2020), and more broadly, reinforcing systems of domination through the patterns of inclusion and exclusion operative in standard datasets, research practices, and institutional structures within the field (West, 2019; Miceli et al., 2020; Prabhu and Birhane, 2020). In short, the benefits and risks of computer vision technologies are unevenly distributed across society, with the harms falling disproportionately on marginalized communities.

We encourage interdisciplinary work, position papers, surveys, retrospectives and other discussions addressing issues that we should consider while conducting and publishing computer vision research.

We welcome contributions focused on (but not limited to) the following topics: 

  • Computer vision in practice: Who is benefitting and who is being harmed? We welcome submissions examining how computer vision technology intersects with, and amplifies, structural inequality, and how the research practices and incentive structures within the academic field are implicated in harms to marginalized groups. We welcome submissions that critically examine how one's own research and role within the community can contribute to strengthening or dismantling existing systems of power.

  • Cross-disciplinary research methods and methodologies. Computer vision is simultaneously a social and technical endeavor. Yet, current computer vision education and publication incentives tend to valorize the technical and devalue the social. This knowledge hierarchy is directly implicated in the harms being perpetuated by the field. We encourage submissions that introduce methods and methodologies that have been developed in other fields and by communities experiencing marginalization.

  • Accountability and transparency. The field of computer vision is currently facing a crisis of accountability. Computer vision systems are being developed and deployed at a rapid pace, often in highly socially consequential domains. Yet, computer vision models and datasets are frequently developed with little transparency into the design and development process, and few mechanisms of accountability, contestability, or recourse for individuals impacted by the systems. We encourage submissions that audit models or datasets, that examine model or dataset development processes, and that introduce frameworks that promote transparent and accountable model and dataset development (e.g. Model Cards, Datasheets). We also welcome submissions that explore ethical obligations of researchers and discuss mechanisms of ethical oversight within academic research.

  • Activism and collective organizing. We believe computer vision experts have an important role to play in shifting public discourse and public policy regarding the use of computer vision technologies, and in shifting the research culture, norms, and incentive structures in a manner that will ultimately promote more responsible research practices. We hope to empower researchers to take an active role in all these realms. We welcome submissions that examine the institutional barriers within computer vision that contribute to the extreme concentration of power within the field, in an effort to better diagnose the current condition of the field and provide insights into how best to effect change.

  • Historical perspectives. Computer vision methods have advanced rapidly in recent years. However, many of the practical applications of computer vision methods have long histories that predate the field. These histories can provide valuable insight into the latent assumptions and ideologies underlying modern computer vision tools as well as the harms these tools can cause to marginalized groups. We encourage submissions that offer historical perspectives on the field of computer vision, including, but not limited to, histories of computer vision datasets and the trajectory of various applications across time.

For general inquiries about the workshop, please contact beyondfaircv2021@gmail.com.

Cheers,

Emily Denton and Timnit Gebru

Fairness, Accountability, and Transparency in Machine Learning - http://fatml.org/

Date: Friday, June 25, 2021 - 7:15am to 4:00pm