Justice as Explanation: The Right to Reasons in Algorithmic Decision-Making

The Coordination des Midis-conférences des Jeunes Chercheurs of the CRDP is pleased to invite you to view the video presentation of the conference cited in the title.

Abstract: Decisions that were once principally made by humans are now increasingly being made by algorithms. Public and private decision makers alike might draw on machine learning systems or other artificially intelligent applications to complement or replace human judgment. These kinds of algorithms often have an ‘unexplainable’ architecture: it is usually impossible to review the reasons for which a machine learning algorithm, for example, reaches a particular conclusion. This generates a potential justice problem. Individuals adversely affected by algorithmic decisions might be unable to successfully seek redress through the legal system when the reasons for which a decision has been made are unknowable. This paper will introduce and delineate the scope of this potential justice problem. It will suggest that the use of machine learning systems by public decision makers carries with it the threat that such decisions will become inscrutable and immune from review. The paper will conclude by defending the position that the inscrutability of machine learning systems undermines procedural fairness and endangers the right of individuals to seek redress, and that regulation must be developed to counter such effects.

Speaker: Michael Lang, LL.M. Candidate, McGill University – Faculty of Law

Enjoy the video!
