Deploying Algorithmic Impact Assessment (AIA) as a proactive strategy to manage the risks raised by the use of AI systems in Business to Consumer relationship | Midi-conférence

As part of the CRDP young researchers' Midis-conférences series, we are pleased to invite you to the conference "Deploying Algorithmic Impact Assessment (AIA) as a proactive strategy to manage the risks raised by the use of AI systems in Business to Consumer relationship".

DATE: MARCH 12, 2024

TIME: 12:00 PM to 1:00 PM




Various types of Artificial Intelligence (AI) systems, such as Generative AI systems, are now pervasive in our daily lives. These are general-purpose systems that can be used in any sector of the economy, including information and communication, wholesale and retail trade, public administration, scientific and technical activities, healthcare, and education. Their applications include content generation, creative writing, email marketing, personalized product recommendations, tailored content, targeted advertising, data analysis, customer management, and more.

Yet alongside their gains and benefits, generative AI systems raise risks and impacts for individuals' private lives, how they make choices, and their interactions with the businesses that deploy those systems. If left unaddressed by law and policy, these risks expose individuals and consumers to various harms, such as filter bubbles, misinformation and disinformation, behavior manipulation, discriminatory outcomes, and non-transparent decisions.

To adequately address those potential impacts and risks raised by the use of generative AI systems on individuals and consumers, we must examine them within their specific socio-economic framework.

Considering that neither typical AI or consumer protection regulations nor the proposed risk assessment mechanisms address the risks raised by the use of generative AI systems in Business-to-Consumer (B2C) relationships, this study proposes a model based on Algorithmic Impact Assessment (AIA) tools that comprehensively evaluates the main types of risks caused by the use of those systems in various types of B2C relationships and suggests adapted actions to be taken: the Generative AI Impact Assessment (Gen-AIA).



Saeed Rostamalizade is a doctoral candidate in Innovation, Science, Technology, and Law (Ph.D.) at the University of Montreal under the joint direction of Professors Pascale DUFOUR and Nicolas VERMEYS.

His research project concerns regulating the use of Artificial Intelligence (AI) and Algorithmic Decision Making (ADM) systems according to consumer protection principles. Within this project, he examines the challenges and concerns raised by the use of AI and ADM systems along four dimensions: (1) the diversity of risks raised by the use of AI and ADM systems in Business-to-Consumer (B2C) relationships, (2) the variety of impact levels of AI and ADM risks, (3) regulatory challenges to the use of AI and ADM, and (4) the lack of knowledge regarding the use of AI and ADM systems.

He proposes a model of an Algorithmic Impact Assessment (AIA) tool that helps identify, assess, and mitigate the risks, and their impact levels, raised by the use of AI and ADM systems in various types of Business-to-Consumer (B2C) relationships, and suggests adapted actions to be taken: the Consumer Rights Impact Assessment (CRIA).

Saeed ROSTAMALIZADEH holds a master’s degree in Rule of Law for Development (PROLAW) from Loyola University Chicago and a master’s degree in International Commercial Law obtained from Shahid Beheshti University.

He is particularly interested in the legal and ethical aspects of new technologies, specifically Artificial Intelligence (AI) and AI-based tools and applications such as Algorithmic Decision Making (ADM) systems, and their risks and impacts on individual rights.


Zoom link
Meeting ID: 524 903 3947
Passcode: 060077

This content was updated on March 27, 2024 at 6:52 PM.