Mitigating risks of biased output by AI systems
Friday, February 23, 12 to 1 p.m. EST
Featuring:
- Maya Medeiros, Partner, Norton Rose Fulbright LLP
- Carole Piovesan, Co-Founder at INQ Law
- Justine Gauthier, Director, AI Governance at Mila
- Kuljit Bhogal, Associate at Osler, Hoskin & Harcourt LLP
- Daniel Bourque, Assistant General Counsel at Workday
Moderated by:
- Arun Krishnamurti, Senior Counsel, Google Canada
CPD: to be confirmed (an application for 1 hour of EDI professionalism credit has been submitted)
The federal government introduced Bill C-27, which includes the Artificial Intelligence and Data Act (AIDA), establishing requirements for the design, development, use, and provision of AI systems.
AIDA would require entities responsible for high-impact AI systems to establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of those systems.
AIDA's definition of "biased output" incorporates section 3 of the Canadian Human Rights Act ("CHRA"). Biased output is defined as content that is generated, or a decision, recommendation or prediction that is made, by an AI system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the CHRA, or on a combination of such prohibited grounds.

Section 3 of the CHRA sets out the following prohibited grounds of discrimination: race, national or ethnic origin, colour, religion, age, sex, sexual orientation, gender identity or expression, marital status, family status, genetic characteristics, disability and conviction for an offence for which a pardon has been granted or in respect of which a record suspension has been ordered.

Biased output does not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds.
Join this lively discussion to better understand "biased output" in the context of AI systems and ways to mitigate its risks.
Registration:
- FREE for CAN-TECH Law members
- $25 plus HST for non-members