The federal government’s proposed AI legislation misses the mark on protecting Canadians
Artificial intelligence (AI) is transforming many aspects of society, but it also brings risks and harms. In Canada, these concerns have not been adequately addressed by the national AI strategy. A new article by Joanna Redden, one of our program's Principal Investigators (PIs), highlights significant gaps in Canada's approach to AI governance and argues that the proposed Artificial Intelligence and Data Act (AIDA) falls short of providing the necessary protections.
Joanna Redden's article outlines the following key points:
- Lack of oversight: As currently drafted, the AIDA does not cover government use of AI, despite its widespread adoption across the public sector. This puts Canada out of step with AI governance in other leading nations and fails to meet the expressed interests of government employees.
- Limited transparency: The Canadian Tracking Automated Governance (TAG) register, developed in collaboration with the U.K.-based Public Law Project, lists 303 applications of AI within government agencies, a figure that reflects how little information about government AI use is publicly available. The register is a start, but more comprehensive registries are needed for effective oversight.
- Inadequate public consultation: The legislation was drafted without meaningful public consultation, leaving AI's social impacts inadequately regulated. The federal government's recent AI spending announcements focus largely on accelerating adoption rather than addressing its risks.
Redden argues that AI governance in Canada must prioritize transparency, public engagement, and oversight. She recommends splitting the AIDA from Bill C-27 to allow for the public consultation and redrafting needed to better address the needs of Canadians.
Read the full article for deeper insight into Canada's AI governance shortcomings and the proposed solutions that could guide the country toward safer, more transparent AI practices.