The artificial intelligence (AI) revolution is set to transform healthcare, leading to improved outcomes and saved lives. New innovations have the potential to decrease existing health inequities. But without intentional consideration, these technologies can also exacerbate inequities by relying on biased datasets that disproportionately represent populations with greater access to and benefit from the health system.
Laws are critical tools for mitigating these significant risks associated with AI.
South of the border, U.S. President Donald Trump immediately scrapped AI regulations after his inauguration. And this week, the declaration from the AI Action Summit in Paris calling for inclusivity in AI was not signed by the U.S. or the U.K. These decisions are creating an urgent and alarming vacuum in AI protections, leaving many communities vulnerable to the unintended consequences of unchecked AI design, development and deployment.
While Canada endorsed the summit declaration, demonstrating a growing focus on creating legislative and regulatory controls on AI, there remains a striking lack of commitment to equity in our AI legislation. This demands immediate attention.
Policymakers must act urgently to integrate equity as a clear and explicit component of all AI-related policies. This proactive approach will ensure advancements can benefit everyone, rather than a privileged few.
AI is not inherently frightening or harmful. At its core, it is simply a set of methods or techniques used to achieve specific goals, such as making predictions, informing decisions or developing products. In healthcare, the applications are vast. Machine learning algorithms can facilitate early cancer detection, natural language processing models can serve as scribes in clinical settings, and computer vision technology can automatically interpret imaging results. These categories often overlap, demonstrating the fluid and ever-evolving nature of AI.
What matters with AI is the context in which it is used and the intentions behind its application. In healthcare, for example, AI has been used to identify inequities in the diagnosis of heart conditions in Black patients. At the same time, it can also perpetuate biases and lead to unfair treatment of equity-deserving groups, where, for example, Black patients could be misdiagnosed or experience delayed treatment.
Federally, the Artificial Intelligence and Data Act (AIDA) is currently being considered by the Standing Committee on Industry and Technology. AIDA aims to limit the potential harm caused by AI. Although it would apply only to private-sector entities, it has significant implications for companies that develop technologies for diagnostics, treatment planning and electronic medical records. However, experts have already established that the bill's language still allows developers considerable freedom without accountability.
In Ontario, Bill 194, Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024 has already been passed, with minimal emphasis on privacy safeguards and the creation of accountable and transparent AI systems.
The era of vague language in AI legislation must end.
Legislative efforts in Canada currently lack a focus on equity and the needs of equity-deserving groups. AIDA and Bill 194, while steps forward, still need significant work to fulfill their purpose effectively. Rather than settling for these existing and proposed laws, we must strive to develop new legislation that places equity at the forefront, ensuring that AI technologies are governed by laws that protect and benefit everyone equitably. Otherwise, we may find ourselves in a situation similar to that of our American neighbours, where we hope the tech sector upholds equity and human rights but there is no legislative requirement that it do so.
Policymakers must ensure equity is a clear and explicit component of AI-related policies and laws. These laws must address not only the equity impacts during the design and development of AI, but also its implementation. This ensures the health system fully understands the implications of deploying AI before putting it into practice.
Both the Ontario Human Rights Commission and the Information and Privacy Commissioner of Ontario have issued guidance to improve these laws. They emphasize the need for explicit references to human rights, a principle-based approach, and greater accountability from the government for the public sector's use of AI. We build on this advice by advocating for the inclusion of equity considerations.
At the same time, while laws are crucial, they are only one piece of the puzzle. In a future blog, we'll explain the importance of inclusive community engagement and governance in developing AI policies.