Europe’s legal frameworks for AI governance are having a global impact, not least in Canada, where world-leading AI research and adoption must be integrated smoothly into modern society.
In 2021, the European Commission unveiled the first comprehensive legal framework for AI, the proposed Artificial Intelligence Act. The framework aims to strike a balance between fostering AI innovation and preserving individual rights. Taking a risk-based approach, Europe defined four levels of risk posed by AI systems and determined how to regulate each one to protect end users and society at large.
Almost immediately, Europe’s actions began to influence AI research and regulation. They are of particular interest to Canada, a nation that is already a well-established global leader in AI development and usage. This article discusses some of the largest impacts and their implications for Canada’s future AI efforts.
How Canada regulates the risks of AI relative to Europe’s approach
Europe’s AI Act establishes four levels of risk posed by AI systems. These range from minimal risk, for technologies such as spam filters and AI-enabled video games, to limited risk, for AI with transparency concerns such as deepfakes and emotion recognition systems. High-risk classifications cover systems such as educational and vocational training tools, biometric identification software, and law enforcement software, all of which must meet specific safety standards before deployment, while systems deemed to pose an unacceptable level of risk, such as social scoring or real-time remote biometric identification, are banned outright.
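To make the tiering concrete, the sketch below models the Act’s four categories as a simple data structure in Python. The tiers and example systems mirror the summary above; the `EXAMPLE_CLASSIFICATION` mapping and `obligations` helper are hypothetical illustrations of how the scheme fits together, not anything defined in the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = 1       # e.g., spam filters, AI-enabled video games
    LIMITED = 2       # e.g., deepfakes, emotion recognition systems
    HIGH = 3          # e.g., educational/vocational training, biometrics
    UNACCEPTABLE = 4  # e.g., social scoring, real-time remote biometric ID

# Hypothetical mapping from system types to tiers, based on the examples above.
EXAMPLE_CLASSIFICATION = {
    "spam_filter": RiskTier.MINIMAL,
    "deepfake_generator": RiskTier.LIMITED,
    "exam_proctoring_system": RiskTier.HIGH,
    "social_scoring_system": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """Summarize the regulatory consequence attached to each tier."""
    return {
        RiskTier.MINIMAL: "no new obligations",
        RiskTier.LIMITED: "transparency obligations (disclose AI involvement)",
        RiskTier.HIGH: "must meet specific safety standards before deployment",
        RiskTier.UNACCEPTABLE: "banned outright",
    }[tier]

for system, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{system}: {tier.name} -> {obligations(tier)}")
```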
Canada borrows some aspects of this framework for its own rules on AI, the Directive on Automated Decision-Making (ADM). However, Canada places more emphasis on fundamental and individual rights than on AI innovation, and the Directive lacks legal or regulatory status, at least at the broadest national level (though the C-11 privacy reform bill does touch on the impact of AI). This contrasts with Europe’s approach, which aims to govern not just the EU but also to exert extraterritorial jurisdiction.
At the provincial level, there is more prescriptive legislation that matches some of what the EU is doing. For example, Québec’s Bill 64 addresses AI with the concept of “algorithmic transparency” around AI usage, a disclosure element the Europeans are pushing the world to adopt. However, other provinces have not yet adopted EU-style regulations, relying instead on commitments to privacy and transparency such as those established in Ontario.
Beyond risk alone, Canada also considers ownership and liability issues
In a landmark decision, the Canadian Intellectual Property Office (CIPO) registered an AI system, “RAGHAV”, as the co-author of an artistic work. This is an important development because it is the first time an AI system has been registered as the author or co-owner of a copyrighted work, and it opens doors for future AI ownership in ways that are not being considered in the same light in Europe.
CIPO showed more restraint when it came to patents. Citing patent law, it declined to list an AI called “DABUS” as the inventor on a patent application. As more cases are brought before the system, however, this precedent may shift.
A more pressing issue than AI ownership of specific artistic or inventive outputs, however, is the assignment of liability for AI system errors. Courts around the world are navigating uncharted waters, and Canada is no exception as it works to settle a number of key questions raised during the drafting of recent legislative proposals.
For example, who bears liability: the AI system or its creator? If AI is involved in a mishap, such as an autonomous-car accident, should it be held to the same standards as a person? If an AI develops bias against a group of people, can it be held liable for discrimination? While the C-11 bill assigns some responsibility for AI systems, at present this mostly takes the form of liability for a manufacturer’s failure to disclose, in plain language, how the AI system reached a decision.
End user responsibilities in Canada
While European rules focus more specifically on risks posed by the AI system itself, Canada’s federal government and provinces are choosing, on the whole, to give developers and end users more responsibility for using AI safely.
For example, since lab testing cannot always predict how machine learning systems will perform in the real world, machine learning products should be subject to randomized controlled trials to verify their safety, efficacy, and fairness. This is a shared responsibility between the developer and the user.
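As a rough illustration of that idea, here is a minimal sketch of a randomized controlled trial for an ML product. The model, baseline, and outcome metric are stand-ins; all of the names here are hypothetical, and a real trial would of course use the actual product, the incumbent process, and agreed safety or fairness measures.

```python
import random
import statistics

def run_rct(users, ml_product, baseline, outcome_metric, seed=42):
    """Randomly assign each user to the ML product or the existing baseline
    process, then compare average outcomes between the two arms."""
    rng = random.Random(seed)
    treatment, control = [], []
    for user in users:
        # Random assignment is what makes the comparison a controlled trial.
        if rng.random() < 0.5:
            treatment.append(outcome_metric(ml_product(user)))
        else:
            control.append(outcome_metric(baseline(user)))
    return statistics.mean(treatment), statistics.mean(control)

# Toy usage with placeholder functions standing in for real components.
users = range(1000)
ml_product = lambda u: u % 7        # placeholder model output
baseline = lambda u: u % 10         # placeholder incumbent output
outcome_metric = lambda result: 1 if result == 0 else 0  # placeholder success flag

treated, controlled = run_rct(users, ml_product, baseline, outcome_metric)
print(f"ML arm success rate: {treated:.3f}, baseline arm: {controlled:.3f}")
```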
Further, AI systems need to be continually retrained and re-evaluated, especially when they are deployed in environments that are evolving. This can be done independently or through organizations, such as the International Organization for Standardization or the Institute of Electrical and Electronics Engineers, that offer such services. Even large multinational AI firms like Google now offer AI ethics services that examine AI products along multiple dimensions, such as the data used to train systems and the algorithms themselves, to ensure proper deployment over time.
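One simple way to operationalize that kind of continual evaluation is to track a rolling window of live outcomes and flag the system for review when performance drifts. The sketch below is a minimal, hypothetical example of that pattern; the class name, window size, and accuracy threshold are illustrative, not drawn from any standard.

```python
from collections import deque

class DriftMonitor:
    """Track a rolling window of live prediction outcomes and flag when the
    deployed model's accuracy drifts below an agreed threshold."""

    def __init__(self, window_size=500, min_accuracy=0.90):
        self.results = deque(maxlen=window_size)
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        """Log whether a live prediction matched the later-observed outcome."""
        self.results.append(prediction == actual)

    def rolling_accuracy(self):
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_review(self):
        # Only raise the flag once the window is full, so a handful of early
        # errors does not trigger a spurious alert.
        return (len(self.results) == self.results.maxlen
                and self.rolling_accuracy() < self.min_accuracy)

# Example: feed outcomes in as ground truth arrives; re-audit or retrain the
# system whenever needs_review() returns True.
monitor = DriftMonitor(window_size=100, min_accuracy=0.95)
```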
Concluding Thoughts
AI technology is developing fast. The world has already seen its application in healthcare, agriculture, and education, and its potential future applications are vast. However, AI also poses risks owing to its opaque decision-making and self-learning characteristics. What the European Commission’s AI Act and Canada’s Directive on ADM offer, then, are critical first steps toward setting boundaries around this rapidly advancing technology without stymying its growth.
From here, Canada will want to add teeth to many of its commitments and proposed guidelines. While the EU promotes international standards modelled on its own rules, seeking maximum control over the future of AI within its borders, Canada should continue its pioneering approach of rules that incorporate and respect human rights and responsibilities alike.