On June 16, 2022, Bill C-27, the Digital Charter Implementation Act, 2022, was introduced and received first reading in the House of Commons. Bill C-27 is in fact a second attempt at an overhaul of Canada's federal privacy framework; as you may recall, I wrote about the previous attempt, Bill C-11, here. Bill C-11 died on the order paper in 2021 before the federal election. The Government of Canada has now re-introduced this legislation as Bill C-27 to create a new statutory framework governing personal information practices in the private sector.
Bill C-27 would create three new statutes:
- Consumer Privacy Protection Act (CPPA): this would repeal and replace the privacy framework in PIPEDA
- Personal Information and Data Protection Tribunal Act (PIDPTA): this would create an administrative tribunal to review certain decisions made by the Privacy Commissioner of Canada (Commissioner) and impose penalties for contraventions of the CPPA, and
- Artificial Intelligence and Data Act (AIDA): this would create a risk-based approach to regulating trade and commerce in AI systems
The CPPA and PIDPTA will look familiar, as they are carried over (with a few modifications) from Bill C-11; AIDA, on the other hand, is completely new. The parts of PIPEDA dealing with electronic documents would survive separately as the Electronic Documents Act.
In my last post, I delved into some of the highlights of CPPA and PIDPTA, and promised that I would discuss AIDA in this post.
To that end, it is time to explore the proposed provisions in AIDA—this would be the first federal law in Canada to regulate the creation and use of AI systems and set out significant penalties for non-compliance.
The purposes of AIDA would be to regulate international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems, and to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.
What would constitute “harm”? It could include physical or psychological harm to an individual; damage to an individual’s property; or economic loss to an individual.
AIDA would apply to private sector organizations that design, develop or make available for use artificial intelligence systems in the course of international or interprovincial trade and commerce. And an “artificial intelligence system” would be broadly defined as a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.
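To get a sense of how broad this definition could be, consider the short sketch below. It is purely illustrative (the loan scenario, data and model choice are my own invention), but even these few lines of routine machine-learning code arguably describe a system that processes data related to human activities using machine learning in order to make predictions:

```python
# Illustrative only: ordinary predictive tooling, not just headline-grabbing
# AI, could plausibly meet the proposed definition of an "artificial
# intelligence system". Here, a simple model processes data about human
# activity (hypothetical loan applicants) to make predictions.
from sklearn.linear_model import LogisticRegression

# Invented historical data: [annual_income, existing_debt] per applicant,
# and whether each applicant repaid a past loan (1) or defaulted (0).
X = [[45_000, 5_000], [82_000, 1_000], [30_000, 20_000], [95_000, 3_000]]
y = [1, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# The system now makes a prediction about a new individual on its own,
# which is the kind of activity the proposed definition captures.
new_applicant = [[52_000, 8_000]]
print(model.predict(new_applicant))  # e.g. [1] -> predicted to repay
```

If the definition is read this way, a great deal of everyday predictive software used in commerce could potentially fall within AIDA's scope.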
Some things would not be captured by AIDA; for instance, AIDA would not apply to a government institution defined in section 3 of the Privacy Act or to a product, service or activity under the direction or control of the Minister of National Defence, the Director of the Canadian Security Intelligence Service, the Chief of the Communications Security Establishment, or a prescribed person. Also note that Canada’s Directive on Automated Decision-Making imposes several requirements on the federal government’s use of automated decision-making technologies and on businesses that license or sell such technologies to the federal government (with some exceptions).
AIDA would create several new requirements for regulated organizations, along with new oversight and enforcement mechanisms:
- Establish measures with respect to the manner in which data is anonymized and the use or management of anonymized data (a simple anonymization sketch follows this list).
- Conduct an assessment to determine whether the system is a "high-impact system". Note that "high-impact" is not yet defined; we will have to wait for the regulations to learn what the term means and how to assess whether a system qualifies.
- Keep records of steps taken to meet compliance requirements, including the reasons supporting conclusions of required impact assessments.
- When dealing with a high-impact system:
- Establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system (a bias-measurement sketch follows this list). In addition to "harm", discussed above, it is important to note that "biased output" would mean content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such grounds.
- Establish measures to monitor compliance with the mitigation measures and to assess their effectiveness.
- Publish, on a publicly available website and in plain language, a description of: how the system is used or is intended to be used; the types of content that it generates and the decisions, recommendations or predictions that it makes; the mitigation measures established to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system; and any other information prescribed by regulation.
- Notify the Minister of Innovation, Science and Industry (Minister) if use of the system results or is likely to result in material harm.
- The Minister would be able to, by order, require: the production of records regarding an AI system; an audit, conducted either by the organization itself or by an independent auditor; the implementation of measures to address issues identified in the audit; the cessation of use of a high-impact system if the Minister has reasonable grounds to believe that the use of the system gives rise to a serious risk of imminent harm; and the publication of information about contraventions (this would not include confidential business information).
- The Minister would be able to designate an Artificial Intelligence and Data Commissioner, whose role would be to assist the Minister in the administration and enforcement of AIDA.
- AIDA would create significant enforcement provisions. In addition to administrative monetary penalties (to be set out in the regulations, where the goal is to promote compliance rather than to punish), there would be fines for contraventions of up to the greater of three percent of gross global revenue and $10 million. For example, a business with $1 billion in gross global revenue would face a maximum fine of $30 million, since three percent of its revenue exceeds the $10 million floor. For more serious offences, there could be penalties of up to the greater of five percent of gross global revenue and $25 million, or imprisonment. These more serious offences would involve: possessing or using personal information obtained through criminal or other unlawful means for the purposes of creating, using or making available an AI system; using an AI system knowing (or being reckless as to whether) the system is likely to cause serious physical or psychological harm or substantial damage to property, if that harm or damage occurs; and using an AI system with intent to defraud the public and cause economic loss, if that loss occurs. It is important to keep in mind that, in the case of an individual, there would also be the possibility of a fine in the discretion of the court, a term of imprisonment of up to five years less a day, or both.
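On the anonymization measures mentioned in the list above, AIDA does not prescribe any particular technique, so the sketch below is purely illustrative rather than compliance advice. It assumes a simple record layout of my own invention, and shows two common building blocks: replacing a direct identifier with a keyed hash, and coarsening a quasi-identifier into a band:

```python
# A minimal, hypothetical sketch of anonymization measures; AIDA itself does
# not prescribe techniques, and the field names here are invented.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-this-key-securely"  # hypothetical secret

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash so records can still
    be linked internally without exposing the underlying identity."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def generalize_age(age: int) -> str:
    """Coarsen a quasi-identifier (exact age) into a ten-year band."""
    lower = (age // 10) * 10
    return f"{lower}-{lower + 9}"

record = {"email": "jane@example.com", "age": 34, "purchase_total": 112.50}
anonymized = {
    "user_key": pseudonymize(record["email"]),
    "age_band": generalize_age(record["age"]),  # "30-39"
    "purchase_total": record["purchase_total"],
}
print(anonymized)
```

Whether measures like these would satisfy the eventual regulations is exactly the kind of question the current text leaves open.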
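Likewise, on the requirement to identify, assess and mitigate the risk of "biased output", the Act does not say how such output should be measured. One hypothetical starting point, sketched below with invented data and an assumed 80% rule of thumb borrowed from employment-discrimination practice, is to compare how often a system's favourable decisions go to groups defined by a prohibited ground:

```python
# A hypothetical monitoring check for "biased output": compare favourable-
# decision rates across groups defined by a prohibited ground (sex is used
# here purely for illustration; the data and threshold are invented).
from collections import defaultdict

# Invented log of (group, decision) pairs, where decision 1 is favourable.
decisions = [("men", 1), ("men", 1), ("men", 0), ("men", 1),
             ("women", 1), ("women", 0), ("women", 0), ("women", 0)]

totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

rates = {g: favourable[g] / totals[g] for g in totals}
print(rates)  # {'men': 0.75, 'women': 0.25}

# Assumed heuristic: flag for review if a group's favourable rate falls
# below 80% of the best-off group's rate, since that disparity may signal
# output that adversely differentiates on a prohibited ground.
baseline = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * baseline}
print("needs review:", flagged)  # {'women': 0.25}
```

A check like this is only a screen, of course; the Act's "without justification" language suggests context would matter, and the real assessment obligations will turn on the regulations.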
What can we take from the proposed AIDA? This is the first attempt at the creation of an AI law in Canada, and it constitutes a bold first step. Organizations will need to pay attention to the progress of Bill C-27, as they will need to comply with its provisions if they design, develop, operate, license or sell AI systems in the course of international or interprovincial commerce.
That said, parts of AIDA remain unclear, and this lack of clarity could create challenges for businesses aiming to build compliance programs and avoid the serious fines and penalties. For instance, it may be difficult to know exactly how to determine whether a system is "high-impact", or whether that system might result in "material harm". There is similarly little detail on the administrative monetary penalties. Some organizations may also be concerned about the amount of detail that would need to be disclosed to the public for high-impact systems; they may take the view that it would be onerous to distill and publish in plain language a description of all of the required features of a particular high-impact system. Disappointingly, a great deal of this information will not be known until the regulations are released. Indeed, many may assert that AIDA lacks the specificity necessary to guide compliance, especially when compared to the European Union's proposed AI Act.
Even though we are still in the early stages, it is never too early to take a closer look at the bill and start to prepare.