February was a noteworthy month for privacy in Canada: the Office of the Privacy Commissioner of Canada made three announcements that deserve attention. The first announcement involves the decision to file a Notice of Application in the Federal Court seeking a declaration that Facebook contravened Canada’s federal private sector privacy law.
The second announcement involves the Commissioners for Canada, Québec, British Columbia, and Alberta launching a joint investigation into Clearview AI regarding the use of facial recognition technology.
The third announcement concerns the Office of the Privacy Commissioner of Canada commencing an investigation about the Royal Canadian Mounted Police (RCMP) and its use of Clearview AI’s facial recognition technology.
I will briefly explain the announcements one by one.
1. Facebook application
It is important to first understand the background. On March 19, 2018, the Office of the Privacy Commissioner of Canada received a complaint regarding Facebook’s compliance with the Personal Information Protection and Electronic Documents Act (PIPEDA).
There was concern that Cambridge Analytica could access millions of Facebook users’ private data without their consent for use in psychographic modelling for political purposes. Accordingly, there was an investigation with respect to allegations that Facebook allowed Cambridge Analytica, among others, to inappropriately access information from facebook.com users without their knowledge or consent and that Facebook had insufficient safeguards in place to prevent such access, as well as the subsequent inappropriate use of personal information of facebook.com users. The access was obtained through a Facebook third-party app, known as “thisisyourdigitallife” (TYDL).
On April 25, 2019, the Office of the Privacy Commissioner released the Report of Findings (Report) regarding the Joint investigation of Facebook by the Privacy Commissioner of Canada and the Information and Privacy Commissioner for British Columbia. The examination concentrated on three things: the consent of users (those who installed an app and their friends) whose information was disclosed by Facebook to apps, and in particular to the TYDL App; safeguards against unauthorized access, use and disclosure by apps; and accountability for the information under Facebook’s control. These areas were scrutinized under both PIPEDA and British Columbia’s Personal Information Protection Act (BC PIPA).
Briefly, the main findings included:
- Facebook failed to obtain valid and meaningful consent of installing users: Facebook could not show that TYDL in fact obtained meaningful consent for its purposes (including, potentially, political purposes), or that Facebook made reasonable efforts to ensure that TYDL, and apps in general, were obtaining meaningful consent from users
- Facebook failed to obtain meaningful consent from friends of installing users: Facebook used overbroad and conflicting language in its privacy communications that clearly did not constitute meaningful consent. This language was presented to users, generally on registration, in relation to disclosures that could occur years later, to unknown apps for unknown purposes. There was also an unreasonable reliance on installing users to provide consent on behalf of each of their friends to release those friends’ information to an app, even though they would have had no knowledge of that disclosure
- Facebook had inadequate safeguards to protect user information: Facebook relied on contractual terms with apps to protect against unauthorized access to users’ information, but then put in place superficial, largely reactive, and thus ineffective, monitoring to ensure compliance with those terms. Also, Facebook could not provide evidence of enforcement actions taken in relation to privacy related contraventions of those contractual requirements
- Facebook failed to be accountable for the user information under its control: Facebook did not take responsibility for giving real and meaningful effect to the privacy protection of its users. It abdicated its responsibility for the personal information under its control, effectively shifting that responsibility almost exclusively to users and apps. Facebook relied on overbroad consent language, and consent mechanisms that were not supported by meaningful implementation. Its safeguards were superficial and did not adequately protect users’ personal information. There was no privacy protection framework
It was concerning that Facebook had been investigated previously in 2009 regarding similar issues, but did not implement any of the recommendations; had Facebook done so, some of the issues identified in this investigation could have been avoided or significantly mitigated.
However, Facebook disagreed with the findings and either outright rejected, or refused to implement, the recommendations. This followed its earlier rejection of recommendations in a preliminary report intended to bring Facebook into compliance with PIPEDA and BC PIPA, remediate the effects of Facebook’s past non-compliance, ensure effective implementation of its commitments, and ensure Facebook’s future compliance with Canadian privacy law. More specifically, the primary recommendation set out in the Report involved Facebook taking certain steps to ensure it obtains meaningful consent from installing users and their friends. That is, consent had to: clearly inform users about the nature, purposes and consequences of the disclosures; occur in a timely manner, before or at the time when their personal information is disclosed; and be express where the personal information to be disclosed is sensitive. There were also expectations that Facebook implement additional measures to ensure that it was obtaining meaningful consent for its disclosure of user information to each third-party app. One example involved complying with consent requirements and, at minimum, the “must dos” outlined in the Offices’ Guidelines for Obtaining Meaningful Consent.
Moreover, it was recommended that Facebook implement an easily accessible mechanism so users could see, at any time, which apps have access to which elements of their personal information (including by virtue of an app having been installed by one of the user’s friends), along with the nature, purposes and consequences of that access, so they can change their preferences. It was also recommended that Facebook’s retroactive review, and the resulting notifications, cover all apps, and that those notifications include enough detail for users to understand the nature, purpose and consequences of disclosures that may have been made to apps installed by a friend, so they can access the controls to switch off any ongoing disclosure to any or all apps. Finally, it was recommended that Facebook be subject to oversight by a third-party monitor, appointed by and serving to the benefit of the Commissioners, at Facebook’s expense, to monitor and regularly report on Facebook’s compliance with the recommendations for a period of five years.
Facebook indicated that it was willing to agree to third-party monitoring, subject to certain proposed material conditions and restrictions; however, Facebook was not willing to implement any of the substantive recommendations, so the monitoring would not have been helpful. The Report stated:
In our view, therefore, the risk is high that Canadians’ personal information will be disclosed to apps and used in ways the user may not know of or expect.
Consequently, the complaint against Facebook on each of the aspects of accountability, consent, and safeguards, was well-founded, and remained unresolved.
In response, there was an announcement that the Privacy Commissioner of Canada filed a Notice of Application in the Federal Court seeking a declaration that Facebook contravened PIPEDA.
This was done because the Federal Court has the authority to impose binding orders requiring an organization to correct or change its practices and comply with the law.
The Notice of Application, filed on February 6, 2020, sought the following:
- A declaration that Facebook contravened various provisions of PIPEDA
- An order requiring Facebook to correct its practices by implementing effective, specific and easily accessible measures to obtain, and ensure it maintains, meaningful consent from all users
- An order requiring Facebook to specify the technical revisions, modifications and amendments to be made to its practices to achieve compliance with PIPEDA
- An order that the parties return before the Court for the purposes of seeking a fully-particularized formal order reflecting the specific revisions, modifications and amendments to be made in order to achieve compliance with PIPEDA
- An order that the Court retain jurisdiction for the purposes of ongoing monitoring and enforcement of orders
- An order prohibiting Facebook from further collecting, using and disclosing any personal information of users in any manner that contravenes PIPEDA, and
- An order requiring Facebook to publish a public notice of any action taken or proposed to be taken to correct its practices that contravene PIPEDA.
This matter is still ongoing.
2. Joint investigation regarding Clearview AI
The Office of the Privacy Commissioner of Canada, La Commission d’accès à l’information du Québec, The Office of the Information and Privacy Commissioner for BC, and The Office of the Information and Privacy Commissioner of Alberta are jointly investigating Clearview AI and its use of facial recognition technology.
By way of background, the issue came to the Commissioners’ attention through several media reports questioning whether Clearview AI is collecting and using personal information without consent. Specifically, the technology appears to be used to collect images and make facial recognition available to law enforcement for the purpose of identifying individuals. There is also some question about whether the company is using its technology to provide services to financial institutions.
What is this technology? Facial recognition is a technology that identifies human faces in images or videos. Essentially, facial recognition software maps facial features, compares that information against a large database of recorded faces, and finds a match. More specifically, there is a process of detection, alignment, measurement, representation, matching, and identification. What is key is the distinguishable facial features or landmarks of the face, called nodal points; there are about 80 that can be used when analyzing face geometry during the process. Some examples include the distance between the eyes, the width of the nose, or the depth of the eye sockets. These features help to create the faceprint that is compared against others stored in the database to find a match.
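The matching step described above can be sketched in code. In this minimal, hypothetical example, each faceprint is reduced to a short vector of measurements, and a probe faceprint is compared against a database using cosine similarity; a match is reported only if it clears a similarity threshold. The names, vectors, and threshold are all illustrative assumptions, not anything Clearview AI actually does; real systems use embeddings with hundreds of dimensions produced by a neural network.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two faceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_faceprint(probe, database, threshold=0.9):
    """Return the best-matching identity, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical faceprints: tiny stand-ins for real measurement vectors
database = {
    "alice": [0.61, 0.35, 0.48, 0.52],
    "bob":   [0.12, 0.80, 0.33, 0.47],
}
probe = [0.60, 0.36, 0.47, 0.53]  # measurements taken from a new image
print(match_faceprint(probe, database))  # → alice
```

The threshold is the crux of the accuracy concerns discussed below: set it too low and the system produces false matches; set it too high and it misses true ones, and error rates can differ across demographic groups.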
However, there are concerns about the accuracy of the technology, as well as potential bias in facial recognition algorithms. The matching model is created by training an algorithm on a large amount of data, typically with a deep neural network. Issues of accuracy and bias raise serious concerns about the potential misuse of data.
Apparently, Clearview AI has claimed to have over three billion images scraped from websites and social media platforms such as Facebook. By scraping, I mean an automated process, or bot, going through images on webpages and storing them.
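To make the scraping idea concrete, here is a minimal sketch of the harvesting step using only Python's standard library. It parses a page's HTML and collects every image URL it finds; a real bot would first fetch the page over HTTP and then download each image. The page content and URLs are invented for illustration; nothing here describes Clearview AI's actual implementation.

```python
from html.parser import HTMLParser

class ImageScraper(HTMLParser):
    """Collects the src attribute of every <img> tag, the way a
    scraping bot would harvest image URLs from a fetched page."""

    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

# Hypothetical page content; a real bot would fetch this over HTTP
page = """
<html><body>
  <img src="https://example.com/photos/face1.jpg" alt="profile">
  <p>Some text</p>
  <img src="https://example.com/photos/face2.jpg">
</body></html>
"""
scraper = ImageScraper()
scraper.feed(page)
print(scraper.image_urls)
```

The simplicity of this sketch is part of the privacy problem: collecting publicly visible images at scale requires no special access, which is precisely why the Commissioners are asking whether such collection complies with consent requirements.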
During the investigation, there will be a determination about whether Clearview AI’s practices comply with PIPEDA, BC PIPA, Alberta’s Personal Information Protection Act (AB PIPA), as well as Québec’s Act Respecting the Protection of Personal Information in the Private Sector and the Act to Establish a Legal Framework for Information Technology.
The Privacy Commissioner of Canada’s announcement stated:
Privacy regulators in every province and territory have also agreed to work together to develop guidance for organizations – including law enforcement – on the use of biometric technology, including facial recognition.
This is an active investigation, so no further details are available at this time.
3. Investigation into the RCMP’s use of Clearview AI’s facial recognition technology
Along the same lines, the Office of the Privacy Commissioner of Canada began an investigation into the RCMP’s use of Clearview AI’s facial recognition technology.
The relevant legislation in this investigation is the Privacy Act, because that legislation applies to the public sector; indeed, the RCMP is listed in the schedule to the Privacy Act.
The Office of the Privacy Commissioner of Canada, along with privacy regulators in every province and territory, plan on working together to develop guidance for organizations, including law enforcement, on the use of biometric technology, including facial recognition.
The investigation is currently taking place, so there are no further details available at this time.
What does this mean?
Currently, these matters are ongoing. We will keep you posted once there is news.
In the meantime, it is important to consider what this all means; as technology becomes more sophisticated, there are privacy concerns that need to be addressed. Whether it is websites, platforms, third party apps, or facial recognition technology, it is critical that privacy laws work to provide the necessary protections and enable effective and meaningful enforcement.