News

  • 1 May 2019 12:36 PM

    Conclusions go beyond safeguards implicated in data breach and lead to significant re-thinking of transfers of personal information

    On April 9, 2019, the Office of the Privacy Commissioner of Canada (“OPC”) released its report of findings related to the Equifax data breach. On September 7, 2017, Equifax Inc. publicly announced that an attacker had accessed the personal information of more than 143 million individuals, and it later reported that the breach affected around 19,000 Canadians. The OPC commenced an investigation and concluded that the breach affected some Canadians whose information was collected by US-based Equifax Inc. (also referred to as “Equifax US”) and some Canadians who had purchased or received products, such as fraud alerts, from Canada-based Equifax Canada Co. (“Equifax Canada”). The nature of the information and how it was acquired by either Equifax entity is described by the OPC in its report of findings:

    The affected personal information was collected by Equifax Inc. from certain Canadian consumers who had direct-to-consumer products or fraud alerts. The direct-to-consumer products included paid online access by individuals to their Canadian credit report, credit monitoring, and alert services (in relation to their Canada credit files). The information was collected by Equifax Inc. as it plays an integral role in delivering these direct-to-consumer products and processing certain fraud alert transactions.

    Attackers gained access to Equifax Inc.’s systems on May 13, 2017 by exploiting a known vulnerability in the software platform supporting an online dispute resolution portal that is part of Equifax Inc.’s Automated Consumer Information System (“ACIS”). They then operated undetected within Equifax Inc.’s systems for a period of time and ultimately gained access to Canadian personal information unrelated to the functions of the compromised portal.

    Information in Canadians’ credit files is stored by Equifax Canada on servers located in Canada and segregated from Equifax Inc.’s systems. However, during the process of delivering direct-to-consumer products to Canadians, information from credit files needed to fulfil these products is transferred to Equifax Inc. in the US. For instance, a static copy of a full credit file is transferred by Equifax Canada to Equifax Inc. if a credit report is purchased by a consumer. While Equifax Canada’s servers are segregated from Equifax Inc.’s systems, Equifax Canada’s security policies, direction and oversight were, and are, largely managed by Equifax Inc.

    The OPC concluded that both Equifax Canada and the US parent fell short of their privacy obligations to Canadians, focusing on five different areas of compliance:

    (1) Safeguards of Equifax US and Equifax Canada: Directly stemming from the data breach, the OPC found that neither Equifax US nor Equifax Canada had implemented adequate safeguards as required under PIPEDA. Overall, the OPC concluded that vulnerability management, network segregation, implementation of basic information security practices, and oversight were deficient at Equifax US. Equifax Canada was found to lack adequate safeguards in terms of oversight, vulnerability management and the implementation of basic information security practices.

    (2) Conformity with Retention / Destruction Requirements: The OPC investigated whether personal information was being retained longer than was reasonably necessary. It concluded that there was no process in place to delete Canadian personal information in compliance with the Equifax US data retention policy; the policy was not being followed, monitored or complied with.

    (3) Accountability of Equifax Canada for protecting personal information: The OPC found that in the aftermath of the breach, there were a number of significant failures in communications with the public and with affected Canadian consumers. The scope of Canadian data involved was unclear and was communicated in a confusing manner. Some of the information provided by the companies to the OPC was contradicted by information provided by consumers. The companies did not have a sufficient handle on what information they had, where it was from and who was responsible for it.

    (4) Adequate consent by Canadians for collection and disclosure of information: This may be the most interesting and consequential finding from the Equifax case. Though the OPC has historically seen transfers of personal information from one entity to another for processing as not requiring consent, the OPC has changed its position:

    109. Providing adequate information about available choices when an individual is consenting to the collection, use or disclosure of their information is a key component of valid consent. In this case, it appears reasonable to require consent to the collection of information by, and disclosure of information to, Equifax Inc. as a condition of the online Canadian direct-to-consumer products, as Equifax Canada does not offer these products in-house. However, an individual would still have choices. In addition to the simple option of “not signing-up” for Equifax Canada credit file monitoring or other products, individuals interested in obtaining access to their Equifax Canada credit report could choose to use Equifax Canada’s free credit report service, provided by postal mail and avoiding any information disclosure to Equifax Inc. Equifax Canada does not currently communicate the difference in disclosures to consumers in the course of delivering online or postal access, i.e., that the former involves collection of information by Equifax Inc. and transfers of information to Equifax Inc. in the US, whereas the latter does not.

    110. In summary, Equifax Canada was not adequately clear about: (i) the collection of sensitive personal information by Equifax Inc., in the US, (ii) its subsequent disclosures of sensitive personal information to Equifax Inc., and (iii) the options available to individuals who do not wish to have their information disclosed in this way. Consequently, with respect to Equifax Canada’s practices to obtain consent for collection of personal information by Equifax Inc., and disclosure of personal information to Equifax Inc., the matter is well-founded.

    111. However, as noted in para. 101 above, we acknowledge that in previous guidance our Office has characterized transfers for processing as a ‘use’ of personal information rather than a disclosure of personal information. Our guidance has also previously indicated that such transfers did not, in and of themselves, require consent. In this context, we determined that Equifax Canada was acting in good faith in not seeking express consent for these disclosures.

    (5) Adequate Mitigation Measures: In the aftermath of the breach, the OPC concluded that offering a brief period of credit monitoring was inadequate relative to the scope of service Equifax could provide to Canadians in the circumstances, especially where better products (e.g. lifetime credit freezes) were offered to Americans affected by the same breach.

    In the end, the OPC made a number of recommendations, most of which are binding on Equifax as a result of a compliance agreement entered into between Equifax Canada and the OPC:

    161. The following recommendations relate to contraventions found in Sections 1, 2, and 3 of this report, i.e. Safeguards and Retention by Equifax Inc. and Accountability of Equifax Canada. We recommended that Equifax Canada:

    a. Implement a procedure to keep the written arrangement with Equifax Inc., covering all Canadian personal information under Equifax Canada’s control collected by Equifax Inc. and disclosed to Equifax Inc., up-to-date.
    b. Institute a robust monitoring program by Equifax Canada against the requirements in the arrangement, and a structured framework for addressing any issues arising under it.
    c. Identify Canadians’ personal information that should no longer be retained by Equifax Inc. according to its retention schedule and delete it.
    d. Every two years, for a six-year term, provide to our Office:
      i. a report from Equifax Canada detailing its monitoring for compliance with the arrangement described in b. above;
      ii. an audit report and certification, covering all Canadians’ personal information processed by Equifax Inc., against an acceptable security standard, conducted by an appropriate external auditor; and
      iii. a third party assessment, covering all Canadians’ personal information processed by Equifax Inc., of Equifax Inc.’s retention practices.

    162. The following recommendations relate to contraventions found in Section 5 of this report, i.e. Safeguards of Equifax Canada. We recommended that Equifax Canada:

    a. Provide our office with a detailed security audit report and certification, covering all Canadian personal information it is responsible for, against an acceptable security standard, conducted by an appropriate external auditor every two years for a six-year term.

    The re-thinking of consent and outsourcing in this finding has led to the OPC’s consultation on transborder dataflows, which seeks input on this radical change of position and is discussed in the April 17 edition of the CAN-TECH newsletter.

  • 1 May 2019 12:36 PM

    Supreme Court offers mixed reasons for finding no section 8 violations

    The Supreme Court of Canada discussed the nature of electronic communications and the ability of the State to make use of those conversations in its decision in R v Mills, though the case provides less in the way of real guidance than it could have.

    Mills became the subject of an undercover operation conducted by police to catch child lurers on the internet. An officer posed as a 14-year-old girl, ‘Leann’. Mills used Facebook and Hotmail to send sexually explicit messages to ‘Leann’, ultimately arranging to meet her in a park, where the police arrested him. During the course of the undercover operation, the officer posing as ‘Leann’ recorded the conversation by taking screenshots using screen capture software called “Snagit”. The officer did not have prior judicial authorization to make and keep these screenshots.

    At trial, Mills applied to exclude the screenshots from evidence. The trial judge concluded the screenshots were “private communications” under section 183 of the Code and therefore that prior judicial authorization had been required under section 184.2 of the Code from the point that Mills became the subject of investigation. The trial judge held the screenshots constituted a “seizure of communications” in breach of Mills’ reasonable expectation of privacy in his communications under section 8 of the Charter. However, the trial judge ultimately held that admission would not bring the administration of justice into disrepute, and so Mills was convicted.

    On appeal, the conviction was upheld. However, the Court of Appeal found that the trial judge had erred in concluding that section 184.2 authorizations were required. Instead, it found that Mills did not have a reasonable expectation of privacy in the communications and so section 8 was not engaged.

    The Supreme Court of Canada dismissed the appeal, but in four separate decisions offering three different sets of reasons.

    Justice Brown (with Justices Abella and Gascon concurring) found that there was no unreasonable search on the basis that there was no search at all, since the accused did not have a reasonable expectation of privacy in his conversation with the undercover officer. This set of reasons pays the least attention, at least explicitly, to the technological aspect of the communication. They concluded that any section 8 claim requires that the accused have a subjectively held and objectively reasonable expectation of privacy in the subject matter of the search, and in the view of this cohort Mills’ subjective expectation of privacy was not objectively reasonable. Specifically, these three judges held that “adults cannot reasonably expect privacy online with children they do not know” (para 23).

    Generally, whether an expectation of privacy is objectively reasonable has turned on consideration of a number of factual questions, such as whether the person has the ability to regulate access or whether the accused has abandoned the property. There has always been a certain level of discontinuity between the types of factors listed and the question that the objective portion of the analysis is meant to answer, which is whether, on a normative analysis, the privacy interest concerned is one that a person should be entitled to expect in our society. In essence, Justice Brown’s analysis simply goes directly to that normative issue, and concludes that adults cannot reasonably expect privacy online with children they do not know. Society values privacy in the context of many adult-child relationships “including, but in no way limited to, those with family, friends, professionals, or religious advisors” (para 24), but this relationship was not one of those contexts.

    The challenge for this conclusion, as Justice Brown’s cohort recognizes, is that on its face it runs contrary to the long-accepted principle that privacy must be assessed in “broad and neutral terms” that do not lead to post facto reasoning: for example, in R v Wong, [1990] 3 SCR 36, the privacy question was whether the accused had a privacy interest when they rented a hotel room, not whether they could have a privacy interest in an illegal gaming operation being conducted in a hotel room. Justice Brown therefore stresses that no such post facto reasoning was engaged on the particular facts of this case. The officer who created ‘Leann’ knew that any adult who communicated with ‘her’ would be communicating with a child unknown to them, and so no other sort of communication could result. They argued that on these facts, sanctioning this form of unauthorized surveillance does not impinge on citizens’ privacy in a way that is inconsistent with a free and open society in which expectations of privacy are normative. Because the police were aware that ‘Leann’ was fictitious and there was no risk to any genuine adult-child relationship, they could be absolutely certain that no section 8 breach could occur from taking the screenshots, because there was no reasonable expectation of privacy. Where there was no potential for a privacy breach, there was no need for prior judicial authorization, and as such section 184.2 of the Code did not apply.

    Justice Karakatsanis (with Chief Justice Wagner concurring) agreed in the result, and also found that there was no reasonable expectation of privacy in these communications, but for entirely different reasons. She held that:

    [39] The right to be secure against unreasonable searches and seizures must keep pace with technological developments to ensure that citizens remain protected against unauthorized intrusions upon their privacy by the state: R. v. Fearon, 2014 SCC 77, [2014] 3 S.C.R. 621, at para. 102; see also R. v. Wong, [1990] 3 S.C.R. 36, at p. 44. However, as technology evolves, the ways in which crimes are committed — and investigated — also evolve.

    Applying those principles to these circumstances, she concluded that no interaction had taken place here which should be considered an interception by the State. Her reasoning rests on an analogy with R v Duarte, [1990] 1 SCR 30. That case dealt with an undercover officer who had a conversation with the accused, and who surreptitiously recorded it: the case found that prior judicial authorization was required even for “consent interceptions” such as that. The argument Justice Karakatsanis makes, however, is that there has never been any suggestion that prior judicial authorization would be required for the undercover officer to have the conversation with the accused: “it is not reasonable to expect that your messages will be kept private from the intended recipient (even if the intended recipient is an undercover officer)” (para 36). On that basis, the accused here had no reasonable expectation of privacy in his conversation with ‘Leann’.

    There is a need for judicial pre-authorization when the state chooses to surreptitiously make a permanent electronic record of such a communication; however, Justice Karakatsanis held, that is not what occurred here. All that had occurred was the conversation itself, which happened to take place by electronic means: the State was not creating the record:

    [48]…Mr. Mills chose to use a written medium to communicate with Constable Hobbs. Email and Facebook messenger users are not only aware that a permanent written record of their communication exists, they actually create the record themselves. The analogy with Duarte is to the oral conversation, not the surreptitious recording of that conversation. 

    There was the further issue that in this case the police had used the program “Snagit” to take screenshots of the electronic messages, but Justice Karakatsanis held that this did not change things. Inherently, she held, the communications existed as a written record, and “I cannot see any relevant difference in the state preserving the conversations by using ‘Snagit’ to take screenshots of them, by using a computer to print them, or by tendering into evidence a phone or laptop with the conversations open and visible” (para 56). She did note, however, that:

    [57] My conclusion that s. 8 is not engaged in this case does not mean that undercover online police operations will never intrude on a reasonable expectation of privacy. As technology and the ways we communicate change, courts play an important role in ensuring that undercover police techniques do not unacceptably intrude on the privacy of Canadians. Particularly in the context of the digital world, it is important for courts to consider both the nature and the scale of an investigative technique in determining whether s. 8 is engaged. With respect to the concern about the prospect of broader surveillance made possible by technological advances, as Binnie J. observed in Tessling, “[w]hatever evolution occurs in future will have to be dealt with by the courts step by step. Concerns should be addressed as they truly arise”: para. 55.

    She also added, as a note of caution:

    [60]…The fact that conversations with undercover officers now occur in written form on the Internet does not, in itself, violate s. 8 of the Charter. However, this conclusion in no way gives the police a broad license to engage in general online surveillance of private conversations.

    Justice Moldaver, writing only for himself, agreed with the reasons of both Brown and Karakatsanis JJ. One could see that as making Justice Brown’s decision the majority one, in that four of seven judges ultimately accept his reasoning.

    Justice Martin, also writing only for herself, disagreed with the reasoning of all the other members of the Court: she found the accused to have a reasonable expectation of privacy, found that the use of the Snagit software was an interception, and found there to be a section 8 violation. However, as she concluded that the evidence should not be excluded, she agreed in the result. Describing the case as “Duarte for the digital age”, she suggested that:

    [88] In this case, we have the opportunity to pull the normative principles of Duarte and Wong through this Court’s more recent Charter s. 8 and Code Part VI jurisprudence — in particular, Patrick; R. v. TELUS Communications Co., 2013 SCC 16, [2013] 2 S.C.R. 3; R. v. Cole, 2012 SCC 53, [2012] 3 S.C.R. 34; Spencer; R. v. Marakah, 2017 SCC 59, [2017] 2 S.C.R. 608; R. v. Jones, 2017 SCC 60, [2017] 2 S.C.R. 696; Reeves. The goal is to arrive at a judicial position that, while firmly grounded in the case law, “keep[s] pace with technological development, and, accordingly, . . . ensure[s] that we are ever protected against unauthorized intrusions upon our privacy by the agents of the state, whatever technical form the means of invasion may take”: Wong, at p. 44.

    [89] The risk contemplated in Duarte was that the state could acquire a compelled record of citizens’ private thoughts with no judicial supervision. At the end of the Cold War era, the way to obtain a real-time record of a conversation was to record it. Today, the way to obtain a real-time record of a conversation is simply to engage in that conversation. This Court must assess how and whether the primary concern of documentation in Duarte still applies to cases in which (a) a communication method self-generates documentation of the communication, and (b) the originator of the communication knows that this occurs. Should this shift in communication technology now allow the state to access people’s private online conversations at its sole discretion and thereby threaten our most cherished privacy principles?

    Justice Martin rejected Justice Karakatsanis’ view that the Facebook exchange was equivalent to only the conversation itself in Duarte, holding that it was equivalent to both the conversation and the electronic recording of it, and argued that “[t]his duality should support, not undermine the protection of privacy rights” (para 93). Similarly she held that the issue of whether the State surreptitiously created the record or whether it was created as a by-product of the communication was irrelevant to the underlying policy concerns:

    [100] The consequences of knowing that, at any point and with reference to any of our statements, we will have to contend with a documented record of those statements in the possession of the state, would be no less than the total “annihilat[ion]” (Duarte, at p. 44) of our sense of privacy.

    Justice Martin also disagreed with Justice Brown’s approach of finding no reasonable expectation of privacy in a conversation between an adult and a child not known to them, arguing that “The Court should not create Charter-free zones in certain people’s private, electronic communications on the basis that they might be criminals whose relationships are not socially valuable” (para 111), concluding that that approach was inconsistent with the principle of content-neutrality.

  • 1 May 2019 12:32 PM

    Privacy Commissioner plans to take Facebook to Federal Court

    On April 29, 2019, the Office of the Information and Privacy Commissioner of British Columbia and the Office of the Privacy Commissioner of Canada (“OPC”) released the result of their joint investigation into Facebook, Inc. in connection with Cambridge Analytica. In PIPEDA Report of Findings #2019-002, both Commissioners concluded that Facebook had violated the federal and British Columbia privacy statutes.

    The investigation stemmed from revelations that personal information of users of a third party app on the Facebook platform was later used by third parties for targeted political messaging. The investigation focussed on: (i) consent of users, both those who installed an app and their friends, whose information was disclosed by Facebook to the apps, and in particular to the “thisisyourdigitallife” or TYDL App; (ii) safeguards against unauthorized access, use and disclosure by apps; and (iii) accountability for the information under Facebook’s control.

    The OPC reported that it was disappointed with Facebook’s “lack of engagement” with the investigation, with many of the OPC’s questions going unanswered or the answers provided being deficient. The OPC summarized its findings as follows:

    1. Facebook failed to obtain valid and meaningful consent of installing users. Facebook relied on apps to obtain consent from users for its disclosures to those apps, but Facebook was unable to demonstrate that: (a) the TYDL App actually obtained meaningful consent for its purposes, including potentially, political purposes; or (b) Facebook made reasonable efforts, in particular by reviewing privacy communications, to ensure that the TYDL App, and apps in general, were obtaining meaningful consent from users.
    2. Facebook also failed to obtain meaningful consent from friends of installing users. Facebook relied on overbroad and conflicting language in its privacy communications that was clearly insufficient to support meaningful consent. That language was presented to users, generally on registration, in relation to disclosures that could occur years later, to unknown apps for unknown purposes. Facebook further relied, unreasonably, on installing users to provide consent on behalf of each of their friends, often counting in the hundreds, to release those friends’ information to an app, even though the friends would have had no knowledge of that disclosure.
    3. Facebook had inadequate safeguards to protect user information. Facebook relied on contractual terms with apps to protect against unauthorized access to users’ information, but then put in place superficial, largely reactive, and thus ineffective, monitoring to ensure compliance with those terms. Furthermore, Facebook was unable to provide evidence of enforcement actions taken in relation to privacy related contraventions of those contractual requirements.
    4. Facebook failed to be accountable for the user information under its control. Facebook did not take responsibility for giving real and meaningful effect to the privacy protection of its users. It abdicated its responsibility for the personal information under its control, effectively shifting that responsibility almost exclusively to users and Apps. Facebook relied on overbroad consent language, and consent mechanisms that were not supported by meaningful implementation. Its purported safeguards with respect to privacy, and implementation of such safeguards, were superficial and did not adequately protect users’ personal information. The sum of these measures resulted in a privacy protection framework that was empty.

    The OPC characterized these findings as particularly concerning, as its previous investigation of Facebook in 2009 found similar issues, leading the OPC to the conclusion that Facebook had not taken the recommendations from that investigation seriously. In this investigation, the OPC made the following recommendations:

    Facebook should implement measures, including adequate monitoring, to ensure that it obtains meaningful and valid consent from installing users and their friends. That consent must: (i) clearly inform users about the nature, purposes and consequences of the disclosures; (ii) occur in a timely manner, before or at the time when their personal information is disclosed; and (iii) be express where the personal information to be disclosed is sensitive. ...

    Facebook should implement an easily accessible mechanism whereby users can: (i) determine, at any time, clearly what apps have access to what elements of their personal information [including by virtue of the app having been installed by one of the user’s friends]; (ii) the nature, purposes and consequences of that access; and (iii) change their preferences to disallow all or part of that access.

    Facebook’s retroactive review and resulting notifications should cover all apps. Further, the resulting notifications should include adequate detail for [each user] to understand the nature, purpose and consequences of disclosures that may have been made to apps installed by a friend. Users should also be able to, from this notification, access the controls to switch off any ongoing disclosure to individual apps, or all apps.

    Facebook disagreed with many of the conclusions and recommendations, and the OPC has indicated that it plans to seek an order from the Federal Court to implement the recommendations.

    The report of findings also includes an interesting discussion about jurisdiction. Facebook asserted that the OPC did not have jurisdiction because there was no evidence that any Canadian user personal information had been disclosed to the operator of the TYDL app. Facebook also asserted that the OIPC of British Columbia did not (and could not) have jurisdiction by operation of Section 3 of the Personal Information Protection Act of British Columbia, which provides that the Act does not apply where PIPEDA applies. The OIPC and OPC pointed to the Organizations in British Columbia Exemption Order, and also asserted that their jurisdiction over the complaint did not depend on information having been provably disclosed to the TYDL app:

    44. While the complaint may have been raised within the context of concerns about access to Facebook users’ personal information by Cambridge Analytica, as noted above, the complaint specifically requested a broad examination of Facebook’s compliance with PIPEDA to ensure Canadian Facebook users’ personal information has not been compromised and is being adequately protected. Moreover, we advised Facebook that the investigation would be examining allegations that Facebook allowed Cambridge Analytica, among others, to inappropriately access users’ personal information and did not have sufficient safeguards to prevent such access.

  • 17 Apr 2019 12:35 PM

    Driver wearing wired ear bud headphones convicted of “using” phone, despite dead battery

    In R. v. Grzelak, the accused was ticketed for “holding or using” an electronic device while driving, an offence under s. 214.2 of the BC Motor Vehicle Act. The undisputed facts were that the accused’s iPhone was in a centre cubby hole in the dashboard of his car, and that he was wearing a pair of ear bud headphones (in both ears) which were plugged into the phone. The phone’s battery was dead and no sound of any sort was coming through the ear buds. The offence provision required the Crown to prove that the accused was “holding the device in a position in which it may be used.” The judge noted that if this was proven, a conviction must follow, “even if the battery was dead, and even if the Defendant was not operating one of the functions of the device (such as the telephone or GPS function).” In support of this proposition, the judge cited R. v. Judd, which seems an odd choice, as in that case the accused was convicted because he was physically holding his phone up to his ear while driving, and there was no evidence about the phone’s battery or which function he might have been using.

    On the issue of “holding” the judge found as follows:

    [9] Obviously, here the cell phone itself was sitting in the centre cubby hole, and was not in the defendants hands, or in his lap. But that is not the end of the matter. In my view, by plugging the earbud wire into the iPhone, the defendant had enlarged the device, such that it included not only the iPhone (proper) but also attached speaker or earbuds. In the same way, I would conclude that if the defendant had attached an exterior keyboard to the device for ease of inputting data, then the keyboard would then be part of the electronic device.

    [10] Since the earbuds were part of the electronic device and since the ear buds were in the defendants ears, it necessarily follows that the defendant was holding the device (or part of the device) in a position in which it could be used, i.e. his ears.

    Even the dead battery could not absolve the accused, as the judge held that simple “holding” was sufficient to make out the offence, “even if it is temporarily inoperative.” Accordingly, the accused was convicted.

    In our view, and with respect, this reasoning seems a bit of a stretch. The accused was found to have been “holding the device”… in his ears. Surely this strains a reasonable interpretation of what the BC Legislature intended with the wording of the provision. Was it the fact of the physical connection of the earbuds to the phone, i.e. via a wire, that “enlarged” the device? This raises the question of whether there would be a different finding if the ear buds (or the judge’s hypothetical keyboard) were connected via Bluetooth, as is increasingly common. It is probably fair to say that what the Legislature intended to capture with these provisions is distracted driving, and that driving with earbuds in (had the phone not been dead, as it was here) might amount to that. But as this case demonstrates, as do so many other cases like it, it would be preferable for legislatures to use more technology-neutral language in these offence provisions.

  • 17 Apr 2019 12:34 PM

    Second search of phone data after update to forensic software held lawful under Charter

    In R. v. Nurse, the two accused had been convicted at trial of the first-degree murder of the deceased, Kumar, who was Nurse’s landlord. The BlackBerrys belonging to the two were seized incident to their arrest, and a warrant was obtained to search them. As they were locked and password-protected, the OPP investigating officers sent them to the RCMP for forensic extraction of data. The software used by the RCMP, called “Cellebrite,” was able to analyze raw data extracted from the phones; it showed that there had been some communication between the two accused, but nothing incriminatory was found. However, the data was re-analyzed a year later, by which time there had been significant software updates to Cellebrite, and the new analysis revealed extensive text messages between the two accused disclosing a plan to kill the victim.

    On appeal the accused repeated an argument they had made unsuccessfully at trial: that the re-analysis with the updated software amounted to a second “search” for the purposes of s. 8 of the Charter, and thus a second warrant should have been obtained. In rejecting this argument for a unanimous bench, Trotter J.A. remarked:

    [133] In analyzing this issue, it is important to consider the essential nature of computers and other digital devices. They challenge traditional definitions of a “building, receptacle or place” within the meaning of s. 487 of the Criminal Code. In R. v. Marakah, 2017 SCC 59, [2017] 2 S.C.R. 608, McLachlin C.J. said, at para. 27: “The factor of ‘place’ was largely developed in the context of territorial privacy interests, and digital subject matter, such as an electronic conversation, does not fit easily within the strictures set out by the jurisprudence.” See also R. v. Jones, 2011 ONCA 632, 107 O.R. (3d) 241, at paras. 45-52. Similarly, in R. v. Vu, 2013 SCC 60, [2013] 3 S.C.R. 657, Cromwell J. said, at para. 39: “…computers are not like other receptacles that may be found in a place of search. The particular nature of computers calls for a specific assessment of whether the intrusion of a computer search is justified, which in turn requires prior authorization.”

    [134] Because of these conceptual differences, arguments by analogy to traditional (i.e., non-digital) search scenarios will not always be helpful. For example, the trial judge was right to reject the ultraviolet light testing scenario advanced by trial counsel. It does not work in this context because the second ultraviolet light analysis would require re-entry into the premises resulting in a separate invasion of privacy.

    [135] The re-inspection or re-interpretation of the raw data harvested from the appellants’ devices did not involve a further invasion of privacy. It is not necessary in this case to identify precisely when the appellants’ privacy rights were defeated in favour of law enforcement. Nevertheless, their privacy rights were “implicated” when their devices were seized upon arrest. In R. v. Reeves, 2018 SCC 56, 427 D.L.R. (4th) 579, Karakatsanis J. held at para. 30: “When police seize a computer, they not only deprive individuals of control over intimate data in which they have a reasonable expectation of privacy, they also ensure that such data remains preserved and thus subject to potential future state inspection” (emphasis in original). The same would hold true for the seizure of a cellphone or BlackBerry device.

    Here, whatever privacy interest the accused had in their phone data had been defeated completely by the issuing of the first warrant. The warrant did not have any search protocols attached to it, nor was there any indication that protocols would have been constitutionally necessary. The situation was analogous to a fraud investigation in which copies of documents were taken and continually inspected by police over the course of the investigation, and in which it would be appropriate to consult new expert services to interpret them. While a re-analysis or re-inspection might in some other case amount to a new search, in this case the data was analyzed within the scope of an ongoing investigation, “the substance of which had not changed” between the two searches. The data was not altered in any way. The passage of time had no impact upon the lawfulness of the search. This ground of appeal was dismissed.

  • 17 Apr 2019 12:33 PM

    In seeking to revise its approach to crossborder dataflows, the OPC would require consent for all transfers of personal information for processing

    The Office of the Privacy Commissioner of Canada (OPC) has initiated a consultation that proposes to completely reverse its previous guidance on crossborder dataflows under the Personal Information Protection and Electronic Documents Act (PIPEDA). And because the OPC is trying to fit a square peg into a round hole, its position -- if implemented -- will have a huge impact on all outsourcing.

    In 2009, the OPC published a position that was consistent with the actual wording of the statute. It held that when one organization gives personal information to a service provider so that the service provider can process the data on behalf of the original organization, this is a transfer and not a disclosure. The distinction is important because transfers, unlike disclosures, do not require consent from the individual. Data is disclosed when it is given to another organization for use by that organization for its own purposes. In a transfer scenario, the personal information is protected by operation of the accountability principle, which means the organization that originally collected the data and has transferred it to a service provider remains responsible for the personal data and has to use contractual and other means to make sure that the service provider takes good care of the personal information at issue. Importantly, in its 2009 guidance, the OPC correctly noted that “PIPEDA does not distinguish between domestic and international transfers of data.” Consent was not required, but the OPC did recommend that notice be given to the individual:

    Organizations must be transparent about their personal information handling practices. This includes advising customers that their personal information may be sent to another jurisdiction for processing and that while the information is in another jurisdiction it may be accessed by the courts, law enforcement and national security authorities.

    The 2009 policy position reflects the consensus of most privacy practitioners since PIPEDA came into effect in 2001. The new position is a complete reversal and discards the notion of “transfers” of personal information for processing: 

    Under PIPEDA, any collection, use or disclosure of personal information requires consent, unless an exception to the consent requirement applies. In the absence of an applicable exception, the OPC’s view is that transfers for processing, including cross border transfers, require consent as they involve the disclosure of personal information from one organization to another. Naturally, other disclosures between organizations that are not in a controller/processor relationship, including cross border disclosures, also require consent. [emphasis added]

    The new position concludes that because there is nothing in PIPEDA that specifically exempts transfers from consent, transfers can be folded into the mandatory consent scheme:

    While it is true that Canada does not have an adequacy regime [as in Europe] and that PIPEDA in part regulates cross border data processing through the accountability principle, nothing in PIPEDA exempts data transfers, inside or outside Canada, from consent requirements. Therefore, as a matter of law, consent is required. Our view, then, is that cross-border data flows are not only matters decided by states (trade agreements and laws) and organizations (commercial agreements); individuals ought to and do, under PIPEDA, have a say in whether their personal information will be disclosed outside Canada.

    This new position, while demanding consent, brings the true nature of that consent into question. On the one hand, the organization has to get consent. On the other hand, the individual can be given no meaningful choice or ability to opt out, because the organization can say “take it or leave it”:

    Organizations are free to design their operations to include flows of personal information across borders, but they must respect individuals’ right to make that choice for themselves as part of the consent process. In other words, individuals cannot dictate to an organization that it must design its operations in such a way that personal information must stay in Canada (data localisation), but organizations cannot dictate to individuals that their personal information will cross borders unless, with meaningful information, they consent to this.

    There is little basis in the statute for this position reversal, and the OPC’s consultation document shows some significant mental gymnastics to get where they want to go notwithstanding the actual scheme of the Act. 

    Because PIPEDA does not deal with crossborder transfers in any specific way, the only way for the OPC to get to the result they seek is to impose their new requirements on all transfers for processing by a third party, regardless of whether that processing involves moving the personal information outside of Canada. And to highlight the shortcomings of trying to shoehorn this principle into the existing statute, it would not affect in any way a US company that operates in Canada deciding after the fact to move data to its own US-based data centre because it would not be a disclosure or a transfer from one entity to another. 

    The proposal immediately garnered significant criticism. Lisa Lifshitz wrote for Canadian Lawyer Magazine:

    This is problematic in several respects as this analysis flies in the face of years of guidance from the OPC (reiterated repeatedly, including in the 2012 Privacy and Outsourcing for Businesses guidance document) that a transfer for processing is a "use" of the information, not a disclosure. Assuming the information is being used for the purpose it was originally collected, additional consent for the transfer is not required; it is sufficient for organizations to be transparent about their personal information handling practices. This includes advising Canadians that their personal information may be sent to another jurisdiction for processing and that while the information is in another jurisdiction it may be accessed by the courts, law enforcement and national security authorities.

    ***

    The OPC’s implement-first-ask-permission-later approach to changing the consent requirements for cross-border data transfers is troublesome at best and judging from initial reactions, sits uneasily with many (me included).

    Likely knowing this, at the same time it released the Equifax decision the privacy commissioner also announced a “Consultation on transborder dataflows” under PIPEDA, not only for cross-border transfers between controllers and processors but for other cross border disclosures of personal information between organizations. The GDPR-style language used in this document is no accident and our regulator is seemingly trying to ensure the continued adequacy designation of PIPEDA (and continued data transfers from the EU to Canada) by adopting policy reinterpretations (and new policies) pending any actual legal reform of our law. Meanwhile, the OPC’s sudden new declaration that express consent is required if personal information will cross borders (and the related requirement that individuals must be informed of any options available to them if they do not wish to have their personal information disclosed across borders) introduces a whole new level of confusion and complexity regarding the advice that practitioners are supposed to be giving their clients pending the results of the consultations review, not to mention the potential negative business impacts (for consumers/vendors of cloud/managed services and mobile/ecommerce services, just to name a few examples) that may arise as a consequence.

    Michael Geist has written about the OPC’s approach on his blog:

    While the OPC position is a preliminary one – the office is accepting comments in a consultation until June 4 – there are distinct similarities with its attempt to add the right to be forgotten (the European privacy rule that allows individuals to request removal of otherwise lawful content about themselves from search results) into Canadian law. In that instance, despite the absence of a right-to-be-forgotten principle in the statute, the OPC simply ruled that it was reading in a right to de-index search results into PIPEDA (Canada’s Personal Information Protection and Electronic Documents Act). The issue is currently being challenged before the courts.

    In this case, the absence of meaningful updates to Canadian privacy law for many years has led to another exceptionally aggressive interpretation of the law by the OPC, effectively seeking to update the law through interpretation rather than actual legislative reform.

    The OPC is inviting comments up to June 4, 2019 and it is expected they’ll get an earful. The Canadian Technology Law Association is planning to make a submission. For more information or to contribute, contact CAN-TECH Law’s President James Kosa.

  • 4 Apr 2019 12:41 PM

    Office of the Superintendent of Financial Institutions issues Advisory

    On March 31, 2019, the Technology and Cyber Security Reporting Advisory came into effect, setting out the Office of the Superintendent of Financial Institutions’ (“OSFI”) expectations for federally regulated financial institutions (“FRFIs”) with regard to technology or cyber security incidents. A “technology or cyber security incident” is defined as an incident which has “the potential to, or has been assessed to, materially impact the normal operations of a FRFI, including confidentiality, integrity or availability of its systems and information.” FRFIs should report an incident which has a high or critical severity level to OSFI. The Advisory indicated that a “reportable incident” is one that may have:

    • Significant operational impact to key/critical information systems or data;
    • Material impact to FRFI operational or customer data, including confidentiality, integrity or availability of such data; 
    • Significant operational impact to internal users that is material to customers or business operations;
    • Significant levels of system / service disruptions;
    • Extended disruptions to critical business systems / operations; 
    • Number of external customers impacted is significant or growing;
    • Negative reputational impact is imminent (e.g., public/media disclosure); 
    • Material impact to critical deadlines/obligations in financial market settlement or payment systems (e.g., Financial Market Infrastructure);
    • Significant impact to a third party deemed material to the FRFI; 
    • Material consequences to other FRFIs or the Canadian financial system; 
    • A FRFI incident has been reported to the Office of the Privacy Commissioner or local/foreign regulatory authorities.

    An FRFI must give notice to OSFI in writing as promptly as possible, but no later than 72 hours after determining that an incident meets the criteria. In addition, updates must be provided at least daily until all material details have been provided and the incident is contained or resolved. The Advisory also provides four examples of reportable incidents: cyber-attack, service availability and recovery, third party breach, and extortion threat.

  • 4 Apr 2019 12:40 PM

    Alberta Privacy Commissioner orders release of names of blocked Twitter accounts

    The Alberta Information and Privacy Commissioner has given guidance about the interaction between privacy legislation and Twitter with its decision in Alberta Education (Re). The applicant had made a request under the Freedom of Information and Protection of Privacy Act (FOIP Act) to a public body, Alberta Education, requesting a list of the Twitter users/accounts that had been blocked for each Twitter account that Alberta Education operated or authorized. Alberta Education provided some records in response but refused to provide the names of some of the blocked Twitter accounts. It did so on the basis of section 17(1) of the FOIP Act, which states that

    17(1) The head of a public body must refuse to disclose personal information to an applicant if the disclosure would be an unreasonable invasion of a third party’s personal privacy.

    The Adjudicator ultimately decided that there was insufficient information to show that section 17(1) applied, and therefore directed Alberta Education to give the applicant access to the requested information. 

    Section 17 operates only when the disclosure of personal information would be an unreasonable invasion of a third party’s personal privacy. Under the FOIP Act, not all disclosure of personal information amounts to an unreasonable invasion of personal privacy. Under section 17(2), for example, disclosure of information which reveals financial details of a contract to supply goods or services to a public body is not an unreasonable invasion; on the other hand, section 17(4)(g) states that disclosure is presumptively an invasion of personal privacy if

    (g) the personal information consists of the third party’s name when 

    (i) it appears with other personal information about the third party, or

    (ii) the disclosure of the name itself would reveal personal information about the third party

    Section 17(5) sets out a number of non-exhaustive factors to be considered in determining whether a disclosure of personal information constitutes an unreasonable invasion of a third party’s personal privacy, such as whether the personal information is relevant to a fair determination of the applicant’s rights, or whether the personal information is likely to be inaccurate or unreliable.

    However, before any of that analysis becomes necessary, the information in question must be found to be personal information under s 17(1), which requires that the information have a personal dimension and be about an identifiable individual. Here, Alberta Education had withheld the names of Twitter accounts and the associated images where it believed the information might reveal the identity and image of the account holder, on the basis that disclosure might enable the applicant to infer the identity of individuals engaged in inappropriate conduct. The Adjudicator questioned whether, given the reality of Twitter, such an inference was possible:

    [para 22] A Twitter account name is the name of an account, rather than the name of an individual. While some individuals may use their names as the name of their Twitter account, others do not. In addition, organizations and “bots” may also use Twitter accounts. I note that a July 11, 2018 article in the New York Times reports: 

    Twitter will begin removing tens of millions of suspicious accounts from users’ followers on Thursday, signaling a major new effort to restore trust on the popular but embattled platform.

    The reform takes aim at a pervasive form of social media fraud. Many users have inflated their followers on Twitter or other services with automated or fake accounts, buying the appearance of social influence to bolster their political activism, business endeavors or entertainment careers.

    Twitter’s decision will have an immediate impact: Beginning on Thursday, many users, including those who have bought fake followers and any others who are followed by suspicious accounts, will see their follower numbers fall. While Twitter declined to provide an exact number of affected users, the company said it would strip tens of millions of questionable accounts from users’ followers. The move would reduce the total combined follower count on Twitter by about 6 percent — a substantial drop. 

    [para 23] I note too, that an article in Vox describes the prevalence of fake and automated Twitter accounts: 

    In April, Pew found that automated accounts on Twitter were responsible for 66 percent of tweeted links to news sites. Those aren’t necessarily the bots Twitter is after: Automation remains okay to use under many circumstances. But the “malicious” are being targeted. Gadde said Wednesday that the new accounts being deleted from follower accounts aren’t necessarily bot accounts: “In most cases, these accounts were created by real people but we cannot confirm that the original person who opened the account still has control and access to it.” Weeding out these accounts might discourage the practice of buying fake followers.

    Twitter has acknowledged it contributed to the spread of fake news during the 2016 U.S. presidential election, and is trying not to have a repeat showing. It’s verifying midterm congressional candidate accounts, it launched an Ads Transparency Center, and now come the new culls.

    […] 

    The Washington Post notes that Twitter suspended more than 70 million accounts in May and June. Twitter also said recently that it’s challenging “more than 9.9 million potentially spammy or automated accounts per week.” [my emphasis] (“Challenged” doesn’t necessarily mean “suspended,” but users are prompted to verify a phone or email address to continue using the account.)

    [para 24] From the foregoing, I understand that millions of Twitter accounts may be automated or fake. As a result, the name of a Twitter account cannot be said to have a personal dimension necessarily, even though an account may have the appearance of being associated with an identifiable individual.

    The information requested, the Adjudicator concluded, would be about Twitter accounts, which was not the same thing as being about individuals:

    [para 28] As it is not clearly the case that the accounts severed under section 17 are associated with identifiable individuals, and there is no requirement that a Twitter user use his or her own name or image, or be a human being, the fact that the Twitter account was blocked does not necessarily reveal personal information about an identifiable individual.

    [para 29] To put it in the terms used by the Alberta Court of Appeal, the evidence before me supports finding that the information severed by the Public Body is “about a Twitter account”, rather than “about an identifiable individual”.

    Since the standard for withholding information was that disclosure would be an unreasonable invasion of a third party’s personal privacy, Alberta Education could not refuse the application on the lower standard that the information could possibly be personal information. Accordingly, she ordered Alberta Education to release the requested information.

    At the request of Alberta Education, the Adjudicator also commented on the applicability of this reasoning to email accounts, finding that if there was evidence establishing that an email address was connected to an identifiable individual, and the email address appeared in a context that reveals personal information about the individual, then the information would be personal information and the public body must consider section 17. Where that was not true, however, section 17 was not applicable.

  • 4 Apr 2019 12:32 PM

    Court refuses to compel accused person to unlock phone

    In R. v. Shergill, Justice Philip Downes of the Ontario Court of Justice heard an application by the Crown which raised the thorny issue of whether accused persons can be compelled to “unlock” password-protected electronic devices. The accused was charged with a variety of sexual and child pornography offences and the police seized his cell phone incident to arrest. Realizing they had no technology that would allow them to open the phone without possibly destroying its contents, the police applied for a search warrant along with an “assistance order” under s. 487.02 of the Criminal Code. This section provides that a judge who issues a warrant “may order a person to provide assistance, if the person’s assistance may reasonably be considered to be required to give effect to the authorization or warrant.” Unusually, the application did not proceed ex parte and both the Crown and the accused made submissions.

    The Crown argued that the accused’s Charter rights were not engaged by the issuance of the assistance order, because it was a matter of “mere practicality.” Centrally, the principle against self-incrimination was not engaged because the order “only compels Mr. Shergill to provide access to, and not create, material the police are judicially authorized to examine, and because any self-incrimination concerns are met by the grant of use immunity over Mr. Shergill’s knowledge of the password.” The accused argued that the principle against self-incrimination was, indeed, engaged because the order would compel him to produce information that only existed in his mind, “for the purpose of assisting [the police] in obtaining potentially incriminating evidence against him”—thus violating his right to silence and the protection against self-incrimination under s. 7 of the Charter.

    Justice Downes sided with the accused. First, the principle against self-incrimination was engaged:

    The Crown suggests that Mr. Shergill’s s. 7 interests are “not engaged” or minimally compromised because what is sought to be compelled from him has no incriminatory value or effect. All the assistance order seeks is a password, the content of which is of no evidentiary value. Indeed, the Crown says that the police need not even be aware of the actual password as long as Mr. Shergill somehow unlocks the phone without actually touching it himself.

    In my view, however, the protection against self-incrimination can retain its force even where the content of the compelled communication is of no intrinsic evidentiary value. This is particularly so where, as here, that communication is essential to the state’s ability to access the evidence which they are “really after.” To paraphrase the Court in Reeves, to focus exclusively on the incriminatory potential of the password neglects the significant incriminatory effect that revealing the password has on Mr. Shergill. As the Supreme Court held in White:

    The protection afforded by the principle against self-incrimination does not vary based upon the relative importance of the self-incriminatory information sought to be used. If s. 7 is engaged by the circumstances surrounding the admission into evidence of a compelled statement, the concern with self-incrimination applies in relation to all of the information transmitted in the compelled statement. Section 7 is violated and that is the end of the analysis, subject to issues relating to s. 24(1) of the Charter. [footnotes omitted]

    Even more important, the judge found, was the right to silence under s. 7 of the Charter:

    In my view, the more significant principle of fundamental justice at stake is the right to silence. This right emerged as a component of the protection against self-incrimination in R. v. Hebert in which McLachlin J. (as she then was) held:

    If the Charter guarantees against self-incrimination at trial are to be given their full effect, an effective right of choice as to whether to make a statement must exist at the pre-trial stage… the right to silence of a detained person under s. 7 of the Charter must be broad enough to accord to the detained person a free choice on the matter of whether to speak to the authorities or to remain silent.

    McLachlin J. also reaffirmed the Court’s prior holding that the right to silence was “a well-settled principle that has for generations been part of the basic tenets of our law.” 

    The “common theme” underlying the right to silence is “the idea that a person in the power of the state in the course of the criminal process has the right to choose whether to speak to the police or remain silent.” In tracing the history of the right, McLachlin J. referred to an “array of distinguished Canadian jurists who recognized the importance of the suspect’s freedom to choose whether to give a statement to the police or not” and described the essence of the right to silence as the “notion that the person whose freedom is placed in question by the judicial process must be given the choice of whether to speak to the authorities or not.” Finally, Hebert held that s. 7 provides “a positive right to make a free choice as to whether to remain silent or speak to the authorities.”

    The pre-trial right to silence is a concept which, as Iacobucci J. held in R.J.S., has been “elevated to the status of a constitutional right.” [footnotes omitted]

    The court also rejected the Crown’s argument that the accused’s rights were sufficiently protected by providing use immunity for his knowledge of the contents of his phone and the password:

    As a practical matter, without the assistance order, the evidence would never come into the hands of the police. In that sense it strikes me as somewhat artificial to say that the data on the Blackberry is evidence which, in the language of D’Amour, “exist[s] prior to, and independent of, any state compulsion.” Rather, it is evidence which, as far as the police are concerned, is only “brought into existence by the exercise of compulsion by the state.”

    [….]

    Fundamentally, realistically and in any practical sense, granting this application would amount to a court order that Mr. Shergill provide information which is potentially crucial to the success of any prosecution against him, and which could not be obtained without the compelled disclosure of what currently exists only in his mind. It strikes at the heart of what the Supreme Court has held to be a foundational tenet of Canadian criminal law, namely, that an accused person cannot be compelled to speak to the police and thereby assist them in gathering evidence against him or herself.

    In my view nothing short of full derivative use immunity could mitigate the s. 7 violation in this case.

    The Court then discussed some of the challenges law enforcement faces in light of new technology, and encryption in particular. Though there is always a compelling public interest in the investigation and prosecution of crime, the final balancing came down on the side of the accused’s liberty interests under s. 7 of the Charter:

    I accept that the current digital landscape as it relates to effective law enforcement and the protection of privacy presents many challenges. It may be that a different approach to this issue is warranted, whether through legislative initiatives or modifications to what I see as jurisprudence which is binding on me. But on my best application of controlling authority, I am simply not persuaded that the order sought can issue without fundamentally breaching Mr. Shergill’s s. 7 liberty interests, a breach which would not be in accordance with the principle of fundamental justice which says that he has the right to remain silent in the investigative context.

    The search warrant was issued, but the assistance order was denied.

  • 20 Mar 2019 1:33 PM | Deleted user

    Detailed questionnaire sent to at least 60 individuals

    According to Forbes Online, the Canada Revenue Agency (CRA) has begun auditing individuals with significant cryptocurrency holdings or transaction activity.

    In 2017, the CRA established a dedicated cryptocurrency unit, reportedly intended to build intelligence and conduct audits focussed on cryptocurrency risks as part of its Underground Economy Strategy.

    Forbes reports there are currently over 60 active cryptocurrency audits, each involving a very detailed questionnaire consisting of 54 questions, many with sub-questions. Examples include:

    • Do you use any cryptocurrency mixing services and tumblers? (These can be used to intermix accounts and disguise the origin of funds.) If so, which services do you use?
    • Can you please provide us with the tracing history, along with all the cryptocurrency addresses you ‘mixed’? Why do you use these services?

    Further questions address whether the taxpayer has purchased or sold crypto-assets from or to private individuals and, if so, how they became aware of the sale opportunity and how the transaction was facilitated. The questionnaire goes so far as to ask the taxpayer to list all personal crypto-asset addresses that are not associated with their custodial wallet accounts. It further asks whether they have been the victim of crypto-theft, been involved in ICOs (initial coin offerings), or participated in crypto-mining.

    Not surprisingly, the CRA has not disclosed the criteria it uses to select individuals for the questionnaires.

  
