News

  • 15 May 2019 12:29 PM

    Class action for breach of copyright over obituaries and attached photos successful at Federal Court

    In Thomson v. Afterlife Network Inc., Thomson was the representative plaintiff in a class action claiming that obituaries and photographs authored and taken by Thomson (and others in the class), which had been posted online by various funeral homes and newspapers, were taken from the internet without permission and reproduced by Afterlife for profit. The suit alleged copyright infringement and infringement of the moral rights of the members of the class, as the Terms of Service on Afterlife’s website stated that Afterlife owned the copyright in the website’s contents. 

    Thomson’s father passed away in January 2017, at which point she authored an obituary that, along with a photo of her father, was published by a funeral home with her permission. A year later she discovered that Afterlife was displaying the obituary and photograph on its website, without her permission, alongside options to buy flowers and virtual candles. Thomson submitted that Afterlife caused viewers of its website to believe that she had consented to the use of her father’s obituary and image, and that she would profit from the sales. 

    Afterlife’s solicitor withdrew and Afterlife did not participate in the proceedings, having shut down its website one month after the class proceeding was certified. Traffic had been redirected to a similar website, with template rather than identical obituaries. 

    The court found, on the basis of CCH Canadian Ltd v Law Society of Upper Canada, 2004 SCC 13, that the obituaries and photographs were original works in which the author, here Thomson, possessed copyright, and that Afterlife’s actions constituted copyright infringement since it had reproduced the original works without permission. As to the moral rights claim, the court invoked Maltz v. Witterick, 2016 FC 524, which held that “an author’s right to the integrity of a work includes not only a highly subjective aspect, which the author of the work must establish, but also an objective element requiring evaluation of the prejudice to that author’s honour or reputation based on public or expert opinion.” The court found that while Thomson was sincere in her belief that her honour and reputation were prejudiced, no objective evidence of such prejudice was provided, and therefore the court was unable to make a determination on the point. 

    The Federal Court awarded aggregate damages of CA$10 million and aggravated damages of CA$10 million across 2 million instances of infringement. The aggravated damages were granted due to the court’s finding that Afterlife’s conduct was high-handed and had a significant impact on the members of the class.

    (with a contribution from Daniel Roth)

  • 15 May 2019 12:28 PM

    Court of Appeal finds trial judge erred in failing to acknowledge authenticity and admissibility of texts and photos

    In R. v. C.B., the two appellants had been convicted at trial of assault, sexual assault and unlawful confinement of the two 16-year-old complainants, arising from events that took place over the course of two days at the home of the appellant C.B. On appeal were issues relating to electronic evidence that had been led at trial. On the main issue, the complainant DP was cross-examined on the basis of texts between her and CB (extracted from CB’s phone) which were contemporaneous with the alleged offences. These texts appeared to show her joking about sex and the use of sex toys, around the time she said she had been sexually assaulted. She acknowledged that the phone number for the phone on which the texts were received was hers, and that one of the texts related to her then-boyfriend. At this point in the judgment Watt J.A., writing for the court, provided a mini-excursus on texting terminology (possibly recounting the witness’s testimony):

    But the term “LMFAO”, which was included in her text, could mean several things. It could mean what it says. Or it could mean that somebody is uncomfortable with the situation and is just laughing about it to show them that. It is undisputed that the term “LMFAO” is a common acronym used in text messaging for “laugh my fucking ass off”.

    When cross-examination resumed the next day, however, DP denied that a number of the texts sent from her phone were authored by her, because she “did not talk like that,” and suggested that a monitoring app had been placed on her phone by CB. In his reasons for judgment the trial judge emphasized the complainant’s latter statements about the texts, noted there had been no forensic evidence led about the texts, and stated they had “no probative value” because there was “no evidence as to whose phone it was, who put the messages in the phone.”

    The appellants appealed on the basis that the trial judge appeared to have found that the texts had not been authenticated, and thus were inadmissible, when there was sufficient evidence on the record as to the authorship of the texts. Watt J.A. agreed with this argument, tracing the provisions regarding the admissibility of electronic evidence in the Canada Evidence Act and noting that the authentication requirement, s. 31.1, simply mirrored the common law and its very low threshold that there be “some evidence” that the electronic documents were what the offering party purported them to be. With specific regard to communications like texts, he remarked:

    [69] At common law, correspondence could be authenticated by the “reply letter” doctrine: to authenticate correspondence as having been sent by one individual to another, evidence is adduced to show it is a reply to a letter sent to that person. As a matter of logic, the same should hold true for text messages and emails. Evidence that A sent a text or email to B whom A believed was linked to a specific address, and evidence of a response purportedly from B affords some evidence of authenticity: David Paciocco, “Proof and Progress: Coping with the Law of Evidence in a Technological Age” (2013) 11 C.J.L.T. 181, at pp. 197-8 (Paciocco).

    [70] In a similar way, text messages may be linked to particular phones by examining the recorded number of the sender and receiving evidence linking that number to a specific individual, as for example, by admission: Paciocco, at p. 198.

    [71] But what of the prospect of tampering? Does it have to be negated before digital evidence can be properly authenticated?

    [72] As a matter of principle, it seems reasonable to infer that the sender has authored a message sent from his or her phone number. This inference is available and should be drawn in the absence of evidence that gives an air of reality to a claim that this may not be so. Rank speculation is not sufficient: R. v. Ambrose, 2015 ONCJ 813 (CanLII), at para. 52. And even if there were an air of reality to such a claim, the low threshold for authentication, whether at common law or under s. 31.1 of the CEA, would seem to assign such a prospect to an assessment of weight.

    Here, it appeared that the trial judge had made two mistakes: he had failed to fasten on the fact that DP had provided testimony proving that the set of texts was a conversation between herself and CB, and he had appeared to require expert forensic evidence in order to establish authenticity, which was unnecessary. Justice Watt concluded on this point:

    [78] Satisfaction of the evidentiary threshold for authentication under s. 31.1 of the CEA or at common law renders the evidence admissible; in other words, available to the trier of fact for ultimate evaluation [editor’s note: assuming, we expect Justice Watt meant, that the evidence does not offend one of the admissibility rules, e.g. hearsay, prior consistent statement]. It does not follow from admissibility that the trier of fact must find that the evidence is in fact what it claims to be. What remains of the dispute after admissibility has been established relates to the weight to be assigned to the evidence. And that issue is left to the trier of fact to decide.

  • 15 May 2019 12:27 PM

    Jarvis applied to screenshots made secretly during Skype chat

    The Ontario Court of Appeal has further clarified the voyeurism offence in section 162(1) of the Criminal Code with its decision in R v Trinchi. In part the decision is an application of the recent Supreme Court decision on the same offence in R v Jarvis (and indeed the appeal was delayed until Jarvis had been decided), but the decision also goes on to settle a point which was not central to the decision in that other case.

    Trinchi and his partner, the complainant, were in a long-distance relationship during which, over a year and a half, they regularly engaged in intimate webcam video chats. During the video chats both parties were nude, and the complainant willingly posed in sexually provocative positions for Trinchi. Trinchi took screenshots of her from these video livestreams: the complainant was aware that her image was being captured as video and streamed over the internet to Trinchi, but she was unaware that the screenshots were taken. After the complainant ended the relationship, the screenshots were distributed widely by email, and as a result Trinchi was charged with a number of offences, most of which related to the distribution of intimate images. Because of the possibility that Trinchi’s new girlfriend had distributed the images, the trial judge had reasonable doubt and acquitted the accused of the distribution charges. However, the judge had no doubt that Trinchi had taken the screenshots and so convicted him of the voyeurism charge. 

    There are various versions of the voyeurism offence, but there was no dispute that the element set out in section 162(1)(b) was met, namely that

    (b) the person is nude, is exposing his or her genital organs or anal region or her breasts, or is engaged in explicit sexual activity, and the observation or recording is done for the purpose of observing or recording a person in such a state or engaged in such an activity.

    Similarly it was not disputed that the accused was the person who had taken the screenshots. However, the voyeurism offence has two further requirements: 1) that the complainant was “in circumstances that give rise to a reasonable expectation of privacy”, and 2) that the screenshot was made “surreptitiously”. Trinchi argued that neither of these elements was made out; the Ontario Court of Appeal disagreed and upheld the conviction.

    Trinchi argued that the complainant could not have a reasonable expectation of privacy in the circumstances, having knowingly and willingly posed nude in front of the webcam. That is, he argued that although “engaging in sexual activity in one’s own bedroom is a circumstance that attracts a high expectation of privacy…the complainant admitted him within her circle of privacy by voluntarily exposing herself, knowing she was doing so through a camera, a device the very purpose of which is to capture images” (para 17). He argued that the voyeurism offence was designed to apply to “peeping toms,” not to intimate partners, and that the complainant’s act of voluntarily exposing herself in front of a camera negated her reasonable expectation of privacy. 

    In this context, however, the Ontario Court of Appeal relied on Jarvis, noting that the test for a reasonable expectation of privacy within the meaning of this section was “whether in the circumstances the person observed or recorded would reasonably expect not to be the subject of the type of observation or recording that in fact occurred” (para 14). They noted that Jarvis had offered, as an example, the possibility of one partner video-recording consensual sexual activity without the knowledge of the other, and held that that was essentially what had occurred here:

    [19] This example, it seems to me, provides a short and direct path to the conclusion that the complainant had a reasonable expectation the appellant would not take screenshots of their consensual sexual activity. It should not make a difference that their consensual activity took place in “virtual space” rather than in a physical room. She necessarily expected to be observed by the appellant in the live-streamed video, but did not expect he would make a permanent recording of her naked.

    The most notable consideration for the Court of Appeal was the distinction between mere observation and the recording of a permanent image. The making of a permanent image raises the risk that the complainant will be observed by persons other than those she consented to be seen by, as in fact occurred in this case. On that basis the Ontario Court of Appeal found that the complainant did in fact have a reasonable expectation of privacy.

    The remaining issue was whether the accused had acted “surreptitiously”, which was largely a matter of statutory interpretation. Trinchi argued that the complainant never indicated she did not want screenshots taken, and that he never promised he would not take screenshots. More importantly, though, Trinchi argued that the trial judge had applied the wrong test to determine surreptitiousness: it was a factor that had to be determined by looking at an accused’s intention, not from the complainant’s perspective. He argued that Jarvis had pointed to the difference between “reasonable expectation of privacy” and “surreptitiously”, and that the former focused on the complainant’s perspective, but the latter related to the observer.

    The Crown argued, to the contrary, that it should be sufficient if a complainant did not know of the recording, and the accused was aware the complainant did not know: if there were a requirement to prove the accused’s intention, they argued, it would be too difficult to prove that an accused had acted “surreptitiously”.

    On this matter of statutory interpretation, the Ontario Court of Appeal sided with Trinchi’s argument that it was necessary to prove that the accused intended that the complainant be unaware. They held:

    [46] I am satisfied that the ordinary meaning of the word “surreptitiously” does include intent as part of its meaning. A person who observes or records with the intention that the subject not be aware that he is doing so, is attempting to avoid notice or attention. Moreover, I consider M.E.N.’s articulation of the mental element to be apt. The mental state required by the word “surreptitiously” in s. 162(1) is the intent the subject not be aware that she is being observed or recorded. In a prosecution under s. 162(1)(b), the Crown may prove the accused acted surreptitiously by proving that he observed or recorded the subject with the intention she be unaware he was doing so. 

    They also suggested that this definition was not likely to create the difficulties in proof suggested by the Crown:

    [48] Understanding the word “surreptitiously” in this way would not prevent a successful prosecution in the Crown’s example of the smartphone on the accused’s bedside table recording consensual sexual activity. In the example, the accused would have had to initiate the smart phone’s video recording mode and position the device so its camera focused on the sexual activity. Where the complainant testifies that she did not consent to being recorded and was unaware the recording was being made, and without evidence to explain the positioning and active state of the phone, the fact-finder would have an adequate basis to infer that the accused intended the complainant be unaware he was recording her.

    Essentially on that sort of basis the Ontario Court of Appeal concluded that surreptitiousness had been shown in this case. The trial judge had repeatedly stated that Trinchi had acted “secretly” and that his actions were “clandestine”. Further, 

    [55]…The appellant’s state of mind could be inferred from the circumstantial evidence. The complainant did not know the screenshots were being taken. The appellant never told the complainant he was taking screenshots; the subject of taking screenshots never came up during the parties’ 400-odd video chats. The complainant could see the appellant during their video chats, and he had taken the screenshots in a way that the complainant had not noticed. After taking the screenshots the appellant never mentioned them. Her lack of awareness could also be reasonably expected under the totality of these circumstances. These facts supported the trial judge’s inference that the appellant had intended that the complainant not know he was taking screenshots of her.

    Accordingly the conviction was upheld.

  • 1 May 2019 12:36 PM

    Conclusions go beyond safeguards implicated in data breach and lead to significant re-thinking of transfers of personal information

    On April 9, 2019, the Office of the Privacy Commissioner of Canada (“OPC”) released its report of findings related to the Equifax data breach. On September 7, 2017, Equifax Inc. publicly announced that an attacker had accessed the personal information of more than 143 million individuals, and it later reported that the breach affected around 19,000 Canadians. The OPC commenced an investigation and concluded that the breach affected some Canadians whose information was collected by US-based Equifax Inc. (also referred to as “Equifax US”) and some Canadians who had purchased or received products, such as fraud alerts, from Canada-based Equifax Canada Co. (“Equifax Canada”). The nature of the information and how it was acquired by either Equifax entity is described by the OPC in its report of findings:

    The affected personal information was collected by Equifax Inc. from certain Canadian consumers who had direct-to-consumer products or fraud alerts. The direct-to-consumer products included paid online access by individuals to their Canadian credit report, credit monitoring, and alert services (in relation to their Canada credit files). The information was collected by Equifax Inc. as it plays an integral role in delivering these direct-to-consumer products and processing certain fraud alert transactions.

    Attackers gained access to Equifax Inc.’s systems on May 13, 2017 by exploiting a known vulnerability in the software platform supporting an online dispute resolution portal that is part of Equifax Inc.’s Automated Consumer Information System (“ACIS”). They then operated undetected within Equifax Inc.’s systems for a period of time and ultimately gained access to Canadian personal information unrelated to the functions of the compromised portal.

    Information in Canadians’ credit files is stored by Equifax Canada on servers located in Canada and segregated from Equifax Inc.’s systems. However, during the process of delivering direct-to-consumer products to Canadians, information from credit files needed to fulfil these products is transferred to Equifax Inc. in the US. For instance, a static copy of a full credit file is transferred by Equifax Canada to Equifax Inc. if a credit report is purchased by a consumer. While Equifax Canada’s servers are segregated from Equifax Inc.’s systems, Equifax Canada’s security policies, direction and oversight were, and are, largely managed by Equifax Inc.

    The OPC concluded that both Equifax Canada and the US parent fell short of their privacy obligations to Canadians, focusing on five different areas of compliance:

    (1) Safeguards of Equifax US and Equifax Canada: Directly stemming from the data breach, the OPC found that neither Equifax US nor Equifax Canada implemented safeguards that were adequate as required under PIPEDA. Overall, the OPC concluded that vulnerability management, network segregation, implementation of basic information security practices, and oversight were deficient at Equifax US. Equifax Canada was found to lack adequate safeguards in terms of oversight, vulnerability management and the implementation of basic information security practices.

    (2) Conformity with Retention / Destruction Requirements: The OPC investigated whether personal information was being retained longer than reasonably necessary. It concluded that there was no process in place to delete Canadian personal information in compliance with the Equifax US data retention policy, which was not being followed or monitored. 

    (3) Accountability of Equifax Canada for protecting personal information: The OPC found that in the aftermath of the breach, there were a number of significant communications failures with the public and with Canadian consumers. The scope of the Canadian data involved was communicated unclearly. Some of the information provided by the companies to the OPC was contradicted by information provided by consumers. The companies did not have a sufficient handle on what information they had, where it was from, and who was responsible for it. 

    (4) Adequate consent by Canadians for collection and disclosure of information: This may be the most interesting and consequential finding from the Equifax case. Though the OPC has historically seen transfers of personal information from one entity to another for processing as not requiring consent, the OPC has changed its position:

    109. Providing adequate information about available choices when an individual is consenting to the collection, use or disclosure of their information is a key component of valid consent. In this case, it appears reasonable to require consent to the collection of information by, and disclosure of information to, Equifax Inc. as a condition of the online Canadian direct-to-consumer products, as Equifax Canada does not offer these products in-house. However, an individual would still have choices. In addition to the simple option of “not signing-up” for Equifax Canada credit file monitoring or other products, individuals interested in obtaining access to their Equifax Canada credit report could choose to use Equifax Canada’s free credit report service, provided by postal mail and avoiding any information disclosure to Equifax Inc. Equifax Canada does not currently communicate the difference in disclosures to consumers in the course of delivering online or postal access, i.e., that the former involves collection of information by Equifax Inc. and transfers of information to Equifax Inc. in the US, whereas the latter does not.

    110. In summary, Equifax Canada was not adequately clear about: (i) the collection of sensitive personal information by Equifax Inc., in the US, (ii) its subsequent disclosures of sensitive personal information to Equifax Inc., and (iii) the options available to individuals who do not wish to have their information disclosed in this way. Consequently, with respect to Equifax Canada’s practices to obtain consent for collection of personal information by Equifax Inc., and disclosure of personal information to Equifax Inc., the matter is well-founded.

    111. However, as noted in para. 101 above, we acknowledge that in previous guidance our Office has characterized transfers for processing as a ‘use’ of personal information rather than a disclosure of personal information. Our guidance has also previously indicated that such transfers did not, in and of themselves, require consent. In this context, we determined that Equifax Canada was acting in good faith in not seeking express consent for these disclosures.

    (5) Adequate Mitigation Measures: In the aftermath of the breach, the OPC concluded that offering a brief period of credit monitoring was inadequate relative to the scope of service Equifax could provide to Canadians in the circumstances, especially where better products (e.g. lifetime credit freezes) were offered to Americans affected by the same breach.

    In the end the OPC made a number of recommendations, most of which are binding on Equifax as a result of a compliance agreement entered into between Equifax Canada and the OPC:

    161. The following recommendations relate to contraventions found in Sections 1, 2, and 3 of this report, i.e. Safeguards and Retention by Equifax Inc. and Accountability of Equifax Canada. We recommended that Equifax Canada:

    a. Implement a procedure to keep the written arrangement with Equifax Inc., covering all Canadian personal information under Equifax Canada’s control collected by Equifax Inc. and disclosed to Equifax Inc., up-to-date.
    b. Institute a robust monitoring program by Equifax Canada against the requirements in the arrangement, and a structured framework for addressing any issues arising under it.
    c. Identify Canadians’ personal information that should no longer be retained by Equifax Inc. according to its retention schedule and delete it.
    d. Every two years, for a six-year term, provide to our Office:
      i. a report from Equifax Canada detailing its monitoring for compliance with the arrangement described in b. above;
      ii. an audit report and certification, covering all Canadians’ personal information processed by Equifax Inc., against an acceptable security standard, conducted by an appropriate external auditor; and
      iii. a third party assessment, covering all Canadians’ personal information processed by Equifax Inc., of Equifax Inc.’s retention practices.

    162. The following recommendations relate to contraventions found in Section 5 of this report, i.e. Safeguards of Equifax Canada. We recommended that Equifax Canada:

    1. Provide our office with a detailed security audit report and certification, covering all Canadian personal information it is responsible for, against an acceptable security standard, conducted by an appropriate external auditor every two years for a six-year term.

    The re-thinking of consent and outsourcing in this finding has led to the OPC’s consultation on transborder dataflows, which is seeking input on this radical change in position by the OPC, discussed in the April 17 edition of the CanTech newsletter.

  • 1 May 2019 12:36 PM

    Supreme Court offers mixed reasons for finding no section 8 violations

    The Supreme Court of Canada discussed the nature of electronic communications and the ability of the State to make use of those conversations with its decision in R v Mills, though the case provides less in the way of real guidance than it could have.

    Mills became the subject of an undercover operation conducted by police to catch child lurers on the internet. An officer posed as a 14-year-old girl, ‘Leann’. Mills used Facebook and Hotmail to send sexually explicit messages to ‘Leann’, ultimately arranging to meet her in a park. The police arrested him at the park. During the course of the undercover operation, the officer posing as ‘Leann’ recorded the conversation by taking screenshots using screen-capture software called “Snagit”. The officer did not have prior judicial authorization to make and keep these screenshots. 

    At trial, Mills applied to exclude the screenshots from evidence. The trial judge concluded the screenshots were “private communications” under section 183 of the Code and therefore that prior judicial authorization had been required under section 184.2 of the Code from the point that Mills became the subject of investigation. The trial judge held the screenshots constituted a “seizure of communications” in breach of Mills’ reasonable expectation of privacy in his communications under section 8 of the Charter. However, the trial judge ultimately held that admission would not bring the administration of justice into disrepute, and so Mills was convicted.

    On appeal, the conviction was upheld. However, the Court of Appeal found that the trial judge had erred in concluding that the section 184.2 authorizations were required. Instead, they found that Mills did not have a reasonable expectation of privacy in the communications and so section 8 was not engaged.

    The Supreme Court of Canada dismissed the appeal, but with three different sets of reasons and four decisions. 

    Justice Brown (with Justices Abella and Gascon concurring) found that there was no unreasonable search on the basis that there was no search at all, since the accused did not have a reasonable expectation of privacy in his conversation with the undercover officer. This set of reasons pays the least attention, at least explicitly, to the technological aspect of the communication. They concluded that any section 8 claim requires that the accused have a subjectively held and objectively reasonable expectation of privacy in the subject matter of the search, and in the view of this cohort Mills’ subjective expectation of privacy was not objectively reasonable. Specifically, these three judges held that “adults cannot reasonably expect privacy online with children they do not know” (para 23).

    Generally, whether an expectation of privacy is objectively reasonable has turned on consideration of a number of factual questions, such as whether the person has the ability to regulate access or whether the accused has abandoned the property. There has always been a certain level of discontinuity between the types of factors listed and the question that the objective portion of the analysis is meant to answer, which is whether, on a normative analysis, the privacy interest concerned is one that a person should be entitled to expect in our society. In essence, Justice Brown’s analysis goes directly to that normative issue, and concludes that adults cannot reasonably expect privacy online with children they do not know. Society values privacy in the context of many adult-child relationships “including, but in no way limited to, those with family, friends, professionals, or religious advisors” (para 24), but this relationship was not one of those contexts.

    The challenge for this conclusion, as Justice Brown’s cohort recognizes, is that on its face it runs contrary to the long-accepted principle that privacy must be assessed on “broad and neutral terms” that do not lead to post facto reasoning: for example in R v Wong, [1990] 3 SCR 36 the privacy question was whether the accused had a privacy interest when they rented a hotel room, not whether they could have a privacy interest in an illegal gaming operation being conducted in a hotel room. Justice Brown therefore stresses that no such post facto reasoning was engaged in the particular facts of this case. The officer who created ‘Leann’ knew that any adult who communicated with ‘her’ would be communicating with a child unknown to them, and so no other sort of communication could result. They argued that on these facts, sanctioning this form of unauthorized surveillance does not impinge citizens’ privacy in a way that is inconsistent with a free and open society in which expectations of privacy are normative. Where the police were aware that ‘Leann’ was fictitious and there was no risk to any genuine adult-child relationship, they could be absolutely certain that no section 8 breach could occur from taking the screenshots, because there was no reasonable expectation of privacy. Where there was no potential for a privacy breach, there was no need for prior judicial authorization, and as such section 184.2 of the Code did not apply.

    Justice Karakatsanis (with Chief Justice Wagner concurring) agreed in the result, and also found that there was no reasonable expectation of privacy in these communications, but for entirely different reasons. She held that 

    [39] The right to be secure against unreasonable searches and seizures must keep pace with technological developments to ensure that citizens remain protected against unauthorized intrusions upon their privacy by the state: R. v. Fearon, 2014 SCC 77, [2014] 3 S.C.R. 621, at para. 102; see also R. v. Wong, [1990] 3 S.C.R. 36, at p. 44. However, as technology evolves, the ways in which crimes are committed — and investigated — also evolve.

    Applying those principles to these circumstances, she concluded that no interaction had taken place here which should be considered an interception by the State. Her reasoning rests on an analogy with R v Duarte, [1990] 1 SCR 30, in which an undercover officer had a conversation with the accused and surreptitiously recorded it: the Court held that prior judicial authorization was required even for “consent interceptions” such as that. The argument Justice Karakatsanis makes, however, is that there has never been any suggestion that prior judicial authorization would be required for the undercover officer simply to have the conversation with the accused: “it is not reasonable to expect that your messages will be kept private from the intended recipient (even if the intended recipient is an undercover officer)” (para 36). On that basis, the accused here had no reasonable expectation of privacy in his conversation with ‘Leann’.

    Judicial pre-authorization is required when the state chooses to surreptitiously make a permanent electronic record of such a communication; however, Justice Karakatsanis held, that is not what occurred here. All that occurred was the conversation itself, which happened to take place by electronic means: the State was not creating the record:

    [48]…Mr. Mills chose to use a written medium to communicate with Constable Hobbs. Email and Facebook messenger users are not only aware that a permanent written record of their communication exists, they actually create the record themselves. The analogy with Duarte is to the oral conversation, not the surreptitious recording of that conversation. 

    There was the further issue that in this case the police had used the program “Snagit” to take screenshots of the electronic messages, but Justice Karakatsanis held that this did not change the analysis. The communications inherently existed as a written record, and “I cannot see any relevant difference in the state preserving the conversations by using ‘Snagit’ to take screenshots of them, by using a computer to print them, or by tendering into evidence a phone or laptop with the conversations open and visible” (para 56). She did note, however, that:

    [57] My conclusion that s. 8 is not engaged in this case does not mean that undercover online police operations will never intrude on a reasonable expectation of privacy. As technology and the ways we communicate change, courts play an important role in ensuring that undercover police techniques do not unacceptably intrude on the privacy of Canadians. Particularly in the context of the digital world, it is important for courts to consider both the nature and the scale of an investigative technique in determining whether s. 8 is engaged. With respect to the concern about the prospect of broader surveillance made possible by technological advances, as Binnie J. observed in Tessling, “[w]hatever evolution occurs in future will have to be dealt with by the courts step by step. Concerns should be addressed as they truly arise”: para. 55.

    She also added, as a note of caution:

    [60]…The fact that conversations with undercover officers now occur in written form on the Internet does not, in itself, violate s. 8 of the Charter. However, this conclusion in no way gives the police a broad license to engage in general online surveillance of private conversations.

    Justice Moldaver, writing only for himself, agreed with the reasons of both Brown and Karakatsanis JJ. One could see that as making Justice Brown's reasons the majority position, in that four of the seven judges ultimately accepted his reasoning.

    Justice Martin, also writing only for herself, disagreed with the reasoning of all the other members of the Court: she found that the accused had a reasonable expectation of privacy, that the use of the Snagit software was an interception, and that there was a section 8 violation. However, because she concluded that the evidence should not be excluded, she agreed in the result. Describing the case as “Duarte for the digital age”, she suggested that:

    [88] In this case, we have the opportunity to pull the normative principles of Duarte and Wong through this Court’s more recent Charter s. 8 and Code Part VI jurisprudence — in particular, Patrick; R. v. TELUS Communications Co., 2013 SCC 16, [2013] 2 S.C.R. 3; R. v. Cole, 2012 SCC 53, [2012] 3 S.C.R. 34; Spencer; R. v. Marakah, 2017 SCC 59, [2017] 2 S.C.R. 608; R. v. Jones, 2017 SCC 60, [2017] 2 S.C.R. 696; Reeves. The goal is to arrive at a judicial position that, while firmly grounded in the case law, “keep[s] pace with technological development, and, accordingly, . . . ensure[s] that we are ever protected against unauthorized intrusions upon our privacy by the agents of the state, whatever technical form the means of invasion may take”: Wong, at p. 44.

    [89] The risk contemplated in Duarte was that the state could acquire a compelled record of citizens’ private thoughts with no judicial supervision. At the end of the Cold War era, the way to obtain a real-time record of a conversation was to record it. Today, the way to obtain a real-time record of a conversation is simply to engage in that conversation. This Court must assess how and whether the primary concern of documentation in Duarte still applies to cases in which (a) a communication method self-generates documentation of the communication, and (b) the originator of the communication knows that this occurs. Should this shift in communication technology now allow the state to access people’s private online conversations at its sole discretion and thereby threaten our most cherished privacy principles?

    Justice Martin rejected Justice Karakatsanis’ view that the Facebook exchange was equivalent to only the conversation itself in Duarte, holding that it was equivalent to both the conversation and the electronic recording of it, and argued that “[t]his duality should support, not undermine the protection of privacy rights” (para 93). Similarly she held that the issue of whether the State surreptitiously created the record or whether it was created as a by-product of the communication was irrelevant to the underlying policy concerns:

    [100] The consequences of knowing that, at any point and with reference to any of our statements, we will have to contend with a documented record of those statements in the possession of the state, would be no less than the total “annihilat[ion]” (Duarte, at p. 44) of our sense of privacy.

    Justice Martin also disagreed with Justice Brown’s approach of finding no reasonable expectation of privacy in a conversation between an adult and a child not known to them, arguing that “The Court should not create Charter-free zones in certain people’s private, electronic communications on the basis that they might be criminals whose relationships are not socially valuable” (para 111), concluding that that approach was inconsistent with the principle of content-neutrality.

  • 1 May 2019 12:32 PM

    Privacy Commissioner plans to take Facebook to Federal Court 

    On April 29, 2019, the Office of the Information and Privacy Commissioner of British Columbia and the Office of the Privacy Commissioner of Canada (“OPC”) released the results of their joint investigation into Facebook, Inc. in connection with Cambridge Analytica. In PIPEDA Report of Findings #2019-002, both Commissioners concluded that Facebook had violated the federal and British Columbia privacy statutes. 

    The investigation stemmed from revelations that personal information of users of a third party app on the Facebook platform was later used by third parties for targeted political messaging. The investigation focussed on: (i) consent of users, both those who installed an app and their friends, whose information was disclosed by Facebook to the apps, and in particular to the “thisisyourdigitallife” or TYDL App; (ii) safeguards against unauthorized access, use and disclosure by apps; and (iii) accountability for the information under Facebook’s control.

    The OPC reports that it was disappointed with Facebook’s “lack of engagement” with the investigation, with many of the OPC’s questions going unanswered or the answers provided being deficient. The OPC summarized its findings as follows: 

    1. Facebook failed to obtain valid and meaningful consent of installing users. Facebook relied on apps to obtain consent from users for its disclosures to those apps, but Facebook was unable to demonstrate that: (a) the TYDL App actually obtained meaningful consent for its purposes, including potentially, political purposes; or (b) Facebook made reasonable efforts, in particular by reviewing privacy communications, to ensure that the TYDL App, and apps in general, were obtaining meaningful consent from users.
    2. Facebook also failed to obtain meaningful consent from friends of installing users. Facebook relied on overbroad and conflicting language in its privacy communications that was clearly insufficient to support meaningful consent. That language was presented to users, generally on registration, in relation to disclosures that could occur years later, to unknown apps for unknown purposes. Facebook further relied, unreasonably, on installing users to provide consent on behalf of each of their friends, often counting in the hundreds, to release those friends’ information to an app, even though the friends would have had no knowledge of that disclosure.
    3. Facebook had inadequate safeguards to protect user information. Facebook relied on contractual terms with apps to protect against unauthorized access to users’ information, but then put in place superficial, largely reactive, and thus ineffective, monitoring to ensure compliance with those terms. Furthermore, Facebook was unable to provide evidence of enforcement actions taken in relation to privacy related contraventions of those contractual requirements.
    4. Facebook failed to be accountable for the user information under its control. Facebook did not take responsibility for giving real and meaningful effect to the privacy protection of its users. It abdicated its responsibility for the personal information under its control, effectively shifting that responsibility almost exclusively to users and Apps. Facebook relied on overbroad consent language, and consent mechanisms that were not supported by meaningful implementation. Its purported safeguards with respect to privacy, and implementation of such safeguards, were superficial and did not adequately protect users’ personal information. The sum of these measures resulted in a privacy protection framework that was empty.

    The OPC characterized these findings as particularly concerning, as its previous investigation of Facebook in 2009 found similar issues, leading the OPC to the conclusion that Facebook had not taken the recommendations from that investigation seriously. In this investigation, the OPC made the following recommendations:

    Facebook should implement measures, including adequate monitoring, to ensure that it obtains meaningful and valid consent from installing users and their friends. That consent must: (i) clearly inform users about the nature, purposes and consequences of the disclosures; (ii) occur in a timely manner, before or at the time when their personal information is disclosed; and (iii) be express where the personal information to be disclosed is sensitive. ...

    Facebook should implement an easily accessible mechanism whereby users can: (i) determine, at any time, clearly what apps have access to what elements of their personal information [including by virtue of the app having been installed by one of the user’s friends]; (ii) the nature, purposes and consequences of that access; and (iii) change their preferences to disallow all or part of that access.

    Facebook’s retroactive review and resulting notifications should cover all apps. Further, the resulting notifications should include adequate detail for [each user] to understand the nature, purpose and consequences of disclosures that may have been made to apps installed by a friend. Users should also be able to, from this notification, access the controls to switch off any ongoing disclosure to individual apps, or all apps.

    Facebook disagreed with many of the conclusions and recommendations, and the OPC has indicated that it plans to seek an order from the Federal Court to implement the recommendations.

    The report of findings also includes an interesting discussion about jurisdiction. Facebook asserted that the OPC did not have jurisdiction because there was no evidence that any Canadian user personal information had been disclosed to the operator of the TYDL app. Facebook also asserted that the OIPC of British Columbia did not (and could not) have jurisdiction by operation of Section 3 of the Personal Information Protection Act of British Columbia, which provides that the Act does not apply where PIPEDA applies. The OIPC and OPC pointed to the Organizations in British Columbia Exemption Order, and also asserted that their jurisdiction over the complaint did not depend on information having been provably disclosed to the TYDL app:

    44. While the complaint may have been raised within the context of concerns about access to Facebook users’ personal information by Cambridge Analytica, as noted above, the complaint specifically requested a broad examination of Facebook’s compliance with PIPEDA to ensure Canadian Facebook users’ personal information has not been compromised and is being adequately protected. Moreover, we advised Facebook that the investigation would be examining allegations that Facebook allowed Cambridge Analytica, among others, to inappropriately access users’ personal information and did not have sufficient safeguards to prevent such access.

  • 17 Apr 2019 12:35 PM

    Driver wearing wired ear bud headphones convicted of “using” phone, despite dead battery

    In R. v. Grzelak, the accused was ticketed for “holding or using” an electronic device while driving, an offence under s. 214.2 of the BC Motor Vehicle Act. The undisputed facts were that the accused’s iPhone was in a centre cubby hole in the dashboard of his car, and that he was wearing a pair of ear bud headphones (in both ears) which were plugged into the phone. The phone’s battery was dead and no sound of any sort was coming through the ear buds. The offence provision required the Crown to prove that the accused was “holding the device in a position in which it may be used.” The judge noted that if this was proven, a conviction must follow, “even if the battery was dead, and even if the Defendant was not operating one of the functions of the device (such as the telephone or GPS function).” In support of this proposition the judge cited R. v. Judd, which seems an odd choice as in that case the accused was convicted because he was physically holding his phone up to his ear while driving, and there was no evidence about the phone’s battery or which function he might have been using.

    On the issue of “holding” the judge found as follows:

    [9] Obviously, here the cell phone itself was sitting in the centre cubby hole, and was not in the defendants hands, or in his lap. But that is not the end of the matter. In my view, by plugging the earbud wire into the iPhone, the defendant had enlarged the device, such that it included not only the iPhone (proper) but also attached speaker or earbuds. In the same way, I would conclude that if the defendant had attached an exterior keyboard to the device for ease of inputting data, then the keyboard would then be part of the electronic device.

    [10] Since the earbuds were part of the electronic device and since the ear buds were in the defendants ears, it necessarily follows that the defendant was holding the device (or part of the device) in a position in which it could be used, i.e. his ears.

    Even the dead battery could not absolve the accused, as the judge held that simple “holding” was sufficient to make out the offence, “even if it is temporarily inoperative.” Accordingly, the accused was convicted.

    In our view, and with respect, this reasoning seems a bit of a stretch. The accused was found to have been “holding the device”… in his ears. Surely this strains a reasonable interpretation of what the BC Legislature intended with the wording of the provision. Was it the fact of the physical connection of the earbuds to the phone, i.e. via a wire, that “enlarged” the device? This raises the question of whether the finding would be different if the ear buds (or the judge’s hypothetical keyboard) were connected via Bluetooth, as is increasingly common. It is probably fair to say that what the Legislature intended to capture with these provisions is distracted driving, and that driving with earbuds in (had the phone not been dead, as it was here) might amount to that. But as this case demonstrates, as do so many others like it, it would be preferable for legislatures to use more technology-neutral language in these offence provisions.

  • 17 Apr 2019 12:34 PM

    Second search of phone data after update to forensic software held lawful under Charter

    In R. v. Nurse, the two accused had been convicted at trial of the first-degree murder of the deceased, Kumar, who was Nurse’s landlord. The BlackBerrys belonging to the two were seized incident to their arrest, and a warrant was obtained to search them. As they were locked and password-protected, the OPP investigating officers sent them to the RCMP for forensic extraction of data. The software used by the RCMP, called “Cellebrite,” was able to analyze the raw data extracted from the phones; it showed that there had been some communication between the two, but nothing incriminating was found. However, the data was re-analyzed a year later, by which time there had been significant software updates to Cellebrite, and the new analysis revealed extensive text messages between the two accused disclosing a plan to kill the victim.

    On appeal the accused repeated an argument they had made unsuccessfully at trial: that the re-analysis with the updated software amounted to a second “search” for the purposes of s. 8 of the Charter, and thus a second warrant should have been obtained. In rejecting this argument for a unanimous bench, Trotter J.A. remarked:

    [133] In analyzing this issue, it is important to consider the essential nature of computers and other digital devices. They challenge traditional definitions of a “building, receptacle or place” within the meaning of s. 487 of the Criminal Code. In R. v. Marakah, 2017 SCC 59, [2017] 2 S.C.R. 608, McLachlin C.J. said, at para. 27: “The factor of ‘place’ was largely developed in the context of territorial privacy interests, and digital subject matter, such as an electronic conversation, does not fit easily within the strictures set out by the jurisprudence.” See also R. v. Jones, 2011 ONCA 632, 107 O.R. (3d) 241, at paras. 45-52. Similarly, in R. v. Vu, 2013 SCC 60, [2013] 3 S.C.R. 657, Cromwell J. said, at para. 39: “…computers are not like other receptacles that may be found in a place of search. The particular nature of computers calls for a specific assessment of whether the intrusion of a computer search is justified, which in turn requires prior authorization.”

    [134] Because of these conceptual differences, arguments by analogy to traditional (i.e., non-digital) search scenarios will not always be helpful. For example, the trial judge was right to reject the ultraviolet light testing scenario advanced by trial counsel. It does not work in this context because the second ultraviolet light analysis would require re-entry into the premises resulting in a separate invasion of privacy.

    [135] The re-inspection or re-interpretation of the raw data harvested from the appellants’ devices did not involve a further invasion of privacy. It is not necessary in this case to identify precisely when the appellants’ privacy rights were defeated in favour of law enforcement. Nevertheless, their privacy rights were “implicated” when their devices were seized upon arrest. In R. v. Reeves, 2018 SCC 56, 427 D.L.R. (4th) 579, Karakatsanis J. held at para. 30: “When police seize a computer, they not only deprive individuals of control over intimate data in which they have a reasonable expectation of privacy, they also ensure that such data remains preserved and thus subject to potential future state inspection” (emphasis in original). The same would hold true for the seizure of a cellphone or BlackBerry device.

    Here, whatever privacy interest the accused had in their phone data had been defeated completely by the issuing of the first warrant. The warrant did not have any search protocols attached to it, nor was there any indication that protocols would have been constitutionally necessary. The situation was analogous to a fraud investigation in which copies of documents are taken and continually inspected by police over the course of the investigation, and in which it would be appropriate to consult new expert services to interpret them. While a re-analysis or re-inspection might in other circumstances amount to a new search, in this case the data was analyzed within the scope of an ongoing investigation, “the substance of which had not changed” between the two searches. The data was not altered in any way, and the passage of time had no impact on the lawfulness of the search. This ground of appeal was dismissed.

  • 17 Apr 2019 12:33 PM

    In seeking to revise crossborder dataflows, the OPC’s position would require consent for all transfers of personal information for processing

    The Office of the Privacy Commissioner of Canada (OPC) has initiated a consultation that proposes to completely reverse its previous guidance on crossborder dataflows under the Personal Information Protection and Electronic Documents Act (PIPEDA). And because the OPC is trying to fit a square peg into a round hole, its position -- if implemented -- would have a huge impact on all outsourcing.

    In 2009, the OPC published a position that was consistent with the actual wording of the statute. It held that when one organization gives personal information to a service provider, so that the service provider can process the data on behalf of the original organization, this is a transfer and not a disclosure. The distinction is important because transfers, unlike disclosures, do not require consent from the individual. Data is disclosed when it is given to another organization for use by that organization for its own purposes. In a transfer scenario, the personal information is protected by operation of the accountability principle: the organization that originally collected the data and has transferred it to a service provider remains responsible for the personal data, and has to use contractual and other means to make sure that the service provider takes good care of the personal information at issue. Importantly, in its 2009 guidance, the OPC correctly noted that “PIPEDA does not distinguish between domestic and international transfers of data.” Consent was not required, but the OPC did recommend that notice be given to the individual:

    Organizations must be transparent about their personal information handling practices. This includes advising customers that their personal information may be sent to another jurisdiction for processing and that while the information is in another jurisdiction it may be accessed by the courts, law enforcement and national security authorities.

    The 2009 policy position reflects the consensus of most privacy practitioners since PIPEDA came into effect in 2001. The new position is a complete reversal and discards the notion of “transfers” of personal information for processing: 

    Under PIPEDA, any collection, use or disclosure of personal information requires consent, unless an exception to the consent requirement applies. In the absence of an applicable exception, the OPC’s view is that transfers for processing, including cross border transfers, require consent as they involve the disclosure of personal information from one organization to another. Naturally, other disclosures between organizations that are not in a controller/processor relationship, including cross border disclosures, also require consent. [emphasis added]

    The new position concludes that because there is nothing in PIPEDA that specifically exempts transfers from consent, transfers can be folded into the mandatory consent scheme:

    While it is true that Canada does not have an adequacy regime [as in Europe] and that PIPEDA in part regulates cross border data processing through the accountability principle, nothing in PIPEDA exempts data transfers, inside or outside Canada, from consent requirements. Therefore, as a matter of law, consent is required. Our view, then, is that cross-border data flows are not only matters decided by states (trade agreements and laws) and organizations (commercial agreements); individuals ought to and do, under PIPEDA, have a say in whether their personal information will be disclosed outside Canada.

    This new position, while demanding consent, brings the true nature of that consent into question. On the one hand, the organization has to get consent. On the other hand, the individual can be given no meaningful choice or ability to opt out, because the organization can say “take it or leave it”:

    Organizations are free to design their operations to include flows of personal information across borders, but they must respect individuals’ right to make that choice for themselves as part of the consent process. In other words, individuals cannot dictate to an organization that it must design its operations in such a way that personal information must stay in Canada (data localisation), but organizations cannot dictate to individuals that their personal information will cross borders unless, with meaningful information, they consent to this.

    There is little basis in the statute for this position reversal, and the OPC’s consultation document shows some significant mental gymnastics to get where they want to go notwithstanding the actual scheme of the Act. 

    Because PIPEDA does not deal with crossborder transfers in any specific way, the only way for the OPC to get to the result it seeks is to impose its new requirements on all transfers for processing by a third party, regardless of whether that processing involves moving the personal information outside of Canada. And to highlight the shortcomings of shoehorning this principle into the existing statute: the new position would not affect in any way a US company operating in Canada that decided after the fact to move data to its own US-based data centre, because that would not be a disclosure or a transfer from one entity to another. 

    The proposal immediately garnered significant criticism. Lisa Lifshitz wrote for Canadian Lawyer Magazine:

    This is problematic in several respects as this analysis flies in the face of years of guidance from the OPC (and reiterated repeatedly, including in the 2012 Privacy and Outsourcing for Businesses guidance document) that a transfer for processing is a "use" of the information, not a disclosure. Assuming the information is being used for the purpose it was originally collected, additional consent for the transfer is not required; it is sufficient for organizations to be transparent about their personal information handling practices. This includes advising Canadians that their personal information may be sent to another jurisdiction for processing and that while the information is in another jurisdiction it may be accessed by the courts, law enforcement and national security authorities.

    ***

    The OPC’s implement-first-ask-permission-later approach to changing the consent requirements for cross-border data transfers is troublesome at best and judging from initial reactions, sits uneasily with many (me included).

    Likely knowing this, at the same time it released the Equifax decision the privacy commissioner also announced a “Consultation on transborder dataflows” under PIPEDA, not only for cross-border transfers between controllers and processors but for other cross border disclosures of personal information between organizations. The GDPR-style language used in this document is no accident and our regulator is seemingly trying to ensure the continued adequacy designation of PIPEDA (and continued data transfers from the EU to Canada) by adopting policy reinterpretations (and new policies) pending any actual legal reform of our law. Meanwhile, the OPC’s sudden new declaration that express consent is required if personal information will cross borders (and the related requirement that individuals must be informed of any options available to them if they do not wish to have their personal information disclosed across borders) introduces a whole new level of confusion and complexity regarding the advice that practitioners are supposed to be giving their clients pending the results of the consultations review, not to mention the potential negative business impacts (for consumers/vendors of cloud/managed services and mobile/ecommerce services, just to name a few examples) that may arise as a consequence.

    Michael Geist has written about the OPC’s approach on his blog:

    While the OPC position is a preliminary one – the office is accepting comments in a consultation until June 4 – there are distinct similarities with its attempt to add the right to be forgotten (the European privacy rule that allows individuals to request removal of otherwise lawful content about themselves from search results) into Canadian law. In that instance, despite the absence of a right-to-be-forgotten principle in the statute, the OPC simply ruled that it was reading in a right to de-index search results into PIPEDA (Canada’s Personal Information Protection and Electronic Documents Act). The issue is currently being challenged before the courts.

    In this case, the absence of meaningful updates to Canadian privacy law for many years has led to another exceptionally aggressive interpretation of the law by the OPC, effectively seeking to update the law through interpretation rather than actual legislative reform.

    The OPC is inviting comments up to June 4, 2019 and it is expected they’ll get an earful. The Canadian Technology Law Association is planning to make a submission. For more information or to contribute, contact CAN-TECH Law’s President James Kosa.

  • 4 Apr 2019 12:41 PM

    Office of the Superintendent of Financial Institutions issues Advisory

    On March 31, 2019, the Technology and Cyber Security Reporting Advisory came into effect, setting out the Office of the Superintendent of Financial Institutions’ (OSFI) expectations for federally regulated financial institutions (FRFIs) with regard to technology or cyber security incidents. A “technology or cyber security incident” is defined as an incident which has “the potential to, or has been assessed to, materially impact the normal operations of a FRFI, including confidentiality, integrity or availability of its systems and information.” FRFIs should report an incident which has a high or critical severity level to OSFI. The Advisory indicates that a “reportable incident” is one that may have: 

    • Significant operational impact to key/critical information systems or data;
    • Material impact to FRFI operational or customer data, including confidentiality, integrity or availability of such data; 
    • Significant operational impact to internal users that is material to customers or business operations;
    • Significant levels of system / service disruptions;
    • Extended disruptions to critical business systems / operations; 
    • Number of external customers impacted is significant or growing;
    • Negative reputational impact is imminent (e.g., public/media disclosure); 
    • Material impact to critical deadlines/obligations in financial market settlement or payment systems (e.g., Financial Market Infrastructure);
    • Significant impact to a third party deemed material to the FRFI; 
    • Material consequences to other FRFIs or the Canadian financial system; 
    • A FRFI incident has been reported to the Office of the Privacy Commissioner or local/foreign regulatory authorities.

    An FRFI must give notice to OSFI in writing as promptly as possible, but no later than 72 hours after determining that an incident meets the criteria. In addition, updates must be provided at least daily until all material details have been provided and the incident is contained or resolved. The Advisory also provides four examples of reportable incidents: cyber-attack, service availability and recovery, third party breach, and extortion threat.

  

Canadian Technology Law Association

1-189 Queen Street East

Toronto, ON M5A 1S2

contact@cantechlaw.ca

Copyright © 2024 The Canadian Technology Law Association, All rights reserved.