

  • 30 May 2019 12:12 PM | Anonymous

    Broad proposals more of an election platform than an action plan for digital issues

    In a speech at the Empire Club on May 21, 2019 (YouTube recording), Innovation Minister Navdeep Bains outlined a “Digital Charter” intended to guide future legislation and policy priorities in the areas of trust, data policy, privacy, misinformation and democracy. The Charter is based on ten principles, some of which have been further elaborated on in documentation linked from that page:

    1. Universal Access: All Canadians will have equal opportunity to participate in the digital world and the necessary tools to do so, including access, connectivity, literacy and skills.
    2. Safety and Security: Canadians will be able to rely on the integrity, authenticity and security of the services they use and should feel safe online.
    3. Control and Consent: Canadians will have control over what data they are sharing, who is using their personal data and for what purposes, and know that their privacy is protected.
    4. Transparency, Portability and Interoperability: Canadians will have clear and manageable access to their personal data and should be free to share or transfer it without undue burden.
    5. Open and Modern Digital Government: Canadians will be able to access modern digital services from the Government of Canada, which are secure and simple to use.
    6. A Level Playing Field: The Government of Canada will ensure fair competition in the online marketplace to facilitate the growth of Canadian businesses and affirm Canada's leadership on digital and data innovation, while protecting Canadian consumers from market abuses.
    7. Data and Digital for Good: The Government of Canada will ensure the ethical use of data to create value, promote openness and improve the lives of people—at home and around the world.
    8. Strong Democracy: The Government of Canada will defend freedom of expression and protect against online threats and disinformation designed to undermine the integrity of elections and democratic institutions.
    9. Free from Hate and Violent Extremism: Canadians can expect that digital platforms will not foster or disseminate hate, violent extremism or criminal content.
    10. Strong Enforcement and Real Accountability: There will be clear, meaningful penalties for violations of the laws and regulations that support these principles.

    Given the short window before Parliament rises for the summer and an election expected in October, the Digital Charter has been understood as being as much an election platform as anything else. And, in many cases, the Digital Charter restates previous statements of principles made by the federal government.

    In particular, the Minister, in his speech and in subsequent documentation, outlined significant changes to Canada’s private sector privacy law, the federal Personal Information Protection and Electronic Documents Act. These are described in “Strengthening Privacy for the Digital Age”, which does not lay out many specifics about privacy law reform, but includes a list of “possible options” and “considerations and questions” for each of them. Most significant, perhaps, is an intention to increase the Privacy Commissioner’s enforcement powers, though this too is light on specifics.

  • 30 May 2019 12:11 PM | Anonymous

    Some causes of action viable, but application fails on commonality of claims across the proposed class

    The Ontario Superior Court of Justice has refused to certify a proposed class action against a casino that was a victim of a cyberattack that saw personal information of about 11,000 customers posted online. In Kaplan v. Casino Rama, the application mainly failed on the question of “commonality” among the proposed class members, but the judge commented upon other important elements of the case put before him.

    The facts are summarized by the judge:

    [1] Two and a half years ago, in November 2016, Casino Rama was targeted in a cyber-attack. An anonymous hacker accessed the Casino’s computer system and stole personal information relating to customers, employees and suppliers. When ransom demands proved futile, the hacker posted the stolen data on the internet. Just under 11,000 people had some personal information posted online.

    [2] The Casino contacted all appropriate authorities, took steps to close down the two websites that contained the stolen information, notified the thousands of customers, employees and suppliers potentially affected by the security breach and offered free credit monitoring services for one-year to many of them.

    [3] Fortunately, some two and half years later, there is no evidence that anyone has experienced fraud or identity theft as a result of the cyber-attack. There is no evidence that anyone has sustained any compensable financial or psychological loss.

    The plaintiffs sought certification in negligence, breach of contract, intrusion upon seclusion, breach of confidence and publicity given to private life. The judge concluded that the claims for breach of confidence and publicity given to private life were “doomed to fail and should be struck.” It must be noted that this test is based solely on what is in the pleadings, rather than on anything proven in evidence.

    Interestingly, the judge did find that certain representations in the casino’s privacy policy could amount to contractual representations, the breach of which could ground a contractual claim:

    [25] Breach of contract. Nor am I prepared to find that the breach of contract claim as pleaded is doomed to fail. I agree with the defendants that a company’s recitation of a privacy policy whose scope and content is determined solely by federal or provincial privacy law does not generate an enforceable consumer agreement. As recognized in John Doe and Broutzas, courts generally do not enforce agreements that simply repeat without more pre-existing statutory duties.

    [26] Here, however, there is more. The plaintiffs allege breach by the defendants of their own privacy policy (not just the one that was statutorily-mandated) and breach of “industry standards” whatever that may mean.

    [27] I am therefore inclined to find that the breach of contract claim discloses a viable cause of action under s. 5(1)(a) of the CPA. [footnotes omitted]

    For the breach of confidence claim, the Court concluded that a failure to secure the plaintiffs’ confidential and personal information was not a “misuse” of that information, so this claim was doomed to fail. 

    While some of the claims may have been viable individually, the Court concluded that there was no commonality that could permit the certification:

    [55] Section 5(1)(c) of the CPA requires that the claims or defences of the class members raise common issues. There is no dispute about the applicable law. For an issue to be common, it must be capable of being answered once for all class members. As noted in the leading class actions text:

    [I]f an issue can be resolved only by asking it of each class member, it is not a common issue …An issue is not “common” simply because the same question arises in connection with the claim of each class member, if that issue can only be resolved by inquiry into the circumstances of each individual’s claim … The fact of a common cause of action asserted by all class members does not in itself give rise to a common issue since the actual determination of liability for each class member may require individualized assessments.[37]

    [56] The problem here, with almost all of the PCIs [proposed common issues], is that there is no basis in fact for either the existence of the PCI or its overall commonality or both. Further, many of the PCI’s, particularly those that ask about duty of care or breach of a standard of care, require so much in the way of individual inquiry that any commonality is overwhelmed by the need for individualized assessments.

    With the explosion of privacy class action lawsuits following the Ontario Court of Appeal decision in Jones v Tsige, we are beginning to have a body of case law refining how courts will address certification questions, particularly where there has been no tangible harm to the individual proposed class members.

  • 30 May 2019 12:10 PM | Anonymous

    Man acquitted on parking ticket because electronic document from online “pay for parking” system not authenticated

    In City of St. John’s v. Sean Callahan, the accused had been issued a parking ticket under the relevant city by-law after his motor home was alleged to have been parked illegally. The motor home had been parked in an area that was clearly indicated by signs to be the site of a “park and pay” mechanism; users were directed to download a parking app onto their phones. The trial judge noted that while the regime expected a certain level of technological sophistication from users, the process to be followed was clearly set out in the by-law and there was a phone number provided on the signage for people needing assistance. The accused testified that he simply did not appreciate the contents of the signage and had parked his vehicle at a parking pole that had no meter, which was permissible under the by-law.

    A city Parking Enforcement Officer had accessed the online parking system and obtained a list of vehicles that had paid for parking at the relevant time, and noted that the accused’s vehicle was not among them. At trial he produced a printed copy of this list, which was filed as an exhibit, but he gave no testimony about the online system or how it worked. This proved to be the fatal flaw in the city’s case. Judge Orr began by noting the requirements for proof of electronic documents:

    [9] In this case non-payment at the time the ticket was issued was admitted by Mr. Callahan. However I note that when a document is produced at trial, in this case the printed parking record, the prerequisite to its admission is authentication. Methods of authentication include viva voce testimony, common law rules and presumptions, or statutory instruments. (The Law of Evidence in Canada, Fourth Edition, p. 1243.) Parking Officer Brown did not testify as to how the electronic system worked, how or on what system the records were stored, or their accuracy. Besides Officer Brown’s evidence there was no other evidence called as to the system’s integrity or how the records were stored, created and retrieved; no technical evidence of any kind. There are no legislative provisions in the Bylaw itself or the Highway Traffic Act, The City of St John’s Act, The Provincial Offences Act or The Evidence Act that set out how evidence about the payment process and the retrieval of the record from its electronic data base can be admitted and interpreted.

    [10] In Criminal proceedings the admission of digital records is governed by 31.1-31.8 of the Canada Evidence Act. The statutory regime is set up primarily to deal with issues about the integrity of the computer system. It does not deal with the admissibility of the contents of electronic records. Instead it creates two pre-conditions that must be met, the authenticity rules and best evidence rules. Section 31.1 provides that a person seeking to admit an electronic document must prove its authenticity by “evidence capable of supporting a finding that the electronic document is that which it purports to be.” Similarly, with respect to the best evidence rule there is a presumption created of system integrity that “there is evidence capable of supporting a finding that the system was operating properly”. This can be easily addressed; R. v. Nichols (2004) O.J. No. 6186 held that viva voce evidence from a system user can be evidence that meets the threshold for both issues and expert evidence was unnecessary. These sections are not part of the evidentiary rules adopted by the Summary Proceedings Act but they do encapsulate in most respects the common law rules that would apply.

    Judge Orr then went on to express the need to balance functional practicality in the authentication of electronic evidence with the need to avoid over-expansive use of judicial notice:

    [11] Judicial notice by a court of facts without the requirement of proof is permissible only with respect to facts: 1) so notorious as not to be the subject of dispute among reasonable persons; or 2) capable of immediate and accurate demonstration by resorting to readily accessible sources of indisputable accuracy and may be noticed by the court without proof of them by any party. (R v Williams, 1998 CanLII 782 (SCC), [1998] 1 SCR 1128). Courts have identified the issue of judicial notice as having its own particular problem when dealing with information provided by new technologies. Expert evidence has increasingly but unevenly been held to be unnecessary to explain how technology and social media widely used by the general public and government agencies works. In R. v. Hamilton, 2011 ONCA 399 (CanLII), [2011] O.J. No. 2306, technicians were permitted to testify about the location of cell phones without being qualified as “experts”. On the other hand, in R. v. Peliech, [2012] O.J. No. 2467, a Mohan voir dire was held to explain how a widely used software program “Lime Wire” was used. Expert evidence implies that the witness has special knowledge. It seems clear that Courts should accept that technologies broadly used and understood by members of the public do not need expert proof to be accepted. At the same time, judges must exercise caution when taking judicial notice of notorious facts and relying on internet sources of “indisputable” accuracy, such as Google Maps. In R. v. Calvert, 2011 ONCA 379, the trial judge reviewed a Google Map on his own initiative to ascertain the distance between the scene and the police station. This was held to be permissible; however, the closer the judicially noted matter is to the central issue, the stricter the requirements of indisputability and notoriety.

    [12] A Court should adopt a functional approach to new technologies and conduct trials effectively and realistically. At the same time when the technology is being relied on to establish an offence even as minor an offence as the breach of a By Law there needs to be a level of confidence in the evidence presented that would justify entering a finding of guilt.

    [13] In this case, there is no proof authenticating the parking record. The evidence of the Parking Enforcement Officer did not provide any detail or information to establish the reliability or authentication of the parking record. Consequently, given this gap, absent the creation of a legislated rule, the City’s evidence of non-payment is inadmissible.

    The City having failed to prove non-compliance with the by-law, the accused was acquitted.

    In our view, this otherwise low-stakes case hits a number of interesting points regarding the admissibility and use of electronic evidence. We are past the point where expert evidence is always required in order to authenticate electronic documents, since it is well within the capacity of lay witnesses to testify as to the practical functionality of various kinds of computer systems. As Justice Paciocco has noted, it is not necessary that witnesses understand the entire inner workings of any kind of machine in order to be able to testify as to how they work (see Justice David M. Paciocco, “Proof and Progress: Coping With the Law of Evidence in a Technological Age” (2013) 11 Canadian Journal of Law & Technology 181). Pragmatism is key. On the other hand, while judicial notice is available in some respects, it should be used cautiously. Equally interesting is Judge Orr’s suggestion that the technical requirements for adducing electronic documents, which are set out in the federal and provincial evidence statutes, can be assimilated into the common law for the purposes of cases brought under the Summary Proceedings Act.

  • 30 May 2019 11:48 AM | Anonymous

    Parents in child support proceeding duel over justiciability of cryptocurrency assets

    In M.M.D. v. J.A.H., a child support proceeding, the Applicant mother argued that the Respondent father had more income available to him to support their child than was indicated by the evidence he had led thus far. Specifically, she alleged that he had a large amount of equity in Bitcoin investments (over $9 million), the details of which he was failing to disclose. The judge agreed with the mother that it was clear that the father had more income than he was claiming for particular tax years but refused to impute income to the father from the Bitcoin investments, given the complexity of the asset and lack of evidence. The mother was awarded an interim payment from the father to retain an expert to analyze the Bitcoin assets. Regarding disclosure of evidence of the Bitcoin, the judge noted:

    [138] The Respondent has investments in cryptocurrency with a value of $9,502,416 as at February 8, 2019. He asks that only redacted documents related to this investment be produced to the Applicant and filed with the court.

    [139] The Respondent states there is a substantial risk that production of information could lead to attacks and give third parties the ability to access and perhaps steal these assets.

    [140] I have no expert evidence on this issue. It is clearly a volatile, emerging, intangible source of wealth which the courts will have to grapple with more frequently in future.

    [141] For purposes of this case, I find there is no prejudice to the Applicant if she receives the disclosure of the Respondent’s cryptocurrency assets in redacted form. There is a greater risk of prejudice to the Respondent if he is required to produce them in an unredacted form which could compromise the security of this substantial asset.

  • 15 May 2019 12:29 PM | Anonymous

    Class action for breach of copyright over obituaries and attached photos successful at Federal Court

    In Thomson v. Afterlife Network Inc., Thomson was the representative plaintiff in a class action claiming that obituaries and photographs authored and taken by Thomson (and others in the class), which had been posted online by various funeral homes and newspapers, were taken from the internet without permission and reproduced by Afterlife for profit. The suit alleged copyright infringement and infringement of the moral rights of the members of the class, as the Terms of Service on Afterlife’s website stated that Afterlife owned the copyright in the website contents.

    Thomson’s father passed away in January 2017, at which point she authored an obituary that, along with a photo of her father, was published by a funeral home with her permission. A year later she discovered that Afterlife displayed the obituary and photograph on their website, without her permission, and provided options to buy flowers and virtual candles. Thomson submitted that Afterlife caused viewers of their website to believe she had consented to the use of her father’s obituary and image, and to believe that she would profit from sales.

    Afterlife’s solicitor withdrew and Afterlife did not participate in the proceedings, having shut down its website one month after the class proceeding was certified. Traffic had been redirected to a similar website, with template rather than identical obituaries. 

    The court found, on the basis of CCH Canadian Ltd v Law Society of Upper Canada, 2004 SCC 13, that the obituaries and photographs were original works in which the author, here Thomson, possessed copyright, and that Afterlife’s actions constituted copyright infringement since it had reproduced the original works without permission. As to the moral rights claim, the court invoked Maltz v. Witterick, 2016 FC 524, which held that “an author’s right to the integrity of a work includes not only a highly subjective aspect, which the author of the work must establish, but also an objective element requiring evaluation of the prejudice to that author’s honour or reputation based on public or expert opinion.” The court found that while Thomson was sincere that her honour and reputation were prejudiced, no objective evidence of such was provided, and therefore the court was unable to make a determination as to prejudice.

    The FC awarded aggregate damages of CA$10 million, and aggravated damages of CA$10 million across 2 million instances of infringement. The aggravated damages were granted due to the court’s finding that Afterlife’s conduct was high-handed and had significant impact on the members of the class.

    (with a contribution from Daniel Roth)

  • 15 May 2019 12:28 PM | Anonymous

    Court of Appeal finds trial judge erred in failing to acknowledge authenticity and admissibility of texts and photos

    In R. v. C.B., the two appellants had been convicted at trial of assault, sexual assault and unlawful confinement of the two 16-year-old complainants, arising from events that took place over the course of two days at the home of the appellant C.B. On appeal were issues relating to electronic evidence that had been led at trial. On the main issue, the complainant DP was cross-examined on the basis of texts between her and CB (extracted from CB’s phone) which were contemporaneous to the alleged offences. These texts appeared to show her joking about sex and the use of sex toys, around the time she said she had been sexually assaulted. She acknowledged that the phone number for the phone on which the texts were received was hers, and that one of the texts related to her then-boyfriend. At this point in the judgment Watt J.A., writing for the court, provided a mini-excursus on texting terminology (possibly recounting the witness’s testimony):

    But the term “LMFAO”, which was included in her text, could mean several things. It could mean what it says. Or it could mean that somebody is uncomfortable with the situation and is just laughing about it to show them that. It is undisputed that the term “LMFAO” is a common acronym used in text messaging for “laugh my fucking ass off”.

    When cross-examination resumed the next day, however, DP denied that a number of the texts sent from her phone were authored by her, because she “did not talk like that,” and suggested that a monitoring app had been placed on her phone by CB. In his reasons for judgment the trial judge emphasized the complainant’s later statements about the texts, noted there had been no forensic evidence led about them, and stated they had “no probative value” because there was “no evidence as to whose phone it was, who put the messages in the phone.”

    The appellants appealed on the basis that the trial judge appeared to have found that the texts had not been authenticated, and thus were inadmissible, when there was sufficient evidence on the record as to the authorship of the texts. Watt J.A. agreed with this argument, tracing the provisions regarding the admissibility of electronic evidence in the Canada Evidence Act and noting that the authentication requirement, s. 31.1, simply mirrored the common law and its very low threshold that there be “some evidence” that the electronic documents were what the offering party purported them to be. With specific regard to communications like texts, he remarked:

    [69] At common law, correspondence could be authenticated by the “reply letter” doctrine: to authenticate correspondence as having been sent by one individual to another, evidence is adduced to show it is a reply to a letter sent to that person. As a matter of logic, the same should hold true for text messages and emails. Evidence that A sent a text or email to B whom A believed was linked to a specific address, and evidence of a response purportedly from B affords some evidence of authenticity: David Paciocco, “Proof and Progress: Coping with the Law of Evidence in a Technological Age” (2013) 11 C.J.L.T. 181, at pp. 197-8 (Paciocco).

    [70] In a similar way, text messages may be linked to particular phones by examining the recorded number of the sender and receiving evidence linking that number to a specific individual, as for example, by admission: Paciocco, at p. 198.

    [71] But what of the prospect of tampering? Does it have to be negated before digital evidence can be properly authenticated?

    [72] As a matter of principle, it seems reasonable to infer that the sender has authored a message sent from his or her phone number. This inference is available and should be drawn in the absence of evidence that gives an air of reality to a claim that this may not be so. Rank speculation is not sufficient: R. v. Ambrose, 2015 ONCJ 813 (CanLII), at para. 52. And even if there were an air of reality to such a claim, the low threshold for authentication, whether at common law or under s. 31.1 of the CEA, would seem to assign such a prospect to an assessment of weight.

    Here, it appeared that the trial judge had made two mistakes: he had failed to fasten on the fact that DP had provided testimony proving that the set of texts was a conversation between herself and CB, and he had appeared to require expert forensic evidence in order to establish authenticity, which was unnecessary. Justice Watt concluded on this point:

    [78] Satisfaction of the evidentiary threshold for authentication under s. 31.1 of the CEA or at common law renders the evidence admissible; in other words, available to the trier of fact for ultimate evaluation [editor’s note: assuming, we expect Justice Watt meant, that the evidence does not offend one of the admissibility rules, e.g. hearsay, prior consistent statement]. It does not follow from admissibility that the trier of fact must find that the evidence is in fact what it claims to be. What remains of the dispute after admissibility has been established relates to the weight to be assigned to the evidence. And that issue is left to the trier of fact to decide.

  • 15 May 2019 12:27 PM | Anonymous

    Jarvis applied to screenshots made secretly during Skype chat

    The Ontario Court of Appeal has further clarified the voyeurism offence in section 162(1) of the Criminal Code with its decision in R v Trinchi. In part the decision is an application of the recent Supreme Court decision on the same offence in R v Jarvis (and indeed the appeal was delayed until Jarvis had been decided), but it also settles a point that was not central to Jarvis.

    Trinchi and his partner, the complainant, were in a long-distance relationship in the course of which, for about a year and a half, they regularly engaged in intimate webcam video chats. During the video chats both parties were nude, and the complainant willingly posed in sexually provocative positions for Trinchi. Trinchi took screenshots of her from these video livestreams: the complainant was aware that her image was being captured as video and streamed over the internet to Trinchi, but she was unaware that these screenshots were taken. After the complainant ended the relationship, these screenshots were widely distributed to many people by email, and as a result Trinchi was charged with a number of offences, most of which related to the distribution of intimate images. Because of the possibility that Trinchi’s new girlfriend had distributed the images, the trial judge had reasonable doubt and acquitted the accused of the distribution charges. However, the judge had no doubt that Trinchi had taken the screenshots and so convicted him of the voyeurism charge.

    There are various versions of the voyeurism offence, but there was no dispute that the element set out in section 162(1)(b) was met, namely that

    (b) the person is nude, is exposing his or her genital organs or anal region or her breasts, or is engaged in explicit sexual activity, and the observation or recording is done for the purpose of observing or recording a person in such a state or engaged in such an activity.

    Similarly it was not disputed that the accused was the person who had taken the screenshots. However, the voyeurism offence has two further requirements: 1) that the complainant was “in circumstances that give rise to a reasonable expectation of privacy”, and 2) that the screenshot was made “surreptitiously”. Trinchi argued that neither of these elements was made out: the Ontario Court of Appeal disagreed and upheld the conviction.

    Trinchi argued that the complainant could not have a reasonable expectation of privacy in the circumstances, having knowingly and willingly posed nude in front of the webcam. That is, he argued that although “engaging in sexual activity in one’s own bedroom is a circumstance that attracts a high expectation of privacy…the complainant admitted him within her circle of privacy by voluntarily exposing herself, knowing she was doing so through a camera, a device the very purpose of which is to capture images” (para 17). He argued that the voyeurism offence was designed to apply to “peeping toms,” not to intimate partners, and that the complainant’s act of voluntarily exposing herself in front of a camera negated her reasonable expectation of privacy. 

    In this context, however, the Ontario Court of Appeal relied on Jarvis, noting that the test for a reasonable expectation of privacy within the meaning of this section was “whether in the circumstances the person observed or recorded would reasonably expect not to be the subject of the type of observation or recording that in fact occurred” (para 14). They noted that Jarvis had offered, as an example, the possibility of one partner video-recording consensual sexual activity without the knowledge of the other, and held that that was essentially what had occurred here:

    [19] This example, it seems to me, provides a short and direct path to the conclusion that the complainant had a reasonable expectation the appellant would not take screenshots of their consensual sexual activity. It should not make a difference that their consensual activity took place in “virtual space” rather than in a physical room. She necessarily expected to be observed by the appellant in the live-streamed video, but did not expect he would make a permanent recording of her naked.

    The most notable consideration for the Court of Appeal was the distinction between mere observation and recording a permanent image. The making of a permanent image raises the risk that the complainant will be seen by people other than those she consented to be seen by, as in fact occurred in this case. On that basis the Ontario Court of Appeal found that the complainant did in fact have a reasonable expectation of privacy.

    The remaining issue was whether the accused had acted “surreptitiously”, which was largely a matter of statutory interpretation. Trinchi argued that the complainant never indicated she did not want screenshots taken, and that he never promised he would not take screenshots. More importantly, though, Trinchi argued that the trial judge had applied the wrong test to determine surreptitiousness: it was a factor that had to be determined by looking at an accused’s intention, not from the complainant’s perspective. He argued that Jarvis had pointed to the difference between “reasonable expectation of privacy” and “surreptitiously”, and that the former focused on the complainant’s perspective, but the latter related to the observer.

    The Crown argued, to the contrary, that it should be sufficient if a complainant did not know of the recording, and the accused was aware the complainant did not know: if there were a requirement to prove the accused’s intention, they argued, it would be too difficult to prove that an accused had acted “surreptitiously”.

    On this matter of statutory interpretation, the Ontario Court of Appeal sided with Trinchi’s argument that it was necessary to prove that the accused intended that the complainant be unaware. They held:

    [46] I am satisfied that the ordinary meaning of the word “surreptitiously” does include intent as part of its meaning. A person who observes or records with the intention that the subject not be aware that he is doing so, is attempting to avoid notice or attention. Moreover, I consider M.E.N.’s articulation of the mental element to be apt. The mental state required by the word “surreptitiously” in s. 162(1) is the intent the subject not be aware that she is being observed or recorded. In a prosecution under s. 162(1)(b), the Crown may prove the accused acted surreptitiously by proving that he observed or recorded the subject with the intention she be unaware he was doing so. 

    They also suggested that this definition was not likely to create the difficulties in proof suggested by the Crown:

    [48] Understanding the word “surreptitiously” in this way would not prevent a successful prosecution in the Crown’s example of the smartphone on the accused’s bedside table recording consensual sexual activity. In the example, the accused would have had to initiate the smart phone’s video recording mode and position the device so its camera focused on the sexual activity. Where the complainant testifies that she did not consent to being recorded and was unaware the recording was being made, and without evidence to explain the positioning and active state of the phone, the fact-finder would have an adequate basis to infer that the accused intended the complainant be unaware he was recording her.

    On essentially that basis the Ontario Court of Appeal concluded that surreptitiousness had been shown in this case. The trial judge had repeatedly stated that Trinchi had acted “secretly” and that his actions were “clandestine”. Further,

    [55]…The appellant’s state of mind could be inferred from the circumstantial evidence. The complainant did not know the screenshots were being taken. The appellant never told the complainant he was taking screenshots; the subject of taking screenshots never came up during the parties’ 400-odd video chats. The complainant could see the appellant during their video chats, and he had taken the screenshots in a way that the complainant had not noticed. After taking the screenshots the appellant never mentioned them. Her lack of awareness could also be reasonably expected under the totality of these circumstances. These facts supported the trial judge’s inference that the appellant had intended that the complainant not know he was taking screenshots of her.

    Accordingly the conviction was upheld.

  • 1 May 2019 12:36 PM | Anonymous

    Conclusions go beyond safeguards implicated in data breach and lead to significant re-thinking of transfers of personal information

    On April 9, 2019, the Office of the Privacy Commissioner of Canada (“OPC”) released its report of findings related to the Equifax data breach. On September 7, 2017 Equifax Inc. publicly announced that an attacker had accessed the personal information of more than 143 million individuals and later reported that the breach affected around 19,000 Canadians. The OPC commenced an investigation and concluded that the breach affected some Canadians whose information was collected by US-based Equifax Inc. (also referred to as “Equifax US”) and some Canadians who had purchased or received products, such as fraud alerts from Canada-based Equifax Canada Co. (“Equifax Canada”). The nature of the information and how it was acquired by either Equifax entity is described by the OPC in its report of findings:

    The affected personal information was collected by Equifax Inc. from certain Canadian consumers who had direct-to-consumer products or fraud alerts. The direct-to-consumer products included paid online access by individuals to their Canadian credit report, credit monitoring, and alert services (in relation to their Canada credit files). The information was collected by Equifax Inc. as it plays an integral role in delivering these direct-to-consumer products and processing certain fraud alert transactions.

    Attackers gained access to Equifax Inc.’s systems on May 13, 2017 by exploiting a known vulnerability in the software platform supporting an online dispute resolution portal that is part of Equifax Inc.’s Automated Consumer Information System (“ACIS”). They then operated undetected within Equifax Inc.’s systems for a period of time and ultimately gained access to Canadian personal information unrelated to the functions of the compromised portal.

    Information in Canadians’ credit files is stored by Equifax Canada on servers located in Canada and segregated from Equifax Inc.’s systems. However, during the process of delivering direct-to-consumer products to Canadians, information from credit files needed to fulfil these products is transferred to Equifax Inc. in the US. For instance, a static copy of a full credit file is transferred by Equifax Canada to Equifax Inc. if a credit report is purchased by a consumer. While Equifax Canada’s servers are segregated from Equifax Inc.’s systems, Equifax Canada’s security policies, direction and oversight were, and are, largely managed by Equifax Inc.

    The OPC concluded that both Equifax Canada and the US parent fell short of their privacy obligations to Canadians, focusing on five different areas of compliance:

    (1) Safeguards of Equifax US and Equifax Canada: Directly stemming from the data breach, the OPC found that neither Equifax US nor Equifax Canada implemented safeguards that were adequate as required under PIPEDA. Overall, the OPC concluded that vulnerability management, network segregation, implementation of basic information security practices, and oversight were deficient at Equifax US. Equifax Canada was found to lack adequate safeguards in terms of oversight, vulnerability management and the implementation of basic information security practices.

    (2) Conformity with Retention / Destruction Requirements: The OPC investigated whether personal information was being retained longer than was reasonably necessary. It concluded that there was no process in place to delete Canadian personal information in compliance with the Equifax US data retention policy; the policy was not being followed or monitored.

    (3) Accountability of Equifax Canada for protecting personal information: The OPC found that in the aftermath of the breach, there were a number of significant communications failures with the public and with Canadian consumers in particular. The scope of the Canadian data involved was unclear and was communicated in an unclear manner. Some of the information provided by the companies to the OPC was contradicted by information provided by consumers. The companies did not have a sufficient handle on what information they held, where it came from and who was responsible for it.

    (4) Adequate consent by Canadians for collection and disclosure of information: This may be the most interesting and consequential finding from the Equifax case. Though the OPC has historically seen transfers of personal information from one entity to another for processing as not requiring consent, the OPC has changed its position:

    109. Providing adequate information about available choices when an individual is consenting to the collection, use or disclosure of their information is a key component of valid consent. In this case, it appears reasonable to require consent to the collection of information by, and disclosure of information to, Equifax Inc. as a condition of the online Canadian direct-to-consumer products, as Equifax Canada does not offer these products in-house. However, an individual would still have choices. In addition to the simple option of “not signing-up” for Equifax Canada credit file monitoring or other products, individuals interested in obtaining access to their Equifax Canada credit report could choose to use Equifax Canada’s free credit report service, provided by postal mail and avoiding any information disclosure to Equifax Inc. Equifax Canada does not currently communicate the difference in disclosures to consumers in the course of delivering online or postal access, i.e., that the former involves collection of information by Equifax Inc. and transfers of information to Equifax Inc. in the US, whereas the latter does not.

    110. In summary, Equifax Canada was not adequately clear about: (i) the collection of sensitive personal information by Equifax Inc., in the US, (ii) its subsequent disclosures of sensitive personal information to Equifax Inc., and (iii) the options available to individuals who do not wish to have their information disclosed in this way. Consequently, with respect to Equifax Canada’s practices to obtain consent for collection of personal information by Equifax Inc., and disclosure of personal information to Equifax Inc., the matter is well-founded.

    111. However, as noted in para. 101 above, we acknowledge that in previous guidance our Office has characterized transfers for processing as a ‘use’ of personal information rather than a disclosure of personal information. Our guidance has also previously indicated that such transfers did not, in and of themselves, require consent. In this context, we determined that Equifax Canada was acting in good faith in not seeking express consent for these disclosures.

    (5) Adequate Mitigation Measures: In the aftermath of the breach, the OPC concluded that offering a brief period of credit monitoring was inadequate relative to the scope of service Equifax could provide to Canadians in the circumstances, especially where better products (e.g. lifetime credit freezes) were offered to Americans affected by the same breach.

    In the end the OPC made a number of recommendations, most of which are binding on Equifax as a result of a compliance agreement entered into between Equifax Canada and the OPC:

    161. The following recommendations relate to contraventions found in Sections 1, 2, and 3 of this report, i.e. Safeguards and Retention by Equifax Inc. and Accountability of Equifax Canada. We recommended that Equifax Canada:

    1. Implement a procedure to keep the written arrangement with Equifax Inc., covering all Canadian personal information under Equifax Canada’s control collected by Equifax Inc. and disclosed to Equifax Inc., up-to-date.
    2. Institute a robust monitoring program by Equifax Canada against the requirements in the arrangement, and a structured framework for addressing any issues arising under it.
    3. Identify Canadians’ personal information that should no longer be retained by Equifax Inc. according to its retention schedule and delete it.
    4. Every two years, for a six-year term, provide to our Office:
      1. a report from Equifax Canada detailing its monitoring for compliance with the arrangement described in b. above;
      2. an audit report and certification, covering all Canadians’ personal information processed by Equifax Inc., against an acceptable security standard, conducted by an appropriate external auditor; and
      3. a third party assessment, covering all Canadians’ personal information processed by Equifax Inc., of Equifax Inc.’s retention practices.

    162. The following recommendations relate to contraventions found in Section 5 of this report, i.e. Safeguards of Equifax Canada. We recommended that Equifax Canada:

    1. Provide our office with a detailed security audit report and certification, covering all Canadian personal information it is responsible for, against an acceptable security standard, conducted by an appropriate external auditor every two years for a six-year term.

    The re-thinking of consent and outsourcing in this finding has led to the OPC’s consultation on transborder dataflows, which is seeking input on this radical change in position by the OPC, discussed in the April 17 edition of the CanTech newsletter.

  • 1 May 2019 12:36 PM | Anonymous

    Supreme Court offers mixed reasons for finding no section 8 violations

    The Supreme Court of Canada discussed the nature of electronic communications and the ability of the State to make use of those conversations in its decision in R v Mills, though the case provides less in the way of real guidance than it could have.

    Mills became the subject of an undercover operation conducted by police to catch child lurers on the internet. An officer posed as a 14-year-old girl, ‘Leann’. Mills used Facebook and Hotmail to send sexually explicit messages to ‘Leann’, ultimately arranging to meet her in a park. The police arrested him at the park. During the course of the undercover operation, the officer posing as ‘Leann’ recorded the conversation by taking screenshots using purpose-built software called “Snagit”. The officer did not have prior judicial authorization to make and keep these screenshots. 

    At trial, Mills applied to exclude the screenshots from evidence. The trial judge concluded the screenshots were “private communications” under section 183 of the Code and therefore that prior judicial authorization had been required under section 184.2 of the Code from the point that Mills became the subject of investigation. The trial judge held the screenshots constituted a “seizure of communications” in breach of Mills’ reasonable expectation of privacy in his communications under section 8 of the Charter. However, the trial judge ultimately held that admission would not bring the administration of justice into disrepute, and so Mills was convicted.

    On appeal, the conviction was upheld. However, the Court of Appeal found that the trial judge had erred in concluding that section 184.2 authorizations were required. Instead, it found that Mills did not have a reasonable expectation of privacy in the communications and so section 8 was not engaged.

    The Supreme Court of Canada dismissed the appeal, but with three different sets of reasons and four decisions. 

    Justice Brown (with Justices Abella and Gascon concurring) found that there was no unreasonable search on the basis that there was no search at all, since the accused did not have a reasonable expectation of privacy in his conversation with the undercover officer. This set of reasons pays the least attention, at least explicitly, to the technological aspect of the communication. They concluded that any section 8 claim requires that the accused have a subjectively held and objectively reasonable expectation of privacy in the subject matter of the search, and in the view of this cohort Mills’ subjective expectation of privacy was not objectively reasonable. Specifically, these three judges held that “adults cannot reasonably expect privacy online with children they do not know” (para 23).

    Generally, whether an expectation of privacy is objectively reasonable has turned on consideration of a number of factual questions, such as whether the person has the ability to regulate access or whether the accused has abandoned the property. There has always been a certain level of discontinuity between the types of factors listed and the question that the objective portion of the analysis is meant to answer, which is whether, on a normative analysis, the privacy interest concerned is one that a person should be entitled to expect in our society. In essence, Justice Brown’s analysis simply goes directly to that normative issue, and concludes that adults cannot reasonably expect privacy online with children they do not know. Society values privacy in the context of many adult-child relationships “including, but in no way limited to, those with family, friends, professionals, or religious advisors” (para 24), but this relationship was not one of those contexts.

    The challenge for this conclusion, as Justice Brown’s cohort recognizes, is that on its face it runs contrary to the long-accepted principle that privacy must be assessed in “broad and neutral terms” that do not lead to ex post facto reasoning: for example, in R v Wong, [1990] 3 SCR 36 the privacy question was whether the accused had a privacy interest when they rented a hotel room, not whether they could have a privacy interest in an illegal gaming operation being conducted in a hotel room. Justice Brown therefore stresses that no such ex post facto reasoning was engaged on the particular facts of this case. The officer who created ‘Leann’ knew that any adult who communicated with ‘her’ would be communicating with a child unknown to them, and so no other sort of communication could result. They argued that on these facts, sanctioning this form of unauthorized surveillance does not impinge citizens’ privacy in a way that is inconsistent with a free and open society in which expectations of privacy are normative. Where the police were aware that ‘Leann’ was fictitious and there was no risk to any genuine adult-child relationship, they could be absolutely certain that no section 8 breach could occur from taking the screenshots, because there was no reasonable expectation of privacy. Where there was no potential for a privacy breach, there was no need for prior judicial authorization, and as such section 184.2 of the Code did not apply.

    Justice Karakatsanis (with Chief Justice Wagner concurring) agreed in the result, and also found that there was no reasonable expectation of privacy in these communications, but for entirely different reasons. She held that

    [39] The right to be secure against unreasonable searches and seizures must keep pace with technological developments to ensure that citizens remain protected against unauthorized intrusions upon their privacy by the state: R. v. Fearon, 2014 SCC 77, [2014] 3 S.C.R. 621, at para. 102; see also R. v. Wong, [1990] 3 S.C.R. 36, at p. 44. However, as technology evolves, the ways in which crimes are committed — and investigated — also evolve.

    Applying those principles to these circumstances, she concluded that no interaction had taken place here which should be considered an interception by the State. Her reasoning rests on an analogy with R v Duarte, [1990] 1 SCR 30. That case dealt with an undercover officer who had a conversation with the accused, and who surreptitiously recorded it: the case found that prior judicial authorization was required even for “consent interceptions” such as that. The argument Justice Karakatsanis makes, however, is that there has never been any suggestion that prior judicial authorization would be required for the undercover officer to have the conversation with the accused: “it is not reasonable to expect that your messages will be kept private from the intended recipient (even if the intended recipient is an undercover officer)” (para 36). On that basis, the accused here had no reasonable expectation of privacy in his conversation with ‘Leann’.

    There is a need for judicial pre-authorization when the state chooses to surreptitiously make a permanent electronic record of such a communication: however, Justice Karakatsanis holds, that is not what occurred here. All that had occurred here was the conversation itself, which happened to be a conversation which took place by electronic means: the State was not creating the record:

    [48]…Mr. Mills chose to use a written medium to communicate with Constable Hobbs. Email and Facebook messenger users are not only aware that a permanent written record of their communication exists, they actually create the record themselves. The analogy with Duarte is to the oral conversation, not the surreptitious recording of that conversation. 

    There was the further issue that in this case the police had used the program “Snagit” to take screenshots of the electronic messages, but Justice Karakatsanis held that this did not change things. Inherently, she held, the communications existed as a written record, and “I cannot see any relevant difference in the state preserving the conversations by using ‘Snagit’ to take screenshots of them, by using a computer to print them, or by tendering into evidence a phone or laptop with the conversations open and visible” (para 56). She did note, however, that:

    [57] My conclusion that s. 8 is not engaged in this case does not mean that undercover online police operations will never intrude on a reasonable expectation of privacy. As technology and the ways we communicate change, courts play an important role in ensuring that undercover police techniques do not unacceptably intrude on the privacy of Canadians. Particularly in the context of the digital world, it is important for courts to consider both the nature and the scale of an investigative technique in determining whether s. 8 is engaged. With respect to the concern about the prospect of broader surveillance made possible by technological advances, as Binnie J. observed in Tessling, “[w]hatever evolution occurs in future will have to be dealt with by the courts step by step. Concerns should be addressed as they truly arise”: para. 55.

    She also added, as a note of caution:

    [60]…The fact that conversations with undercover officers now occur in written form on the Internet does not, in itself, violate s. 8 of the Charter. However, this conclusion in no way gives the police a broad license to engage in general online surveillance of private conversations.

    Justice Moldaver, writing only for himself, agreed with the reasons of both Brown and Karakatsanis JJ. One could see that as making Justice Brown’s decision the majority one, in that four of seven judges ultimately accept his reasoning.

    Justice Martin, also writing only for herself, disagreed with the reasoning of all the other members of the court: she found the accused to have a reasonable expectation of privacy, found that the use of the Snagit software was an interception, and found there to be a section 8 violation. However, as she concluded that the evidence should not be excluded, she agreed in the result. Describing the case as “Duarte for the digital age”, she suggested:

    [88] In this case, we have the opportunity to pull the normative principles of Duarte and Wong through this Court’s more recent Charter s. 8 and Code Part VI jurisprudence — in particular, Patrick; R. v. TELUS Communications Co., 2013 SCC 16, [2013] 2 S.C.R. 3; R. v. Cole, 2012 SCC 53, [2012] 3 S.C.R. 34; Spencer; R. v. Marakah, 2017 SCC 59, [2017] 2 S.C.R. 608; R. v. Jones, 2017 SCC 60, [2017] 2 S.C.R. 696; Reeves. The goal is to arrive at a judicial position that, while firmly grounded in the case law, “keep[s] pace with technological development, and, accordingly, . . . ensure[s] that we are ever protected against unauthorized intrusions upon our privacy by the agents of the state, whatever technical form the means of invasion may take”: Wong, at p. 44.

    [89] The risk contemplated in Duarte was that the state could acquire a compelled record of citizens’ private thoughts with no judicial supervision. At the end of the Cold War era, the way to obtain a real-time record of a conversation was to record it. Today, the way to obtain a real-time record of a conversation is simply to engage in that conversation. This Court must assess how and whether the primary concern of documentation in Duarte still applies to cases in which (a) a communication method self-generates documentation of the communication, and (b) the originator of the communication knows that this occurs. Should this shift in communication technology now allow the state to access people’s private online conversations at its sole discretion and thereby threaten our most cherished privacy principles?

    Justice Martin rejected Justice Karakatsanis’ view that the Facebook exchange was equivalent to only the conversation itself in Duarte, holding that it was equivalent to both the conversation and the electronic recording of it, and argued that “[t]his duality should support, not undermine the protection of privacy rights” (para 93). Similarly she held that the issue of whether the State surreptitiously created the record or whether it was created as a by-product of the communication was irrelevant to the underlying policy concerns:

    [100] The consequences of knowing that, at any point and with reference to any of our statements, we will have to contend with a documented record of those statements in the possession of the state, would be no less than the total “annihilat[ion]” (Duarte, at p. 44) of our sense of privacy.

    Justice Martin also disagreed with Justice Brown’s approach of finding no reasonable expectation of privacy in a conversation between an adult and a child not known to them, arguing that “The Court should not create Charter-free zones in certain people’s private, electronic communications on the basis that they might be criminals whose relationships are not socially valuable” (para 111), concluding that that approach was inconsistent with the principle of content-neutrality.

  • 1 May 2019 12:32 PM | Anonymous

    Privacy Commissioner plans to take Facebook to federal court 

    On April 29, 2019, the Office of the Information and Privacy Commissioner of British Columbia and the Office of the Privacy Commissioner of Canada (“OPC”) released the result of their joint investigation into Facebook, Inc. in connection with Cambridge Analytica. In PIPEDA Report of Findings #2019-002, both Commissioners conclude that Facebook had violated the federal and British Columbia privacy statutes. 

    The investigation stemmed from revelations that personal information of users of a third party app on the Facebook platform was later used by third parties for targeted political messaging. The investigation focussed on: (i) consent of users, both those who installed an app and their friends, whose information was disclosed by Facebook to apps, and in particular to the “thisisyourdigitallife” or TYDL App; (ii) safeguards against unauthorized access, use and disclosure by apps; and (iii) accountability for the information under Facebook’s control.

    The OPC reported that it was disappointed with Facebook’s “lack of engagement” with the investigation, with many of the OPC’s questions going unanswered or answered deficiently. The OPC summarized its findings as follows:

    1. Facebook failed to obtain valid and meaningful consent of installing users. Facebook relied on apps to obtain consent from users for its disclosures to those apps, but Facebook was unable to demonstrate that: (a) the TYDL App actually obtained meaningful consent for its purposes, including potentially, political purposes; or (b) Facebook made reasonable efforts, in particular by reviewing privacy communications, to ensure that the TYDL App, and apps in general, were obtaining meaningful consent from users.
    2. Facebook also failed to obtain meaningful consent from friends of installing users. Facebook relied on overbroad and conflicting language in its privacy communications that was clearly insufficient to support meaningful consent. That language was presented to users, generally on registration, in relation to disclosures that could occur years later, to unknown apps for unknown purposes. Facebook further relied, unreasonably, on installing users to provide consent on behalf of each of their friends, often counting in the hundreds, to release those friends’ information to an app, even though the friends would have had no knowledge of that disclosure.
    3. Facebook had inadequate safeguards to protect user information. Facebook relied on contractual terms with apps to protect against unauthorized access to users’ information, but then put in place superficial, largely reactive, and thus ineffective, monitoring to ensure compliance with those terms. Furthermore, Facebook was unable to provide evidence of enforcement actions taken in relation to privacy related contraventions of those contractual requirements.
    4. Facebook failed to be accountable for the user information under its control. Facebook did not take responsibility for giving real and meaningful effect to the privacy protection of its users. It abdicated its responsibility for the personal information under its control, effectively shifting that responsibility almost exclusively to users and Apps. Facebook relied on overbroad consent language, and consent mechanisms that were not supported by meaningful implementation. Its purported safeguards with respect to privacy, and implementation of such safeguards, were superficial and did not adequately protect users’ personal information. The sum of these measures resulted in a privacy protection framework that was empty.

    The OPC characterized these findings as particularly concerning, as its previous investigation of Facebook in 2009 found similar issues, leading the OPC to the conclusion that Facebook had not taken the recommendations from that investigation seriously. In this investigation, the OPC made the following recommendations:

    Facebook should implement measures, including adequate monitoring, to ensure that it obtains meaningful and valid consent from installing users and their friends. That consent must: (i) clearly inform users about the nature, purposes and consequences of the disclosures; (ii) occur in a timely manner, before or at the time when their personal information is disclosed; and (iii) be express where the personal information to be disclosed is sensitive. ...

    Facebook should implement an easily accessible mechanism whereby users can: (i) determine, at any time, clearly what apps have access to what elements of their personal information [including by virtue of the app having been installed by one of the user’s friends]; (ii) the nature, purposes and consequences of that access; and (iii) change their preferences to disallow all or part of that access.

    Facebook’s retroactive review and resulting notifications should cover all apps. Further, the resulting notifications should include adequate detail for [each user] to understand the nature, purpose and consequences of disclosures that may have been made to apps installed by a friend. Users should also be able to, from this notification, access the controls to switch off any ongoing disclosure to individual apps, or all apps.

    Facebook disagreed with many of the conclusions and recommendations, and the OPC has indicated that it plans to seek an order from the Federal Court to implement the recommendations.

    The report of findings also includes an interesting discussion about jurisdiction. Facebook asserted that the OPC did not have jurisdiction because there was no evidence that any Canadian user personal information had been disclosed to the operator of the TYDL app. Facebook also asserted that the OIPC of British Columbia did not (and could not) have jurisdiction by operation of Section 3 of the Personal Information Protection Act of British Columbia, which provides that the Act does not apply where PIPEDA applies. The OIPC and OPC pointed to the Organizations in British Columbia Exemption Order, and also asserted that their jurisdiction over the complaint did not depend on information having been provably disclosed to the TYDL app:

    44. While the complaint may have been raised within the context of concerns about access to Facebook users’ personal information by Cambridge Analytica, as noted above, the complaint specifically requested a broad examination of Facebook’s compliance with PIPEDA to ensure Canadian Facebook users’ personal information has not been compromised and is being adequately protected. Moreover, we advised Facebook that the investigation would be examining allegations that Facebook allowed Cambridge Analytica, among others, to inappropriately access users’ personal information and did not have sufficient safeguards to prevent such access.


Canadian Technology Law Association

1-189 Queen Street East

Toronto, ON M5A 1S2

Copyright © 2023 The Canadian Technology Law Association, All rights reserved.