News

  • 20 Sep 2021 2:29 PM | Deleted user

    Unhappy plastic surgery patient held to have defamed surgeon in web reviews

    In Peterson v. Deck, Justice G.P. Weatherill of the British Columbia Supreme Court presided over a summary trial in a defamation case brought by a plastic surgeon against a patient who was dissatisfied with the result of her breast augmentation surgery. The defendant had posted comments about the surgeon’s work both on her own website and in Google Reviews, in which she suggested he had made fundamental errors in his consultation with her and in the surgical work, and generally imputed that he was incompetent. She defended on the basis of justification, fair comment and qualified privilege.

    Justice Weatherill first dealt with the defendant’s argument that the claim was barred by the B.C. Protection of Public Participation Act, the province’s “anti-SLAPP” legislation, which is designed to ensure that valid commentary on issues of public interest is not suppressed by well-resourced plaintiffs bringing legal proceedings. He accepted the defendant’s argument that “a consumer review of a plastic surgeon’s skills is within the ambit of public interest,” citing earlier authority for the proposition that:

    Online reviews of goods or services offered to members of the public… are commonplace on Google or other web sites. While much of the general public may not be interested in [such] reviews…, it is enough that some segment of the community would have a genuine interest in receiving information on the subject.

    However, the claim survived scrutiny under the Act, because it had merit and the harm to the doctor’s interests outweighed the public interest in protecting the expression:

    This action was not brought to stifle or frustrate the defendant’s freedom of expression or prevent her from making reviews or participating in matters of public debate. Consumer reviews, as a general principle, ought to be encouraged and there is a very real danger of a chilling effect if they are curtailed. However, such reviews should not be left unbridled. Online review platforms are not a carte blanche to say whatever one wishes without potential consequences. This case was brought to vindicate the plaintiff’s reputation as a plastic surgeon in light of the Posts.

    On the main claim, the court easily found that the posts were defamatory, given that they would tend to lower the reputation of the plaintiff in the eyes of the public. Justice Weatherill found that the posts had been “published” in spite of little evidence being led on whether anyone had read the reviews; however, they had been read by the surgeon’s administrative assistant, and the defendant herself had received responses to them and stated publicly that they had “gone viral,” which the court held to be sufficient.

    Moving on to defences, Weatherill J. held that justification/truth was not available to the defendant, since the evidence indicated that she had made factual statements that were demonstrably false, regarding what had been discussed with her by the surgeon and what had taken place during the surgery. Nor was fair comment available, since the subjective opinions the defendant expressed in the posts were based on untrue facts. The court held for the plaintiff on liability.

    As to damages, Weatherill J. noted that “the defendant published the Posts using the internet, a medium with tremendous power to harm a person’s reputation by spreading falsehoods far and wide,” and awarded damages of $30,000.00. He also granted a mandatory injunction requiring the defendant to take down the posts and not to post them anywhere else.

  • 20 Sep 2021 2:28 PM | Deleted user

    Methodology problems result in exclusion of survey results in trademark dispute

    At issue in Tokai of Canada Ltd. v. Kingsford Products Company, LLC was the admissibility of evidence generated by a consumer survey designed to test consumer reaction to the use of the word “KING.” The plaintiff was trying to register this as a trademark for barbeques and butane lighters, but the defendant objected based on its numerous trademarks using the word “KINGSFORD” with similar products. As fresh evidence on judicial review, the plaintiff had sought to lead expert evidence of an internet survey comprising 707 interviews with consumers who had purchased, or planned to purchase, a butane lighter. Justice Fuhrer noted that since survey evidence is a species of expert opinion evidence, it must meet the usual requirements for admissibility, and:

    Further, to be considered relevant, the survey must be both reliable (in that if it were repeated it would produce the same results) and valid (in that the right questions were put to the right pool of survey participants in the right way and in the right circumstances to produce the evidence sought)[.]

    Here, there were both validity and reliability problems with the survey evidence proffered, which “highlights the challenges in attempting to simulate a consumer’s imperfect recollection at the time when they encounter the products and trademark in issue in the marketplace.” There were four particular deficiencies. First, the survey referred only to “butane lighters” but it might not have been clear to the average consumer, who was in a hurry, whether this referred to cigarette lighters or utility lighters, which could have skewed results. Second, some completed surveys were pulled from the results evaluated because the market researcher conducting the survey judged they had been “completed too quickly,” yet quick completion was likely to be a feature of a survey which was meant to capture the first impressions of the average, hurried, consumer.

    Third, some survey participants were permitted to take many hours to complete the survey, which again was incorrect methodology for a survey meant to obtain first impressions. Finally, there were contextual and other gaps in some survey questions. For example, “[t]he manner in which the survey participant was shown the brand name KING online [was] not reflective of the manner in which the trademark would be encountered in the marketplace in the applicable circumstances (i.e. on packaging or the goods themselves, potentially along side other similar products, such as on a store shelf).” In particular, an online survey, Justice Fuhrer felt, was not a good means by which to emulate how consumers would encounter the goods in a store, as opposed to a commercial website. Due to all of these flaws, the evidence was not sufficiently reliable or valid, and thus was excluded.

  • 20 Sep 2021 2:27 PM | Deleted user

    Ontario College of Physicians and Surgeons issues guidelines for physician use of social media

    The College of Physicians and Surgeons of Ontario recently released a document entitled Social Media—Appropriate Use by Physicians, the stated goal of which was to provide “guidance to physicians about how to engage in social media while continuing to meet relevant legal and professional obligations.” The document notes that it is not itself a policy or formal means of establishing rules for conduct, but rather is intended to help physicians comply with existing professional expectations “in the social media sphere.”

    The guidelines recommend that physicians:

    1. Assume that all content on the Internet is public and accessible to all.
    2. Exercise caution when posting information online that relates to an actual patient, in order to ensure compliance with legal and professional obligations to maintain privacy and confidentiality. Bear in mind that an unnamed patient may still be identified through a range of other information, such as a description of their clinical condition, or area of residence.
    3. Refrain from providing clinical advice to specific patients through social media. It is acceptable, however, to use social media to disseminate generic medical or health information for educational or information sharing purposes.
    4. Protect their own reputation, the reputation of the profession, and the public trust by not posting content that could be viewed as unprofessional.
    5. Be mindful of their Internet presence, and be proactive in removing content posted by themselves or others which may be viewed as unprofessional.
    6. Refrain from establishing personal connections with patients or persons closely associated with them online, as this may not allow physicians to maintain appropriate professional boundaries and may compromise physicians’ objectivity. It is acceptable to create an online connection with patients for professional purposes only.
    7. Refrain from seeking out patient information that may be available online without prior consent.
    8. Read, understand, and apply the strictest privacy settings necessary to maintain control over access to their personal information, and social media presence undertaken for personal purposes only.
    9. Remember that social media platforms are constantly evolving, and be proactive in considering how professional expectations apply in any given set of circumstances.

  • 20 Sep 2021 2:26 PM | Deleted user

    New statutory tort for intimate images held not to cover all the harms alleged by the plaintiff

    Madam Justice Inglis of the Alberta Court of Queen’s Bench has recognized, for the first time, the tort of “public disclosure of private facts” in the province of Alberta. In ES v Shillington, the plaintiff had previously obtained default judgment against the defendant for a number of recognized claims, including assault, battery, sexual assault and intentional infliction of emotional distress. A separate hearing was held to assess damages and to determine whether judgment would be granted for the novel tort and for breach of confidence.

    The case flowed from a very unpleasant marital break-up in which the plaintiff fled her husband from New Brunswick, where he was posted with the military, to her home in Alberta. In addition to a number of assaults, the plaintiff alleged that he had posted images of her online:

    [11] While he was deployed, near the end of their relationship, the Defendant confessed to the Plaintiff that he had posted her images online. Through accessing the Defendant’s social media accounts the Plaintiff was able to track some of these postings and was disturbed to find many of those private, explicit images available on the internet at pornography sites. At no time did the Defendant have the Plaintiff’s consent to publish these images. The Defendant admitted that he had posted photos as early as 2006, and the Plaintiff has located images posted as late as 2018. As recently as early 2021 the Plaintiff was able to find some of these images online.

    [12] The availability of these photos, including the fact that the Plaintiff is identifiable in some images, resulted in the Plaintiff being recognized in them by a neighbour that spoke to her sexually, having seen her likeness on a website. She has experienced significant mental distress and embarrassment as a result of the postings. She suffers nervous shock, psychological and emotional suffering, depression, anxiety, sleep disturbances, embarrassment, humiliation, and other impacts to her wellbeing.

    The Court carried out a thorough review of whether the novel tort should be recognized in Alberta, considering the criteria set by the Supreme Court of Canada in Nevsun Resources Ltd v Araya:

    Three clear rules for when the courts will not recognize a new nominate tort have emerged: (1) The courts will not recognize a new tort where there are adequate alternative remedies (see, for example, Scalera); (2) the courts will not recognize a new tort that does not reflect and address a wrong visited by one person upon another (Saskatchewan Wheat Pool, at pp 224-25); and (3) the courts will not recognize a new tort where the change wrought upon the legal system would be indeterminate or substantial (Wallace v United Grain Growers Ltd, [1997] 3 SCR 701 (SCC), at paras 76-77). Put another way, for a proposed nominate tort to be recognized by the courts, at a minimum it must reflect a wrong, be necessary to address that wrong, and be an appropriate subject of judicial consideration.

    Since the publication of the images, Alberta has created a statutory tort for the non-consensual distribution of intimate images (the Protecting Victims of Non-Consensual Distribution of Intimate Images Act), but it could not be applied retrospectively to provide the plaintiff with a remedy. Even if it could, the statutory tort is narrowly circumscribed and does not address all the harms she has experienced. In the circumstances, without recognition of the new tort, there would be no alternative remedies available to her.

    The Court found that this tort was necessary to address an act of deliberate wrongdoing carried out by the defendant, and was one that is appropriate for adjudication:

    [62] The conduct complained of by the Plaintiff clearly meets this third test; it is appropriate for judicial adjudication. The change sought of this court is a determinate and substantial change that recognizes the inherent harm done by dissemination of private content. When conduct attracts legislative and parliamentary attention, its wrongfulness is apparent. From Jane Doe #2 at para 88: “…Failing to develop the legal tools to guard against the intentional, unauthorized distribution of intimate images and recordings on the internet would have a profound negative significance for public order as well as the personal wellbeing and freedom of individuals.”

    Conclusion re cause of action for Public Disclosure of Private Facts

    [63] The existence of a right of action for Public Disclosure of Private Facts is thus confirmed in Alberta. To do so recognizes these particular facts where a wrong exists for which there are no other adequate remedies. The tort reflects wrongdoing that the court should address. Finally, declaring the existence of this tort in Alberta is a determinate incremental change that identifies action that is appropriate for judicial adjudication.

    Following a review of the relevant caselaw from outside of Alberta, the Court stated the elements of the tort in the province:

    [68] Therefore, in Alberta, to establish liability for the tort of Public Disclosure of Private Facts, the Plaintiff must prove that:

    (a) the defendant publicized an aspect of the plaintiff’s private life;

    (b) the plaintiff did not consent to the publication;

    (c) the matter publicized or its publication would be highly offensive to a reasonable person in the position of the plaintiff; and,

    (d) the publication was not of legitimate concern to the public.

    Given the overlap of damages among the intentional torts claimed, the damage award was not divided among them. Under those heads, the plaintiff sought and the court awarded general damages of $80,000, punitive damages of $50,000 and aggravated damages in the amount of $25,000.

  • 20 Sep 2021 2:24 PM | Deleted user

    Government is seeking input on framework to make internet companies responsible for policing content available in Canada

    Almost immediately before the 2021 general election was called, the federal Department of Heritage launched a consultation on a proposed framework to address “online harms” through the imposition of obligations on online communications service providers, the creation of a new regulator in Ottawa, and enormous penalties. The announcement was accompanied by a discussion guide and a technical paper, the latter of which describes the government’s proposed framework in detail. In the subsequent campaign, the Liberal Party indicated its intention to pass “online harms” legislation in its first 100 days if re-elected.

    The framework is intended to address five categories of content that are already largely illegal, though the proposed definitions go beyond what the courts have determined to be unlawful: (a) terrorist content; (b) content that incites violence; (c) hate speech; (d) non-consensual sharing of intimate images; and (e) child sexual exploitation content.

    Highlights of the proposed framework include requiring online communications service providers to have a flagging mechanism for harmful content, with review and removal within 24 hours. It specifically calls for making the content inaccessible in Canada. The new regulator, the Digital Safety Commissioner, would have very broad powers of supervision and significant transparency obligations would be placed on the providers. An appeal mechanism would rest in a new Digital Recourse Council of Canada, with further appeals going to a new Personal Information and Data Protection Tribunal.

    The proposal anticipates massive penalties for non-compliance in an amount up to 3% of a provider’s gross global revenue or up to ten million dollars ($10,000,000), whichever is higher.

    While the framework is detailed, it anticipates significant provisions would be left to regulation either by the Digital Safety Commissioner or the Governor-in-Council, or both.

    A number of experts, including Professors Michael Geist and Emily Laidlaw, have been critical of the approach taken by the Department, and Daphne Keller of Stanford has said the proposal “disregard(s) international experience with past laws and similar proposals around the world, as well as recommendations from legal and human rights experts inside and outside of Canada.”

    The consultation is open for comments until September 25, 2021.

  • 23 Jul 2021 2:41 PM | Deleted user

    Demand for passwords does not amount to self-incrimination; Alberta Court of Appeal finds child porn admissible despite Charter breach

    In R. v. Al-Askari, the Alberta Court of Appeal rendered a decision on the continuously controversial issue of searches of electronic devices at Canada’s borders. The accused was a refugee claimant of Palestinian nationality who made a refugee claim at the Canada-US border in Coutts, Alberta. As part of routine screening under the Immigration and Refugee Protection Act (IRPA), a CBSA official asked for and received the passwords for the accused’s two cell phones, on one of which she saw child pornography. She then stopped the search and arrested him. Subsequent warranted searches of the phones and several other electronic devices revealed hundreds of child pornography images and videos. At trial, the accused made an unsuccessful Charter challenge to the initial search of the two phones, and was convicted of importing and possessing child pornography. He appealed the finding on the search and also (with leave from the Court of Appeal) contested the device search subsequent to the search of the phones.

    While most “border search” cases have concerned searches conducted under the Customs Act, the Court of Appeal carefully explored the legislative context of this search, which arose from the IRPA. It identified two possible provisions under which a search of a refugee claimant could be authorized. The first was section 139, under which a search may be conducted if the officer believes on reasonable grounds that the person has not revealed their identity, has undisclosed information about their identity, or has committed or has evidence about particular offences (human smuggling/trafficking, etc.). The officer had testified that she had been looking for evidence of inadmissibility on the basis of security, criminality or identity. Thus s. 139 had not authorized the search.

    The second was section 16 of the IRPA. The Court held that s. 16(1), which requires the person to produce all information relevant to their refugee claim, did not apply. Section 16(3) allows officers to obtain “any evidence that may be used to establish their identity or compliance with this Act.” The Court held that the Crown’s argument that this created a broad and general search power was not in keeping with the need for a constitutionally-protected core of privacy in electronic devices, even at the border where the expectation of privacy is somewhat attenuated. The Court’s findings are worth setting out in some detail, in part because they review its earlier decision in R. v. Canfield, which dealt with e-device searches under the Customs Act (but was released while this newsletter was on a COVID-required hiatus):

    [48] The Crown reads s 16(3) more broadly as providing an unqualified search power. As Mr Al Askari emphasizes, this would create one of the broadest search powers in Canadian law. Without any restraint or need to provide an articulated basis, the officer could require a strip search, cavity search, DNA seizure, and encryption passwords, as long as the search was directed toward establishing the applicant’s identity or to ensure compliance with IRPA.

    [49] The more limited approach suggested by Mr Al Askari is supported by R v LE, 2019 ONCA 961, paras 66-67, 382 CCC (3d) 202, leave to appeal dism’d 2020 CanLII 33846 (SCC). The Court of Appeal for Ontario concluded that s 16(3) creates a qualified statutory search power.

    [50] LE involved a search of a cell phone of the accused who was in Canada illegally and subject to a removal order. The officer had a reasonably grounded belief that the accused was attempting to contravene her removal order and had wrongly made phone contact with her husband. The officer expressly relied upon s 16(3) as the source of authority for the search: para 44.

    [51] The court held that the search was lawful and the scope of the search was restricted to establishing the person’s identity or determining compliance with IRPA. Although there are procedural and substantive limits on this search process, there is no limit on the subject matter of the search since the officer is permitted to obtain “any evidence” as long as that evidence is to establish the person’s identity or determine compliance with IRPA: paras 68-69.

    [52] The court suggested that the search power under s 16(3) requires a “reasonable grounds belief”, para 70:

    In my view, s. 16(3) authorized the CBSA officer’s search of the appellant’s cell phone. The appellant was a foreign national; she had been arrested and detained and was subject to a removal order. The CBSA officers sought evidence that the appellant was attempting to contravene her removal order. They sought evidence from the LG Nexus cell phone in the appellant’s possession on arrest, to determine the appellant’s compliance (or lack thereof) with the IRPA, having information that could support a reasonable ground belief the appellant was obstructing her removal from Canada. [emphasis added]

    [53] This approach is consistent with R c Patel, 2018 QCCQ 7262, paras 64-66. It held that a cell phone search of a refugee claimant was authorized under ss 16(1), 16(3), 139 and 140 of IRPA upon which the officer explicitly relied. The search was necessary to determine the accused’s true identity because of bona fide concerns about his identification documents and the answers he provided when questioned.

    [54] Both LE and Patel are examples of what is contemplated by the text of s 16(3). Patel was concerned with further evidence of identity. LE addressed evidence of “compliance with the Act” as the accused was subject to a removal order under the IRPA. Neither case involved a broad suspicionless search for criminality.

    [55] A finding that s 16(3) does not authorize suspicionless searches is consistent with this Court’s decision in Canfield.

    [56] At issue in Canfield was the constitutionality of the Customs Act provision that permits the routine inspection of “goods”: s 99(1)(a). Earlier jurisprudence treated the search of electronic devices as coming within the definition of “goods” under s 99(1)(a) and falling within the first category of Simmons: routine searches that could be undertaken without any individualized grounds. This Court held that s 99(1)(a) of the Customs Act was unconstitutional to the extent that it imposed no limits on searches of electronic devices at the border. The definition of “goods” in s 2 of the Customs Act was deemed of no force or effect insofar as the definition included the contents of personal electronic devices for the purpose of s 99(1)(a): para 7. Notably, this Court said that not all searches of phones are the same; some will be more invasive than others: para 34. But routine, suspicionless searches of these devices are not constitutional under s 99(1)(a).

    [57] This Court went on to say, paras 75-76, that a justified search of a personal electronic device needs a threshold requirement of suspicion, but was reluctant to define the boundaries of that threshold, preferring to leave that question to Parliament:

    …To be reasonable, such a search must have a threshold requirement…[I]n our view the threshold for the search of electronic devices may be something less than the reasonable grounds to suspect required for a strip search under the Customs Act … [but] … we decline to set a threshold requirement for the search of electronic devices at this time. Whether the appropriate threshold is reasonable suspicion, or something less than that having regard to the unique nature of the border, will have to be decided by Parliament and fleshed out in further cases. However, to the extent that s 99(1)(a) permits the unlimited search of personal electronic devices without any threshold requirement at all, it violates the protection against unreasonable search in s 8 of the Charter.

    We hasten to add that not all searches of personal electronic devices are equal …

    [58] This Court was alive to the reality that travellers often store relevant documents for customs purposes on their electronic devices. Although an unlimited and suspicionless search of a device would breach the Charter, some documents stored on devices must be made available to border agents as part of the routine screening process. For example, receipts and other information relating to the value of imported goods and travel-related documents, would be essential to routine screening. “The review of such items on a personal electronic device during a routine screening would not constitute an unreasonable search under s 8”: para 79.

    [59] Routine and suspicionless searches of personal electronic devices under IRPA must be limited to the purposes provided in the text: identification and admissibility. Persons have a higher privacy interest in their devices even at the border. Not all searches of devices are overly intrusive, and relevant documents are often stored on these devices. It follows that, under s 16, officers may review documents on personal electronic devices where necessary for identification and admissibility purposes. For example, an officer could ask a refugee claimant to locate the relevant documents on their device instead of independently searching for them. In this situation, a search would only occur if the person could not meet the request.

    [60] In addition, Canfield followed the guidance from R v Fearon, 2014 SCC 77, paras 74-83, [2014] 3 SCR 621, regarding tailored and precise search protocols. The court warned against open-ended searches, even if done for statutorily prescribed purposes. Thus, a justifiable search of a personal electronic device for the purposes of identification and admissibility must limit the invasion of privacy by conducting the search in a manner that is tailored, and only where the officer is unable to otherwise satisfy themselves of identity and admissibility.

    Here, the CBSA officer had not had any indicators of inadmissibility or criminality when she conducted the search, and explicitly stated that suspicion had arisen only when she viewed the child porn images on the phone. The Court concluded its analysis by finding that a search under s. 16(3) must be grounded in “a reasonable suspicion with respect to the claimant’s identity, admissibility, or other compliance with the IRPA.” Accordingly, both the initial search of the phones and the subsequent search of the other devices had breached s. 8 of the Charter as unreasonable searches and seizures.

    The accused also argued that by demanding his phone passwords the officials had breached his right against self-incrimination under s. 7 of the Charter. Relying on the 2006 decision by the Ontario Court of Appeal in R. v. Jones, as well as its own decision in Canfield, the Court noted that there is no ab initio right to remain silent during inspection at the border, due to its unique nature. While questioning grounded in some “strongly particularized suspicion” that created a detention of the person might engage the protection against self-incrimination, routine border screening did not. Here, the screening had been routine and no detention had arisen.

    Having found the searches unconstitutional, the Court of Appeal nonetheless refused to exclude the evidence under s. 24(2) of the Charter. The state of the law at the time of the search—six years earlier—had been in flux and the officer’s belief that her search was permissible had been reasonable. The examination of photos was intrusive upon the accused’s privacy, particularly as he was religiously concerned about people viewing photos of female members of his family. However, the photos were highly reliable real evidence and society’s interest in trial on the merits had been high.

  • 23 Jul 2021 2:38 PM | Deleted user

    Yay! Another consultation

    Innovation, Science and Economic Development Canada has launched a consultation on copyright, artificial intelligence and the internet of things, which is open for comment until September 17, 2021. The launch was accompanied by a consultation paper, which sets out its goals:

    With this consultation, the Government invites both evidence of a technical nature and views on potential policy directions described in more detail in the paper. AI and IoT are fast evolving technologies, uses of these technologies are changing, and consumers and businesses are facing new copyright-related challenges when using these complex technologies.

    The types of technical evidence sought in this consultation include technical information about how an AI model integrates data from copyright-protected works as it "learns" from that data, the roles of various human players involved in the creation of works using AI, the extent to which copyright-protected works are integrated in AI applications after they are trained and commercialised, and the uses of AI-assisted and AI-generated works by businesses and consumers. With respect to IoT, evidence sought includes technical information about TPMs, how stakeholders interact with TPMs in their respective industries, and the necessary steps, third party assistance, and devices required to circumvent a TPM and perform associated tasks, such as repair or achieving interoperability of two products. Relaying experiences in other markets or jurisdictions that have enacted new measures related to AI and copyright would also be of interest.

    In considering possible copyright measures relating to AI and IoT, the Government will be guided by the extent to which measures would help achieve the following objectives:

    1. Support innovation and investment in AI and other digital and emerging technologies in all sectors in Canada. AI has tremendous potential for society if used ethically and responsibly, and could also drive productivity growth across the economy.
    2. Support Canada's cultural industries and preserve the incentive to create and invest provided by the economic rights set out in the Act. Creators, innovators and rights holders should be adequately remunerated for their works or other copyright subject matter.
    3. Support competition and marketplace needs regarding IoT devices and other software-enabled products. Consumers want to be able to maintain and more easily repair the products they own, while innovators want flexibility and certainty to develop software-enabled products that are interoperable with those of other manufacturers.

    Specific topics covered include the questions of authorship and ownership of works created by artificial intelligence and the right to repair in the context of the “internet of things”.

  • 23 Jul 2021 2:37 PM | Deleted user

    Ontario Divisional Court strikes intrusion upon seclusion claim based on recklessness

    In Owsianik v. Equifax Canada Co., a majority of a three-judge panel of the Ontario Divisional Court struck a class action claim against Equifax Canada that was based on the tort of “intrusion upon seclusion”. The tort, as first established in Canada in Jones v. Tsige, can apply where the defendant was intentional or reckless in the intrusion. In the case before the Court, Equifax Canada appealed a certification decision that would have allowed the case to proceed based on the allegation that Equifax had been reckless.

    The plaintiffs argued on appeal that the contours of the privacy tort are evolving and it should be up to the trial judge to determine whether Equifax had been reckless and, if so, whether it triggered the intrusion tort. Equifax, on the other hand, argued that the certification judge’s decision went beyond the “incremental development principle” and that novel claims such as these should be vetted at the certification stage.

    The majority allowed Equifax’s appeal, reasoning:

    [54] The tort of intrusion upon seclusion was defined authoritatively only nine years ago. It has nothing to do with a database defendant. It need not even involve databases. It has to do with humiliation and emotional harm suffered by a personal intrusion into private affairs, for which there is no other remedy because the loss cannot be readily quantified in monetary terms. I agree that Sharpe J.A.’s definition of the tort is not necessarily the last word, but to extend liability to a person who does not intrude, but who fails to prevent the intrusion of another, in the face of Sharpe J.A.’s advertence to the danger of opening the floodgates, would, in my view, be more than an incremental change in the common law.

    [55] I agree with my colleague (paragraph 43) that Equifax’s actions, if proven, amount to conduct that a reasonable person could find to be highly offensive. But no one says that Equifax intruded, and that is the central element of the tort. The intrusion need not be intentional; it can be reckless. But it still has to be an intrusion. It is the intrusion that has to be intentional or reckless and the intrusion that has to be highly offensive. Otherwise the tort assigns liability for a completely different category of conduct, a category that is adequately controlled by the tort of negligence.

    The court also concluded that if a defendant had not taken adequate steps to secure their databases, the tort of negligence “protects them adequately and has the advantage that it does not require them to prove recklessness.”

  • 23 Jul 2021 2:34 PM | Deleted user

    Federal regulator calls Bill C-11 “a step back overall” for privacy

    The federal Privacy Commissioner, who oversees the Personal Information Protection and Electronic Documents Act and who would be the lead regulator if Bill C-11 ever becomes law, has slammed the bill as a “step back overall” for privacy. In a lengthy submission to the House of Commons Standing Committee on Access to Information, Privacy and Ethics, the Commissioner says the bill does not get the balance between privacy and commercial interests right and is out of step with legislation in other jurisdictions. The main concerns are summarized in a press release issued at the same time by the Commissioner:

    Control

    Instead of giving consumers greater control over the collection, use and disclosure of their personal information, Bill C-11 offers less control. It omits the requirement under existing law that individuals understand the consequences of what they are consenting to for it to be considered meaningful, and it allows the purposes for which organizations seek consent to be expressed in vague, if not obscure, language.

    New flexibility without increased accountability

    In the digital economy, organizations need some degree of flexibility to use personal information, sometimes without consent, in order to maximize the potential of the digital revolution for socio-economic development. But with greater flexibility for companies should come greater accountability.

    Unfortunately, Bill C-11 weakens existing accountability provisions in the law by defining accountability in a manner akin to self-regulation.

    Organizations should be required to apply the principles of Privacy by Design and undertake privacy impact assessments for new higher risk activities. The law should also subject organizations to proactive audits by the OPC to ensure they are acting responsibly.

    Responsible innovation

    Bill C-11 seeks to provide greater flexibility to organizations through new exceptions to consent. However, certain exceptions are too broad or ill-defined to promote responsible innovation. The preferred approach would be to adopt an exception to consent based on legitimate business interests, within a rights-based approach.

    A rights-based foundation

    Bill C-11 prioritizes commercial interests over the privacy rights of individuals. While it is possible to protect privacy while giving businesses greater flexibility to innovate responsibly, when there is a conflict, privacy rights should prevail.

    To that end, the Bill should be amended to adopt a rights-based framework that would entrench privacy as a human right and as an essential element for the exercise of other fundamental rights. The OPC submission recommends doing this in a way that would strengthen the constitutional foundation of the law as properly within the jurisdiction of Parliament.

    Access to quick and effective remedies

    Bill C-11 gives the OPC order-making power and the ability to recommend very high monetary penalties. However, both are subject to severe limitations and conditions, including the addition of an administrative appeal between the OPC and the courts that would deny consumers quick and effective remedies.

    Only a narrow list of violations could lead to the imposition of administrative penalties. The list does not include obligations related to the form or validity of consent or the numerous exceptions to consent. It also does not include violations of the accountability provisions.

    In the case of failure to comply with these obligations, only criminal sanctions would apply and only after a process that could take approximately seven years. A process that would take a maximum of two years is recommended.

  • 20 May 2021 2:47 PM | Deleted user

    Various recent court decisions show judges and parties wrestling—mostly successfully—with faked and misused electronic evidence

    As far back as the Uniform Law Conference of Canada’s 1998 Uniform Electronic Evidence Act—which formed the basis for the provisions in the Canada Evidence Act and various provincial acts dealing with the admissibility of electronic data—courts have been concerned about the provenance of electronic data when it is led as evidence. Specifically, there has always been concern that due to the inherent manipulability of digital data, electronic evidence could be fabricated by dishonest litigating parties and used to undermine the truth-seeking function of the trial process. Anecdotally this is something that happens often but, in our experience, rulings about it seldom show up in reported decisions. However, very recent case law indicates that, where parties and judges are properly attuned to these kinds of problems, they can be prevented and exposed, and the dishonest parties will reap the consequences.

    The trial judge in Lenihan v. Shankar was adjudicating a hotly-contested custody, access and mobility dispute in a family law case, which (the judge held) featured remarkable amounts of subterfuge by the mother. Justice McGee decided in favour of the father, and provided written reasons both for potential appellate review and “to draw attention to the evidentiary challenges of spoofed communications and postings created to damage a parent’s credibility and tendered to gain litigation advantage.” Among the various evidentiary issues were the mother’s arguments both that emails and texts adduced in evidence by the father were fake, and that emails and other messages she herself adduced in evidence were genuine.

    Justice McGee first reviewed the provisions of the Ontario Evidence Act relating to the admissibility of “electronic records,” noting that s. 34.1(4) simply codified the “low threshold test at common law” for authentication at the admissibility stage, namely that the adducing party simply provide “some evidence” that the record is what it purports to be; final determination of authenticity is left for fact-finding. The primary focus at the admissibility stage, she held, is the integrity of the electronic record itself, which under s. 34.1(5.1) can be established by “evidence of the integrity of the electronic records system by or in which the data was recorded or stored, or by evidence that reliable encryption techniques were used to support the integrity of the electronic record.” She noted that these provisions, intended to satisfy “best evidence” concerns,

    …work to ensure that an electronic document accurately reflects the original information that was inputted or recorded on the device. With electronic documents, the focus shifts to the information contained in the document, rather than the document itself. The threshold for admissibility is low and at this stage, concerns are generally limited to the completeness and accuracy of the record.

    Here, the father had adduced in evidence a series of text messages between the parties over a 5-month period, which he “had exported…from his phone into a printable format using an application called ‘GIIApps SMSShare 2’.” The mother argued that the texts were all fake and created to “make her look bad,” but the judge rejected this on a number of bases. The mother had refused to produce her own version of the text exchanges; the texts contained pictures that she had taken of herself which she claimed were downloaded from her Facebook page, but there was no evidence that these pictures were ever on her Facebook page; in an earlier Notice to Admit she had admitted to sending a number of the texts she now claimed were fake; and while she claimed that the phone number identified in the texts as hers was also falsified, it appeared as her number in her own Exhibit Brief. The mother also argued that some emails from her to the father which had been adduced had also been faked “to make her look bad,” but the judge noted that the mother only ever used the originating email account, and that despite her claims that someone (probably the father) had been accessing her account to send the emails, she had made no effort to “change the account, change her password or set up a new account, any one of which would be the natural next step were her email to have been “hacked” or used inappropriately.”

    The mother also adduced a particular set of emails purportedly from the father, but Justice McGee held that these were fake on the basis that: the father testified that the sending email address was not his; the emails reflected the mother’s writing style and not the father’s, “inclusive of content, word choice and spelling”; the emails repeated false claims that had been earlier made by the mother and would have made no sense if attributed to the father; the emails indicated knowledge in the sender which the father would not have had at the relevant time; and the emails had markers indicating they had been printed from the mother’s known account. Although they were not authentic, the judge admitted them as evidence of the mother’s “extensive efforts to damage [the father]’s character, particularly in the eyes of their daughter’s service providers and the Court.” Other emails were similarly held to be bogus, in one case due to their clearly having been “copied and then pasted into a Word or other word processing document.”

    The mother also adduced in evidence communication logs from a co-parenting and custody app called “Our Family Wizard” which contained messages purportedly between the parties; however, the communications had numerous inconsistencies and ordering flaws, and the judge concluded that the mother had generated two accounts (one in the father’s name) and simply generated “messages that could be used as evidence.”

    Having found for the father on all matters, McGee J. concluded the judgment with some “final thoughts” on the subject of electronic evidence:

    247. As our court transitions to a fully digital platform, this trial was a stark reminder of the potential for the manipulation and misuse of electronic evidence.

    248. The most common internet definition of a spoofed email is when the email address in the “From” field is not that of the sender. It is easy to spoof an email, and not always so easy to detect. For sophisticated senders – such as actors who are “phishing” for information of commercial value – the origins of a spoofed email may never be detected.

    249. Spoofing originates from the idea of a hoax or a parody, and in the early days of the internet, it was a legitimate tool for managing communications so that a user believed that an email came from one source, when it actually came from another.

    250. Spoofing first arose as a term in family law (more commonly referred to in the U.S.A. as divorce law) to describe cell phone users hiding their identity and/or location for nefarious purposes. As a result of advances in mobile apps, websites, forwarding services and other technologies, callers are now able to change how their voice sounds, to evade a blocked number or to pretend to be a person or institution with whom their target was familiar. Targets can be tricked into disclosing sensitive information, harassed, stalked and frightened.

    251. Any electronic medium can be spoofed: texts, emails, postings to social media, and even messaging through a reputable software program specifically designed to provide secure communications between sparring parents.

    252. What stood out in this case was the purpose of the spoofed communications. Instead of tricking or scaring the target, electronic communications were spoofed to deliberately damage the other parent’s credibility and to gain litigation advantage.

    253. In R. v. C. B., the Ontario Court of Appeal foreshadowed the relevance of inauthentic electronic evidence. “[T]endered as bogus” is a critical catch that is not always apparent. A party’s lament that “it wasn’t me” may appear credible at one stage of the proceeding but may no longer be credible at a later stage. An email or text that on first reading appears authentic might later be found to be inauthentic when examined within the evidence as a whole.

    254. Fake electronic evidence has the potential to open up a whole new battleground in high conflict family law litigation, and it poses specific challenges for Courts. Generally, email and social media protocols have no internal mechanism for authentication, and the low threshold in the Evidence Act that requires only some evidence: direct and/or circumstantial that the thing “is what it appears to be;” can make determinations highly contextual.

    255. In a digital landscape, spoofing is the new “catch-me-if-you-can” game of credibility.

    256. I urge lawyers, family service providers and institutions to be on guard, and to be part of a better way forward. Courts cannot do this work alone, and the work must be done well. High conflict litigation not only damages kids and diminishes parents; it weakens society as a whole, for generations to come.
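    As a technical aside (the example below is illustrative only and not drawn from the judgment), the simplest form of the “From”-field spoofing Justice McGee describes can sometimes be surfaced by comparing an email’s visible From: header against the envelope sender that receiving mail servers record in the Return-Path header. A minimal sketch using Python’s standard email module, with entirely hypothetical addresses:

    ```python
    from email import message_from_string
    from email.utils import parseaddr

    # Hypothetical raw message: the visible From: address does not match
    # the envelope sender recorded by the receiving server in Return-Path.
    RAW = """\
    Return-Path: <attacker@example.net>
    From: "Trusted Parent" <parent@example.com>
    To: recipient@example.org
    Subject: Pickup schedule

    Please confirm the new time.
    """

    def from_mismatch(raw: str) -> bool:
        """Return True when the visible From: address differs from the
        Return-Path envelope sender -- one common (but not conclusive)
        indicator that the From: field may have been spoofed."""
        msg = message_from_string(raw)
        _, from_addr = parseaddr(msg.get("From", ""))
        _, envelope = parseaddr(msg.get("Return-Path", ""))
        return bool(from_addr and envelope) and from_addr.lower() != envelope.lower()

    print(from_mismatch(RAW))  # True: the two headers disagree
    ```

    A mismatch is only a starting point for inquiry, not proof of spoofing: mailing lists and forwarding services legitimately rewrite the envelope sender, which is one reason the judgment stresses that authenticity is ultimately a contextual, fact-finding question.
    
    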

    In R. v. Aslami, the Ontario Court of Appeal reversed the appellant’s conviction on multiple charges related to the firebombing of a home. The crux of the Crown’s case at trial had been various messages sent via Facebook, SMS text and a texting app called “TextNow.” The Court of Appeal accepted the appellant’s argument that the trial judge had failed to take into account issues with the authenticity/integrity of the messages, and that these issues tended to support the defence theory that the appellant’s ex-wife and her new boyfriend had been attempting to frame the appellant for the firebombing. Each group of messages, the Court wrote, “has its own particular frailties.”

    The SMS messages originated from a sender whom the ex-wife had identified in her own phone as the appellant, under the name “Sumal Jan” which she claimed was a name he was called by some in their home country. However, “[a] police detective gave evidence that there were several entries on the appellant’s ex-wife’s cellphone for the name ‘Sumal Jan’ that had different phone numbers associated to them.” The TextNow messages had been retrieved via screenshots of the ex-wife’s phone, but the timestamps had been uncertain, and the Crown had not led “any expert evidence regarding the functioning of the TextNow app, or its reliability, or any ability to manipulate the date, number, name of the sender, or any other details as to the operation of the app.” The trial judge had engaged in a speculative evaluation of this evidence, comparing spelling, phrasing and substantive content between the SMS texts and the TextNow messages, which had the effect of assuming that the appellant had sent them without there being reliable evidence to this effect. The Facebook messages had been exchanged between the ex-wife’s boyfriend and someone using the name “Trustnoone Mob,” and the only evidence linking the latter identity to the accused was the boyfriend’s testimony that he believed he was communicating with the accused.

    In allowing the appeal, Nordheimer J.A. commented:

    [30] As I said at the outset, trial judges need to be very careful in how they deal with electronic evidence of this type. There are entirely too many ways for an individual, who is of a mind to do so, to make electronic evidence appear to be something other than what it is. Trial judges need to be rigorous in their evaluation of such evidence, when it is presented, both in terms of its reliability and its probative value. The trial judge did not engage in that rigorous analysis in this case. In fairness, the trial judge was not assisted by the prosecution in this task. The prosecution ought to have called expert evidence to address the issues that the evidence posed, but they did not.

    Another case of interest is R. v. H.S.S., where Judge Chen of the British Columbia Provincial Court presided over the prosecution of a young person for the alleged sexual assault of another young person (both were 16 years old at the time of the incident). The complainant alleged that during a school day, her friends were using her phone to exchange Instagram messages with the accused, and told her that the accused wished to meet with her in the school’s “handicapped bathroom to talk.” She went to the meeting, not reviewing any of the messages her friends had exchanged with the accused, and he assaulted her by touching her sexually. It transpired, she said, that her friends and her sister had been exchanging Instagram messages with the accused for several days; she selected the “relevant” ones to give to the police upon reporting the alleged assault, and deleted the rest.

    In his defence the accused adduced all of the Instagram messages between himself and the complainant, which amounted to hundreds sent between the two over several days, and which disclosed that they had agreed to have a sexual encounter in the bathroom. He testified that she was a willing participant in the kissing and heavy petting that constituted the alleged assault. In cross-examination the complainant tried to explain away the many inconsistencies in her evidence by saying that her sister and friends (none of whom were called as witnesses) must have been using her phone to communicate with the accused, but the judge found her explanations unconvincing in light of her proven conduct, her implausible explanations and the numerous credibility problems her various stories presented.

    In the end Judge Chen held that “even the evidence of the Complainant leads to the inescapable conclusion that the Accused was indeed ‘set up’.” The judge ruled that the complainant and her sister had carried out a campaign of “cruel and callous” sexualized teasing of the accused, who was infatuated with the complainant, and that the Crown could not prove lack of consent to the activity in the bathroom. The accused was acquitted.

  

Canadian Technology Law Association

1-189 Queen Street East

Toronto, ON M5A 1S2

contact@cantechlaw.ca

Copyright © 2024 The Canadian Technology Law Association, All rights reserved.