

News

  • 4 Apr 2019 12:40 PM

    Alberta Privacy Commissioner orders release of names of blocked Twitter accounts

    The Alberta Information and Privacy Commissioner has given guidance about the interaction between privacy legislation and Twitter with its decision in Alberta Education (Re). The applicant had made a request under the Freedom of Information and Protection of Privacy Act (FOIP Act) to a public body, Alberta Education, for a list of the Twitter users/accounts that had been blocked from each Twitter account that the public body operated or authorized. Alberta Education provided some records in response but refused to provide the names of some of the blocked Twitter accounts. It did so on the basis of section 17(1) of the FOIP Act, which states that

    17(1) The head of a public body must refuse to disclose personal information to an applicant if the disclosure would be an unreasonable invasion of a third party’s personal privacy.

    The Adjudicator ultimately decided that there was insufficient information to show that section 17(1) applied, and therefore directed Alberta Education to give the applicant access to the requested information. 

    Section 17 operates only when the disclosure of personal information would be an unreasonable invasion of a third party’s personal privacy. Under the FOIP Act, not all disclosure of personal information amounts to an unreasonable invasion of personal privacy. Under section 17(2), for example, disclosing information that reveals financial details of a contract to supply goods or services to a public body is not an unreasonable invasion; on the other hand, section 17(4)(g) states that disclosure is presumptively an unreasonable invasion of personal privacy if

    (g) the personal information consists of the third party’s name when 

    (i) it appears with other personal information about the third party, or

    (ii) the disclosure of the name itself would reveal personal information about the third party

    Section 17(5) sets out a number of non-exhaustive factors to be considered in determining whether a disclosure of personal information constitutes an unreasonable invasion of a third party’s personal privacy, such as whether the personal information is relevant to a fair determination of the applicant’s rights, or whether the personal information is likely to be inaccurate or unreliable.

    However, before any of that analysis becomes necessary, the information in question must be found to be personal information under s 17(1), which requires that the information have a personal dimension and be about an identifiable individual. Here, Alberta Education had withheld the names of Twitter accounts and the associated images where it believed the information might reveal the identity and image of the account holder, on the basis that disclosure might enable the applicant to infer the identity of individuals engaged in inappropriate conduct. The Adjudicator questioned whether, given the reality of Twitter, such an inference was possible:

    [para 22] A Twitter account name is the name of an account, rather than the name of an individual. While some individuals may use their names as the name of their Twitter account, others do not. In addition, organizations and “bots” may also use Twitter accounts. I note that a July 11, 2018 article in the New York Times reports: 

    Twitter will begin removing tens of millions of suspicious accounts from users’ followers on Thursday, signaling a major new effort to restore trust on the popular but embattled platform.

    The reform takes aim at a pervasive form of social media fraud. Many users have inflated their followers on Twitter or other services with automated or fake accounts, buying the appearance of social influence to bolster their political activism, business endeavors or entertainment careers.

    Twitter’s decision will have an immediate impact: Beginning on Thursday, many users, including those who have bought fake followers and any others who are followed by suspicious accounts, will see their follower numbers fall. While Twitter declined to provide an exact number of affected users, the company said it would strip tens of millions of questionable accounts from users’ followers. The move would reduce the total combined follower count on Twitter by about 6 percent — a substantial drop. 

    [para 23] I note too, that an article in Vox describes the prevalence of fake and automated Twitter accounts: 

    In April, Pew found that automated accounts on Twitter were responsible for 66 percent of tweeted links to news sites. Those aren’t necessarily the bots Twitter is after: Automation remains okay to use under many circumstances. But the “malicious” are being targeted. Gadde said Wednesday that the new accounts being deleted from follower accounts aren’t necessarily bot accounts: “In most cases, these accounts were created by real people but we cannot confirm that the original person who opened the account still has control and access to it.” Weeding out these accounts might discourage the practice of buying fake followers.

    Twitter has acknowledged it contributed to the spread of fake news during the 2016 U.S. presidential election, and is trying not to have a repeat showing. It’s verifying midterm congressional candidate accounts, it launched an Ads Transparency Center, and now come the new culls.

    […] 

    The Washington Post notes that Twitter suspended more than 70 million accounts in May and June. Twitter also said recently that it’s challenging “more than 9.9 million potentially spammy or automated accounts per week.” [my emphasis] (“Challenged” doesn’t necessarily mean “suspended,” but users are prompted to verify a phone or email address to continue using the account.)

    [para 24] From the foregoing, I understand that millions of Twitter accounts may be automated or fake. As a result, the name of a Twitter account cannot be said to have a personal dimension necessarily, even though an account may have the appearance of being associated with an identifiable individual.

    The information requested, the Adjudicator concluded, would be about Twitter accounts, which was not the same thing as being about individuals:

    [para 28] As it is not clearly the case that the accounts severed under section 17 are associated with identifiable individuals, and there is no requirement that a Twitter user use his or her own name or image, or be a human being, the fact that the Twitter account was blocked does not necessarily reveal personal information about an identifiable individual.

    [para 29] To put it in the terms used by the Alberta Court of Appeal, the evidence before me supports finding that the information severed by the Public Body is “about a Twitter account”, rather than “about an identifiable individual”.

    Since the standard for withholding information was that it would be an unreasonable invasion of a third party’s personal privacy to disclose the information, Alberta Education could not refuse the request on the lower standard that the information could possibly be personal information. Accordingly, the Adjudicator ordered Alberta Education to release the requested information.

    At the request of Alberta Education, the Adjudicator also commented on the applicability of this reasoning to email accounts, finding that if there was evidence establishing that an email address was connected to an identifiable individual, and the email address appeared in a context that revealed personal information about the individual, then the information would be personal information and the public body would have to consider section 17. Where that was not the case, however, section 17 would not apply.

  • 4 Apr 2019 12:32 PM

    Court refuses to compel accused person to unlock phone

    In R. v. Shergill, Justice Philip Downes of the Ontario Court of Justice heard an application by the Crown which raised the thorny issue of whether accused persons can be compelled to “unlock” password-protected electronic devices. The accused was charged with a variety of sexual and child pornography offences and the police seized his cell phone incident to arrest. Realizing that they had no technology that would allow them to open the phone without possibly destroying its contents, the police applied for a search warrant along with an “assistance order” under s. 487.02 of the Criminal Code. This section provides that a judge who issues a warrant “may order a person to provide assistance, if the person’s assistance may reasonably be considered to be required to give effect to the authorization or warrant.” Unusually, the application did not proceed ex parte and both the Crown and the accused made submissions.

    The Crown argued that the accused’s Charter rights were not engaged by the issuance of the assistance order, because it was a matter of “mere practicality.” Centrally, the principle against self-incrimination was not engaged because the order “only compels Mr. Shergill to provide access to, and not create, material the police are judicially authorized to examine, and because any self-incrimination concerns are met by the grant of use immunity over Mr. Shergill’s knowledge of the password.” The accused argued that the principle against self-incrimination was, indeed, engaged because the order would compel him to produce information that only existed in his mind, “for the purpose of assisting [the police] in obtaining potentially incriminating evidence against him”—thus violating his right to silence and the protection against self-incrimination under s. 7 of the Charter.

    Justice Downes sided with the accused. First, the principle against self-incrimination was engaged:

    The Crown suggests that Mr. Shergill’s s. 7 interests are “not engaged” or minimally compromised because what is sought to be compelled from him has no incriminatory value or effect. All the assistance order seeks is a password, the content of which is of no evidentiary value. Indeed, the Crown says that the police need not even be aware of the actual password as long as Mr. Shergill somehow unlocks the phone without actually touching it himself.

    In my view, however, the protection against self-incrimination can retain its force even where the content of the compelled communication is of no intrinsic evidentiary value. This is particularly so where, as here, that communication is essential to the state’s ability to access the evidence which they are “really after.” To paraphrase the Court in Reeves, to focus exclusively on the incriminatory potential of the password neglects the significant incriminatory effect that revealing the password has on Mr. Shergill. As the Supreme Court held in White:

    The protection afforded by the principle against self-incrimination does not vary based upon the relative importance of the self-incriminatory information sought to be used. If s. 7 is engaged by the circumstances surrounding the admission into evidence of a compelled statement, the concern with self-incrimination applies in relation to all of the information transmitted in the compelled statement. Section 7 is violated and that is the end of the analysis, subject to issues relating to s. 24(1) of the Charter. [footnotes omitted]

    Even more important, the judge found, was the right to silence under s. 7 of the Charter:

    In my view, the more significant principle of fundamental justice at stake is the right to silence. This right emerged as a component of the protection against self-incrimination in R. v. Hebert, in which McLachlin J. (as she then was) held:

    If the Charter guarantees against self-incrimination at trial are to be given their full effect, an effective right of choice as to whether to make a statement must exist at the pre-trial stage… the right to silence of a detained person under s. 7 of the Charter must be broad enough to accord to the detained person a free choice on the matter of whether to speak to the authorities or to remain silent.

    McLachlin J. also reaffirmed the Court’s prior holding that the right to silence was “a well-settled principle that has for generations been part of the basic tenets of our law.” 

    The “common theme” underlying the right to silence is “the idea that a person in the power of the state in the course of the criminal process has the right to choose whether to speak to the police or remain silent.” In tracing the history of the right, McLachlin J. referred to an “array of distinguished Canadian jurists who recognized the importance of the suspect’s freedom to choose whether to give a statement to the police or not” and described the essence of the right to silence as the “notion that the person whose freedom is placed in question by the judicial process must be given the choice of whether to speak to the authorities or not.” Finally, Hebert held that s. 7 provides “a positive right to make a free choice as to whether to remain silent or speak to the authorities.”

    The pre-trial right to silence is a concept which, as Iacobucci J. held in R.J.S., has been “elevated to the status of a constitutional right.” [footnotes omitted]

    The court also rejected the Crown’s argument that the accused’s rights were sufficiently protected by providing use immunity for his knowledge of the contents of his phone and the password:

    As a practical matter, without the assistance order, the evidence would never come into the hands of the police. In that sense it strikes me as somewhat artificial to say that the data on the Blackberry is evidence which, in the language of D’Amour, “exist[s] prior to, and independent of, any state compulsion.” Rather, it is evidence which, as far as the police are concerned, is only “brought into existence by the exercise of compulsion by the state.”

    [….]

    Fundamentally, realistically and in any practical sense, granting this application would amount to a court order that Mr. Shergill provide information which is potentially crucial to the success of any prosecution against him, and which could not be obtained without the compelled disclosure of what currently exists only in his mind. It strikes at the heart of what the Supreme Court has held to be a foundational tenet of Canadian criminal law, namely, that an accused person cannot be compelled to speak to the police and thereby assist them in gathering evidence against him or herself.

    In my view nothing short of full derivative use immunity could mitigate the s. 7 violation in this case.

    The Court then discussed some of the challenges that law enforcement is facing in light of new technology, and encryption in particular. Though there is always a compelling public interest in the investigation and prosecution of crimes, the final balancing came down on the side of the accused's liberty interests under s. 7 of the Charter:

    I accept that the current digital landscape as it relates to effective law enforcement and the protection of privacy presents many challenges. It may be that a different approach to this issue is warranted, whether through legislative initiatives or modifications to what I see as jurisprudence which is binding on me. But on my best application of controlling authority, I am simply not persuaded that the order sought can issue without fundamentally breaching Mr. Shergill’s s. 7 liberty interests, a breach which would not be in accordance with the principle of fundamental justice which says that he has the right to remain silent in the investigative context.

    The search warrant was issued but the assistance order was denied.

  • 20 Mar 2019 1:33 PM

    Detailed questionnaire sent to at least 60 individuals

    According to Forbes Online, the Canada Revenue Agency (CRA) has begun to audit individuals with significant involvement in cryptocurrency holdings or transactions. 

    In 2017, the CRA established a dedicated cryptocurrency unit said to be intended to build intelligence and conduct audits focussed on cryptocurrency risks as part of its Underground Economy Strategy.

    Forbes reports there are currently over 60 active cryptocurrency audits, which involve a very detailed questionnaire, consisting of 54 questions with many sub-questions. Examples include: 

    • Do you use any cryptocurrency mixing services and tumblers? (Which can be used to intermix accounts and disguise the origin of the funds.) If so, which services do you use? 
    • Can you please provide us with the tracing history, along with all the cryptocurrency addresses you ‘mixed’? Why do you use these services?

    Further questions address whether the taxpayer has purchased or sold crypto-assets from or to private individuals and, if so, how they became aware of the sale opportunity and how the transaction was facilitated. The questionnaire goes so far as to ask the taxpayer to list all personal crypto-asset addresses that are not associated with their custodial wallet accounts. It further asks whether they have been a victim of crypto-theft, have been involved in ICOs (initial coin offerings), or participate in crypto-mining.

    Not surprisingly, CRA has not disclosed what criteria it has used to target individuals with the questionnaires.

  • 20 Mar 2019 1:29 PM

    Calls for the creation of a “Digital Authority”, an increased onus on platforms to police user-generated content, and reduced market concentration

    The United Kingdom Select Committee on Communications has released a very interesting report on the regulation of the internet. Entitled Regulating in a Digital World, the report calls for a whole new era and methodology for regulating both online service providers and platforms, and the content that is made available through them. It calls for regulation based upon ten principles enunciated in the introduction:

    1. Parity: the same level of protection must be provided online as offline
    2. Accountability: processes must be in place to ensure individuals and organisations are held to account for their actions and policies
    3. Transparency: powerful businesses and organisations operating in the digital world must be open to scrutiny
    4. Openness: the internet must remain open to innovation and competition
    5. Privacy: to protect the privacy of individuals
    6. Ethical design: services must act in the interests of users and society
    7. Recognition of childhood: to protect the most vulnerable users of the internet
    8. Respect for human rights and equality: to safeguard the freedoms of expression and information online
    9. Education and awareness-raising: to enable people to navigate the digital world safely
    10. Democratic accountability, proportionality and evidence-based approach.

    At its heart, the Report calls for the creation of what it calls a Digital Authority, which would advise government and regulators about the online environment: 

    238. We recommend that a new body, which we call the Digital Authority, should be established to co-ordinate regulators in the digital world. We recommend that the Digital Authority should have the following functions:

    • to continually assess regulation in the digital world and make recommendations on where additional powers are necessary to fill gaps;
    • to establish an internal centre of expertise on digital trends which helps to scan the horizon for emerging risks and gaps in regulation;
    • to help regulators to implement the law effectively and in the public interest, in line with the 10 principles set out in this report;
    • to inform Parliament, the Government and public bodies of technological developments;
    • to provide a pool of expert investigators to be consulted by regulators for specific investigations;
    • to survey the public to identify how their attitudes to technology change over time, and to ensure that the concerns of the public are taken into account by regulators and policy-makers;
    • to raise awareness of issues connected to the digital world among the public;
    • to engage with the tech sector;
    • to ensure that human rights and children’s rights are upheld in the digital world;
    • to liaise with European and international bodies responsible for internet regulation.

    239. Policy-makers across different sectors have not responded adequately to changes in the digital world. The Digital Authority should be empowered to instruct regulators to address specific problems or areas. In cases where this is not possible because problems are not within the remit of any regulator, the Digital Authority should advise the Government and Parliament that new or strengthened legal powers are needed.

    The Report further critiques the presence of large companies that it says dominate the digital space, calls for greater regulation and scrutiny of mergers, and challenges the paradigm of cross-subsidies that results in free services:

    15. Mergers and acquisitions should not allow large companies to become data monopolies. We recommend that in its review of competition law in the context of digital markets the Government should consider implementing a public-interest test for data-driven mergers and acquisitions. The public-interest standard would be the management, in the public interest and through competition law, of the accumulation of data. If necessary, the Competition and Markets Authority (CMA) could therefore intervene as it currently does in cases relevant to media plurality or national security. 

    16. The modern internet is characterised by the concentration of market power in a small number of companies which operate online platforms. These services have been very popular and network effects have helped them to become dominant. Yet the nature of digital markets challenges traditional competition law. The meticulous ex post analyses that competition regulators use struggle to keep pace with the digital economy. The ability of platforms to cross-subsidise their products and services across markets to deliver them free or discounted to users challenges traditional understanding of the consumer welfare standard. 

    With respect to problematic content, the Report proposes removing the safe harbours that currently protect platform providers and replacing them with an obligation to police, and be accountable for, user-generated content: “a duty of care should be imposed on online services which host and curate content which can openly be uploaded and accessed by the public. This would aim to create a culture of risk management at all stages of the design and delivery of services.”

    The Report also addresses children’s issues, difficult-to-understand terms of use, and privacy by default. 

    How the Report will be received and translated into new regulation remains to be seen.

  • 20 Mar 2019 1:28 PM

    Media parties denied standing in Reference on right to be forgotten

    The Prothonotary of the Federal Court dismissed an application by various media parties to take part in a Reference proceeding, Reference re subsection 18.3(1) of the Federal Courts Act. A complainant brought a complaint to the Privacy Commissioner alleging that Google contravened the Personal Information Protection and Electronic Documents Act [PIPEDA] by continuing to prominently display links to news articles about him in search results corresponding to his name. The complainant alleges the articles in question are outdated and inaccurate and disclose sensitive and private information, and has requested that Google “de-index” him: that is, that it remove the articles from search results for his name, a process colloquially referred to as “the right to be forgotten”. Before investigating the complaint, the Privacy Commissioner sent two Reference questions to be determined by the Federal Court, and the media parties sought to take part in those Reference proceedings. The Prothonotary denied that request, but left open the possibility that the media parties could make the same application at a later point.
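    As a purely conceptual illustration of what de-indexing involves (this is not Google's actual system; the names, URLs and data below are hypothetical), the following Python sketch shows the key point: de-indexing suppresses specific results for queries of a person's name, while the underlying pages stay online and remain reachable through other queries.

    # Hypothetical sketch of name-based de-indexing; not Google's implementation.
    SEARCH_INDEX = {
        "john doe": ["https://news.example/old-article", "https://blog.example/profile"],
        "old article": ["https://news.example/old-article"],
    }

    # URLs the search provider has agreed, or been ordered, to de-index for a given name.
    DEINDEXED = {"john doe": {"https://news.example/old-article"}}

    def search(query):
        """Return results for a query, omitting any URLs de-indexed for that name."""
        key = query.lower()
        results = SEARCH_INDEX.get(key, [])
        blocked = DEINDEXED.get(key, set())
        return [url for url in results if url not in blocked]

    print(search("John Doe"))     # the old article no longer appears for the name query
    print(search("old article"))  # the same page remains reachable via other queries

    The sketch illustrates only that de-indexing operates on the association between a name query and a result, not on the source content itself, which is part of why the media parties saw their interests as engaged.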

    The central issue was, in fact, what the issue was. There was no question that the underlying complaint to the Privacy Commissioner against Google raised “important and ground-breaking issues relating to online reputation, including whether a ‘right to be forgotten’ should be recognized in Canada, and if so, how such a right can be balanced with the Charter protected rights to freedom of expression and freedom of the press” (para 7). That was not, however, what the Reference was about. Rather, the Privacy Commissioner had asked only two questions:

    1. Does Google, in the operation of its search engine service, collect, use or disclose personal information in the course of commercial activities within the meaning of paragraph 4(1)(a) of PIPEDA when it indexes webpages and presents search results in response to searches of an individual’s name?
    2. Is the operation of Google’s search engine service excluded from the application of Part I of PIPEDA by virtue of paragraph 4(2)(c) of PIPEDA because it involves the collection, use or disclosure of personal information for journalistic, artistic or literary purposes and for no other purpose?

    Google, as a party to the Reference, had brought an application to expand the questions to include issues relating to whether, if PIPEDA applied to the operation of its search engine and required de-indexing, it would contravene s 2(b) of the Charter. However, that application had not yet been heard at the time the media parties sought to be added (in part at their insistence).

    The media parties argued that the true issue underlying the reference was the Privacy Commissioner’s proposed regulation of internet searches and whether that offends the expression and press freedoms in the Charter, and therefore that they should be added as either parties or intervenors. The Prothonotary, however, held that the question which needed to be asked at this time was whether the media parties should be parties or intervenors on the Reference questions which actually existed at the time, and that there was no basis to grant that application.

    The media parties were not, for example, necessary for a full and effectual determination of all issues in the reference:

    [36] …What is at issue here is only whether Google is subject to or exempt from the application of Part 1 of PIPEDA in respect of how it collects, uses or discloses personal information in the operation of its search engine service when it presents search results in response to an individual’s name. 

    [37] The only direct result or effect of the answer to the questions raised in this reference will be to determine whether the OPC may proceed to investigate the complaint made against Google. The media parties are neither intended nor required to be bound by that result. The questions, as framed in the reference, can be effectually and completely settled without the presence of the media parties.

    Even if the scope of the Reference were expanded to include Google’s Charter question, the Prothonotary noted, she would be hesitant to conclude that the media parties were necessary:

    [39] The Court accepts, for the purpose of this argument, that deindexing may significantly affect the ability of content providers to reach their intended audience and for the public to access media content. Even as argued by the media parties, however, that is only the practical effect of the implementation of a recommendation to deindex. If deindexing is recommended or required, its implementation does not require that any action be compelled from or prohibited against the media parties, any other content provider, or any user of the search engine. The only action required would be by Google. Deindexing could and would produce its effect without the need for the other persons “affected” by it to be “bound” by the result of the proposed expanded reference. 

    [40] The impact of a potential deindexing requirement may be significant, but it does not affect the media parties any more directly than it would affect other content providers or those who use Google’s search engine service to gain access to content. To hold that the media parties are, by reason of the practical effect of a decision, necessary to the full and effectual determination of all issues would require that all others that are equally affected also be recognized as necessary parties and be made parties to the reference. 

    Similarly, the media parties were not found to have shown that they would add anything of value if they were allowed to be intervenors in the Reference:

    [47] It seems to the Court that the media parties have not given much thought to what they would have to contribute to the determination of the reference if it were limited to the questions as currently framed in the Notice of Application. Indeed, given that the issues currently framed in the reference focus on whether Google’s operation of its search engine is a commercial activity and the purpose for which Google collects, uses or discloses personal information, it is not clear what evidence the media parties might be able to contribute that might assist the Court’s determination. Asked at the hearing to state the position they might take in respect of each of the questions as framed in the reference, counsel for the media parties candidly admitted that they could not provide an answer, having not even seen the evidentiary record constituted by the Privacy Commissioner for the purpose of the reference.

    The Prothonotary did allow, however, that the media parties could apply for intervenor status again once Google’s application to expand the Reference had been decided, so long as “the proposed intervener’s contribution is well-defined and the Court is satisfied that this contribution is relevant, important and in the interest of justice” (para 50).

    [Editor’s note: one of the authors of this newsletter was counsel to one of the parties in this case, but was not involved in writing up this summary.]

  • 20 Mar 2019 1:27 PM

    Supreme Court concludes you don’t necessarily believe what people tell you on the Internet.

    The Supreme Court of Canada struck down portions of the child-luring provisions with its decision in R v Morrison. The accused was charged with child luring for the purposes of inviting sexual touching of a person under age 16, contrary to ss 172.1(1)(b) and 152 of the Criminal Code. That section, the Court noted, 

    [40] …creates an essentially inchoate offence — that is, a preparatory crime that captures conduct intended to culminate in the commission of a completed offence: see Legare, at para. 25; R. v. Alicandro, 2009 ONCA 133, 95 O.R. (3d) 173, at para. 20, citing A. Ashworth, Principles of Criminal Law, (5th ed. 2006), at pp. 468-70. There is no requirement that the accused meet or even intend to meet with the other person with a view to committing any of the designated offences: see Legare, at para. 25. The offence reflects Parliament’s desire to “close the cyberspace door before the predator gets in to prey”: para. 25.

    The accused had posted an advertisement on Craigslist saying “Daddy looking for his little girl”, which was responded to by a police officer who posed as ‘Mia,’ a 14-year-old. Over the course of more than two months, Morrison invited ‘Mia’ to touch herself sexually and proposed they engage in sexual activity. As a result, he was charged with the child luring offence, and defended himself on the basis that he believed he was communicating with an adult female engaged in role play who was determined to stay in character: as he said to the police when arrested, “on the internet, you don’t really know whether you’re speaking to a child or an adult”. However, the statute limited his ability to make that argument. 

    The offence in section 172.1(1)(b) requires proof that the communication took place with a person who is or who the accused believes is under the age of 16 years. Section 172.1(3) creates a presumption around that belief: 

    (3) Evidence that the person referred to in paragraph (1)(a), (b) or (c) was represented to the accused as being under the age of … sixteen years … is, in the absence of evidence to the contrary, proof that the accused believed that the person was under that age.

    In addition, section 172.1(4) imposes a further burden on the accused in that same regard: 

    (4) It is not a defence to a charge under paragraph (1)(a), (b) or (c) that the accused believed that the person referred to in that paragraph was at least … sixteen years …unless the accused took reasonable steps to ascertain the age of the person.

    Taken in combination, these provisions mean that if the other person is represented as being under 16, the accused is presumed to have believed that representation unless there is evidence to the contrary, and that evidence to the contrary must include the taking of reasonable steps to ascertain the person’s age. As a result

    [49]…the combined effect of subss. (3) and (4) is to create two pathways to conviction where the other person is represented as being underage to the accused: the Crown must prove that the accused either (1) believed the other person was underage or (2) failed to take reasonable steps to ascertain the other person’s age. In the context of child luring cases involving police sting operations, such as in Levigne, where it can be assumed that the undercover police officer posing as a child will represent that he or she is underage, these two pathways to conviction would have been available to the trier of fact.

    The accused challenged the constitutionality of both sections 172.1(3) and 172.1(4). In addition he challenged the constitutionality of the mandatory minimum sentence in section 172.1(2)(a). 

    The Court concluded that the presumption that an accused who was told the other person was under sixteen therefore believed that the person was under sixteen violated the Charter, specifically the presumption of innocence in section 11(d). It is well established that substituting proof of one thing (the accused was told she was under sixteen) for proof of another (the accused believed she was under sixteen) will violate section 11(d) unless the connection between the two is “inexorable”: that is, “one that necessarily holds true in all cases” (para 53). That could not be said of a communication on the internet:

    [58] Deception and deliberate misrepresentations are commonplace on the Internet: see R. v. Pengelley, 2010 ONSC 5488, 261 C.C.C. (3d) 93, at para. 17. As the Court of Appeal in this case aptly put it:

    There is simply no expectation that representations made during internet conversations about sexual matters will be accurate or that a participant will be honest about his or her personal attributes, including age. Indeed, the expectation is quite the opposite, as true personal identities are often concealed in the course of online communication about sexual matters. [para. 60]

    Accordingly, the Court found that section 172.1(3) violated section 11(d), and went on to conclude that it could not be saved by section 1. It held that although the goal of protecting children from Internet predators was sufficiently important, the provision was not minimally impairing, because it would be sufficient to “rely on the prosecution’s ability to secure convictions by inviting the trier of fact to find, based on a logical, common sense inference drawn from the evidence, that the accused believed the other person was underage.”

    The majority did not, however, strike down the “reasonable steps” requirement in section 172.1(4), holding that as it required proof of “belief”, it set a high mens rea standard (which excluded recklessness) and did not violate section 7. (Justice Abella, in a concurring judgment, would also have concluded that section 172.1(4) was unconstitutional.) The majority also declined to decide whether the mandatory minimum sentence was unconstitutional or not, preferring to have that issue argued at the retrial they ordered. Justice Karakatsanis, writing a concurring judgment, would have struck down the mandatory minimum.

  • 7 Mar 2019 1:45 PM

    Settlement against Devumi LLC prohibits it from further sales of online accounts, followers, likes and endorsements 

    In a press release issued on January 30, 2019, the Attorney General of New York announced that following an investigation of Devumi LLC, it had entered into a settlement that puts an end to the company’s sales of fake activity from fake social media accounts. 

    The company was found to have engaged in the sale of fake followers, “likes” and views on Twitter, YouTube, LinkedIn, SoundCloud and Pinterest using fake activity from false accounts: both bots (computer-operated accounts) and sock puppets (accounts through which one person pretends to be multiple other people). From the Attorney General’s press release:

    Devumi LLC and related companies owned by German Calas, Jr. – including DisruptX Inc.; Social Bull Inc.; and Bytion Inc. (collectively, “Devumi”) – sold fake followers, “likes,” views and other forms of online endorsement and activity to users of social media platforms. Devumi supplied the fraudulent activity using bot and sock-puppet accounts. These bot and sock-puppet accounts falsely pretended to express the genuine positive opinions of real people. In some instances, Devumi supplied fake accounts that copied real people’s social media profiles without consent, including their name and picture.

    In addition, Devumi sold endorsements from social media influencers without disclosing that the influencers had been paid for their recommendations. This is especially troubling when considering that the opinions of influencers can have particularly strong influence over the reputation and sales for any product, company, service or person they endorse.

    These business practices deceived and attempted to affect the decision-making of social media audiences, including: other platform users’ decisions about what content merits their own attention; consumers’ decisions about what to buy; advertisers’ decisions about whom to sponsor; and the decisions by policymakers, voters, and journalists about which people and policies have public support.

    Devumi’s practices deceived some of the company’s own customers who mistakenly believed they were paying for authentic endorsements, while many other Devumi customers knew they were buying fake activity and endorsements. Devumi also deceived the social media platforms, which have policies prohibiting fake activity.

    Devumi ceased operations in 2018 after the AG launched her investigation, which caused a major decline in sales. The settlement did not address whether the customers did anything illegal. Further commentary can be found here: https://www.manatt.com/Insights/Newsletters/Advertising-Law/Fake-Likes,-Followers-Yield-Real-Legal-Action.

  • 7 Mar 2019 1:45 PM

    Review will examine choice and affordability in Canada’s mobile wireless market

    On February 28, 2019, the CRTC announced a review of the state of the mobile wireless market in Canada to determine whether action is required to improve choice and affordability for Canadians. It is seeking feedback from the public on a variety of matters, including whether mobile virtual network operators should have mandated access to national wireless provider networks until they can establish themselves in the market, and whether regulatory measures are needed to facilitate the deployment of 5G network infrastructure.

    At the same time, the CRTC issued its Notice of Consultation CRTC 2019-57, a notice of hearing for a public hearing on 13 January 2020 looking at competition in the retail wireless service market, the wholesale mobile service regulatory framework (roaming and mobile virtual network operator access), and the future of mobile wireless services in Canada (especially with regard to reducing barriers to infrastructure deployment). The Notice poses 16 questions on which Canadians can provide feedback, all of which can be found here: https://crtc.gc.ca/eng/archive/2019/2019-57.htm.

  • 7 Mar 2019 1:44 PM

    Finds that Facebook abused its position of dominance in the German market by combining collected user data improperly

    On February 6, 2019, the German Federal Cartel Office (the “Bundeskartellamt”) issued a decision finding that Facebook had abused its position as the dominant company in the market for social networks by collecting user data outside the Facebook platform (via Facebook-owned services such as WhatsApp and Instagram, as well as third-party websites and apps) and assigning this data to its users’ Facebook accounts. This amounted to unlawful collection and use of user data. The President of the Cartel Office commented:

    With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data. In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts. The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data.

    The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users. Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.

    A good write-up on the decision and its context can be found here.

  • 7 Mar 2019 1:43 PM

    Law clerk affidavits attaching technical reports as exhibits insufficient to properly balance the interests of rights holders and internet users under the notice-and-notice copyright regime

    The Federal Court continues to settle outstanding questions related to the notice-and-notice regime and Norwich orders as a number of rights holders seek information about the customers of internet service providers. In this case, ME2 Productions, Inc. v. Doe, TekSavvy appealed a decision of a Federal Court Prothonotary sitting as a case management judge (the “CMJ”) in which the CMJ granted the application for a Norwich order requiring TekSavvy to provide customer names and addresses to the applicant. The main issue on the appeal was whether the applicant had brought sufficient evidence – or the best evidence – to support its request for a Norwich order.

    The main evidence brought by the applicant was in the form of affidavits sworn by law clerks at the plaintiffs’ lawyers’ law firm. Attached to one of those affidavits was a solemn declaration signed by an employee of a company called Maverickeye, which performs services to detect and log online sharing of content. That process and the resulting report are described by the Court (a brief conceptual sketch of the IP-to-ISP matching step appears after the quoted passage):

    [10] The Plaintiffs are movie production companies that own copyright in several films. They are concerned about illegal distribution of their films, so they hired a company called Maverickeye UG (Maverickeye) to monitor the Internet for illegal sharing of their films. The Plaintiffs received a report from Maverickeye about suspected illegal downloads which included the date and time of the activity together with the Internet Protocol (IP) addresses associated with the downloads. These IP addresses were correlated to numbers that were held by several different Internet Service Providers (ISPs). The IP addresses at issue here had been allocated to TekSavvy – each ISP is allocated a bank of IP addresses, and these are available for search, so it was possible for Maverickeye to link a particular IP address with a specific ISP. 

    [11] It was not possible, however, for Maverickeye to link the particular IP address with the name of the individual customer. That information is held by the ISP and is not otherwise available for search by other parties. Pursuant to the notice and notice regime established by the Copyright Modernization Act (described more fully below), the Plaintiffs sent notices to TekSavvy alleging that its customers infringed their copyright and providing the relevant information as to the date and time of the alleged illegal activity, as well as the associated IP address. Under the regime, TekSavvy had to forward such notices to its subscribers, and to retain certain information about the subscribers and their activity.
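    The IP-to-ISP matching step described in the passage above can be illustrated with a minimal sketch. This is a hypothetical illustration only, not the Maverickeye software: the ISP names and address blocks below are made up (the blocks are reserved documentation ranges), and it simply uses Python's standard ipaddress module to check which ISP's allocated block contains a logged address. Linking the address to an individual subscriber would still require the ISP's own records, which is why a Norwich order is sought.

    import ipaddress
    from datetime import datetime

    # Hypothetical ISP allocations (documentation address blocks, not real assignments).
    ISP_ALLOCATIONS = {
        "TekSavvy (example)": [ipaddress.ip_network("198.51.100.0/24")],
        "Another ISP (example)": [ipaddress.ip_network("203.0.113.0/24")],
    }

    def isp_for_address(ip_str):
        """Return the ISP whose allocated block contains the logged IP address, if any."""
        ip = ipaddress.ip_address(ip_str)
        for isp, blocks in ISP_ALLOCATIONS.items():
            if any(ip in block for block in blocks):
                return isp
        return None

    # A log entry of the kind described in para 10: the date and time of the
    # alleged sharing together with the IP address observed.
    log_entry = {"timestamp": datetime(2017, 5, 1, 22, 15), "ip": "198.51.100.42"}

    print(isp_for_address(log_entry["ip"]))
    # -> "TekSavvy (example)": identifies the ISP, but not the subscriber behind the address.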

    On the appeal, TekSavvy attacked the evidence put forward by the plaintiffs, arguing that the law clerks’ affidavits were improper hearsay; that the affidavits did not comply with the requirements of the Federal Courts Rules because they did not set out the basis for the deponents’ belief in their truth; and, finally, that the declaration from the Maverickeye employee was expert evidence that also did not comply with the requirements set out in the Rules.

    [109] In this case, TekSavvy says that the evidence is inadequate: the key evidence that is found in the Arheidt Declaration is hearsay which cannot be subject to cross-examination since it is simply an exhibit to the affidavits. This is exactly the type of evidence which was rejected in BMG, which remains good law. Further, it contends that the affidavits provide no basis to explain why the law clerks adopt the Arheidt Declaration or believe it to be true and that it is opinion evidence which is not properly submitted in accordance with the Rules or jurisprudence. TekSavvy argues that before the Court can grant extraordinary relief such as a Norwich order, it must demand better evidence from copyright owners.

    The Court acknowledged that this type and configuration of evidence has been routinely accepted in similar proceedings as a basis for the issuance of a Norwich order, but observed, at paragraph 103, that “[t]his is a case of first impression: there is no binding authority directly on point nor any decisions of this Court, so I must step back and assess it from first principles, guided by the decision in Rogers Communications.”

    Ultimately, the Court found that the CMJ had erred in relying on the evidence put forward in this manner. Decision-making in cases such as these needs to balance the interests of the rights holders against the rights of the individual internet users who may be prejudiced by disclosures under Norwich orders. 

    [121] First, the affidavits of the law clerks do not meet the requirements of Rule 81. To put it bluntly, there is absolutely no indication of the basis for the statement that the law clerks swearing the affidavits adopt the Arheidt Declaration or believe it to be true. Nor is there any explanation as to why the best evidence is not available. The fact that they work in the “copyright enforcement group” of the law firm representing the Plaintiffs may be relevant, in that they may have gained knowledge and expertise about how Maverickeye and its software function and why their reports should be viewed as accurate, but this is not explained. Nor is there any explanation as to why Mr. Arheidt did not swear an affidavit in these matters. 

    [122] Second, the key evidence in support of the granting of the Norwich order is set out in the Arheidt Declaration, but it is simply an exhibit to an affidavit. It is therefore beyond the reach of cross-examination. 

    [123] This evidence is simply not good enough. I find that before granting a Norwich order better evidence must be filed. I have noted earlier the obligation on the copyright owner to provide as accurate information as is possible in the circumstances. To this I would add that the key evidence of alleged copyright infringements must normally be set out in an affidavit, sworn either by a person with direct personal knowledge of how the evidence was gathered, or by someone who can explain why such evidence is not available and why they have reason to believe the truth of the material they are submitting.

    The appeal was allowed and the Norwich order was quashed, but the plaintiffs were granted leave to apply again with better evidence in accord with the Court’s decision.

  
