

  • 17 Apr 2019 12:35 PM | Anonymous

    Driver wearing wired ear bud headphones convicted of “using” phone, despite dead battery

In R. v. Grzelak, the accused was ticketed for “holding or using” an electronic device while driving, an offence under s. 214.2 of the BC Motor Vehicle Act. The undisputed facts were that the accused’s iPhone was in a centre cubby hole in the dashboard of his car, and that he was wearing a pair of ear bud headphones (in both ears) which were plugged into the phone. The phone’s battery was dead and no sound of any sort was coming through the ear buds. The offence provision required the Crown to prove that the accused was “holding the device in a position in which it may be used.” The judge noted that if this was proven, a conviction must follow, “even if the battery was dead, and even if the Defendant was not operating one of the functions of the device (such as the telephone or GPS function).” In support of this proposition the judge cited R. v. Judd, which seems an odd choice, as in that case the accused was convicted because he was physically holding his phone up to his ear while driving, and there was no evidence about the phone’s battery or which function he might have been using.

    On the issue of “holding” the judge found as follows:

    [9] Obviously, here the cell phone itself was sitting in the centre cubby hole, and was not in the defendants hands, or in his lap. But that is not the end of the matter. In my view, by plugging the earbud wire into the iPhone, the defendant had enlarged the device, such that it included not only the iPhone (proper) but also attached speaker or earbuds. In the same way, I would conclude that if the defendant had attached an exterior keyboard to the device for ease of inputting data, then the keyboard would then be part of the electronic device.

    [10] Since the earbuds were part of the electronic device and since the ear buds were in the defendants ears, it necessarily follows that the defendant was holding the device (or part of the device) in a position in which it could be used, i.e. his ears.

    Even the dead battery could not absolve the accused, as the judge held that simple “holding” was sufficient to make out the offence, “even if it is temporarily inoperative.” Accordingly, the accused was convicted.

In our view, and with respect, this reasoning seems a bit of a stretch. The accused was found to have been “holding the device”… in his ears. Surely this strains a reasonable interpretation of what the BC Legislature intended with the wording of the provision. Was it the fact of the physical connection of the earbuds to the phone, i.e. via a wire, that “enlarged” the device? This raises the question of whether there would be a different finding if the ear buds (or the judge’s hypothetical keyboard) were connected via Bluetooth, as is increasingly common. It is probably fair to say that what the Legislature intended to capture with these provisions is distracted driving, and that driving with earbuds in (had the phone not been dead, as here) might amount to that. But as this case demonstrates, as do so many others like it, it would be preferable for legislatures to use more technology-neutral language in these offence provisions.

  • 17 Apr 2019 12:34 PM | Anonymous

    Second search of phone data after update to forensic software held lawful under Charter

In R. v. Nurse, the two accused had been convicted at trial of first-degree murder of the deceased, Kumar, who was Nurse’s landlord. The BlackBerrys belonging to the two were seized incident to their arrest, and a warrant was obtained to search them. As they were locked and password-protected, the OPP investigating officers sent them to the RCMP for forensic extraction of data. The software used by the RCMP, called “Cellebrite,” was able to analyze raw data that was extracted from the phones, and it showed that there had been some communication between them, but nothing incriminatory was found. However, the data was re-analyzed a year later, by which time there had been significant software updates to Cellebrite, and the new analysis revealed extensive text messages between the two accused which revealed a plan to kill the victim.

    On appeal the accused repeated an argument they had made unsuccessfully at trial: that the re-analysis with the updated software amounted to a second “search” for the purposes of s. 8 of the Charter, and thus a second warrant should have been obtained. In rejecting this argument for a unanimous bench, Trotter J.A. remarked:

    [133] In analyzing this issue, it is important to consider the essential nature of computers and other digital devices. They challenge traditional definitions of a “building, receptacle or place” within the meaning of s. 487 of the Criminal Code. In R. v. Marakah, 2017 SCC 59, [2017] 2 S.C.R. 698, McLachlin C.J. said, at para. 27: “The factor of ‘place’ was largely developed in the context of territorial privacy interests, and digital subject matter, such as an electronic conversation, does not fit easily within the strictures set out by the jurisprudence.” See also R. v. Jones, 2011 ONCA 632, 107 O.R. (3d) 241, at paras. 45-52. Similarly, in R. v. Vu, 2013 SCC 60, [2013] 3 S.C.R. 657, Cromwell J. said, at para. 39: “…computers are not like other receptacles that may be found in a place of search. The particular nature of computers calls for a specific assessment of whether the intrusion of a computer search is justified, which in turn requires prior authorization.”

    [134] Because of these conceptual differences, arguments by analogy to traditional (i.e., non-digital) search scenarios will not always be helpful. For example, the trial judge was right to reject the ultraviolet light testing scenario advanced by trial counsel. It does not work in this context because the second ultraviolet light analysis would require re-entry into the premises resulting in a separate invasion of privacy.

    [135] The re-inspection or re-interpretation of the raw data harvested from the appellants’ devices did not involve a further invasion of privacy. It is not necessary in this case to identify precisely when the appellants’ privacy rights were defeated in favour of law enforcement. Nevertheless, their privacy rights were “implicated” when their devices were seized upon arrest. In R. v. Reeves, 2018 SCC 56, 427 D.L.R. (4th) 579, Karakatsanis J. held at para. 30: “When police seize a computer, they not only deprive individuals of control over intimate data in which they have a reasonable expectation of privacy, they also ensure that such data remains preserved and thus subject to potential future state inspection” (emphasis in original). The same would hold true for the seizure of a cellphone or BlackBerry device.

Here, whatever privacy interest the accused had in their phone data had been defeated completely by the issuing of the first warrant. The warrant did not have any search protocols attached to it, nor was there any indication that protocols would have been constitutionally necessary. The situation was analogous to a fraud investigation where copies of documents were taken and were continually inspected by police over the course of the investigation, and where it would be appropriate to consult new expert services to interpret them. While a re-analysis or re-inspection might, in other circumstances, amount to a new search, in this case the data was analyzed within the scope of an ongoing investigation, “the substance of which had not changed” between the two searches. The data was not altered in any way. The passage of time had no impact upon the lawfulness of the search. This ground of appeal was dismissed.

  • 17 Apr 2019 12:33 PM | Anonymous

    In seeking to revise crossborder dataflows, the OPC’s position would require consent for all transfers of personal information for processing

    The Office of the Privacy Commissioner of Canada (OPC) has initiated a consultation that proposes to completely reverse its previous guidance on crossborder dataflows under the Personal Information Protection and Electronic Documents Act (PIPEDA). And because they are trying to fit a round peg in a square hole, their position -- if implemented -- will have a huge impact on all outsourcing.

In 2009, the OPC published a position that was consistent with the actual wording of the statute. It held that when one organization gives personal information to a service provider, so that the service provider can process the data on behalf of the original organization, this is a transfer and not a disclosure. This is an important distinction because transfers, unlike disclosures, do not require consent from the individual. Data is disclosed when it is given to another organization for use by that organization for its own purposes. In a transfer scenario, the personal information is protected by operation of the accountability principle, which means the organization that originally collected the data and has transferred it to a service provider remains responsible for the personal data and has to use contractual and other means to make sure that the service provider takes good care of the personal information at issue. Importantly, in its 2009 guidance, the OPC correctly noted that “PIPEDA does not distinguish between domestic and international transfers of data.” Consent was not required, but the OPC did recommend that notice be given to the individual:

    Organizations must be transparent about their personal information handling practices. This includes advising customers that their personal information may be sent to another jurisdiction for processing and that while the information is in another jurisdiction it may be accessed by the courts, law enforcement and national security authorities.

    The 2009 policy position reflects the consensus of most privacy practitioners since PIPEDA came into effect in 2001. The new position is a complete reversal and discards the notion of “transfers” of personal information for processing: 

    Under PIPEDA, any collection, use or disclosure of personal information requires consent, unless an exception to the consent requirement applies. In the absence of an applicable exception, the OPC’s view is that transfers for processing, including cross border transfers, require consent as they involve the disclosure of personal information from one organization to another. Naturally, other disclosures between organizations that are not in a controller/processor relationship, including cross border disclosures, also require consent. [emphasis added]

    The new position concludes that because there is nothing in PIPEDA that specifically exempts transfers from consent, transfers can be folded into the mandatory consent scheme:

    While it is true that Canada does not have an adequacy regime [as in Europe] and that PIPEDA in part regulates cross border data processing through the accountability principle, nothing in PIPEDA exempts data transfers, inside or outside Canada, from consent requirements. Therefore, as a matter of law, consent is required. Our view, then, is that cross-border data flows are not only matters decided by states (trade agreements and laws) and organizations (commercial agreements); individuals ought to and do, under PIPEDA, have a say in whether their personal information will be disclosed outside Canada.

This new position, while demanding consent, brings the true nature of that consent into question. On one hand, the organization has to get consent. On the other hand, the individual can be given no meaningful choice or ability to opt out, because the organization can say “take it or leave it”:

    Organizations are free to design their operations to include flows of personal information across borders, but they must respect individuals’ right to make that choice for themselves as part of the consent process. In other words, individuals cannot dictate to an organization that it must design its operations in such a way that personal information must stay in Canada (data localisation), but organizations cannot dictate to individuals that their personal information will cross borders unless, with meaningful information, they consent to this.

    There is little basis in the statute for this position reversal, and the OPC’s consultation document shows some significant mental gymnastics to get where they want to go notwithstanding the actual scheme of the Act. 

Because PIPEDA does not deal with crossborder transfers in any specific way, the only way for the OPC to get to the result it seeks is to impose the new requirements on all transfers for processing by a third party, regardless of whether that processing involves moving the personal information outside of Canada. And to highlight the shortcomings of trying to shoehorn this principle into the existing statute, the new position would not affect in any way a US company operating in Canada that decides after the fact to move data to its own US-based data centre, because that movement would not be a disclosure or a transfer from one entity to another.

    The proposal immediately garnered significant criticism. Lisa Lifshitz wrote for Canadian Lawyer Magazine:

This is problematic in several respects as this analysis flies in the face of years of guidance from the OPC (and reiterated repeatedly, including in the 2012 Privacy and Outsourcing for Businesses guidance document) that a transfer for processing is a "use" of the information, not a disclosure. Assuming the information is being used for the purpose it was originally collected, additional consent for the transfer is not required; it is sufficient for organizations to be transparent about their personal information handling practices. This includes advising Canadians that their personal information may be sent to another jurisdiction for processing and that while the information is in another jurisdiction it may be accessed by the courts, law enforcement and national security authorities.


    The OPC’s implement-first-ask-permission-later approach to changing the consent requirements for cross-border data transfers is troublesome at best and judging from initial reactions, sits uneasily with many (me included).

    Likely knowing this, at the same time it released the Equifax decision the privacy commissioner also announced a “Consultation on transborder dataflows” under PIPEDA, not only for cross-border transfers between controllers and processors but for other cross border disclosures of personal information between organizations. The GDPR-style language used in this document is no accident and our regulator is seemingly trying to ensure the continued adequacy designation of PIPEDA (and continued data transfers from the EU to Canada) by adopting policy reinterpretations (and new policies) pending any actual legal reform of our law. Meanwhile, the OPC’s sudden new declaration that express consent is required if personal information will cross borders (and the related requirement that individuals must be informed of any options available to them if they do not wish to have their personal information disclosed across borders) introduces a whole new level of confusion and complexity regarding the advice that practitioners are supposed to be giving their clients pending the results of the consultations review, not to mention the potential negative business impacts (for consumers/vendors of cloud/managed services and mobile/ecommerce services, just to name a few examples) that may arise as a consequence.

    Michael Geist has written about the OPC’s approach on his blog:

    While the OPC position is a preliminary one – the office is accepting comments in a consultation until June 4 – there are distinct similarities with its attempt to add the right to be forgotten (the European privacy rule that allows individuals to request removal of otherwise lawful content about themselves from search results) into Canadian law. In that instance, despite the absence of a right-to-be-forgotten principle in the statute, the OPC simply ruled that it was reading in a right to de-index search results into PIPEDA (Canada’s Personal Information Protection and Electronic Documents Act). The issue is currently being challenged before the courts.

    In this case, the absence of meaningful updates to Canadian privacy law for many years has led to another exceptionally aggressive interpretation of the law by the OPC, effectively seeking to update the law through interpretation rather than actual legislative reform.

    The OPC is inviting comments up to June 4, 2019 and it is expected they’ll get an earful. The Canadian Technology Law Association is planning to make a submission. For more information or to contribute, contact CAN-TECH Law’s President James Kosa.

  • 4 Apr 2019 12:41 PM | Anonymous

Office of the Superintendent of Financial Institutions issues Advisory

On March 31, 2019, the Technology and Cyber Security Reporting Advisory came into effect, setting out the Office of the Superintendent of Financial Institutions’ (OSFI) expectations for federally regulated financial institutions (FRFIs) with regard to technology or cyber security incidents. A “technology or cyber security incident” is defined as an incident which has “the potential to, or has been assessed to, materially impact the normal operations of a FRFI, including confidentiality, integrity or availability of its systems and information.” FRFIs should report an incident which has a high or critical severity level to OSFI. The Advisory indicates that a “reportable incident” is one that may have:

    • Significant operational impact to key/critical information systems or data;
    • Material impact to FRFI operational or customer data, including confidentiality, integrity or availability of such data; 
    • Significant operational impact to internal users that is material to customers or business operations;
    • Significant levels of system / service disruptions;
    • Extended disruptions to critical business systems / operations; 
    • Number of external customers impacted is significant or growing;
    • Negative reputational impact is imminent (e.g., public/media disclosure); 
    • Material impact to critical deadlines/obligations in financial market settlement or payment systems (e.g., Financial Market Infrastructure);
    • Significant impact to a third party deemed material to the FRFI; 
    • Material consequences to other FRFIs or the Canadian financial system; 
    • A FRFI incident has been reported to the Office of the Privacy Commissioner or local/foreign regulatory authorities.

An FRFI must give notice to OSFI in writing as promptly as possible, but no later than 72 hours after determining that an incident meets the criteria. In addition, updates must be provided at least daily until all material details have been provided, and until the incident is contained or resolved. The Advisory also provides four examples of reportable incidents: cyber-attack, service availability and recovery, third party breach, and extortion threat.

  • 4 Apr 2019 12:40 PM | Anonymous

    Alberta Privacy Commissioner orders release of names of blocked Twitter accounts

The Alberta Information and Privacy Commissioner has given guidance about the interaction between privacy legislation and Twitter with its decision in Alberta Education (Re). The applicant had made a request under the Freedom of Information and Protection of Privacy Act (FOIP Act) to a public body, Alberta Education, requesting a list of the Twitter users/accounts that had been blocked for each Twitter account that it operated or authorized. Alberta Education provided some records in response but refused to provide the names of some of the blocked Twitter accounts. It did so on the basis of section 17(1) of the FOIP Act, which states that

    17(1) The head of a public body must refuse to disclose personal information to an applicant if the disclosure would be an unreasonable invasion of a third party’s personal privacy.

    The Adjudicator ultimately decided that there was insufficient information to show that section 17(1) applied, and therefore directed Alberta Education to give the applicant access to the requested information. 

Section 17 operates only when the disclosure of personal information would be an unreasonable invasion of a third party’s personal privacy. Under the FOIP Act, not all disclosure of personal information amounts to an unreasonable invasion of personal privacy. Under section 17(2), for example, information which reveals financial details of a contract to supply goods or services to a public body is not an unreasonable invasion; on the other hand, section 17(4)(g) states that disclosure is presumptively an invasion of personal privacy if

    (g) the personal information consists of the third party’s name when 

    (i) it appears with other personal information about the third party, or

    (ii) the disclosure of the name itself would reveal personal information about the third party

    Section 17(5) sets out a number of non-exhaustive factors to be considered in determining whether a disclosure of personal information constitutes an unreasonable invasion of a third party’s personal privacy, such as whether the personal information is relevant to a fair determination of the applicant’s rights, or whether the personal information is likely to be inaccurate or unreliable.

    However, before any of that analysis becomes necessary, the information in question must be found to be personal information under s 17(1), which requires that the information must have a personal dimension and be about an identifiable individual. Here, Alberta Education had withheld the names of Twitter accounts and the associated image where it believed the information might reveal the identity and image of the account holder, on the basis that disclosure might enable the applicant to infer the identity of individuals engaged in inappropriate conduct. The Adjudicator questioned whether, given the reality of Twitter, such an inference was possible:

    [para 22] A Twitter account name is the name of an account, rather than the name of an individual. While some individuals may use their names as the name of their Twitter account, others do not. In addition, organizations and “bots” may also use Twitter accounts. I note that a July 11, 2018 article in the New York Times reports: 

    Twitter will begin removing tens of millions of suspicious accounts from users’ followers on Thursday, signaling a major new effort to restore trust on the popular but embattled platform.

    The reform takes aim at a pervasive form of social media fraud. Many users have inflated their followers on Twitter or other services with automated or fake accounts, buying the appearance of social influence to bolster their political activism, business endeavors or entertainment careers.

    Twitter’s decision will have an immediate impact: Beginning on Thursday, many users, including those who have bought fake followers and any others who are followed by suspicious accounts, will see their follower numbers fall. While Twitter declined to provide an exact number of affected users, the company said it would strip tens of millions of questionable accounts from users’ followers. The move would reduce the total combined follower count on Twitter by about 6 percent — a substantial drop. 

    [para 23] I note too, that an article in Vox describes the prevalence of fake and automated Twitter accounts: 

    In April, Pew found that automated accounts on Twitter were responsible for 66 percent of tweeted links to news sites. Those aren’t necessarily the bots Twitter is after: Automation remains okay to use under many circumstances. But the “malicious” are being targeted. Gadde said Wednesday that the new accounts being deleted from follower accounts aren’t necessarily bot accounts: “In most cases, these accounts were created by real people but we cannot confirm that the original person who opened the account still has control and access to it.” Weeding out these accounts might discourage the practice of buying fake followers.

    Twitter has acknowledged it contributed to the spread of fake news during the 2016 U.S. presidential election, and is trying not to have a repeat showing. It’s verifying midterm congressional candidate accounts, it launched an Ads Transparency Center, and now come the new culls.


    The Washington Post notes that Twitter suspended more than 70 million accounts in May and June. Twitter also said recently that it’s challenging “more than 9.9 million potentially spammy or automated accounts per week.” [my emphasis] (“Challenged” doesn’t necessarily mean “suspended,” but users are prompted to verify a phone or email address to continue using the account.)

    [para 24] From the foregoing, I understand that millions of Twitter accounts may be automated or fake. As a result, the name of a Twitter account cannot be said to have a personal dimension necessarily, even though an account may have the appearance of being associated with an identifiable individual

    The information requested, the Adjudicator concluded, would be about Twitter accounts, which was not the same thing as being about individuals:

    [para 28] As it is not clearly the case that the accounts severed under section 17 are associated with identifiable individuals, and there is no requirement that a Twitter user use his or her own name or image, or be a human being, the fact that the Twitter account was blocked does not necessarily reveal personal information about an identifiable individual.

    [para 29] To put it in the terms used by the Alberta Court of Appeal, the evidence before me supports finding that the information severed by the Public Body is “about a Twitter account”, rather than “about an identifiable individual”.

Since the standard for withholding information was that it would be an unreasonable invasion of a third party’s personal privacy to disclose the information, Alberta Education could not refuse the application on the lower standard that the information could possibly be personal information. Accordingly, the Adjudicator ordered Alberta Education to release the requested information.

At the request of Alberta Education, the Adjudicator also commented on the applicability of this reasoning to email accounts, finding that if there was evidence establishing that an email address was connected to an identifiable individual, and the email address appeared in a context that revealed personal information about the individual, then the information would be personal information and the public body must consider section 17. Where that was not true, however, section 17 was not applicable.

  • 4 Apr 2019 12:32 PM | Anonymous

    Court refuses to compel accused person to unlock phone

In R. v. Shergill, Justice Philip Downes of the Ontario Court of Justice heard an application by the Crown which raised the thorny issue of whether accused persons can be compelled to “unlock” password-protected electronic devices. The accused was charged with a variety of sexual and child pornography offences and the police seized his cell phone incident to arrest. Realizing they had no technology that would allow them to open the phone without possibly destroying its contents, the police applied for a search warrant along with an “assistance order” under s. 487.02 of the Criminal Code. This section provides that a judge who issues a warrant “may order a person to provide assistance, if the person’s assistance may reasonably be considered to be required to give effect to the authorization or warrant.” Unusually, the application did not proceed ex parte and both the Crown and the accused made submissions.

    The Crown argued that the accused’s Charter rights were not engaged by the issuance of the assistance order, because it was a matter of “mere practicality.” Centrally, the principle against self-incrimination was not engaged because the order “only compels Mr. Shergill to provide access to, and not create, material the police are judicially authorized to examine, and because any self-incrimination concerns are met by the grant of use immunity over Mr. Shergill’s knowledge of the password.” The accused argued that the principle against self-incrimination was, indeed, engaged because the order would compel him to produce information that only existed in his mind, “for the purpose of assisting [the police] in obtaining potentially incriminating evidence against him”—thus violating his right to silence and the protection against self-incrimination under s. 7 of the Charter.

    Justice Downes sided with the accused. First, the principle against self-incrimination was engaged:

    The Crown suggests that Mr. Shergill’s s. 7 interests are “not engaged” or minimally compromised because what is sought to be compelled from him has no incriminatory value or effect. All the assistance order seeks is a password, the content of which is of no evidentiary value. Indeed, the Crown says that the police need not even be aware of the actual password as long as Mr. Shergill somehow unlocks the phone without actually touching it himself.

    In my view, however, the protection against self-incrimination can retain its force even where the content of the compelled communication is of no intrinsic evidentiary value. This is particularly so where, as here, that communication is essential to the state’s ability to access the evidence which they are “really after.” To paraphrase the Court in Reeves, to focus exclusively on the incriminatory potential of the password neglects the significant incriminatory effect that revealing the password has on Mr. Shergill. As the Supreme Court held in White:

    The protection afforded by the principle against self-incrimination does not vary based upon the relative importance of the self-incriminatory information sought to be used. If s. 7 is engaged by the circumstances surrounding the admission into evidence of a compelled statement, the concern with self-incrimination applies in relation to all of the information transmitted in the compelled statement. Section 7 is violated and that is the end of the analysis, subject to issues relating to s. 24(1) of the Charter. [footnotes omitted]

Even more important, the judge found, was the right to silence under s. 7 of the Charter:

    In my view, the more significant principle of fundamental justice at stake is the right to silence. This right emerged as a component of the protection against self-incrimination in R. v. Hebert in which McLachlin J. (as she then was), held:

    If the Charter guarantees against self-incrimination at trial are to be given their full effect, an effective right of choice as to whether to make a statement must exist at the pre-trial stage… the right to silence of a detained person under s. 7 of the Charter must be broad enough to accord to the detained person a free choice on the matter of whether to speak to the authorities or to remain silent.

    McLachlin J. also reaffirmed the Court’s prior holding that the right to silence was “a well-settled principle that has for generations been part of the basic tenets of our law.” 

    The “common theme” underlying the right to silence is “the idea that a person in the power of the state in the course of the criminal process has the right to choose whether to speak to the police or remain silent.” In tracing the history of the right, McLachlin J. referred to an “array of distinguished Canadian jurists who recognized the importance of the suspect’s freedom to choose whether to give a statement to the police or not” and described the essence of the right to silence as the “notion that the person whose freedom is placed in question by the judicial process must be given the choice of whether to speak to the authorities or not.” Finally, Hebert held that s. 7 provides “a positive right to make a free choice as to whether to remain silent or speak to the authorities.”

    The pre-trial right to silence is a concept which, as Iacobucci J. held in R.J.S., has been “elevated to the status of a constitutional right.” [footnotes omitted]

    The court also rejected the Crown’s argument that the accused’s rights were sufficiently protected by providing use immunity for his knowledge of the contents of his phone and the password:

    As a practical matter, without the assistance order, the evidence would never come into the hands of the police. In that sense it strikes me as somewhat artificial to say that the data on the Blackberry is evidence which, in the language of D’Amour, “exist[s] prior to, and independent of, any state compulsion.” Rather, it is evidence which, as far as the police are concerned, is only “brought into existence by the exercise of compulsion by the state.”


    Fundamentally, realistically and in any practical sense, granting this application would amount to a court order that Mr. Shergill provide information which is potentially crucial to the success of any prosecution against him, and which could not be obtained without the compelled disclosure of what currently exists only in his mind. It strikes at the heart of what the Supreme Court has held to be a foundational tenet of Canadian criminal law, namely, that an accused person cannot be compelled to speak to the police and thereby assist them in gathering evidence against him or herself.

    In my view nothing short of full derivative use immunity could mitigate the s. 7 violation in this case.

    The Court then discussed some of the challenges that law enforcement faces in light of new technology, and encryption in particular. Though there is always a compelling public interest in the investigation and prosecution of crimes, the final balancing came down on the side of the accused's liberty interests under s. 7 of the Charter:

    I accept that the current digital landscape as it relates to effective law enforcement and the protection of privacy presents many challenges. It may be that a different approach to this issue is warranted, whether through legislative initiatives or modifications to what I see as jurisprudence which is binding on me. But on my best application of controlling authority, I am simply not persuaded that the order sought can issue without fundamentally breaching Mr. Shergill’s s. 7 liberty interests, a breach which would not be in accordance with the principle of fundamental justice which says that he has the right to remain silent in the investigative context.

    The search warrant was issued but the assistance order was denied.

  • 20 Mar 2019 1:33 PM | Anonymous

    Detailed questionnaire sent to at least 60 individuals

    According to Forbes Online, the Canada Revenue Agency (CRA) has begun to audit individuals with significant involvement in cryptocurrency holdings or transactions. 

    In 2017, the CRA established a dedicated cryptocurrency unit said to be intended to build intelligence and conduct audits focussed on cryptocurrency risks as part of its Underground Economy Strategy.

    Forbes reports there are currently over 60 active cryptocurrency audits, which involve a very detailed questionnaire, consisting of 54 questions with many sub-questions. Examples include: 

    • Do you use any cryptocurrency mixing services and tumblers? (These can be used to intermix accounts and disguise the origin of funds.) If so, which services do you use? 
    • Can you please provide us with the tracing history, along with all the cryptocurrency addresses you ‘mixed’? Why do you use these services? 

    Further questions address whether the taxpayer has purchased or sold crypto-assets from or to private individuals and, if so, how they became aware of the sale opportunity and how the transaction was facilitated. The questionnaire goes so far as to ask the taxpayer to list all personal crypto-asset addresses that are not associated with their custodial wallet accounts. It further asks whether they have been a victim of crypto-theft, or have been involved in ICOs (initial coin offerings) or crypto-mining.

    Not surprisingly, CRA has not disclosed what criteria it has used to target individuals with the questionnaires.

  • 20 Mar 2019 1:29 PM | Anonymous

    Calls for the creation of a “Digital Authority”, an increased onus to police user-generated content, and reduced market concentration

    The United Kingdom Select Committee on Communications has released a very interesting report on the regulation of the internet. Entitled Regulating in a Digital World, the report calls for a whole new era and methodology for regulating both online service providers and platforms, and the content that is made available through them. It calls for regulation based upon ten principles enunciated in the introduction:

    1. Parity: the same level of protection must be provided online as offline
    2. Accountability: processes must be in place to ensure individuals and organisations are held to account for their actions and policies
    3. Transparency: powerful businesses and organisations operating in the digital world must be open to scrutiny
    4. Openness: the internet must remain open to innovation and competition
    5. Privacy: to protect the privacy of individuals
    6. Ethical design: services must act in the interests of users and society
    7. Recognition of childhood: to protect the most vulnerable users of the internet
    8. Respect for human rights and equality: to safeguard the freedoms of expression and information online
    9. Education and awareness-raising: to enable people to navigate the digital world safely
    10. Democratic accountability, proportionality and evidence-based approach.

    At its heart, the Report calls for the creation of a “Digital Authority” that would advise government and regulators about the online environment: 

    238. We recommend that a new body, which we call the Digital Authority, should be established to co-ordinate regulators in the digital world. We recommend that the Digital Authority should have the following functions:

    • to continually assess regulation in the digital world and make recommendations on where additional powers are necessary to fill gaps;
    • to establish an internal centre of expertise on digital trends which helps to scan the horizon for emerging risks and gaps in regulation;
    • to help regulators to implement the law effectively and in the public interest, in line with the 10 principles set out in this report;
    • to inform Parliament, the Government and public bodies of technological developments;
    • to provide a pool of expert investigators to be consulted by regulators for specific investigations;
    • to survey the public to identify how their attitudes to technology change over time, and to ensure that the concerns of the public are taken into account by regulators and policy-makers;
    • to raise awareness of issues connected to the digital world among the public;
    • to engage with the tech sector;
    • to ensure that human rights and children’s rights are upheld in the digital world;
    • to liaise with European and international bodies responsible for internet regulation.

    239. Policy-makers across different sectors have not responded adequately to changes in the digital world. The Digital Authority should be empowered to instruct regulators to address specific problems or areas. In cases where this is not possible because problems are not within the remit of any regulator, the Digital Authority should advise the Government and Parliament that new or strengthened legal powers are needed.

    The Report further critiques the large companies that it says dominate the digital space, calls for greater regulation and scrutiny of mergers, and challenges the paradigm of cross-subsidies that result in free services:

    15. Mergers and acquisitions should not allow large companies to become data monopolies. We recommend that in its review of competition law in the context of digital markets the Government should consider implementing a public-interest test for data-driven mergers and acquisitions. The public-interest standard would be the management, in the public interest and through competition law, of the accumulation of data. If necessary, the Competition and Markets Authority (CMA) could therefore intervene as it currently does in cases relevant to media plurality or national security. 

    16. The modern internet is characterised by the concentration of market power in a small number of companies which operate online platforms. These services have been very popular and network effects have helped them to become dominant. Yet the nature of digital markets challenges traditional competition law. The meticulous ex post analyses that competition regulators use struggle to keep pace with the digital economy. The ability of platforms to cross-subsidise their products and services across markets to deliver them free or discounted to users challenges traditional understanding of the consumer welfare standard. 

    With respect to problematic content, the Report proposes removing the safe harbours that currently protect platform providers and replacing them with an obligation to police and be accountable for user-generated content: “a duty of care should be imposed on online services which host and curate content which can openly be uploaded and accessed by the public. This would aim to create a culture of risk management at all stages of the design and delivery of services.”

    The Report also addresses children’s issues, difficult-to-understand terms of use, and privacy by default. 

    How the Report will be received and translated into new regulation remains to be seen.

  • 20 Mar 2019 1:28 PM | Anonymous

    Media parties denied standing in Reference on right to be forgotten

    The Prothonotary of the Federal Court dismissed an application by various media parties to be involved in a Reference case in Reference re subsection 18.3(1) of the Federal Courts Act. A complainant brought a complaint to the Privacy Commissioner alleging that Google contravened the Personal Information Protection and Electronic Documents Act [PIPEDA] by continuing to prominently display links to news articles about him in search results corresponding to his name. The complainant alleges the articles in question are outdated and inaccurate and disclose sensitive and private information, and has requested that Google “de-index” him: that is, that it remove the articles from search results using his name, a process colloquially referred to as “the right to be forgotten”. Before investigating the complaint, the Privacy Commissioner sent two Reference questions to be determined by the Federal Court, and the media parties sought to take part in those Reference proceedings. The prothonotary denied that request, but left open the possibility that the media parties could make the same application at a later point.

    The central issue was, in fact, what the issue was. There was no question that the underlying complaint to the Privacy Commissioner against Google raised “important and ground-breaking issues relating to online reputation, including whether a ‘right to be forgotten’ should be recognized in Canada, and if so, how such a right can be balanced with the Charter protected rights to freedom of expression and freedom of the press” (para 7). That was not, however, what the Reference was about. Rather, the Privacy Commissioner had asked only two questions:

    1. Does Google, in the operation of its search engine service, collect, use or disclose personal information in the course of commercial activities within the meaning of paragraph 4(1)(a) of PIPEDA when it indexes webpages and presents search results in response to searches of an individual’s name?
    2. Is the operation of Google’s search engine service excluded from the application of Part I of PIPEDA by virtue of paragraph 4(2)(c) of PIPEDA because it involves the collection, use or disclosure of personal information for journalistic, artistic or literary purposes and for no other purpose?

    Google, as a party to the Reference, had brought an application to expand the questions to include whether, if PIPEDA applied to the operation of its search engine and required deindexing, that requirement would contravene s 2(b) of the Charter. However, that application had not yet been heard at the time the media parties sought to be added (in part at their insistence).

    The media parties argued that the true issue underlying the reference was the Privacy Commissioner’s proposed regulation of internet searches and whether that offends the expression and press freedoms in the Charter, and therefore that they should be added as either parties or intervenors. The Prothonotary, however, held that the question which needed to be asked at this time was whether the media parties should be parties or intervenors on the Reference questions which actually existed at the time, and that there was no basis to grant that application.

    The media parties were not, for example, necessary for a full and effectual determination of all issues in the reference:

    [36] …What is at issue here is only whether Google is subject to or exempt from the application of Part 1 of PIPEDA in respect of how it collects, uses or discloses personal information in the operation of its search engine service when it presents search results in response to an individual’s name. 

    [37] The only direct result or effect of the answer to the questions raised in this reference will be to determine whether the OPC may proceed to investigate the complaint made against Google. The media parties are neither intended nor required to be bound by that result. The questions, as framed in the reference, can be effectually and completely settled without the presence of the media parties.

    Even if the scope of the Reference were expanded to include Google’s Charter question, the Prothonotary noted, she would be hesitant to conclude that the media parties were necessary:

    [39] The Court accepts, for the purpose of this argument, that deindexing may significantly affect the ability of content providers to reach their intended audience and for the public to access media content. Even as argued by the media parties, however, that is only the practical effect of the implementation of a recommendation to deindex. If deindexing is recommended or required, its implementation does not require that any action be compelled from or prohibited against the media parties, any other content provider, or any user of the search engine. The only action required would be by Google. Deindexing could and would produce its effect without the need for the other persons “affected” by it to be “bound” by the result of the proposed expanded reference. 

    [40] The impact of a potential deindexing requirement may be significant, but it does not affect the media parties any more directly than it would affect other content providers or those who use Google’s search engine service to gain access to content. To hold that the media parties are, by reason of the practical effect of a decision, necessary to the full and effectual determination of all issues would require that all others that are equally affected also be recognized as necessary parties and be made parties to the reference. 

    Similarly the media parties were not found to have shown that they would add anything of value if they were allowed to be intervenors in the Reference:

    [47] It seems to the Court that the media parties have not given much thought to what they would have to contribute to the determination of the reference if it were limited to the questions as currently framed in the Notice of Application. Indeed, given that the issues currently framed in the reference focus on whether Google’s operation of its search engine is a commercial activity and the purpose for which Google collects, uses or discloses personal information, it is not clear what evidence the media parties might be able to contribute that might assist the Court’s determination. Asked at the hearing to state the position they might take in respect of each of the questions as framed in the reference, counsel for the media parties candidly admitted that they could not provide an answer, having not even seen the evidentiary record constituted by the Privacy Commissioner for the purpose of the reference.

    The Prothonotary did allow, however, that the media parties could apply for intervenor status again once Google’s application to expand the Reference had been decided, so long as “the proposed intervener’s contribution is well-defined and the Court is satisfied that this contribution is relevant, important and in the interest of justice” (para 50).

    [Editor’s note: one of the authors of this newsletter was counsel to one of the parties in this case, but was not involved in writing up this summary.]

  • 20 Mar 2019 1:27 PM | Anonymous

    Supreme Court concludes you can’t necessarily believe what people tell you on the Internet.

    The Supreme Court of Canada struck down portions of the child-luring provisions with its decision in R v Morrison. The accused was charged with child luring for the purposes of inviting sexual touching of a person under age 16, contrary to ss 172.1(1)(b) and 152 of the Criminal Code. That section, the Court noted, 

    [40] …creates an essentially inchoate offence — that is, a preparatory crime that captures conduct intended to culminate in the commission of a completed offence: see Legare, at para. 25; R. v. Alicandro, 2009 ONCA 133, 95 O.R. (3d) 173, at para. 20, citing A. Ashworth, Principles of Criminal Law, (5th ed. 2006), at pp. 468-70. There is no requirement that the accused meet or even intend to meet with the other person with a view to committing any of the designated offences: see Legare, at para. 25. The offence reflects Parliament’s desire to “close the cyberspace door before the predator gets in to prey”: para. 25.

    The accused had posted an advertisement on Craigslist saying “Daddy looking for his little girl”, which was responded to by a police officer who posed as ‘Mia,’ a 14-year-old. Over the course of more than two months, Morrison invited ‘Mia’ to touch herself sexually and proposed they engage in sexual activity. As a result he was charged with the child luring offence, and defended himself on the basis that he believed he was communicating with an adult female engaged in role play who was determined to stay in character: as he said to the police when arrested, “on the internet, you don’t really know whether you’re speaking to a child or an adult”. However, the statute limited his ability to make that argument. 

    The offence in section 172.1(1)(b) requires proof that the communication took place with a person who is or who the accused believes is under the age of 16 years. Section 172.1(3) creates a presumption around that belief: 

    (3) Evidence that the person referred to in paragraph (1)(a), (b) or (c) was represented to the accused as being under the age of … sixteen years … is, in the absence of evidence to the contrary, proof that the accused believed that the person was under that age.

    In addition, section 172.1(4) imposes a further burden on the accused in that same regard: 

    (4) It is not a defence to a charge under paragraph (1)(a), (b) or (c) that the accused believed that the person referred to in that paragraph was at least … sixteen years …unless the accused took reasonable steps to ascertain the age of the person.

    Taken in combination, these provisions mean that if the other person is represented as being under 16, the accused is presumed to have believed that representation unless there is evidence to the contrary, and any such evidence must include the taking of reasonable steps to ascertain the person’s age. As a result:

    [49] …the combined effect of subss. (3) and (4) is to create two pathways to conviction where the other person is represented as being underage to the accused: the Crown must prove that the accused either (1) believed the other person was underage or (2) failed to take reasonable steps to ascertain the other person’s age. In the context of child luring cases involving police sting operations, such as in Levigne, where it can be assumed that the undercover police officer posing as a child will represent that he or she is underage, these two pathways to conviction would have been available to the trier of fact.

    The accused challenged the constitutionality of both sections 172.1(3) and 172.1(4). In addition he challenged the constitutionality of the mandatory minimum sentence in section 172.1(2)(a). 

    The Court concluded that the presumption that a person who was told someone was under sixteen therefore believed that that person was under sixteen violated the Charter, specifically the presumption of innocence in section 11(d). It is well-established that the substitution of proof of one thing (the accused was told she was under sixteen) for proof of another thing (the accused believed she was under sixteen) will violate section 11(d) unless the connection from one to the other is “inexorable”: that is, “one that necessarily holds true in all cases” (para 53). That could not be said of a communication on the internet:

    [58] Deception and deliberate misrepresentations are commonplace on the Internet: see R. v. Pengelley, 2010 ONSC 5488, 261 C.C.C. (3d) 93, at para. 17. As the Court of Appeal in this case aptly put it:

    There is simply no expectation that representations made during internet conversations about sexual matters will be accurate or that a participant will be honest about his or her personal attributes, including age. Indeed, the expectation is quite the opposite, as true personal identities are often concealed in the course of online communication about sexual matters. [para. 60]

    Accordingly the Court found that section 172.1(3) violated section 11(d), and it went on to conclude that the provision could not be saved by section 1. The Court held that although the goal of protecting children from Internet predators was sufficiently important, the provision was not minimally impairing, because it would be sufficient to “rely on the prosecution’s ability to secure convictions by inviting the trier of fact to find, based on a logical, common sense inference drawn from the evidence, that the accused believed the other person was underage.”

    The majority did not, however, strike down the “reasonable steps” requirement in section 172.1(4), holding that as it required proof of “belief”, it set a high mens rea standard (which excluded recklessness) and did not violate section 7. (Justice Abella, in a concurring judgment, would also have concluded that section 172.1(4) was unconstitutional.) The majority also declined to decide whether the mandatory minimum sentence was unconstitutional or not, preferring to have that issue argued at the retrial they ordered. Justice Karakatsanis, writing a concurring judgment, would have struck down the mandatory minimum.


Canadian Technology Law Association

1-189 Queen Street East

Toronto, ON M5A 1S2

Copyright © 2023 The Canadian Technology Law Association, All rights reserved.