Rising Surveillance Powers – a Threat to Digital Privacy?

Australia’s surveillance powers are on the rise. Earlier this year, in the biggest crime sting in recent history, the Australian Federal Police (AFP) and international authorities arrested over 800 suspected members of organised crime networks across 18 countries. Global raids carried out by law enforcement also resulted in the seizure of $148 million in cash, along with tonnes of drugs, firearms, luxury cars, and cryptocurrency.

These arrests were made possible by Operation Ironside – an AFP-led operation established in 2018 with the aim of uncovering local and international drug smuggling rings. Operation Ironside has resulted in the prosecution of 224 offenders on 526 charges across the country: “3.7 tonnes of drugs, 104 weapons, $44,934,457 in cash, and assets expected to run into the millions of dollars, have been seized under Operation Ironside” (AFP, 2021).

Investigations were aided by an FBI-developed technology that gives authorities access to the encrypted communications of organised crime members: “Operation Ironside has now charged hundreds of alleged offenders. Seized millions of dollars in criminal proceeds. Removed weapons from our streets and saved lives. And will continue to. It is an ongoing operation,” says Prime Minister Scott Morrison.

In collaboration with the Federal Bureau of Investigation, more than 4,000 Australian police officers have been involved in Operation Ironside. By utilizing a secretly developed encrypted messaging app called AN0M, which was marketed as a secure communication platform for criminal networks, undercover agents were able to decrypt millions of messages and track the whereabouts of the platform’s 1,650 Australian members and 11,000 global members. According to AFP Commissioner Reece Kershaw, “All they talk about is drugs, violence, hits on each other, innocent people who are going to be murdered. A whole range of things.”

Motivated by the success of Operation Ironside, Scott Morrison unveiled plans to expand the nation’s surveillance laws, stating the operation “struck a heavy blow against organised crime – not just in this country, but one that will echo around the world.” He went on to say, “there is a series of pieces of legislation that we’ve been seeking to move through the Parliament. The AFP and our law enforcement agencies need the support of our Parliament to continue to keep Australians safe.”

Despite the operation’s exceptional outcomes, members of the cybersecurity community have raised concerns about the law expansions. In an episode of Think: Digital Futures, produced by 2SER host Julia Carr-Catzel, we hear from experts who believe the new laws were generated far too hastily, leaving individuals vulnerable to privacy violations.

One proposal met with concern was the Surveillance Legislation Amendment (Identify and Disrupt) Bill 2020, which has since passed. It expands the powers of the AFP and the Australian Criminal Intelligence Commission (ACIC) in fighting cyber-enabled organised crime by introducing three new warrants – a data disruption warrant, a network activity warrant, and an account takeover warrant. “Basically, it allows any police officer to go to a magistrate and take over somebody’s account, for the purposes of preventing crime, or catching a criminal,” explains Patrick Fair, a commercial lawyer who specialises in intellectual property, competition law, telecommunications, and privacy law. “These are three very significant powers. I’m not aware of anything quite so aggressive in the law of other countries.”

Having passed with supposedly limited oversight, what potential consequences could the public be facing? Under the data disruption warrant, “The AFP and ACIC will be permitted to covertly access computers to disrupt data and while doing so, if necessary, add, copy, delete or alter that data in order to frustrate the commission of relevant offences.” This has sparked concerns among cybersecurity insiders, who believe the warrant may threaten the safety of private information belonging to innocent third parties. For example, it could allow any police officer to seize a person’s Gmail account and gain access to family and health information, which would otherwise be restricted. “The logic of the provision seems to be that they’re going after somebody’s laptop, or some small system. But nowadays, most people have accounts which are held by large cloud operators,” says Patrick. “It’s just not a trivial thing to have the police knock on your door and say, ‘we want to interfere with the way that particular account’s working, and we’d like you to do it for us now.’ It needs to take more care about how it’s going to get cooperation and access, and then strictly limiting the harm that’s done by trying to enforce this law.”

In a memorandum presented to Parliament, it is revealed that data disruption warrants can only be issued if there are “reasonable grounds for suspicion,” and only for suspected offences that carry a maximum sentence of at least three years. However, members of the community say that’s far below the sentences attached to serious crimes such as terrorism, human trafficking and child abuse – the crimes the bill is intended to target. “The most requests that were being put through, if I recall, were around drug offenses, and a drug offense doesn’t have to be that serious to fall into that category of potentially three years in prison,” says Eric Pinkerton, a cybersecurity consultant for Trustwave.

According to Eric, police metadata requests published online expose a number of loopholes in the system: “If you were convicted of a drug offense, you can get a fine of $10,000, or five years in prison…there’s a difference between being caught with a small amount of marijuana, to being caught with a whole shipment of cocaine,” he explains. “So are the police running around busting relatively small-time criminals? That’s the kind of thing you have to consider when you start looking at legislation like this. How will it be used? How will it be interpreted? And if it is misused, will it be picked up on?”

The Identify and Disrupt Bill enables the seizure of accounts on platforms like Facebook – the world’s largest social networking site. With 2.89 billion monthly active users, the platform creates, stores, and shares vast amounts of sensitive information. Unlike encrypted data, which is harder for authorities to intercept, personal conversations on Facebook Messenger went unprotected for years, until the recent roll-out of end-to-end encryption. According to Ruth Kricheli, Director of Product Management at Facebook Messenger, “end-to-end encryption is already widely used by apps like WhatsApp to keep personal conversations safe from hackers and criminals. It’s becoming the industry standard and works like a lock and key, where just you and the people in the chat or call have access to the conversation. Content is protected from the moment it leaves your device to the moment it reaches the receiver’s device. This means that nobody else, including Facebook, can see or listen to what’s sent or said.”
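The “lock and key” idea rests on the two parties deriving a shared secret that never travels across the network. A minimal sketch of that principle, using a toy Diffie-Hellman key exchange with deliberately tiny, insecure numbers (real protocols like the one underpinning WhatsApp use far larger parameters and additional safeguards):

```python
import secrets

# Public parameters: anyone, including an eavesdropper, may know these.
P = 4294967291          # toy-sized prime modulus (real systems use 2048+ bits)
G = 2                   # generator

# Each party picks a private key known only to their own device...
alice_private = secrets.randbelow(P - 2) + 1
bob_private = secrets.randbelow(P - 2) + 1

# ...and publishes a public key derived from it.
alice_public = pow(G, alice_private, P)
bob_public = pow(G, bob_private, P)

# Each side combines its own private key with the other's public key.
# Both computations yield G^(a*b) mod P -- the same number.
alice_shared = pow(bob_public, alice_private, P)
bob_shared = pow(alice_public, bob_private, P)

assert alice_shared == bob_shared  # identical secret, never transmitted
```

The shared secret then seeds a symmetric cipher for the actual messages. An intermediary who sees only the public values cannot feasibly recover the secret, which is why a platform relaying end-to-end encrypted traffic cannot read it.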

Despite the apparent benefits of this implementation for personal data security, law enforcement agencies believe the challenges posed by end-to-end encryption will negatively impact criminal investigations. That’s why they’re pushing for built-in backdoor access to popular platforms. However, due to fears surrounding privacy and consumer trust, this demand is being met with strong opposition from tech companies: “There is a contract between tech companies and users, in terms of what information should be disclosed, and what information should not be disclosed,” explains Doctor Priyadarsi Nanda, a Senior Lecturer at UTS with over 27 years of cybersecurity experience. “If backdoor entrance is given to authorities, there is every chance that law enforcement will get into every detail, finding out the privacy information. That is a breach of contract.”

In addition to privacy threats, commercial lawyer Patrick Fair argues that backdoor access could result in consumers turning elsewhere, leaving companies financially hurt in the process: “It’s a real concern,” he says. “And one of the companies that lobbied heavily to moderate the legislation was a local company that builds encryption tools. They said to the Commonwealth, ‘if you pass a law like this, where at any time, without a judicial order, you can ask us to build access to our software, so you can see what it does, how it does it, or get reports and our clients won’t know, then how do we do business?’ Nobody will want to build anything in Australia, knowing there’s the potential that the software has a backdoor built into it for Australian law enforcement.”

The encryption versus privacy dispute between law enforcement and tech companies is nothing new. In fact, it’s been going on for decades. A prime example is Apple’s famous stand-off with the FBI in 2016, when the company refused to break into the iPhone of a terrorist responsible for the mass shooting in San Bernardino, California. In an attempt to uncover information that might aid the investigation, authorities wanted the company to “make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation.” In response, Apple argued that the request would result in a “master key, capable of opening hundreds of millions of locks” (CNBC, 2016). According to Eric Pinkerton from Trustwave, the FBI was suggesting that “the phone had information about a potential third shooter. They were rattling the cage for Apple to create a backdoor. It was quite a unique request for Apple to undermine their own security, and Apple refused.” Despite the FBI’s stated reasoning, Eric has his own theories. In fact, he speculates that it wasn’t about the content of the phone at all. Instead, he believes the FBI saw the case as an opportunity to argue that authorities should be granted backdoor access to platforms and devices, and “set this precedent that they could force tech companies to give them access to things they wanted.”

In truth, Eric is not the only one with doubts about the FBI’s true intentions. Apple CEO Tim Cook says the case was “very rigged” from the start. He refers to an Inspector General Report, published by the Justice Department in 2018, which states that the FBI units responsible for exploiting devices were operating under insufficient advisement, and that “there were misunderstandings and incorrect assumptions” between agents working on the case. According to Tim, “I have personally never seen the government move against a company like it did here in a very dishonest manner.” Reflecting on the events of 2016, Tim continues to stand firm in his belief that consumer privacy is paramount, and opposes the creation of regulations that could allow authorities to defeat encryption: “The protection of people’s data is incredibly important, and we know that doing this could expose people to incredible vulnerabilities.”

But how does data encryption protect people online? According to Doctor Priyadarsi, it “hides the actual information in a way that a third party cannot understand,” by turning “plaintext” into unintelligible code. This makes personal information less vulnerable to exploitation. The encryption process works via public and private keys, with each key consisting of a string of numbers or text: “The public key is known to anybody. The private key is only known to the concerned party, who wants to participate in the communication,” says Doctor Priyadarsi. “Say for example, there are two parties in a classical cryptosystem called Bob and Alice. To send information to Bob, Alice will use the public key to imprint the message, and Bob will use the private key to decrypt the message,” he explains. “The key is the secret. Essentially, encryption is a process to which you’ll be using that key to decrypt, get, or recover the plaintext.” The harsh reality, though, is that giving authorities access to this “secret key” could pose significant security risks, not least that criminals could find a way to obtain it: “I think that’s what most tech companies are frightened of,” says Eric from Trustwave. “That we’re going to introduce a backdoor with good intention, and it’s going to be a honeypot for attackers. It’s going to get compromised. And it’s very difficult to roll back from that.”
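The Bob-and-Alice scheme Doctor Priyadarsi describes can be sketched with a toy RSA keypair. The primes below are the tiny textbook values (61 and 53) and the result is trivially breakable; it only illustrates why anyone can encrypt with the public key while only the private key holder can decrypt:

```python
# Toy RSA keypair -- illustration only, far too small to be secure.
p, q = 61, 53
n = p * q                 # 3233: the public modulus, part of both keys
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent (coprime with phi)
d = pow(e, -1, phi)       # private exponent: modular inverse of e (2753)

def encrypt(m: int) -> int:
    return pow(m, e, n)   # anyone who knows (e, n) can encrypt

def decrypt(c: int) -> int:
    return pow(c, d, n)   # only the holder of d can decrypt

plaintext = 65
ciphertext = encrypt(plaintext)   # 2790
assert decrypt(ciphertext) == plaintext
```

The entire security of the scheme hangs on `d` staying secret, which is the point Eric makes: a mandated copy of that key for authorities is one more place from which it can leak.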

However, criminal security breaches are not the only risk facing scrutiny. Members of the community fear law enforcement misconduct as well, and the concern is not unfounded. In 2013, Queensland Police misused their metadata access rights to spy on fellow police officers, seeking to uncover who was lying about a sick day and who was sleeping with whom. Commercial lawyer Patrick Fair attributes this to a lack of judicial oversight: “We expect our police to be competent. We also expect our laws to be properly reviewed and passed, so that they take account of practical implementations, and fairness to people who are being subject to them. Judicial oversight on these things is a good idea because it means police have to make a case, document it, have a third-party test it to make sure it’s okay. That process gives the public confidence that powers are being used appropriately. But more broadly, we should have a better idea of how these powers are supervised, and how they’re used.”

Looking at current metadata laws, the threshold under which Australian authorities can obtain access to information on our devices is quite low. This is a point of concern for Patrick: “For metadata on your mobile phone, they just need to write a notice and say, ‘getting the data is relevant to an activity I’m doing as law enforcement’…provided I document that, all three years of where you were, who you spoke to, how long you spoke to them, is all available. And if you put that through an analytics engine, you get a very factual picture of somebody’s life in a way which is far more invasive and privacy-intrusive than actually intercepting content.”

Members of the public can follow annual reports on how many surveillance warrants are issued in Australia. The 2019–2020 AFP report, for example, reveals that over 3,600 interception warrants were issued, leading to over 2,600 arrests. When it comes to implementing new surveillance laws, Patrick Fair believes it’s imperative for society to be well-informed, and to be given an opportunity to have their say: “The Identify and Disrupt Bill just came from nowhere,” declares Patrick. “Often with this kind of legislation, there’s a discussion paper that comes out from perhaps an Attorney-General, or Home Affairs, saying, ‘we want these new powers because we found that it could have done something to stop or disrupt a crime.’ Then, the industry gets to have some input before legislation appears. This legislation just appeared. Part of the shock is that the powers are so invasive, and there hasn’t been an opportunity to have input to what the law might say, before going to parliament to be passed.”

Despite much public apprehension, the Identify and Disrupt Bill did come to pass on the 25th of August, 2021 – as predicted by Patrick Fair: “Sadly, I suspect the public will be on the Prime Minister’s side.” While Australia continues its rapid transition into becoming a surveillance state, cybersecurity experts remain steadfast in their belief that “the government’s approach to technological surveillance is leading us down a dark path” (ACS, 2021). This concern is felt not only on a national scale, but on a global one: “In the world where everything is totally open, people begin to guard what it is they will say,” says Apple CEO, Tim Cook. “Think about where society goes if we’re afraid to tell each other our opinions – if we’re afraid that somebody’s listening, or watching, or monitoring, or we’re under surveillance. This is a bad thing, in a very broad way. Not to mention the manipulation that can go on with pitting different groups against each other.”

With many Australians still unsure about what will come of these new surveillance laws, Patrick Fair believes that much of the ambiguity surrounding them could have been avoided with better communication and more information: “I think all of us should be more familiar with those mechanisms, and there should be more opportunity for politicians and third-party representatives to ask questions, and get information about how this stuff is used, just so we know that we’re not turning into a surveillance state, and that the extent to which the government is getting information about personal lives and activities, is for a proper purpose.”

Think: Digital Futures is made possible with the support of 2SER radio, the University of Technology Sydney, and is heard around Australia on the Community Radio Network.

For more information:

Listen to the full interview here

Subscribe to Think: Digital Futures on your preferred podcast platform:

Spotify | Whooshkaa | Apple Podcasts

Thursday 7th of October, 2021