Opinion | Signal and WhatsApp Are Among the Last Bastions of Digital Privacy

We might think of most of our day-to-day activities as private. Rarely is anyone deliberately eavesdropping on our conversations, spying on where we shop or following us on our commute. The government needs a search warrant or other court order to listen to our phone calls, to discover what books we checked out from the library or to read our mail.

But a tsunami of digital tracking technology has made a large portion of our lives public by default. Nearly everything we do online and on our phones, from our movements and our conversations to our reading, watching and shopping habits, is being watched by commercial entities whose data can often be used by governments.

One of the last bastions of privacy is encrypted messaging programs such as Signal and WhatsApp. These apps, which employ a technology called end-to-end encryption, are designed so that even the app makers themselves cannot view their users' messages. Texting on one of these apps, particularly if you use the disappearing messages feature, can be almost as private and ephemeral as most real-life conversations used to be.
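To see why even the app maker is locked out, consider a minimal sketch of public-key encryption written with the open-source PyNaCl library. This is an illustration only, not the Signal Protocol that Signal and WhatsApp actually use, and the names and message in it are invented for the example. The point is that the server relaying the scrambled text never holds the keys needed to read it.

```python
# A minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Illustration only: real messengers use the Signal Protocol, which adds
# forward secrecy by constantly rotating keys with a "double ratchet."
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only the public halves are shared (typically via the messaging service).
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts a message that only Bob's private key can open.
sending_box = Box(alice_private, bob_public)
ciphertext = sending_box.encrypt(b"meet at the library at noon")

# The ciphertext is all the service ever sees; without Bob's private key
# it is indistinguishable from random bytes.
receiving_box = Box(bob_private, alice_public)
print(receiving_box.decrypt(ciphertext))  # b'meet at the library at noon'
```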

However, governments are increasingly demanding that tech companies surveil encrypted messages in a new and dangerous way. For years, nations sought a master key to unlock encrypted content with a search warrant but largely gave up because they couldn't prove they could keep such a key safe from bad actors. Now they are seeking to force companies to monitor all their content, whether or not it is encrypted.

The campaign to institute mass suspicionless searches is global. In Britain, the Online Safety Bill, which is making its way through Parliament, demands that messaging services identify and remove child exploitation images, whether communicated publicly or privately by means of the service. In the United States, bills introduced in Congress would require online services to identify and remove such images. And in the European Union, a leaked memo has revealed that many member countries support weakening encryption as part of the fight against child exploitation.

This surge of regulatory efforts is part of a larger worldwide concern about the prevalence of child exploitation images online. Although substantiated cases of child sexual abuse have thankfully been on a steep decline in the United States, down 63 percent since 1990, according to the University of New Hampshire Crimes Against Children Research Center, the prevalence of sexual images of children circulating online has risen sharply, swamping the National Center for Missing and Exploited Children's CyberTipline with 32 million reports in 2022.

The deluge of online reports reflects how images can be duplicated and shared limitlessly online and how many more images are available, not just ones that adults take of children but also images that children and teenagers share with one another that are later shared publicly or commercialized, according to David Finkelhor, the director of the University of New Hampshire center.

The recent legislative proposals are focused on detecting these images as they circulate online. But once you are in the business of scanning content, you are in the surveillance business, and that is not what we want from the companies that hold our most intimate communications.

Apple learned this lesson the hard way two years ago when it proposed a technical scheme that it claimed would be able to identify known child exploitation images on users' devices without anyone actually looking at users' photos.

Apple's proposal would have downloaded onto every device a secret list of IDs corresponding to known exploitation images. It would then have used an algorithm to determine whether any photos on the device were similar to those on the list.
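The general idea can be sketched in a few lines of code. Apple's actual design relied on a proprietary perceptual hash, called NeuralHash, wrapped in cryptographic protections; the toy version below only illustrates on-device matching of image fingerprints against a known list, and the hash values, threshold and helper function are invented for the sketch.

```python
# Toy illustration of on-device matching against a list of known-image
# fingerprints. Apple's real design used a proprietary perceptual hash
# (NeuralHash) plus cryptographic blinding; the 64-bit values, threshold,
# and helper below are made up for this example.

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two 64-bit fingerprints differ."""
    return bin(a ^ b).count("1")

# Pretend these are fingerprints of known exploitation images,
# shipped to every device in obscured form.
known_fingerprints = {0x9F3A_1C44_0D2E_77B1, 0x1122_3344_5566_7788}

def matches_known_image(photo_fingerprint: int, threshold: int = 6) -> bool:
    """Flag a photo whose fingerprint is 'close enough' to a known one.

    A perceptual hash tolerates small edits (resizing, recompression),
    which is exactly why it can also misfire on unrelated photos.
    """
    return any(
        hamming_distance(photo_fingerprint, known) <= threshold
        for known in known_fingerprints
    )

# A lightly edited copy of a known image still matches...
print(matches_known_image(0x9F3A_1C44_0D2E_77B3))  # True
# ...but an innocent photo can, by chance, also land within the threshold.
```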

There were two major problems. First, there was the possibility that the program might falsely label innocent photos as illegal. Like all matching algorithms, Apple's system would have made educated guesses based on statistical probabilities, and those guesses could be wrong. In a survey of technical papers about scanning systems like the one Apple proposed, two Princeton researchers, Sarah Scheffler and Jonathan Mayer, found estimates ranging from 135 to 4.5 million false positives per day, assuming 7.5 billion messages sent worldwide each day. That's a lot of innocent messages that would have been forwarded to the police for investigation.
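Those absolute numbers follow from tiny per-message error rates multiplied by an enormous volume of traffic. A back-of-the-envelope calculation, using only the figures cited above, shows how even a seemingly negligible error rate becomes a flood of wrongly flagged messages.

```python
# Back-of-the-envelope math behind the false-positive range cited above.
messages_per_day = 7.5e9  # worldwide daily messages assumed in the survey

for false_flags_per_day in (135, 4.5e6):
    rate = false_flags_per_day / messages_per_day
    print(f"{false_flags_per_day:>12,.0f} false flags/day "
          f"= a per-message error rate of about {rate:.2e}")

# Even the high end, 4.5 million wrong flags a day, corresponds to an error
# rate of roughly 0.06 percent, an accuracy any vendor would happily
# advertise, yet still millions of innocent messages sent for review.
```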

The second, and greater, problem was that scanning for one type of content opens the door to scanning for other types of content. If Apple had a device-scanning system in place, India could demand scanning for illegal blasphemy, China could demand scanning for illegal anti-Communist content, and U.S. states that have outlawed abortion or gender-affirming care could scan to identify people seeking those services. In other words, it would likely become a free-for-all for every type of surveillance out there.

There is a long history of surveillance technology being deployed initially for a benign or well-meaning purpose and morphing into a more sinister use. Taylor Swift, in 2018, pioneered using facial recognition at concerts to scan for known stalkers, but within a few years, Madison Square Garden was using the technology to block lawyers it was in a dispute with from entering the arena.

Thousands of privacy and security experts protested Apple's plan to scan for images of abuse, signing an open letter saying it had the potential to bypass any end-to-end encryption that would otherwise safeguard the user's privacy. Under pressure, Apple backed down.

The new legislative proposals, which would make companies liable for everything on their networks even if they can't see it, will inevitably lead to enforcement efforts that aren't much different from Apple's doomed plan. And these new efforts may not even be constitutional. In the United States, a group of scholars wrote to the Senate last month to protest that forced scanning could violate the Fourth Amendment's prohibition on unreasonable searches and seizures, which precludes the government from having a private actor conduct a search it could not lawfully do itself.

The question is philosophical, not technical. Do we want to start allowing the government to require companies to conduct suspicionless, warrantless searches of our messages with family, friends and co-workers?

Opening the door to dragnet searches of people's phones for evidence of possible crime is closer to the work of intelligence agencies than of policing. And in the United States, we have largely restricted intelligence gathering to focus on foreigners and on national security issues such as terrorism. (And when intelligence gathering has gone too far, as in surveilling domestic Muslim communities or people's phone call records, lawmakers have condemned it and changed the relevant laws.)

Under current law, nothing is stopping the police from getting a search warrant to examine the devices of those whom they suspect of a crime. And despite the F.B.I.'s claims that encryption hurts its ability to catch criminals, the agency has had some spectacular successes overcoming encryption. Among them: using an Australian hacking firm in 2016 to unlock the encrypted iPhone of the San Bernardino mass murderer and obtaining data from Signal messages that led to the conviction of members of the Oath Keepers organization for their role in the Jan. 6 insurrection.

Search warrants have long been the line we have drawn against overly intrusive government surveillance. We need to hold that line and remind lawmakers: No warrant, no data.


