DT Context: Government security agencies enlisting 'experts' to promote their activities then getting upset when privacy advocates push back! Whatever next?

How Dare Signal Protect Its Users From Surveillance, Asks Ethicist Who Advises The FBI
Oh, man. This is just dumb as fuck. There’s no way around it. The New York Times seems extremely willing to suffer fools (especially its own!) Here’s yet another fool given prime internet/printed real estate to push bad ideas, worse arguments, and absurd conclusions.

This time it’s Reid Blackman, a self-described “ethicist” who focuses on AI and other tech issues. He’s also a government consultant:

His work, which includes advising and speaking to organizations including AWS, US Bank, the FBI, NASA, and the World Economic Forum, has been profiled by The Wall Street Journal, the BBC, and Forbes.

We’ll just pause at “FBI.” Perhaps it’s not the fault of its many advisors and consultants, but the FBI is the most backwards of federal agencies. It has spent years advocating against device encryption and end-to-end encryption, all while lying about the alleged “threat” posed by encryption. The FBI is full of shit. And people like Blackman aren’t making it any less shitty.

Back to Blackman, who appears to believe tech companies are ethically and morally obligated to make it easier for governments (even the bad ones!) to obtain information about customers and users. His target is Signal, which has refused to collect metadata on the users of its encrypted messaging service.

This decision has frustrated some US law enforcement agencies, which have demanded Signal turn over information it does not possess. That seems to bother Reid Blackman, who has inexplicably been given space in the New York Times to say a bunch of stupid stuff about Signal.

His editorial starts with complaints about Twitter co-founder Jack Dorsey expressing support for Signal and its willingness to allow users to avoid government surveillance and interference. Dorsey put his money where his well-bearded mouth is: he has pledged to give $1 million a year to the nonprofit running Signal.

Cue Blackman’s griping:

Mr. Dorsey is promoting one of the most potent and fashionable notions in Silicon Valley: that a technology free of corporate and government control is in the best interest of society.

Just let that soak in for a bit. We all may agree many tech companies are, at best, problematic. That comes with the millions/billions of users territory, though. You can’t make everyone happy. You can’t solve all moderation problems. And you can definitely abuse your access to demographic data to monetize the hell out of everyone who uses your services, even when such actions are decried and adamantly opposed by your users.

But the flipside of Blackman’s assertion is that society would be better off with the government directly regulating tech companies, even if this regulation would violate Constitutional rights. (Fun fact: Blackman’s op-ed never mentions the Constitutional rights of users or the private companies they choose to utilize!)

Signal is a bit evil, argues Blackman (but without the intestinal fortitude to use strong words like “evil”). Unlike other providers of encrypted messaging services (Apple, Facebook, WhatsApp), Signal doesn’t feel the need to provide the “state corporate surveillance” (in Signal’s own words) regime that currently allows government agencies to acquire metadata in lieu of encrypted communications.

Signal… refrains from collecting metadata about its users. The company doesn’t know the identity of users, which users are talking to each other or who is in a group message. It also allows users to set timers that automatically delete messages from the sender’s and receiver’s respective accounts.
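
To make concrete what "refrains from collecting metadata" means in practice, here's a minimal, hypothetical sketch (not Signal's actual implementation) contrasting what a conventional messaging server might log per message with what a data-minimizing server retains. The function and field names are illustrative assumptions.

```python
from dataclasses import dataclass
import time


@dataclass
class MessageEvent:
    """One encrypted message passing through a hypothetical relay server."""
    sender: str
    recipient: str
    ciphertext: bytes


def typical_metadata_record(event: MessageEvent) -> dict:
    # What a conventional provider might log for every message:
    # who talked to whom, when, and how much. Even without reading the
    # (encrypted) content, this is enough to reconstruct a social graph.
    return {
        "sender": event.sender,
        "recipient": event.recipient,
        "timestamp": time.time(),
        "size": len(event.ciphertext),
    }


def minimized_record(event: MessageEvent) -> dict:
    # A data-minimizing server uses routing information only transiently
    # to deliver the message, then persists nothing about the parties.
    # There is no stored record for a government to demand later.
    return {}


event = MessageEvent("alice", "bob", b"opaque-ciphertext")
print(typical_metadata_record(event))  # full social-graph breadcrumb
print(minimized_record(event))         # nothing to hand over
```

The point of the sketch: the difference isn't encryption strength, it's retention. A subpoena served on the second server can only return an empty set, which is exactly the design choice Blackman objects to.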

Rather than respecting the nonprofit’s decision to protect users, Blackman argues this somehow isn’t right (in the ethical or moral sense). And he does this despite admitting there are plenty of people who directly benefit from governments (not just our own!) being unable to obtain metadata they can use to identify targets and the people they communicate with.

This level of privacy can be beneficial on a number of fronts. For instance, Signal is used by journalists to communicate with confidential sources.

An acknowledgment of Signal’s value. But one followed immediately by Blackman’s claim that the negatives outweigh the positives.

But it is no coincidence that criminals have also used this government-evading technology. When the F.B.I. arrested several Oath Keepers for rioting at the Capitol on Jan. 6, 2021, one of its primary pieces of evidence was messages on Signal.

LOL. Come on, Reid. Your point is undercut by the same thing you’re using to support your claims. “One of its primary pieces of evidence was messages on Signal.” So, the FBI was able to obtain evidence to use against suspects despite Signal not collecting user metadata and offering E2EE.

This much is extremely obvious: encryption doesn’t stymie as many investigations as the FBI claims. And even without access to metadata, investigators are able to compile enough evidence from Signal users to move forward with prosecutions. So, Signal’s decision to refuse to collect metadata appears to have almost no effect on law enforcement.

Despite undermining the premise of his op-ed, Blackman continues.

The ethical universe, according to Signal, is simple: The privacy of individuals must be respected above all else, come what may. If terrorists or child abusers or other criminals use the app, or one like it, to coordinate activities or share child sexual abuse imagery behind impenetrable closed doors, that’s a shame — but privacy is all that matters.
One should always worry when a person or an organization places one value above all. The moral fabric of our world is complex. It’s nuanced. Sensitivity to moral nuance is difficult, but unwavering support of one principle to rule them all is morally dangerous.

His op-ed started with Signal and the premise that it needs to be directly regulated into compiling information governments (even the evil ones!) desire access to. Several paragraphs and disconnected assertions later, Blackman is now speaking about “or one like it” — a phrase that covers any service that isn’t immediately an open book to government agencies.

This slippery talk about slippery slopes is hilarious because Blackman goes on to claim (with zero evidence) that forcing tech companies to gather info for no other reason than that governments might want it won’t lead to mission creep, abuse, or the greasing of a slope that would compound surveillance abuses.

Blackman likes his broad brush. He doesn’t care for Signal’s brush, though. And he argues, with an apparently straight face, that just because the US hasn’t abused surveillance programs the way historical dictators have, Signal has no right to refuse to collect metadata on users.

To the company, surveillance covers everything from a server holding encrypted data that no one looks at to a law enforcement agent reading data after obtaining a warrant to East Germany randomly tapping citizens’ phones. One cannot think carefully about the value of privacy — including its relative importance to other values in particular contexts — with such a broad brush.

It’s not quite “Hey, at least we’re not Hitler.” But it’s only a few steps removed from that argument. “Not quite the Stasi” is not a persuasive argument, especially when Signal’s customers aren’t all located in the United States. Many of them are located in countries where domestic surveillance and targeting of government critics, journalists, and opposition leaders is so common it’s much more comparable to the Stasi. Blackman’s view of the issues at hand is not only dim, it’s blinkered.

If you think it can’t get any stupider, well… you just don’t know Blackman. Signal is asking other tech companies to collect less data on their users, even if it means they may be slightly less profitable. Blackman claims (again without evidence) Signal’s views on data collection may not reflect the views of its users. And this (alleged) disconnect means Signal is co-opting users to push privacy initiatives they may not fully support. Somehow, this desire to actively protect users is the equivalent of lying to users, according to Blackman.

Signal’s users may not be the product, but they are the witting or unwitting advocates of the moral views of the 40 or so people who operate Signal.
There’s something somewhat sneaky in all this (though I don’t think the owners of Signal intend to be sneaky). Usually advocates know that they’re advocates.

This isn’t like buying corn dogs and later finding out State Fair is donating money to legislators whose politics you don’t agree with. This is a company doing everything it can to ensure the security and privacy of its users, even if some may not be aware of the extent of the protection or the nonprofit’s desire to make this a messaging service standard everywhere. I think most people assume, however incorrectly (depending on the service), that messaging services will provide them with secure, private connections with other people. Finding out their provider does more than most to protect them isn’t going to make them think their provider was misleading them.

Somehow, this random assortment of phrases, clauses, and self-destructing assertions leads Blackman to arrive at this conclusion:

So I am not convinced we are really getting more freedom and “for the people by the people” by way of our technology overlords. Instead, we have a technologically driven shift of power to ideological individuals and organizations whose lack of appreciation for moral nuance and good governance puts us all at risk.

Great. You’re not convinced. So what. Go harass family members, bartenders, and Uber drivers with your “technology overlords” claptrap. But leave the rest of us out of it. Saying things like “moral nuance” and “good governance” is meaningless in the context of this op-ed. There is no middle ground. And that’s on you, Reid. You somehow think companies can provide less security and privacy to criminals while still protecting the thousands or millions of non-criminals utilizing these services.

You appear to believe the government is more trustworthy than private companies that also want to increase their power and influence. That you come down on the side of government power rather than Constitutional rights is absurd, especially when you present yourself as an “ethicist.” This op-ed looks at all the issues and decides the government can do a better job protecting people than private companies, especially when it’s allowed to demand access to metadata without even having to consult a judge. You’re arguing that self-contained, self-regulating government surveillance is far better for people than tech companies that actually prioritize security and privacy.

What a fucking joke.
