It all started with a blog post with the alarming title “The Encryption Debate Is Over - Dead At The Hands Of Facebook”, where the author stated that Facebook’s efforts to “move a global mass surveillance infrastructure directly onto users’ devices” would allow it to circumvent the protections of end-to-end encryption. Scary stuff.

End-to-end encryption received a big boost after Snowden revealed to the world the extent of the NSA's and GCHQ's surveillance, which left tech companies scrambling to add end-to-end encryption to their messaging apps. We thought end-to-end encryption would keep our messages safe from prying eyes. Perversely, we even felt encouraged back in 2015 by David Cameron's and Barack Obama's calls to outlaw cryptography without government-accessible backdoors: if governments wanted backdoors, the encryption had to be working. When WhatsApp was linked to the spread of fake news and mob violence in India in 2018, and Facebook was unable to halt it because of end-to-end encryption, we felt outraged by the killings but also saw the news as proof that our messages were safe. And now Facebook wants to destroy the sanctuary it created…

Of course, a sceptic might say that Facebook only added end-to-end encryption to WhatsApp because it was forced to mitigate the PR fallout from Snowden's disclosures about the NSA's PRISM program. The allegation that your company is a willing partner in a secret, large-scale government surveillance project does not engender trust among customers. If the sceptic is right, then we should not be surprised that Facebook today seeks to undermine the encryption it created yesterday, because our private messages are now one of the few data sources Facebook is not able to exploit to sell ads.

But I digress. So far our only source for this alarming news is a solitary blog post. After reading the pages the post links to, I found that the predicted "Death of Encryption" is based on a single talk at Facebook's F8 developer conference about moving AI from central servers to edge devices, i.e., your phone. That sounds much less scary. But is it?

Deploying AI on mobile phones (make no mistake, this is a difficult engineering challenge) can make apps more responsive, less reliant on a good phone signal, and less bandwidth-hungry. But it also gives Big Brother one more window into our lives. An AI system deployed inside the app on the phone could indeed bypass the end-to-end encryption of messages, since it would have access to the unencrypted content. We can only speculate about what Facebook would do next. Displaying contextual ads would be the most benign option, but the scenarios range all the way to applying content moderation to our private conversations.
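To see why on-device AI sits outside the protection of end-to-end encryption, consider a minimal sketch of a message pipeline. Everything here is hypothetical: the function names, the toy XOR "cipher", and the keyword-based "model" are illustrative stand-ins, not any real messaging app's internals.

```python
# Hypothetical sketch: end-to-end encryption protects the message in transit,
# but an on-device model runs AFTER decryption, so it sees the plaintext.

def decrypt(ciphertext: bytes, key: bytes) -> str:
    # Stand-in for the app's real E2E decryption (here a toy XOR cipher).
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext)).decode()

def scan_on_device(plaintext: str) -> dict:
    # Stand-in for an on-device AI model. The network never saw the
    # plaintext, yet this local code has full access to it.
    return {"text": plaintext, "flagged": "forbidden" in plaintext.lower()}

def receive(ciphertext: bytes, key: bytes) -> dict:
    plaintext = decrypt(ciphertext, key)  # E2E protection ends at this line
    return scan_on_device(plaintext)      # analysis happens post-decryption


# Demo: encrypt a message with the same toy cipher, then receive it.
key = b"k"
ciphertext = bytes(c ^ key[i % len(key)] for i, c in enumerate(b"hello"))
print(receive(ciphertext, key))
```

The point of the sketch is architectural, not cryptographic: no amount of transport encryption constrains code that runs on the endpoint itself, which is exactly where an on-device model lives.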

We have accepted that our public posts are subject to "Community Standards" (a curious name, given that they are set unilaterally by the company), but our private messages? And this is not even science fiction any more: in China, WeChat is already censoring users' private conversations in real time. WeChat's AI system is powerful enough to scan both text and images for content the Chinese government deems undesirable.

Regardless of the scariness factor, this news shows how little control we have over the tools we depend on in everyday life. If Facebook decides to deploy content moderation on the edge and starts to enforce its rules on permissible content in our private messages, what are we to do? Our friends are still on WhatsApp; our social network still uses Facebook. There are no third-party WhatsApp clients we could switch to. But this worry about the future does not feel scary, merely subliminally disquieting. Maybe it should.