How Facebook, for example, profits from its users is clear: data collection and ad revenue. What is less clear is how pervasive the platform is. A broken SDK last week took down a number of websites whose operators didn’t even realize Facebook code was buried in modules they consume downstream, something consumers can neither be aware of nor consent to if not even developers can verify it exists. We use Messenger for customer support, Groups for ad-hoc organizing, and the platform itself as a reasonably well privacy-controlled means of socializing; it functions as a net positive if you don’t consider the actual culture of Online.
So the question becomes: how do you consume ethically without a clear path to resisting? The short answer is that you don’t. The longer answer is somewhat more repellent, but modern times call for modern solutions.
Of course, not explicitly using Facebook services is a possibility, but for many people, one or more of its components is a mainstay of online life. Whatever your reasons, even if you do opt out, there is always an obstacle posed by Facebook code that will eventually reach you, no matter how complete your divestment, even if you never come online at all.
We’ve created a culture of bullying the least vulnerable online. Ordinarily, cyberbullying would be unimaginable, but lately it has come to feel like the only means of protest available to a powerless consumer base. This sounds simple, and it truly is, but it serves an almost accelerationist ideal.
I will give Mr. Zuckerberg a compliment first: he was a brilliant opportunist, with the skills to pull off all that he has while remaining as involved in the quotidian operation of Facebook as he has. That’s what makes all of this so horrifying.
Zuckerberg (Zucc, as he likes to be called) started with a website and ended up with a platform of which Facebook is merely one application consuming the Facebook APIs. Other examples: Instagram (uses Facebook’s content delivery network), Messenger (its own discrete set of APIs), Groups, comment-section integrations, share buttons; the list goes on. This was always the intention: to make it centralized, but modular enough for expansion.
In 2020, this manifests in many different platform functions being used for surveillance in ways that are dystopian, not especially effective, and highly speculative. They have been unable to move on from PHP, to the extent that they built their own PHP virtual machine to handle their problems of scale in the core application, while modules for things like spam detection can be plugged in using whatever tool an engineer desires. This is all very impressive, but scary for a simple reason: it reinforces the notion that, spiritually, Facebook’s engineers (okay, Zucc, as the avatar for such a developer) are hackers, while using that same subversion to implement tracking that stays with you long after you’ve left the site. This goes well beyond cookies, and that activity traces back to your Facebook experience.
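To make “beyond cookies” concrete, here is a minimal sketch of how cookieless browser fingerprinting can work in principle. The attribute names and the hashing scheme are my own illustration, not Facebook’s actual method: the point is only that stable device traits can be folded into an identifier that survives clearing cookies.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Fold a set of browser attributes into a stable identifier.

    Illustrative only: real trackers combine many more signals
    (canvas rendering, installed fonts, audio stack); these fields
    are assumptions for the sketch.
    """
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two visits with cookies cleared still map to the same ID,
# because the underlying device attributes did not change.
visit = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080x24",
    "timezone": "America/New_York",
    "language": "en-US",
}
visitor_id = fingerprint(visit)
```

Nothing here requires storing state on the visitor’s machine, which is exactly why this class of tracking resists the usual clear-your-cookies hygiene.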
I’m not the first to write about this, so I will not go into it further, but this is mostly for context.
In 2015, Facebook publicized a new Haskell framework for automating spam detection. Great, right? Sure.
But this is what is known as a limited hangout, except without actually revealing anything nefarious to make the larger invasiveness seem less bad; we’ve already done that work for them. Phrases like “you are the product” normalize this to hell and back, and while the phrase is true, it does nothing to suggest that one can resist, especially when it reduces something genuinely creepy to a slogan.
Publishing an engineering blog at all paints a picture of transparency, and because the content is positive, it generates goodwill. But as we saw during the Cambridge Analytica debacle after the 2016 election and Brexit, and in uses of the platform broader than targeting misinformation at susceptible voters in the US and UK, its tools, even without automation, are exploitable to hugely negative effect, and from a policy standpoint, Zucc has not combated this with any of these technical programmes.
Anecdotally, an engineer I know once told me about a visit to an AI-driven team at Facebook that was focused on email integrations. He left so shaken by the experience that he immediately deleted his account, purged all his devices, and started fresh without Facebook. The paranoia induced by seeing how the sausage gets made, even the suggestion that we could be living in a still more panoptic hellscape, a voluntary one at that (we’ve been told), is usually a tipping point for taking privacy and personal opsec seriously.
I, of course, have no idea what he saw or why it scared him so much, but it’s something to consider: if they have the ability to use new tools to expand their functionality for good, then ostensibly those same tools can be used for malicious purposes as well. Even if you want to chalk all this up to conspiracy, it is still, at best, a user-hostile anti-pattern to suggest that invasiveness is the same as observability.
I say all of this to contextualize the powerlessness in something like this image:
Junk data plays a very important role in disrupting the flow of useful information to data-warehousing enterprises like Facebook. There’s no better example of unintentional protest in casual user parlance than the time a rumor circulated that Facebook was censoring this particular image.
Zuckerberg became the avatar for Facebook itself; he, of course, isn’t personally moderating content. Perpetuating this premise created an outlet for frustration: the suggestion that Zuckerberg, personally, is offended by, even “fears” (as one user put it), this image.
Even the iconic photo of Mark Zuckerberg drinking water during his Congressional hearing, the claim that he’s sitting on a booster seat, and the observation that his content circa 2009 is cringeworthy enough to warrant suspicion that he is, merely, a meme, have all become fair game:
On the surface, it may seem like I’m merely suggesting trolling a billionaire with commentary he’ll never see, or that, if he does see it, won’t compel him to change anything. But it’s a bit more nuanced than that: I’m suggesting that ripping on Mark Zuckerberg can be, and I’d argue probably already has been in a way we failed to manage with Bill Gates a generation ago, a vehicle for forcing public discourse about what makes this lack of consumer choice, and this material harm to society, so decisively problematic.
We’ve now seen two Democratic primaries derailed by narratives about social media usage based on nothing more than the suggestion that one bloc of voters was engaging in coordinated harassment. This is thoroughly uncorroborated, but it does demonstrate a willingness on the part of the media, and more importantly the public, to see social media as a vehicle for organizing, for good or for ill.
Imagine a future where billionaires posing as common-man geniuses are, rightfully, bullied for their artifice. That’s the energy being tapped into; it’s just directed at calling Zuckerberg a coward for being bad at image processing, rather than at his being the protégé of a eugenicist (Bill Gates) and a vampire (Peter Thiel, who is also behind Palantir, whose relationship to Facebook’s data operations should, honestly, be horrifying enough that I shouldn’t need to write any of this).
We’re seeing this play out in the de-canonization of Elon Musk: he is neither a good businessman nor an engineering marvel. He is simply a capitalist who convinced fans of his products(?) that the (deeply tenuous) ends justify the (demonstrably harmful) means. The revision is in effect, and frankly, if we’re beginning to revise the record on Bill Gates’ work in Africa, which toys with denying informed consent to the recipients of medical treatment, even if it is ostensibly intended in their interest (it is not), then I think we can come for something as comparatively first-world as the work of Mark “Zucc” Zuckerberg.
Consider the model of Rubberhose cryptography, brought into the popular imagination by, though not originating with, Julian Assange; the idea is to create enough random junk data that the actual data becomes deniable.
Suppose a larger audience began generating this junk data around Mark Zuckerberg, in a model where, rather than using junk data to obscure a real truth, we generate junk data to make factually true information seem more reasonable by comparison, providing a runway to the more outlandish, but just as well-reasoned, accounts of what is happening inside FAANG corporations.
Ultimately, this becomes a means of using exploitable tools against those who would exploit them first, by design. Poisoning the data seems like an effective enough response to those who would mine it for off-label reasons.
With this context in mind, even the aimless bullying has an effect; we kill the idea of killing one’s idols by having no idols to begin with. That, to me, is truly a precious sentiment for building a better future.
Extras
Recent things I’ve read, listened to, or watched that I am now recommending:
Cameron Carpenter - Rachmaninoff's Paganini Rhapsody for Organ & Orchestra