Command and Control

We do not understand the human brain, yet our lives worship the artifice of an artificial brain imitating a malicious interpretation of human nature.

Iconically, the opening scene of WarGames, starring Matthew Broderick, depicts a missile control center hidden in an underground bunker beneath a farmhouse in the dead of winter, at the height of the Cold War. The order comes through to launch, and as the second man is about to turn his key, he is struck by the immediacy of the consequences of what he is about to do: usher in nuclear annihilation. His concern is that, yes, his orders are clear, but before he commits to being the harbinger of a post-nuclear age, he is conflicted about doing so without confirmation. This was, however, a drill; the rest of the movie is what follows. The WOPR computer is installed at NORAD to replace the human element, but the computer, as the rest of the film shows, sees humanity the way a computer ingesting variables and processing them would: as the output of a computer system, data points in a game.

The film is a metaphor for the deeply personal interactions we project onto machines. When we entrust those machines to replace the functions of humans, believing human fallibility to be a failing all of the time, rather than recognizing that automation unencumbered by human concerns is unreliable even some of the time (to say the least), we are staging a policy failure that will have tangible consequences for those on the receiving end of said policy, be it the means-testing of a welfare program or the decision of whether or not to launch a weapon. The computer's catchphrase at the end of the film sums up the meta-narrative of going all-in on AI where human consequence is a constant: the only winning move is not to play.

This is based on the real conditions of such programs during the Cold War, fictionalizing a genuine concern of its military leadership over an extremely irrational, overblown, and practically non-existent threat: what if someone hesitated, and we didn't win?

What this film from the 1980s gets right, which most films of most genres in the decades before or since do not, is that, for the United States, military conflict isn't a matter of world peace or ideology (in and of itself), but a lifestyle of imperialism and global hegemony directed at nothing in particular. If this weren't the case, the inconsistent standards for when and why involvement is appropriate wouldn't so closely mirror the rhetoric used to justify said involvement (i.e. "if the president does it, then it isn't illegal"; sound familiar when you consider why it's tolerable for the US to interfere in the democratic processes of, for example, Venezuela, but not when US elections are disrupted? A commitment to democracy here can, absolutely, be questioned). Even (otherwise really enjoyable) films like In the Army Now (1994), a comedy, as a comparatively extreme example, depict being a reservist not only as a means to personal material security, as an enticement to join, but as a fun way to serve your country (in a conflict we had no business being in), with no thought to the consequences for the places you'd potentially be deployed to (should the need arise) or why you'd be going there at all.

We've normalized the idea that even our processes are self-justifyingly intelligent, to the point that execution no longer matters; if the US is there, surely there is a reason, and thus volunteering to serve in any capacity is cool and good, actually. In the decade between these films, we devolve from the view that trusting technology with national security is bad because it removes human judgement from the equation in light of an "apparent" threat, to the view that human judgement is unnecessary because the original request is premised on the cause being inherently good and necessary.

In the present, we find ourselves holding some combination of both of these ideas simultaneously. Palantir, for example, among other companies, wishes to assist in COVID-19 contact tracing: a gift from the government that finally fulfills Silicon Valley's sincere desire to collect as much data on as many people as possible under the pretext of a government sanction, for purposes that will, definitely, not remain limited to public health beyond this point.

We see this play out in the media as well: the desire to think as machines do, with perfect objectivity, which we mistake for rationality and formal logic. The problem is that, as with the design of algorithms, the training of artificial intelligences, etc., what objective reason looks like can be highly biased, if not outright bigoted, toward one belief system or another. In conservative circles of online commentators, there's a tendency to believe that logic is the objective standard, and therefore that editorialization is not only appropriate but morally correct because of its perceived correctness. Corporate, or "liberal," media uses a formula based on the idea that objectivity without editorializing, a consensus (read: parent-company-influenced) framing of the facts, or a narrative, can be morally correct without (being seen as) taking a position, even if the truth value of these facts (or how they are presented) tells an incomplete truth, or a manufactured one. Left commentary, by contrast, tends to report critically, but not unemotionally, in its necessary editorialization, perhaps in deference to the irrationality of humanity and an understanding that motive is part of the narrative as much as the events themselves. You see this play out with, respectively, people like Ben Shapiro, outlets like CNN, and, in the latter category, commentators like Chris Hedges. They all report some version of (what they present as) the truth, and maybe even believe it, but there's something to be said for a humanistic, emotional lens on the correctness of an action when reporting it. Reporting objectively, as if two things of dramatic moral differential were the same, does no one a service as news.

The media is simply one example of where this formulaic thinking clouds effectiveness, and only seems to peak in efficacy when used to mislead. There is a range of behaviors that can all process the same information very differently and come to different outcomes, depending on what measure of human nature a machine seeks to emulate. One could argue endlessly over which most closely resembles the human response, but rarely will the computed response be anything other than imprecise and indecisive, even while detecting deviances in human interactions; sociopathy is one thing that comes to mind that could, in essence, fake out such a computational model, and on the order of a military operation, that is far more likely than you'd think.

What we're talking about here is that we are, potentially, seeking validation from these machines: if something supposedly objective and unbiased can be bigoted or behave immorally, then it must not really be bigoted or immoral, when the reality is that this merely excuses the immorality of the behavior. Consider command and control systems and procedures, which remove human judgement from an act that, no matter how it is rationalized, results in mass destruction and human suffering, not to mention lasting harm to the ecosystem for generations: the very definition of an immoral action, and something a human would hesitate before doing, if not for someone (or, now, some thing) letting them off the hook.

We want a machine to behave, objectively, the way we wish we could behave but our consciences do not allow; then we wonder why these machines haven't improved humanity's quality of life, just increased the velocity with which we experience our lives. The things impacting quality of life remain the same: poverty, bigotry, exclusionary political leadership, etc. Technocracy would merely entrench these things further, but without the sanction of "evil" or "immoral" or "incompetent" leadership. On the interpersonal scale, look no further than social media moderation's attempts at employing AI, which only reinforce the biases of the users training it, validated by the positions of the leadership, demonstrably amplifying the problem. Imagine this at scale; drone strikes are one such example. From behind a screen, the targets are dehumanized, which makes the sense of duty easier to carry out, but also makes it easier to inflict the sort of collateral harm that all casualties of combat are.

To evaluate where all of this could go wrong, let’s go back to the original premise: the fictional WOPR computer.

In the film, during a simulation, the machine detects a fictional launch and brings NORAD, in reality, to alert, preparing to launch a sequence of missiles at key targets across the USSR. In the mind of the public, and indeed the mind of the computer, this follows logically; however, the reality of that period in world history is much less complex, yet infinitely harder to navigate algorithmically as transactions in this manner. Consider the Cuban Missile Crisis: it is painted as a situation where mutual destruction was all but assured, but the facts are very much in dispute with this common misinterpretation of the narrative, which, true to conservative form, ignores one complicating variable in this rational, logical flow of events: Khrushchev tried to stand down first. And ultimately, what was the real offense? Ideologically-based pretexts for coming to alert and forcing this false choice. Motives are not superfluous in the strategic calculus.

The problem with viewing world history as broken down by nation-state or ideological parity is that it ignores the globalized nature of every interaction every world leader takes. The Soviet Union itself was founded following a legitimacy crisis that had everything to do with the Empire sustaining itself on credit from other monarchs; the French and English revolutions both had causes partly rooted in geopolitical tensions with each other and in their respective colonies; consider every Latin American country in which the US has overseen regime change, or Iran, Syria, Libya; the list goes on. This is neither new nor limited to instances of colonialism or imperial adventurism, but it is one very deliberate example of the foreign seeming irrelevant while also being the most direct, impactful, and distilled exemplification of the very worst impulses of a society, up to and including things not so far in our past as Americans: slavery, and its perpetuation in the Black Codes, Jim Crow laws, and today's modern carceral system, to say nothing of the treatment of indigenous groups. Every society has these types of original sins, but the reality is that they rarely occur in a vacuum for that group of people, and they have far-reaching implications. In the modern age, can you program a computer to behave any better? No, I don't believe you can, not without first interrogating and dismantling this behavior in society, an arduous, ongoing process that I don't believe humanity can translate into the One Perfectly Critical algorithm.

It's not that these problems are not solvable, or that they're simply too hard, or that the human brain is infinitely complex; it's that, in our arrogance, we acknowledge not knowing how brains even work, or why, while also presuming to be qualified to invent a more perfect (read: rational, detached, unemotional) brain for machines to act as our avatar. It's no wonder that much of the application of AI isn't really all that intelligent, just a proxy for the things we believe humans aren't capable of being decisive about. In our present, it has taken forms beyond military automation: it dictates our discourse, powers our surveillance state, and is used to enhance state violence, but it also provides low-level convenience in the form of virtual assistants. Nothing here even remotely approaches the complexity of a brain, but it performs the menial labor of cognition well enough to sell us on the idea that, with sufficient application, more impressive things are possible, and that there is a necessary evil in developing these tools for purposes one might normally object to.

In the sitcom Friends, the character Joey Tribbiani, an actor, accepts a role on a buddy-cop show called Mac and C.H.E.E.S.E., where C.H.E.E.S.E. is a robot partner. The problem is that Joey, upon first arriving on set, is disappointed to learn it's not an AI but a remote-controlled machine run by an operator, on whom he leaves a bad impression. Because of this, the operator begins sabotaging Joey on set, and ultimately a choice has to be made between the robot and the star of the show, with the former at an advantage. This is, in my estimation, as good a metaphor as any for the actual state of our AI capabilities' prospects as a superior being.

Recent things I’ve read, listened to, or watched that I am now recommending:

Algorithms of Oppression - Safiya Umoja Noble

ALAB Podcast, Episode 1 - Old Dirty Bastards

Tech Bullshit Explained