The self-driving fallacy

There’s an episode of the original Star Trek series in which Spock discovers that someone has tampered with the ship’s computers. He realizes this because he is able to beat the computer at chess several times in a row. This should be impossible, he reasons, since the computer is incapable of making mistakes. Therefore, someone must have tampered with it.

That episode was written in the late 1960s, at a time when actual computers had been playing chess for about a decade, and none of them were able to defeat a reasonably talented human. No computer chess program would beat a grandmaster for another twenty years or so. Obviously, “not making mistakes” was not a sufficient condition to make an unbeatable chess opponent. Yet the writers of the episode, as well as a decent chunk of the audience, seem to have accepted the claim that computers, unlike flawed humans, are somehow infallible.

There is a trivial and uninteresting way in which the claim that computers don’t make mistakes is true. Under most conditions, computers faithfully execute whatever instructions are programmed into them, and if some electrical or mechanical flaw causes them to do otherwise, we don’t claim the computer “made a mistake”; we claim that it malfunctioned. “Making a mistake” is something that can only be attributed to beings to which we ascribe free will. It seems that most of us believe in the ethical maxim that “ought implies can,” and therefore that mistakenly doing other than what one is supposed to do implies some sort of volition that machines lack.

So why didn’t the first computer chess program handily beat every human opponent? Because it didn’t make mistakes. It faithfully executed the (flawed) instructions given to it by its programmers, who were human. And those humans, with their finite brains and limited experience, were unable to devise a set of instructions that could handle every possible configuration of chess pieces, and fit in the tiny memory available to machines of the time, and not take billions of years to decide on the proper move. Even today, with much more advanced hardware and much more knowledge of both computers and chess, we can’t design an unbeatable machine. Nevertheless, the belief in machine infallibility persists, and continues to get us into trouble.
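The "billions of years" point is easy to make concrete with a back-of-the-envelope calculation. The figures below (roughly 35 legal moves per position, roughly 80 half-moves per game) are standard ballpark estimates, not from the original text, but any similar numbers make the same point:

```python
# Rough estimate of why brute-force chess search is intractable.
# ~35 legal moves per position and ~80 plies per game are the usual
# ballpark figures; the exact values don't change the conclusion.
BRANCHING = 35
PLIES = 80

positions = BRANCHING ** PLIES  # roughly 3.35e123 leaf positions

# Even granting a (wildly optimistic) billion evaluations per second:
EVALS_PER_SEC = 10 ** 9
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = positions / (EVALS_PER_SEC * SECONDS_PER_YEAR)  # on the order of 1e107 years
print(f"{positions:.2e} positions, ~{years:.2e} years to search them all")
```

The universe is around 1.4e10 years old, so exhaustive search misses feasibility by nearly a hundred orders of magnitude; the programmers had no choice but to encode fallible human judgment about which moves were worth considering.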

One recent example is the push to replace human drivers with self-driving cars. A common argument is that most traffic incidents are the result of human error, so the more we remove humans from the decision-making process, the safer driving will be. Since computers don’t make mistakes, computer-driven cars should be about as safe as can be.

The problem with this argument is that replacing human drivers with computers doesn’t remove humans from the decision process at all. Instead, it just takes the decision-making humans out of the driver’s seat and puts them in a laboratory or office, where they must design a system capable of dealing with every possible traffic situation one might encounter – even those that said humans could never have imagined. And in doing so, it replaces the human brain – an unbelievably advanced computer that has been honed over millions of years of evolution to handle exactly the sort of split-second, life-or-death decisions based on limited sensory information that one encounters when driving – with software, something humans have been designing for less than a century.

A much more sensible approach would be to keep humans in the driver’s seat, but give them machines to assist with the things they can’t easily do, such as see in front of them and behind them at the same time (mirrors, backup cameras), or figure out the most efficient route to a location given current traffic conditions the driver might not be aware of (GPS, cellular networks). But because of this belief that machines are somehow infallible, we think they can do better than humans at the very things humans do better than any machine ever has.

I believe that something like the self-driving fallacy is behind the recent craze for cryptocurrency and other blockchain-related solutions in search of problems. Perhaps the biggest defining feature of cryptocurrency isn’t that it’s decentralized, or distributed, or “trustless”; it’s that it’s completely automated. The early advocates of Bitcoin may have been bothered by the fact that fiat currencies relied on a central authority, but what really bothered them was that the central authority consisted of people – people who might decide to act against the best interest of the currency holders. And the insistence on a trustless system stemmed from the fact that no human-run system could be completely trusted, whereas one run by machines could, because machines, of course, are infallible. Never mind that those machines are running software built by a person or persons who hide under a pseudonym and about whom virtually nothing is known; they’re machines, so they’re trustworthy.

Take, for example, so-called “smart contracts”. These are an attempt at replacing lawyers, courts, and other human institutions with software. The idea is that, instead of hashing out the terms of a contract, agreeing to it, and possibly arguing one’s case to a human judge if one feels that the other party has breached the contract – a process which requires trust in the skill and good intentions of all the lawyers, judges, and other legal professionals involved – you simply write some code, which will be faithfully and infallibly executed by the computers on which the blockchain runs. Which would be great, except for the fact that smart contracts are neither smart nor contracts – they’re just stored procedures. They automatically update data on the blockchain (usually by transferring cryptocurrency from one party to another) in response to some other blockchain-related event (such as a person or persons sending a transaction to the contract).

If we verbally agree that I’ll pay you when some set of real-world conditions is met, the smart contract can’t check whether those conditions have, in fact, been met; it can only check whether I’ve agreed to release the payment. I can decide those conditions haven’t been met, refuse to pay, and the smart contract can’t force me to do so; nor is there any legal authority you can turn to that can compel me to pay. Likewise, if we agree to a smart contract that says if you loan me 2 Zorkmids today I’ll pay you 2.1 Zorkmids next month, that 2.1 Zorkmids will be deducted from me no matter what; if some extenuating circumstances crop up that neither of us could have anticipated, there’s no judge to whom I could appeal to argue that I shouldn’t have to pay just yet.1
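The “stored procedure” framing can be sketched in a few lines. This is a toy model in Python, not real smart-contract code; the names (`Ledger`, `loan_repayment_contract`, the Zorkmid balances) are all hypothetical, and the point is only that such a procedure can read and write on-chain balances and nothing else – it cannot check real-world conditions, grant extensions, or collect from an empty wallet:

```python
# Toy model of a "smart contract" as a stored procedure over a ledger.
# All names are hypothetical; real blockchains differ in every detail.

class Ledger:
    """On-chain state: the only thing the contract can see or change."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, frm, to, amount):
        # Most ledgers have no concept of a negative balance, so the
        # transfer simply fails if the payer can't cover the amount;
        # there is no court or collections agency to fall back on.
        if self.balances.get(frm, 0) < amount:
            return False
        self.balances[frm] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
        return True

def loan_repayment_contract(ledger, borrower, lender, amount):
    """Stored procedure: move `amount` from borrower to lender when invoked.

    Note what it CANNOT do: verify any real-world condition, hear an
    appeal about extenuating circumstances, or compel payment.
    """
    return ledger.transfer(borrower, lender, amount)

ledger = Ledger({"alice": 2.0, "bob": 5.0})
loan_repayment_contract(ledger, "bob", "alice", 2.1)    # succeeds: bob can cover it
ok = loan_repayment_contract(ledger, "alice", "bob", 10.0)
print(ok)  # False: alice can't cover it, and the "contract" just gives up
```

The failed second call is the footnoted problem in miniature: once a balance can’t cover the debt, the automated system has no further recourse, which is exactly where human institutions like courts come in.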

Blockchain apologists would no doubt argue that a smart contract could be written to empower a third party, or DAO, or whatever to transfer money from me to you if they determine I’m acting in bad faith. At this point, though, we’ve just recreated a rudimentary human-run legal system, which is exactly what smart contracts were supposed to replace, and it’s unclear why it needs to be on the blockchain at all.

Chesterton’s Fence

This brings us to a common problem that we see in the blockchain world, and even more generally with anything that claims to be revolutionary: Something comes along which is supposed to make everything that came before it obsolete. It promises to replace the confusing complexity of the past with something simple and obvious that solves all our problems. Then, little by little, the adopters of this new revolutionary system come to realize that all sorts of problems that the old system solved – maybe imperfectly – are not addressed at all under the new regime. So they face the task of having to reinvent solutions that work with the new system. Eventually this simple and revolutionary thing comes to look as complex and crusty as the thing it replaced.

This happens so often we even have a name for it: Chesterton’s Fence, from a passage by the stalwart defender of orthodoxy, G. K. Chesterton:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

The tech world is particularly susceptible to Chesterton’s Fence, enamored as it is with ever-newer and better solutions; contemporary politics, where even conservatives often frame their platforms as a “revolution,” is another field where this mistake runs rampant. The blockchain world, which sees itself as revolutionary in both a technological and a political sense, is doubly susceptible. This is why these early days of cryptocurrency are rife with scams, theft, wild price fluctuations, and all sorts of problems that, while not nonexistent in the fiat currency world, are certainly not as bad, because the institutions crypto fans are so keen to tear down have developed solutions for them over several centuries.

Chesterton’s Lamp-Post

Elsewhere Chesterton describes a similar situation to his fence parable, though with a bit of a twist:

Suppose that a great commotion arises in the street about something, let us say a lamp-post, which many influential persons desire to pull down. A grey-clad monk, who is the spirit of the Middle Ages, is approached upon the matter, and begins to say, in the arid manner of the Schoolmen, “Let us first of all consider, my brethren, the value of Light. If Light be in itself good–” At this point he is somewhat excusably knocked down. All the people make a rush for the lamp-post, the lamp-post is down in ten minutes, and they go about congratulating each other on their unmediaeval practicality. But as things go on they do not work out so easily. Some people have pulled the lamp-post down because they wanted the electric light; some because they wanted old iron; some because they wanted darkness, because their deeds were evil. Some thought it not enough of a lamp-post, some too much; some acted because they wanted to smash municipal machinery; some because they wanted to smash something. And there is war in the night, no man knowing whom he strikes. So, gradually and inevitably, to-day, to-morrow, or the next day, there comes back the conviction that the monk was right after all, and that all depends on what is the philosophy of Light. Only what we might have discussed under the gas-lamp, we now must discuss in the dark.

Here we see a similar situation: there is some existing institution which people wish to remove, only to encounter problems once they remove it. The difference is that, whereas in the fence situation, the reformers “don’t see the use of” the old institution, the anti-lamp-post brigade are a coalition of people united only by their desire to tear the lamp-post down; each has a different reason for wanting this. The ones who wanted darkness could certainly see the use of the lamp-post, and see it as a problem rather than a solution; once their revolution is carried out, they are likely to fall out with the ones who wanted the electric light. A skilled politician might even build a broad coalition by talking up the importance of electricity to one audience, iron to another, and darkness to a third, though such a coalition would never survive its own success.

We can see one such Chesterton’s Lamp-Post situation today in the disturbing trend among some self-described libertarians to side with, or even consider themselves a part of, the authoritarian alt-right. Modern exemplars of this tendency are the disciples of Lew Rockwell, most famously Stefan Molyneux, but the tendency can probably be traced back to Hans-Hermann Hoppe and his brand of “anarcho-capitalism”. Many (most?) libertarians would identify their cause with individual liberty and hostility to centralized control, and envision a society in which many different ways of life can co-exist in peace. These alt-right so-called libertarians, on the other hand, have no problem with centralized control or reduced freedom. They object only when current governments curtail freedom, because those governments are likely to ban things they want to do and require things they don’t want to do. Under an anarcho-capitalist regime, powerful corporations would take the place of governments and be able to constrain people’s freedoms just as easily as governments currently do. It’s not a hatred of central control that drives the alt-libertarians, but rather the belief that, were current governments abolished in favor of a market-driven free-for-all, the central control that eventually emerged would be run by people like them, or at least be far more sympathetic to their desires.

Cryptocurrency, in particular Bitcoin, is often associated with the libertarian movement, so it’s no surprise that there is tension between people who like the blockchain because they think it advances the cause of freedom and those who like the blockchain because they think it will topple the current order and allow them to set up a new one. In the specific case of Bitcoin, generally regarded as the first real cryptocurrency, we don’t know anything about its creator, the pseudonymous Satoshi Nakamoto, so we can say little about his/her/their motives. But David Golumbia has written an excellent investigation into the political environment in which cryptocurrency emerged and the views of many of its pioneers. It’s hardly surprising that a number of far-right authoritarians have placed their trust in this supposedly liberatory technology, to the point that far-right talking points about the dangers of central banks have become mainstream among Bitcoiners who may not even be aware of their origins. By no means do all or even most cryptocurrency enthusiasts actually hold these far-right views, but, like the lamp-post opponents who wanted the electric light, they are unwittingly making common cause with those who prefer darkness.

Which brings us back to DAOs, smart contracts, and the need for human arbitration. As the blockchain becomes more popular, the limitations of automated contracts will become more clear. There will no doubt be more and more proposals to bake third-party arbitration into such contracts, and we will probably see a few powerful DAOs emerge to fill the roles of courts and even legislative bodies. At this point, those who supported the blockchain because of its decentralized aspects will begin wondering if this is a “meet the new boss, same as the old boss” sort of situation. They may start to think that blockchain has failed to live up to its potential, perhaps even that it was inherently flawed, and start looking for another way to achieve their liberatory goals. On the other hand, there will be those who applaud the new situation as a way to do an end-run around current governments and replace them with power structures more to their liking. These will be the ones to watch out for.

Then again, there’s always the possibility that the limitations of blockchain will be its undoing. As people come to realize that lawyers, courts, legislatures, and police are, if not unalloyed goods, at least a proven solution to a lot of basic problems, they may find that their attempts to replace these things with blockchain-based alternatives just don’t work. The slow speed, limited processing power, and huge space requirements of most blockchains will probably prevent them from being used to implement such solutions on all but the smallest scale. Fixing these problems might require fundamental changes to the protocols, to the point that they become decentralized blockchains in name only, and most of the true believers leave them behind in search of the next big decentralized thing. This might be the preferred outcome, actually.

There are many problems with the current system of central governments and central banks, but, contrary to what some believe, they didn’t come into existence solely because of nefarious conspiracies. Even if some shady, self-interested parties had a hand in their creation, they wouldn’t have had the staying power they’ve had if they didn’t provide really good solutions to a lot of fundamental problems. So until you have a good understanding of why these things exist, and a much better idea of how to replicate the good things they do, it’s probably a good idea not to be too enthusiastic about anything that claims it will make them obsolete overnight. And you should never assume that those who want to overthrow the current order want to do so in order to make people more free.

  1. On the other hand, I have been unable to find a good explanation for how a smart contract would be enforced in the event that I wind up owing more than I’m currently worth. Most cryptocurrency ledgers don’t seem to have the concept of a negative balance, and even if they did, it’s hard to see how it would have real-world ramifications; I could in theory create a cryptowallet, rack up a million Griftcoin in debt, and just walk away.

Last modified on 2022-02-21