Trust

Recently I’ve become more interested in virtue ethics, and, by extension, virtue epistemology. (I still maintain that epistemology is just ethics applied to the realm of belief, but that’s a post for another day. Maybe.) The distinguishing feature of virtue ethics is that it focuses less on what you should do and more on who you should be. And one of my favorite quotes on virtue ethics comes not from Aristotle or Anscombe, but from Robert M. Pirsig’s Zen and the Art of Motorcycle Maintenance: “If you want to paint a perfect picture, just make yourself a perfect painter, then paint naturally.” Leaving aside whether “perfection” is attainable, or even something we should be striving for, I think there’s a lot to be said for this way of looking at things. If you just want to paint a perfect picture, you might look for a set of rules to follow. But no set of rules – at least, no set a human could memorize and follow – could possibly encapsulate every picture you might want to create. However, developing general skills that apply across domains, such as noticing details or fine muscle control, will make you better at anything that uses those skills. Plus, for all but the simplest human endeavors, experts likely don’t even know how they do what they do beyond a superficial level. Rather, they learned a few basics, put in a lot of practice, listened to criticism, and never stopped seeking and following advice from those more accomplished than them. Eventually all of this gels into a combination of instinct and muscle memory that allows them to excel at their chosen endeavor in ways nobody can adequately explain.

No set of rules can do that. And both deontological and utilitarian ethics are, at bottom, sets of rules for deciding what to do. They’re almost like ethical systems designed for machines rather than humans, with all the limitations that implies. Until we reach the always-twenty-years-away goal of artificial general intelligence, no human-made autonomous system is going to be able to develop anything like an Aristotelian virtue, which is why I will never let one drive my car or run my economy.

For Aristotle (and many contemporary virtue ethicists), each virtue is accompanied by two vices: one of excess and one of deficiency. The key to being a virtuous person is to find the mean between the two. Personally, I dislike the term “mean” in this context because, to modern ears, it implies an arithmetic mean, suggesting both that virtue is something quantifiable and that the perfect amount of virtue sits at the exact midpoint between two extremes. I don’t believe Aristotle intended this, and at any rate it’s not what I believe. Rather, I see virtue more like a path of solid ground through a treacherous swamp: going off the edge on either side will land you in danger. And the particular virtue I am most interested in at the moment is trust.

Since at least the 1960s, our society has placed a lot of emphasis on the vice of excess trust – call it gullibility. People were cautioned to “Question Authority,” and eventually even to “Question Everything.” There were good reasons for this: evidence had come to light, time after time, of the United States government lying to us. This was also a time when other institutions, such as churches and schools, were seeing trust erode as people began to realize how much those institutions had misled them. In this environment, anyone who was overly trusting came to be seen as foolish and hopelessly naïve. The sentiment worked its way into popular culture, to the point that the token “dummy” character in every sitcom is usually seen falling for some sort of scam. In today’s world, trust and foolishness are thought to go hand in hand.

The internet has contributed a great deal to the modern emphasis on gullibility. It provides scammers and deceivers unparalleled access to potential victims, and the popular perception is that the online world is a cruel, unforgiving place where nobody is to be trusted and everything is a potential con. (The one thing, it seems, that is still to be trusted implicitly, at least in some circles, is technology – hence the insistence that blockchain is “trustless” and “code is law”, because programs written by people are somehow more trustworthy than the people themselves.)

The situation has become so bad that it sometimes seems as though trust itself is now considered a vice, and the ideal epistemological approach is to mistrust every other human being, relying on one’s own powers of reason – or supposedly infallible code – alone. The problem with this approach is that it can never work. Every person has to rely on other people for information at some point. There is no way that an isolated individual, through their senses and faculties of reason alone, can derive all the information needed to live in the world. In fact, those very faculties of reason – not to mention related, indispensable faculties such as language – could never have developed without relying on other people. And one cannot rely on other people without some degree of trust. The problem was never trust, but trust misplaced. And I would go further and assert that the gravest problem facing contemporary society isn’t an excess of trust, but a deficiency.

Take, for example, the recent wave of conspiracy theories such as QAnon, Pizzagate, and various “truther” movements. The common diagnosis of these movements is that their adherents are gullible and will “believe whatever they read on the Internet.” The problem with this diagnosis is that it is obviously false: the mainstream narratives about 9/11, the coronavirus, and other conspiracist targets are far more prevalent, and far easier to find, than the conspiracy theories. If someone truly “believed whatever they read,” they would be much more likely to hold the mainstream view. Instead, people go looking for the conspiracy theories precisely because they don’t believe the standard account. It may be that they latch on to whatever conspiracy theory they find rather than treating it with the proper skepticism, but I would argue that this move stems less from an excess of trust than from an underdeveloped faculty of reason – underdeveloped, perhaps, because they lack the trust in others necessary to learn how to think.

Many years ago, I taught critical thinking. I would ask my students how they would go about forming an opinion on a topic such as whether the moon landing was faked. One of the most common answers I got was something like “I listen to both sides of the story, and make up my own mind.” Yet a little probing revealed that they placed far more emphasis on the “listen to both sides of the story” part than on the “make up my own mind” part. When asked why so many people held a different view than theirs, they would usually say something to the effect that those people “must not have listened to the opposing views.” You see this move today when people on the Internet accuse their opponents of being “stuck in a bubble” of people who think like them, when conspiracy theorists urge others to “do your own research,” or when people try to sway others by sending them long articles debunking their claims with counter-claims. In all of these cases, people seem to think that the only thing keeping the other side from forming the correct opinions is not being exposed to the right information. The “make up my own mind” part is never questioned: everyone behaves as though all people had an inherent ability to choose between the right view and the wrong view, once both views had been properly presented. But in fact, people have no such inherent ability. Judging between competing views is a skill, and skills take a lot of time to develop. Helping develop them is exactly what I was trying to do by teaching critical thinking.

There are certain thinking skills that cut across disciplines: knowing how to avoid logical fallacies, how to test hypotheses, how to apply basic mathematics and statistics. These are helpful regardless of domain. But in order to make consistently good judgments within a particular domain, there is a staggering amount of history and methodology one must learn. In other words, one must learn what those who came before have done, and what their successes and mistakes were. And in order to do that, one must have a certain degree of trust. Some trust in individuals is needed, of course, but the history of any branch of science is usually the history of things people in the past got wrong. The greater degree of trust needs to be placed in the knowledge community itself, and in its ability, over time, to carry out the difficult task of separating fact from fiction, of advancing knowledge, of questioning one bit of received wisdom without throwing away every other bit of knowledge.

Furthermore, any given individual probably has the bandwidth to develop this kind of specialized expertise in only one or two disciplines. Yet we constantly need to form opinions about fields in which we have no expertise. In order to do that, we need to trust other knowledge communities to develop their own. That’s why we listen to doctors, lawyers, climate scientists, and so forth, rather than relying on our own undeveloped expertise in those areas. Perhaps the most pernicious epistemological vice of our time is the insistence that expertise doesn’t matter, and that your own faculties of reason are superior to the collective work of generations of thinkers simply because… you don’t trust them.

Of course, there is a flip side to trust: those whom we trust must themselves be trustworthy. It is still possible to place one’s trust in individuals or communities that consistently reach the wrong conclusions about everything. No moral or epistemological virtue operates in isolation. But it does no good to be trustworthy if you’re only shouting into the void, ignored by those who would rather “do their own research.”


Last modified on 2022-05-31