On Digital Solipsism
If I were to pick the two most over-hyped technologies of the present day, I wouldn’t hesitate to nominate the blockchain and deep learning. The blockchain is supposed to be a revolutionary technology that will make… something… possible, maybe not right now of course, but “it’s early days” and anyway you and the people hawking it can both make bazillions of money if you just buy in now. Deep learning at least has some tangible gains to show for itself. Some of these might even be useful, but the ones that get all the press are mostly chatbots and text-to-image applications that at best make us laugh, as long as we don’t think about their questionable data collection processes. Both use crap-tons of energy and draw crap-tons of venture capital investment.
I just realized another thing that both of these have in common, and it’s why they turn out to be more hype than substance. Both are forms of digital solipsism.
Solipsism is the name for the philosophical position that only the self is real, or at least that only the self can be known. It’s René Descartes before he finds God. It’s not really a philosophical position anyone has ever admitted to holding; if you really believed in solipsism, after all, why would you tell anyone about it? They’re all figments of your imagination! “Solipsism” is more of a pejorative that philosophers use for other philosophers, like Descartes and Berkeley, whose philosophies are saved from solipsism only by an almost literal deus ex machina. Maybe Hume was really a solipsist, but that’s about it, and even Hume had to shove most of his philosophical positions to the back of his mind in order to engage with daily life. If you’re really a solipsist, after all, there’s not much you can do with that.
I was thinking about how blockchain-based “smart contracts” aren’t actually smart because the only thing they “know” about is the blockchain itself. If I want to enter into a contract with you stating that I’ll pay you four million blartcoins if you write me a rap musical about the Great Canadian Maple Syrup Heist,[1] I can write up some code in a “smart contract” to do that, but it won’t be as good as a real-world contract with lawyers and stuff, because all it will really say is that four million blartcoins will be transferred from my wallet to yours once I transfer a token to the smart contract. If I refuse to do that, not only is there nothing the “smart contract” can do about it, there’s no way the “smart contract” can even know about it. All it knows is the blockchain. The entirety of the DeFi/cryptocurrency/Web3 hype engine is just coming up with more and more creative ways to bridge non-blockchain events onto the blockchain, and once these bridges have been constructed, it’s unclear what role the blockchain actually plays, especially since the non-blockchain scaffolding often does an end run around blockchain’s supposed benefits of “decentralization” or “censorship resistance” or “Byzantine fault tolerance”. The blockchain is just sitting there, thinking about blockchain stuff, doing things that are almost completely unrelated to the external world while people insist it has meaning.
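To make the solipsism concrete, here’s a minimal sketch of the escrow logic such a “smart contract” can actually express. It’s in Python rather than a real contract language like Solidity, and every name in it (the ChainState, the “musical-done” token, the wallets) is made up for illustration. The thing to notice is that every condition the contract can test is on-chain state; the real-world event it supposedly governs never appears anywhere in the code.

```python
# A toy on-chain world: balances and token ownership, and nothing else.
# This stands in for the entire universe as the "smart contract" sees it.
class ChainState:
    def __init__(self):
        self.balances = {"alice": 4_000_000, "bob": 0}
        self.tokens = {}  # token name -> current owner

    def transfer(self, frm, to, amount):
        self.balances[frm] -= amount
        self.balances[to] += amount

    def token_owner(self, token):
        return self.tokens.get(token)


# The "smart contract": its whole epistemology is the ChainState above.
class BlartcoinEscrow:
    def __init__(self, chain, buyer, seller, price):
        self.chain = chain
        self.buyer = buyer
        self.seller = seller
        self.price = price

    def settle(self):
        # The only fact the contract can check is whether a completion
        # token showed up on-chain. "Was the rap musical actually
        # written?" is not a question it can even pose.
        if self.chain.token_owner("musical-done") == self.buyer:
            self.chain.transfer(self.buyer, self.seller, self.price)
            return "paid"
        return "nothing happened"


chain = ChainState()
escrow = BlartcoinEscrow(chain, buyer="alice", seller="bob", price=4_000_000)
print(escrow.settle())  # "nothing happened": alice never sent the token,
                        # and the contract can't know or care why not.
```

Everything interesting about the deal, namely whether the work got done, lives off-chain, which is exactly the gap all those “oracle” bridges exist to paper over.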
When we look at a lot of the fancy new toys run on deep learning, we see a similar situation, especially when it comes to large language models. These are what power chatbots (trained only on text) and text-to-image models (trained on text plus images). Whenever someone sees a chatbot give apparently intelligent answers to questions, or draw a convincing picture from only an English-language description, there’s always a cadre of dupes out there saying “this proves AI is really sentient! It’s doing the same thing humans do!” Except it isn’t, because humans have access to sensory input apart from just text and maybe images; they are able to interact with the world in ways other than just responding with text or images; and they have existential needs like food and oxygen that fancy multi-layer perceptrons don’t. All of these things connect humans to something “out there” other than just context-free words.
You can see the limitations of deep learning solipsism in translation apps. Many languages, such as Turkish and Mandarin, do not have gendered pronouns, and so most sentences that refer to someone in the third person do not give any clue as to the person’s gender unless the speaker felt it necessary to make it explicit. But languages such as English and French not only allow but actually require gendered pronouns in most situations. So in order to translate from Turkish to English, a decision has to be made as to which gendered pronoun to use. And the only tool a large language model has for making this decision is which words are typically found in close proximity in its training data. So sentences about doctors and engineers get “he” pronouns and sentences about nurses and secretaries get “she” pronouns. True believers would probably claim that this just proves the large language models are smart enough to figure out things about how the world really works. These true believers, almost without exception, also use “he” pronouns.
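If you want to watch this happen, here’s a short sketch using Hugging Face’s transformers translation pipeline. I’m assuming the Helsinki-NLP/opus-mt-tr-en checkpoint here, but any Turkish-to-English model faces the same pressure, and the exact pronouns you get will vary by model. The Turkish pronoun “o” is completely genderless, so whatever gender comes out the other side was supplied by nothing but word statistics.

```python
# Translate genderless Turkish sentences into English, which forces the
# model to pick a pronoun. The model choice is an assumption; any
# Turkish->English model has to make the same guess.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

sentences = [
    "O bir doktor.",    # "o" = genderless third person; doktor = doctor
    "O bir mühendis.",  # engineer
    "O bir hemşire.",   # nurse
    "O bir sekreter.",  # secretary
]

for s in sentences:
    print(s, "->", translator(s)[0]["translation_text"])
# None of the inputs carry any gender; every "he" or "she" in the
# output comes purely from co-occurrence statistics in the training data.
```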
In both these cases, a technology is over-hyped because it has a demonstrated ability to perform well in its isolated, solipsistic little world. And lots of people with lots of money are convinced that this is the same thing as performing well in the real world. And they’ll lose big once people catch on to the fact that it’s not the same at all.
-
[1] I would for real pay this, if blartcoins were real and worth about one four-millionth of a USD.