LLMs aren't actually completely bad

It hasn’t escaped me that this blog has seemed very cynical of late. It’s hard not to be cynical when you’ve seen the same tech hype cycle play itself out over and over again, always with the result that a handful of billionaires reap the benefits while life gets worse for everyone else. But that doesn’t mean the technologies at the center of these hype cycles are completely without merit. While I fail to see any positive applications for cryptocurrency, for example, several of the technologies on which it is based, such as asymmetric cryptography and Merkle trees, are quite useful. (They also predate cryptocurrency by decades, so that’s not exactly a win for the crypto camp.) Lately large language models are the hype du jour, and as someone who specialized in natural language processing back in the pre-deep learning days, I’ve devoted quite a lot of time to considering whether there’s any good to come of these.

Before we do that, however, let’s review what’s not good about LLMs as currently deployed. The training of said LLMs consumes a horrific amount of electricity. The data on which they are trained is scooped up wholesale from the Web, and the creators of those training texts are neither asked for permission nor given credit or compensation. Armies of underpaid workers in the Global South are employed to help process that data. And even if we ignore, or were somehow able to fix, all of the problematic aspects of how these models are trained, we have an even bigger problem: these models don’t actually do the thing everyone wants them to do, and that most users think they do, which is understand natural language.

It’s easy to go down a sophistic rabbit hole debating what “understand” means and whether non-human or non-biological entities can realistically be said to understand anything at all, but I don’t want to do that now. For the time being, let’s at least agree that understanding requires access to some form of semantic information. A simple sentence like “The cat is on the mat” can be analyzed in purely textual terms: it contains two occurrences of the letter “a”, four occurrences of the letter “t” (one of which is upper-case), and so forth. We can analyze it syntactically, as when we observe that “the” is an article, “cat” is a noun, and so forth, and that these word-tokens relate to each other according to certain grammatical rules. But we can’t analyze it semantically without situating that sentence in a world in which cats and mats exist, and in which a statement like “The cat is on the mat” can be true or false.
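
To make the distinction concrete, here is a trivial sketch of the purely textual level of analysis. It knows about characters, not about cats or mats; nothing here is specific to any particular NLP library.

    from collections import Counter

    sentence = "The cat is on the mat"

    # Purely textual analysis: count letters, ignoring case.
    counts = Counter(ch for ch in sentence.lower() if ch.isalpha())
    print(counts["a"], counts["t"])  # 2 4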

The only level on which most LLMs work is the textual level. All they are really designed to do is to predict what text is most likely to appear close to some other bit of text, according to a complex and largely unknowable definition of “most likely” imposed on them by the architecture of the multi-layer perceptrons¹ on which they’re built. You could argue that there is some degree of syntactic knowledge encoded in a trained LLM: even without knowing what any of the words mean, you could piece together enough regularities based on your observation of word order and co-occurrence to be able to generate novel grammatical sentences somewhat reliably. But even this isn’t a slam dunk: the ability of an LLM to generate grammatical sentences depends on the degree to which the sentences in its training set are grammatical. If you were to take sixty percent of the documents in an LLM’s training set and randomly rearrange the words (or, even better, systematically rearrange them according to some consistent set of rules that is nonetheless different from the rules of whatever natural language the source text is in), the ability of the trained model to generate grammatical sentences would be severely curtailed. Only insofar as the majority of sentences in the training set really are grammatical can we state with any confidence that the LLM “knows” about natural-language syntax.
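
As a toy version of that thought experiment, here is one way you might “systematically rearrange” a sentence: sort its words by a rule (alphabetical order) that is perfectly consistent but has nothing to do with English syntax. The function is invented for illustration, not part of any real corpus-processing pipeline.

    def scramble_sentence(sentence):
        """Reorder words by a consistent rule (alphabetical) unrelated to English grammar."""
        return " ".join(sorted(sentence.split(), key=str.lower))

    print(scramble_sentence("The cat is on the mat"))
    # 'cat is mat on The the'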

The semantic situation is even worse: while it is not a bad assumption that most of the natural-language text found on the Web is grammatical², I wonder how many people would confidently assert that most of the text found on the Internet is true and accurate. How would you even measure that? And even if you managed to put together a training set that consisted only of true sentences, there’s no guarantee that the “rules” that end up encoded in the trained LLM are sufficient to ensure that novel sentences produced by the model preserve truth value. Consider the following misleading syllogism:

  • Nothing is better than a nice steak dinner.
  • A peanut butter sandwich is better than nothing.
  • Therefore, a peanut butter sandwich is better than a nice steak dinner.

Let’s ignore, for the moment, the fact that both premises are matters of opinion, and just assume they are, in fact, true. The conclusion clearly does not follow from the premises. The term “Nothing” is being overloaded: in the first sentence, it is being used to assert the non-existence of some thing that is better than a nice steak dinner, while in the second sentence it means something like “having no food at all”. We know this because we understand English. But does an LLM know this? Let’s sidestep the baggage that comes with using the word “know” in a machine learning context by rephrasing the question: can we guarantee that an LLM trained on sentences including the two premises will not generate output like the conclusion? And the answer is “Nobody knows.”³ Large language models are notoriously complex and dense, and one of the strongest criticisms of them is that they lack explainability: it is impossible to determine how a given model arrives at the output it generates, and certainly impossible to justify blanket statements like “this LLM always (or usually) tells the truth”. I would argue, however, that the rules which govern the truth-value relationships between natural-language sentences are orders of magnitude more complex than syntactic rules, and it’s really reaching to assume that, even with a gigantic training set, a huge network, and tons of computing power, we can derive those rules to any degree of accuracy. And that’s not even taking into account all the natural-language sentences that have no truth value, such as “Hello” or “Put the lotion in the basket,” or those which appear to have truth value but are matters of opinion, such as “Biz Markie was a better rapper than Tupac Shakur.”⁴
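
The equivocation jumps out as soon as you try to formalize the two premises. Here is a rough sketch in Lean (the predicate and constant names are invented for the example): once “nothing” in the first premise becomes a non-existence claim and “nothing” in the second becomes an ordinary constant, no rule of inference gets you to the conclusion; in fact the first premise refutes it outright.

    -- Illustrative formalization; all names are made up for this example.
    axiom Thing : Type
    axiom Better : Thing → Thing → Prop
    axiom steakDinner : Thing
    axiom pbSandwich : Thing
    axiom eatingNothing : Thing

    -- Premise 1: "Nothing is better than a nice steak dinner" = no thing is better than it.
    axiom premise1 : ¬ ∃ x, Better x steakDinner
    -- Premise 2: the sandwich is better than one particular thing, eating nothing at all.
    axiom premise2 : Better pbSandwich eatingNothing

    -- The alleged conclusion not only fails to follow from these premises;
    -- premise 1 contradicts it.
    example : ¬ Better pbSandwich steakDinner :=
      fun h => premise1 ⟨pbSandwich, h⟩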

However.

It occurs to me that the myriad problems that prevent LLMs from being useful truth-generators are problems specifically of natural language, and that there is nothing in the definition of the transformers most LLMs these days are built on that says they can only be used with natural languages. They could, for example, be used with formal proof languages, such as Fitch-style natural deduction. These languages offer certain advantages. They have a much simpler grammar and vocabulary than natural languages. There is a deliberate relationship between their syntax and semantics, such that a “grammatical” proof (one that follows the rules of the system) is automatically also a valid proof. And while deriving a proof of a given conclusion from a set of premises may be a nearly intractable problem, once a proof is generated, it is almost trivial to verify whether it is valid. It would therefore be a simple matter to collect a training set of human-generated proofs in a particular deduction system, automatically filter out any invalid proofs, and use the resulting set to train an LLM. Give that LLM a set of premises and a conclusion, and it will automatically generate an argument from the premises to the conclusion. It might not automatically generate a valid argument – there might be no such argument, if the conclusion doesn’t follow from, or even contradicts, the premises – but I’m guessing it would generate valid arguments a decent percentage of the time, and anyway it would be simple to verify programmatically whether the result was actually valid.
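
Here is a sketch of that pipeline, assuming we already have a mechanical proof checker for whatever deduction system we pick. The checker, the model interface, and the function names are hypothetical placeholders, not any existing library’s API.

    def build_training_set(candidate_proofs, check_proof):
        """Keep only the human-written proofs that the mechanical checker accepts."""
        return [proof for proof in candidate_proofs if check_proof(proof)]

    def propose_proof(model, premises, conclusion, check_proof, attempts=10):
        """Ask the model for a proof, but only ever return one the checker has verified."""
        for _ in range(attempts):
            proof = model.generate_proof(premises, conclusion)  # hypothetical interface
            if check_proof(proof):
                return proof  # valid, however the model happened to arrive at it
        return None  # no verified proof found; the conclusion may not even follow

The expensive, unreliable step (generating a candidate proof) is cleanly separated from the cheap, reliable one (checking it), so an invalid proof never makes it out of the loop.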

I’m actually pretty optimistic about the prospects of such an application of LLMs. The reason proofs are hard to come up with is that for any reasonably robust formal proof system, there are, at any point, infinitely many possible valid steps you could take. Fitch-style systems, for example, allow you, at any point, to introduce a new assumption by increasing the level of indentation.⁵ Absolutely any assumption, even one involving variables that have not yet been introduced, is valid, provided you subsequently derive a conclusion from that assumption that lets you reduce the indentation once again. It’s thus impossible to find a proof with an exhaustive graph search (something like Dijkstra’s algorithm) over all possible paths from premises to conclusion, because there are infinitely many such paths branching off at every step.
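
For a taste of the assumption-and-discharge mechanic, here is a tiny example in Lean rather than Fitch notation, though the structure is analogous: each intro opens a scope the way a Fitch subproof raises the indentation level, and the assumption is discharged when that scope closes.

    -- A small conditional proof: assume P, then assume Q, then conclude P.
    example (P Q : Prop) : P → (Q → P) := by
      intro hp    -- enter a subproof with assumption P
      intro _hq   -- enter a nested subproof with assumption Q
      exact hp    -- close both subproofs, discharging the assumptions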

However, there are a number of “design patterns” that have been developed over the years, certain argument structures that have been employed over and over again in various proofs. “Proof by contradiction” is a simple example: to prove that proposition P is true, assume not-P, derive a contradiction, and you’ve proven P. Many patterns are more complicated than that, and you generally learn them by reading lots of proofs that others before you came up with. But nobody has read every proof, and formal logic (like every academic discipline) is prone to specializations and sub-specializations, so people tend to become experts in one particular narrow domain of proofs and aren’t necessarily familiar with techniques employed outside their specialty. Every once in a while someone does go poking around in a different specialty, discovers a technique that perfectly addresses a problem they’re having, and a great serendipitous leap is made.
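
In Lean, for instance, the pattern can be written with the classical byContradiction principle (the propositions and hypotheses here are invented for the example):

    -- To prove P, assume ¬P and derive a contradiction.
    example (P Q : Prop) (h1 : ¬P → Q) (h2 : ¬Q) : P :=
      Classical.byContradiction (fun hnp => h2 (h1 hnp))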

I do not mean to suggest that a large language model could make every inventive leap a human could make, or that robbing logicians of the joy of discovery would even be a good thing. But I think that an LLM that was trained on a greater volume and variety of proofs than any human could read and understand would be able to make a lot of connections that it would take humans years to discover, and, far from turning formal logic into a purely mechanical process, it would actually open up new avenues of discovery that humans could build on, using their far more powerful brains that actually know about more things than just symbolic logic.

Something like this approach could also work for mathematical proofs, which tend to inhabit a space somewhere between unrestricted natural language and formal proof systems. Mathematical proofs are usually written in natural language (plus mathematical symbols), but in a restricted subset in which words have precise definitions and sentences are almost always declarative. Even so, the language tends to be more ambiguous than a formal proof system, and more difficult to check by machine. And I’m positive that any LLM proof assistant trained on mathematical publications would frequently, and infuriatingly, generate lemmas for which “the proof is left as an exercise for the reader.” Still, we might get some new and interesting ideas out of it anyway.

I know that many people use LLMs to assist in writing code in various programming languages, and it would seem like this is a restricted, formal domain, similar to formal proof systems, where I wouldn’t have any problem with the use of LLMs. Unfortunately, it is not. First of all, many of the systems people use are general-purpose LLMs, trained on tons of natural language that also happens to include a healthy smattering of code. And a common use case is to give a natural-language description of what you want (“Write a Python function to sort a list of words according to number of vowels”) and get a piece of code in return. Unfortunately, while it is trivial to test whether the code actually compiles and runs (often it doesn’t), it is a much trickier (and not generally automatable) matter to determine whether the resulting code is actually guaranteed to do the thing you asked it to do. I don’t use this sort of LLM coding assistant because, to me, it turns something I like (writing code) into something I hate (reviewing someone else’s code).
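
For a sense of what that prompt is asking for, here is a hand-written sketch of such a function (mine, not a model’s output). It runs, but notice how many questions it quietly answers on your behalf: ascending or descending order? does “y” count as a vowel? how are ties broken?

    def sort_by_vowel_count(words):
        """Sort words by how many vowels they contain, fewest first."""
        def vowel_count(word):
            return sum(1 for ch in word.lower() if ch in "aeiou")
        return sorted(words, key=vowel_count)

    print(sort_by_vowel_count(["banana", "sky", "queueing", "mat"]))
    # ['sky', 'mat', 'banana', 'queueing']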

Other types of LLM-based coding assistants are essentially fancy autocomplete: rather than state your design specification in natural language, you start typing code, and they suggest code that might follow. Unfortunately, this often isn’t the code you want, and you still have to read over it to make 100% sure it is what you want. Not to mention the fact that these coding assistants are trained on real people’s repos without their consent, might be in violation of software licenses, and in some cases leak sensitive information.

A formal logic proof assistant, however, strikes me as a responsible use case for LLMs. Formal proofs, as statements of fact, are (probably?) not bound by copyright the way text and code are. You can easily assemble a corpus containing only valid proofs, and easily verify whether the output is also valid. There would be issues with determining whether you need to cite publications used in the training set when publishing an LLM-generated or LLM-inspired proof, of course, but this use case doesn’t strike me as quite the “enclose the commons and use it for private profit” approach most AI startups these days are taking. Maybe it’s just because there’s probably not much money in formal logic proofs, I don’t know. But I would like to see how far we can take this approach, preferably training on public domain proofs, or with the consent of authors where applicable. I think it might contribute in a positive, non-toxic way to scientific and mathematical progress.


  1. I like to use the term “multi-layer perceptron” rather than “neural network” because the latter is deliberately designed to trick people into thinking these things resemble human brains in ways that, I’m sorry, they just don’t. ↩︎

  2. By “grammatical” here I am not talking about conforming to prescriptive rules developed by pedants and enshrined in manuals of style. I mean “grammatical” in the sense that a fluent speaker of some natural language would recognize it as a sentence in that language (even if possibly a nonsensical one, such as the famous “Colorless green ideas sleep furiously”), rather than as a random set of words or symbols. And “language” here includes nonstandard dialects, including online-only dialects such as the complex system of acronyms and emoji employed in online chat. If it has its own internally consistent set of grammatical rules, it counts as a language. ↩︎

  3. We reached out to Nobody for clarification prior to publication but they could not be contacted. ↩︎

  4. Not my actual opinion. Don’t come at me, Suge. ↩︎

  5. Fitch was using indentation to indicate scope forty years before Python, so don’t talk to me about how weird that is and how it should just use curly braces or parentheses or whatever. ↩︎


Last modified on 2023-08-25