Posts

How to design a terrible website

Like most Americans with a university degree, I have student loan debt. This means that periodically I get an email from my loan servicer letting me know that there is a special correspondence for me. The actual correspondence never appears in the email itself, of course, because that medium isn’t secure. Instead, it is hidden away on a Federal government website that has eschewed both security-through-obscurity and security-through-cryptography, in favor of the novel approach of security-through-a-user-interface-so-infuriating-that-attackers-just-give-up-and-so-do-the-intended-recipients.

Read More…

Ads make the Internet worse

I have a sneaking suspicion that when future generations look back on the era from the mid-1990s to some time in our not-too-distant future, they will be struck by how much our culture was dominated by advertising, and how ineffective it all was despite its ubiquity.

For as long as there have been businesses, there has been a need to, at the very least, let people know you have something to sell. There was also always an incentive to convince people to buy your product rather than a competitor’s. But to me, advertising didn’t really become the cultural juggernaut that it is until the twentieth century, when new forms of mass media arrived that were unable to survive without it. There had been newspaper ads before then, of course, but they were more of a supplemental revenue stream; you still, as a general rule, had to pay for newspapers and magazines. But radio and television were indiscriminate in their reach. Anyone with the proper receiver could consume any broadcast content, and there was no way to bill them for all and only the programs they consumed. There wasn’t even a reliable way to know which programs they consumed. So in order to make money, radio and TV stations had to give their content away for free, but charge businesses money to air their marketing messages. This made advertising virtually inescapable.

Read More…

The ATS Conspiracy

Searching for a job has become even more difficult than usual lately, and it seems that much of the problem is due to applicant tracking systems (ATS): software designed to filter through applicants’ résumés looking for exactly the skills and experience the role requires. This is intended to make things easier for hiring managers: rather than sort through thousands of résumés by hand, they can instead sort through the dozens that survive the filtering process. It also means a lot of applications are cut before any human being even sees them, and no feedback is given. At best, you get an auto-generated email that says “you’re not exactly what we’re looking for at this time.” At worst, you get ghosted.

Read More…

Differences between LLMs and humans

More than once I’ve seen the claim made that something or other done by LLMs is “just like” what human minds do. For example, there’s the oft-repeated insinuation that LLMs trained on copyrighted material don’t really plagiarize, because their output is based on exposure to multiple sources, just as human writers reflect their own influences. Or there’s the occasional response to the criticism that LLMs are just glorified autocorrect, merely predicting the next word in a sequence. This, I’ve been told, is not really a criticism, because next-word-prediction is “just like” what humans do.1 I find the claim that anything is “just like” what happens in the human brain to be astonishing given that our knowledge of the brain is still in its infancy; tellingly, I have never heard an actual neuroscientist make such a claim. Still, there’s one thing we can do given the current state of knowledge: look at the features of LLMs that definitely aren’t like what happens in the human brain.

Read More…

This is a rather long post describing the steps I took in implementing a small project in a domain I’m not especially familiar with, using libraries I’m not especially familiar with. Rather than showing off a polished finished product, it details the steps I took along the way, false starts and all. Nor do I claim that my final version is the best one possible; there are no doubt more efficient and clever ways to do what I did. But I wanted to illustrate my thought process and how I learned from my mistakes, and I thought it might be useful for other learners to follow along on my journey.

Read More…

Python Is Not Object-Oriented

Recently I had a technical interview in which I was asked a rather strange question in order to probe the extent of my Python knowledge. It consisted of a code sample similar to the following:

# Comments after each print() statement show the output
# (And no, they were not in the original problem)

class Foo:
	bar = 4
	
x = Foo()
y = Foo()

print(x.bar) # 4
print(y.bar) # 4
print(Foo.bar) # 4

x.bar = 5

print(x.bar) # 5
print(y.bar) # 4
print(Foo.bar) # 4

Foo.bar = 12

print(x.bar) # 5
print(y.bar) # 12
print(Foo.bar) # 12

z = Foo()
print(z.bar) # 12

I was asked to predict what values would be printed at each point. To be honest, I was kind of surprised by this question, for two reasons. First, it didn’t make much sense in context: this was a remote interview, and I was not asked to share my screen, so I could easily have copied and pasted it into a REPL to get the answer. (Maybe they wanted to weed out people who didn’t know how to use the REPL?) Second, it was roughly equivalent to asking a prospective auto mechanic, “What would happen if you replaced your car’s transmission fluid with Mrs. Butterworth’s Pancake Syrup?” Such a situation would never even come up on the job unless you were doing something very, very wrong. In fact, most other object-oriented languages won’t even let you do something like what this example does. The fact that Python does allow it made me realize why I have always been vaguely bothered by the language’s object-oriented features: because Python is not, actually, object-oriented.
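
For what it’s worth, here’s a quick way (my own follow-up experiment, not part of the original problem) to see where those attributes actually live: instance attributes go in the instance’s __dict__, class attributes live on the class, and assigning through an instance merely shadows the class attribute for that one object.

class Foo:
	bar = 4

x = Foo()
print(x.__dict__)           # {} (the instance owns no attributes of its own yet)
print(Foo.__dict__['bar'])  # 4 ('bar' lives on the class)

x.bar = 5                   # creates an instance attribute that shadows the class one
print(x.__dict__)           # {'bar': 5}
print(Foo.__dict__['bar'])  # 4 (the class attribute is untouched)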

Read More…

I built my own Python package, complete with binary wheels published on PyPI. It nearly ruined my life. I can’t wait to do it again.

It all started with the Game of Hex. I’d been somewhat obsessed with this game for many years, ever since the movie A Beautiful Mind came out and the nerdier corners of the Internet went gaga for all things John Nash. I wasn’t obsessed with actually playing it so much – I was an ok but not great player – but I loved the thought of teaching a computer to play it. It’s a fairly easy first step into board game AI, because it’s very simple to implement (much more so than chess or even Go) yet complex enough to be interesting. The board size is variable, with larger boards being harder to play than smaller ones, so it’s easy to set the difficulty level. The game can’t end in a draw. All these things help make Hex a perfect testing ground for game-playing algorithms.
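
To give a sense of how little code a basic Hex position needs, here is a rough sketch of my own (illustrative only, and not taken from the package itself) of a variable-size board with the six-way adjacency that makes the game tick:

EMPTY, RED, BLUE = 0, 1, 2

class HexBoard:
	def __init__(self, size=11):
		# Hex is played on an n-by-n rhombus; the size is whatever you want.
		self.size = size
		self.cells = [[EMPTY] * size for _ in range(size)]

	def neighbors(self, row, col):
		# Each cell touches up to six others on the hexagonal grid.
		for dr, dc in [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0)]:
			r, c = row + dr, col + dc
			if 0 <= r < self.size and 0 <= c < self.size:
				yield r, c

	def place(self, row, col, player):
		# Stones are only ever added, never moved or removed.
		assert self.cells[row][col] == EMPTY, "cell is already occupied"
		self.cells[row][col] = player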

Read More…

Bath salts, big tech, and passing the buck

For many years I’ve had a lot of thoughts about the irresponsible, “move fast and break things” way the tech industry operates, but I haven’t found a good way to articulate them. Recently I saw an ad on LinkedIn that made it all come into focus.

This ad was for a tool to be used by recruiters and hiring managers to assist in the hiring process. The tool promises to let you “Effortlessly generate tailored interview questions, conduct video interviews with automatic evaluation, and receive detailed feedback instantly.” The accompanying video explains that it will automatically generate suggested questions based on a job posting, and summarize the candidate’s answers. But the big value add? “In the final step, as soon as the interview is finished, detailed feedback will be generated. It will explain the answers’ correctness and completeness, and provide summarized recommendations for that particular candidate.”

Read More…

Mastodone

A few weeks ago I deleted my Mastodon account.

I can’t swear that I’ll never venture into the Fediverse again, either via Mastodon or some other ActivityPub-compatible service. And I’m not writing off either ActivityPub or the Fediverse as failures. I’ll probably dabble in the Fediverse some in the future. But for now, I’m pretty sure neither is for me. This post is a post-mortem of sorts, detailing why I found the whole experience somewhat disappointing.

Read More…

LLMs aren't actually completely bad

It hasn’t escaped me that this blog has seemed very cynical of late. It’s hard not to be cynical when you’ve seen the same tech hype cycle play itself out over and over again, always with the result that a handful of billionaires reap the benefits while life gets worse for everyone else. But that doesn’t mean the technologies at the center of these hype cycles are completely without merit. While I fail to see any positive applications for cryptocurrency, for example, several of the technologies on which it is based, such as asymmetric cryptography and Merkle trees, are quite useful. (They also predate cryptocurrency by decades, so that’s not exactly a win for the crypto camp.) Lately large language models are the hype du jour, and as someone who specialized in natural language processing back in the pre-deep learning days, I’ve devoted quite a lot of time to considering whether there’s any good to come of these.

Read More…

Affirming the consequent with LLMs

With the AI hype now in full swing, I’m seeing a lot of “AI bros” swooping in to discussion threads much as the “Crypto bros” of yore did two summers ago, helpfully educating people on how their preferred technological fad will save the world while coincidentally making them rich. The AI fad has an interesting twist: a lot of the staunch proponents (and even some pseudo-critics1) seem obsessed with the idea that large language models or other AI toys are actually sentient, and even human-like.

Read More…

When asked why people believe in seemingly insane conspiracy theories, a lot of people will say something about “critical thinking” and how we don’t teach it enough. They’re not entirely wrong (though I’m sure the full reasons are much more complicated), but I’m not sure most people who use the term even fully understand what “critical thinking” means.

I actually taught critical thinking for two years. And one of the things I would ask on the first day of class was how, when confronted with conflicting claims such as “humans landed on the moon in 1969” and “the moon landings were actually faked,” people decided which to believe. (Of course, whether or not either must be believed, and whether every controversial issue boils down to exactly two possible positions, are other questions entirely.) Almost always, the answer I heard was something along the lines of, “I listen to each point of view, and I make my own decision.” When asked why people who believed the other position might have arrived at the “wrong” conclusion, they would usually pinpoint the failure somewhere in the “listen to each point of view” part, and not in the “make my own decision” part. It’s almost as though most people believed that “making one’s own decision” is something that happened automatically and infallibly once one had consumed enough information, and that the only way to make a bad decision was not to consume enough information. Most of the rest of the course consisted of trying to convince them that there are plenty of errors that can occur in the “make my own decision” part, and to make them aware of when they were making such errors themselves. Indeed, “being able to make rational decisions once presented with information” is a pretty good working definition of “critical thinking”.

Read More…

On Digital Solipsism

If I were to pick the two most over-hyped technologies of the present day, I wouldn’t hesitate to nominate the blockchain and deep learning. The blockchain is supposed to be a revolutionary technology that will make… something… possible, maybe not right now of course, but “it’s early days” and anyway you and the people hawking it can both make bazillions of money if you just buy in now. Deep learning at least has some tangible gains to show for itself. Some of these might even be useful, but the ones that get all the press are mostly chatbots and text-to-image applications that at best make us laugh if we don’t think of their questionable data collection processes. Both use crap-tons of energy and draw crap-tons of venture capital investment.

Read More…

Are Tech Bros Their Own Baddies?

Nick Bostrom, the court philosopher of billionaire tech bros, posits the “paperclip maximizer,” a powerful AI designed to do one thing: make paperclips. So it destroys all of humanity in its quest to turn the whole planet into an optimal paperclip factory. Tech bros think this is a real risk we should spend serious resources to thwart.

These same tech bros tell us the only purpose of humanity is to maximize pleasure, so we should turn other planets into “computronium” in order to create simulated life forms that can experience happiness. We should even ignore present-day, real-world problems like climate change if it helps us develop this technology.

Read More…

Re-Centralizing the Fediverse

Ever since the takeover of Twitter by Elon Musk and the subsequent exodus from that platform, the Fediverse has been in the news more and more. (Really, Mastodon, the most Twitter-like of the many applications to use the ActivityPub standard, has been in the news; to the extent that most people hear the term “Fediverse” at all, they probably assume it’s synonymous with Mastodon.) And, over and over again, to the point of becoming a cliché, one hears the same analogy:

Read More…

No, don't "just Google it"

Pretty much the least helpful advice you can give to someone on the Internet, no matter how basic a question they ask, is “just Google it.” You see this especially in technology forums such as those dedicated to programming languages. Someone asks a rather basic question, and, rather than either answer the question or just ignore it, one of the forum’s regulars tells them, often rudely, to “just Google it.” Perversely, this sometimes results in a death-spiral of snark: more than once, I’ve Googled a basic question, only to have the number one search result be a Stack Overflow question whose only reply is, you guessed it, “just Google it.” (I don’t know whether it’s due to changes at Stack Overflow or changes at Google, but these results are much less common now than they were about ten years ago; nevertheless, it’s something I’ve seen happen multiple times.)

Read More…

I’m something of a programming languages obsessive. I love learning new programming languages, almost irrespective of anything I could actually build using them. I love seeing how the same basic problems get solved, over and over again, in different ways. I’m fascinated by the formal grammar of programming languages and how it can be so similar to, and yet so different from, natural language. And I like how new language paradigms force us to retool our thought processes.

Read More…

Lately it seems there is only one thing that unites people of every philosophical, political, and religious persuasion: they all hate Twitter. Even the people who use Twitter hate Twitter. In fact, some of the biggest Twitter hate comes from the people who spend the most time on it. After all, who better to know? My feeling is that people desperately want to feel some sense of community, find people of a similar mind, and have enlightening and entertaining conversations with them. But that is becoming increasingly difficult to do, so they settle for the next best thing, which is shouting at strangers on a computer. Then they get mad at the stranger-shouting app for letting the wrong strangers shout back at them. Or something.

Read More…

Trust

Recently I’ve become more interested in virtue ethics, and, by extension, virtue epistemology. (I still maintain that epistemology is just ethics applied to the realm of belief, but that’s a post for another day. Maybe.) The distinguishing feature of virtue ethics is that it focuses less on what you should do and more on who you should be. And one of my favorite quotes on virtue ethics comes, not from Aristotle or Anscombe, but from Robert M. Pirsig’s Zen and the Art of Motorcycle Maintenance: “If you want to paint a perfect picture, just make yourself a perfect painter, then paint naturally.” Leaving aside whether or not “perfection” is something attainable or even something we should be striving for, I think there’s a lot to say for this way of looking at things. If you just want to paint a perfect picture, you might look for a set of rules to follow. But no set of rules – at least, no set a human could memorize and follow – could possibly encapsulate every picture you might want to create. However, developing general skills that apply across domains, such as noticing details, or fine muscle control, will make you better at anything that uses those skills. Plus for all but the simplest human endeavors, those who are experts likely don’t even know how they do what they do beyond a superficial level. Rather, they learned a few basics, put in a lot of practice, listened to criticism, and never stopped seeking and following advice from those more accomplished than them. Eventually all of this gels into a combination of instinct and muscle memory that allows them to excel at their chosen endeavor in ways nobody can adequately explain. No set of rules can do that. And both deontological and utilitarian ethics are, at bottom, sets of rules for deciding what to do. They’re almost like ethical systems designed for machines rather than humans, with all the limitations that implies. Until we reach the always-twenty-years-away goal of general artificial intelligence, no human-made autonomous system is going to be able to develop anything like an Aristotelian virtue, which is why I will never let one drive my car or run my economy.

Read More…

The self-driving fallacy

There’s an episode of the original Star Trek series in which Spock discovers that someone has tampered with the ship’s computers. He realizes this because he is able to beat the computer at chess several times in a row. This should be impossible, he reasons, since the computer is incapable of making mistakes. Therefore, someone must have tampered with it.

That episode was written in the late 1960s, at a time when actual computers had been playing chess for about a decade, and none of them were able to defeat a reasonably talented human. No computer chess program would beat a grandmaster for another twenty years or so. Obviously, “not making mistakes” was not a sufficient condition to make an unbeatable chess opponent. Yet the writers of the episode, as well as a decent chunk of the audience, seem to have accepted the claim that computers, unlike flawed humans, are somehow infallible.

Read More…

Flotilla of Leftist Gorillas

Every time I read about NFTs and the tremendous amount of hype surrounding them, I think, “Surely I’ve got it wrong. Surely there’s more to these than there appears to be, because there’s no way this many apparently intelligent people could be taken in by something so obviously phony.” And the more I read, the more I realize – there’s not.

NFTs, or non-fungible tokens, use the same blockchain technology as most cryptocurrencies. That is, they’re entries in a decentralized ledger that tracks the ownership of some asset and provides a complete history of how that asset has been transferred from owner to owner. The difference is that, whereas the assets involved in cryptocurrency are created out of thin air (or fossil fuel) by some sort of mining process, the assets tracked by NFTs are external to the blockchain. That asset could be a tangible, physical object, or real estate, or any number of things. But the types of NFTs that seem to have garnered the most attention are those associated with digital assets like image files. NFTs are supposed to be the long-awaited answer to how we can enforce the ownership of something that can be copied and shared at will.
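
To make the ledger idea concrete, here is a toy sketch of my own (purely illustrative, not based on any actual NFT standard or contract): each token is just a chain of transfer records, where every record points at an asset that lives outside the ledger and at the hash of the previous transfer.

import hashlib
from dataclasses import dataclass

@dataclass
class Transfer:
	token_id: str    # identifies the token being traded
	asset_uri: str   # the asset itself lives outside the ledger
	new_owner: str
	prev_hash: str   # links this record to the previous transfer

	def record_hash(self):
		data = f"{self.token_id}|{self.asset_uri}|{self.new_owner}|{self.prev_hash}"
		return hashlib.sha256(data.encode()).hexdigest()

# A token's "history" is just the chain of records, newest last.
mint = Transfer("token-1", "https://example.com/ape.png", "alice", prev_hash="")
sale = Transfer("token-1", "https://example.com/ape.png", "bob", prev_hash=mint.record_hash())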

Read More…

It never ceases to amaze me when some libertarians, who are supposed to be in favor of small government (or even no government at all), point to the Confederate States of America as some sort of ideal, or at least better than the current US government. There are a couple of reasons why this is ridiculous. The first, and more important, reason is that the Confederacy was founded on the principle that some people had the right to own other people, which is as blatant a violation of core libertarian principles as one is likely to find. Sure, there are many who will claim that the Confederacy wasn’t really about slavery, but rather that its founding cause was “states’ rights.” This claim is easily debunked by reading the declarations of secession, which state in no uncertain terms that slavery and white supremacy were chief causes of secession. Some of these declarations even list the nullification of the Fugitive Slave Law by Northern states among their grievances, suggesting that the right of the states to defy the federal government didn’t apply to laws that benefitted those who enslaved people.

Read More…

An Underrated Benefit of Type Safety

Dynamic vs. static typing is an old debate that will not go away any time soon. However, I’ve noticed that most of the debate seems to focus on the same things: whether static typing reduces the number of bugs in your code by preventing type errors, whether dynamic typing is more flexible, and so forth.

For a while, I didn’t have a strong preference between the two. The only “typing” advantage I could see in my day-to-day life was the amount of typing you’d save not having to declare your variables in Python. However, over the last few years I have come down strongly in favor of static typing, followed closely by dynamically typed languages with type hinting (if used consistently). And the reason for this is partly the arguments I alluded to above, but mostly it’s because of an underrated benefit: when combined with a decent IDE or sufficiently smart editor, it keeps me from having to play an excruciating guessing game with the code I’m writing.
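
Here is a small made-up example of what I mean (the names are mine, purely for illustration). Once a parameter is annotated, the editor knows exactly what that object offers, so there is no guessing about attribute names or argument order:

from dataclasses import dataclass

@dataclass
class User:
	name: str
	email: str

def send_welcome(user: User) -> None:
	# Because of the annotation, an IDE can autocomplete user.name and
	# user.email here, and flag a typo like user.emial before anything runs.
	print(f"Welcome, {user.name}! We emailed {user.email}.")

send_welcome(User(name="Ada", email="ada@example.com"))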

Read More…

No, you can't win an argument

It’s become quite commonplace today for people to bemoan the fact that critical thinking and logic are not required courses at most universities, and are generally not even taught at all in high school. While I don’t disagree, I think it would be even more helpful if schools said even a little bit about why one would want to learn such subjects in the first place.

You see, I don’t think the biggest problem is that people don’t know about logic. There’s at least a certain contingent of right-wing Internet trolls whose Twitter bios always mention “Logic” and “Reason”1 even as they engage in bad-faith arguments and commit informal fallacies. And while some of them probably had their first and last exposure to “logic” from a Jordan Peterson video, I wouldn’t be surprised to find that many of them had taken introductory logic and critical thinking courses in college. The problem isn’t that they didn’t learn it; the problem is how they use it. I believe that a lot of people view logic and reason, not as ways to arrive at the truth, but as tools to help you win an argument.

Read More…

Decentralization: An Introduction

In the 1980s, a common trope kept popping up in popular entertainment. Aging Baby Boomers, who had come of age during the cultural upheaval of the 1960s, were shocked to look around and see that their dreams of peace and spirituality had given way to garish Reagan-era materialism. From the stoners visiting post-Woodstock America for the first time in Rude Awakening to the hippies-raising-yuppies of Family Ties, we kept seeing examples of the culture shock confronted by utopian dreamers who were sure that they had found the one true path to human thriving, only to see society reject that path in favor of the very thing it was supposed to save us from. Sure, the trappings of the counterculture had survived: rock music, colorful clothing, long hair, anti-authoritarian slogans; but now they were being sold to us by multinational corporations who used the money to fund Central American death squads and destroy the environment. The coda to the Flower Power swan song was The Big Lebowski’s Dude, a former student radical who was content to stumble around early-’90s LA, getting stoned, going bowling, and not even trying to make a difference.

Read More…