I built my own Python package, complete with binary wheels published on PyPI. It nearly ruined my life. I can’t wait to do it again.
It all started with the Game of Hex. I’d been somewhat obsessed with this game for many years, ever since the movie A Beautiful Mind came out and the nerdier corners of the Internet went gaga for all things John Nash. I wasn’t obsessed with actually playing it so much – I was an ok but not great player – but I loved the thought of teaching a computer to play it. It’s a fairly easy first step into board game AI, because it’s very simple to implement (much more so than chess or even Go) yet complex enough to be interesting. The board size is variable, with larger boards being harder to play than smaller ones, so it’s easy to set the difficulty level. The game can’t end in a draw. All these things help make Hex a perfect testing ground for game-playing algorithms.
I had fooled around a bit with various attempts at Hex-playing algorithms, and I noticed I kept re-using a lot of the same code. I always needed some way to represent a board. I needed a method for rendering a simple ASCII representation of that board, so I could actually see what was going on. I always needed a method for determining whether a given game state was complete, and if so, who had won. Random playouts (in which completely random moves for each player are generated, turn after turn, until someone wins the game) are a staple of many methods such as Monte Carlo Tree Search, so having a method to do that was a good idea. I wanted to try out different machine learning methods, so having a way to generate `pandas` dataframes containing sample games would be useful. The more I thought about it, the more I realized that a general Hex library was needed, one that I and others could use for implementing the game and developing game-playing agents.
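The heart of any such library is the winner check. As an illustrative sketch (this is not `hexea`'s actual implementation; the board layout and markers are my own assumptions), here is one common way to decide a Hex position: a depth-first search over same-coloured neighbours on the hex grid, testing whether a player's two edges are connected.

```python
def hex_winner(board):
    """Return 'x', 'o', or None for an n x n Hex board.

    board[r][c] is 'x', 'o', or '.'; 'x' tries to connect the top row
    to the bottom row, 'o' the left column to the right column.
    """
    n = len(board)
    # Neighbours on a hex grid stored in offset ("parallelogram") coordinates.
    deltas = [(0, 1), (0, -1), (-1, 0), (1, 0), (-1, 1), (1, -1)]

    def connected(player, starts, at_goal):
        seen = {s for s in starts if board[s[0]][s[1]] == player}
        stack = list(seen)
        while stack:
            r, c = stack.pop()
            if at_goal(r, c):
                return True
            for dr, dc in deltas:
                nr, nc = r + dr, c + dc
                if (0 <= nr < n and 0 <= nc < n
                        and (nr, nc) not in seen and board[nr][nc] == player):
                    seen.add((nr, nc))
                    stack.append((nr, nc))
        return False

    if connected('x', [(0, c) for c in range(n)], lambda r, c: r == n - 1):
        return 'x'
    if connected('o', [(r, 0) for r in range(n)], lambda r, c: c == n - 1):
        return 'o'
    return None
```

Because Hex can never end in a draw, a full board is guaranteed to return one of the two players here, which is part of what makes the game such a pleasant testbed.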
`hexea` is born
I began my library in a somewhat counter-intuitive way: by creating a library for a different game, the Game of Y. The reason is that Y is a closely related connection game – so close that Hex can be seen as a special case of the Game of Y. There's a simple-to-implement, reasonably fast algorithm for determining the winner of an arbitrary Y board that could then be used for Hex as well. Plus, having one library that works for two different games gives it broader use cases. I decided to build a general library for both Y and Hex, which I named `hexea` – from "Hex-" for Hex, and "-ea" from the name of one of Y's creators, Ea Ea.
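The winner-finding algorithm in question is presumably the well-known "Y reduction" trick: a completely filled Y board of side n collapses to one of side n−1 by replacing each small upward triangle of three cells with its majority colour, and repeating down to a single cell names the winner. A minimal sketch, using +1/−1 cell values for the two players (again, not `hexea`'s actual code):

```python
def majority(a, b, c):
    # Cells are +1 or -1, so the sign of the sum gives the majority colour.
    return 1 if a + b + c > 0 else -1

def reduce_board(board):
    """One Y-reduction step: a side-n triangular board becomes side n-1."""
    return [
        [majority(board[i][j], board[i + 1][j], board[i + 1][j + 1])
         for j in range(i + 1)]
        for i in range(len(board) - 1)
    ]

def y_winner(board):
    """Winner of a completely filled Y board, given as rows of 1, 2, ... cells."""
    while len(board) > 1:
        board = reduce_board(board)
    return board[0][0]
```

Each reduction step is linear in the number of cells, which is what makes this approach fast enough to use inside random playouts.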
Since I wanted this library to be usable not just by me, but also by the three or four other Python developers interested in building Hex-playing software, I wanted to publish on PyPI. Not only had I never done that before, I had never published any software on any sort of public registry – not npm, NuGet, Maven, or any of the various Linux packaging indices. This was going to be a challenge: build a software library from scratch that sufficiently adhered to standards and best practices that I could share it on a public registry without fear of embarrassment. The fact that it catered to a rather niche interest softened the blow somewhat, as I figured it would not be used by enough people to generate a torches-and-pitchforks response if I did something completely stupid. The perfect starter project!
I chose to build the project with the `poetry` package management software. It includes commands for generating a project skeleton, initializing the Git repo, managing virtual environments, and publishing to PyPI, as well as solving, once and for all, the terrifying issues with transitive dependencies that make large Python projects so tough to deal with. Plus, it's what we used at work, so I was already familiar with it.
Poetry worked great for the initial stages of my project. It was painless to spin up something new, to use virtual environments, and to push to TestPyPI and then PyPI. Painless, that is, until I decided to do something that would wreck my life.
How to shoot off your leg with `poetry`
One of the things I wanted to implement with `hexea` was, as I've already intimated, Monte Carlo Tree Search. The thing with MCTS is, it really needs a fast implementation to reach its full potential. It's sort of a modified A-star search that replaces the heuristic function with random playouts, extending the search tree first in the branch where the player has won the most playouts. For games, especially those against a human player, it's frequently implemented with a time limit, cutting off search at, say, five seconds so as not to make the game proceed too slowly. The more playouts you can run in that period of time, the deeper your search tree, and the more accurate your estimate of which move is best. I knew that a pure Python implementation of random playouts would be much slower than what I could do in a lower-level language, so I decided to replace the Python guts of `hexea` with C++, making use of the excellent `pybind11` library.
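The playout loop at the heart of all this looks roughly like the following. Note that this is "flat" Monte Carlo rather than full MCTS (no tree), and the `legal_moves`/`play`/`winner`/`to_move` interface is a hypothetical one I'm assuming for illustration, not `hexea`'s real API:

```python
import random
import time

def best_move(state, time_limit=5.0):
    """Pick the move whose random playouts win most often within the time budget."""
    moves = state.legal_moves()
    wins = {m: 0 for m in moves}
    plays = {m: 0 for m in moves}
    deadline = time.monotonic() + time_limit
    while time.monotonic() < deadline:
        for m in moves:
            s = state.play(m)
            while s.winner() is None:          # play random moves to the end
                s = s.play(random.choice(s.legal_moves()))
            plays[m] += 1
            if s.winner() == state.to_move():
                wins[m] += 1
    return max(moves, key=lambda m: wins[m] / plays[m] if plays[m] else 0.0)
```

Since the inner loop is nothing but move generation and winner checks, every microsecond shaved off those operations translates directly into more playouts per second, which is exactly why pushing them down into C++ pays off.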
I wasn't a total newcomer to C++, but I hadn't regularly used it for a very, very long time. We're talking "oldest standardized version of C++" long time. (That's 1998.) I picked up Python in '99, was pleasantly surprised that you didn't need pointers and allocators and deallocators and all that crap just to write a simple program, and never looked back. That is, until a few years ago, when I discovered that so many features had been added to C++ since the Dark Ages that it could actually be somewhat pleasant to use now. I had been looking for an excuse to get up to speed with modern C++, and this was just the chance I needed. I originally planned to implement MCTS entirely in C++, but I eventually decided this was overkill; I just needed to implement the parts that needed speeding up in C++ and use Python for the parts that it's really good at. Hence overhauling `hexea`.
Despite my rusty C++ skills, the C++ part wasn't the hardest part of the process. It was actually, if not exactly easy, at least manageably difficult – about as difficult as learning to use a new library for the first time. You have to Google a lot to figure out how to do certain things, but it's reasonably easy to find the answers you need. Nor was making the C++ code callable from Python a problem; `pybind11` solves that problem beautifully. No, the hardest part was building wheels that I could upload to PyPI. `poetry`, the tool that made all of my pure Python tasks so much easier, wound up making the C++ parts nearly insurmountable. Officially, `poetry` only supports building pure Python wheels, but I managed to find some examples online where people had successfully used `poetry` and `pybind11` together to build binary wheels. These methods worked. Until they didn't. One moment, I could run `poetry build` and wind up with binary wheels that worked just fine; the next moment, that command would only build the Python bits, and when I tried to install the wheels, they wouldn't work. Oh, the `poetry build` process would finish without error. The wheels could be installed without error. It was only when I actually tried to use `hexea` that I got errors indicating that none of the architecture-specific binary bits had made it into the wheels. I spent way too much time trying to figure out what I had broken, and how to fix it, before I concluded that ripping the `poetry` parts out and starting anew was the best way to go.
In the process of diagnosing my problems, I discovered that `poetry` no longer held the unbridled adoration of the entire Python world the way it (seemingly) had a few months before. For one thing, `pip` had introduced a new dependency resolver that addressed a lot of the pain points that `poetry` was supposed to. For another, `setuptools` now supports the use of `pyproject.toml`, fully compliant with PEP 621; `poetry` is not compliant with that standard (admittedly because it developed its own version of `pyproject.toml` before the standard was finalized). And there's a growing realization that unofficial third-party tools needlessly complicate things, and that sticking to standard tools is usually preferable. That being said, I did wind up embracing one such tool: `pdm`. This gave me a lot of the convenience features I enjoyed from `poetry`: a one-stop command for spinning up projects, managing virtual environments, and publishing to PyPI. But it also gets out of your way, allowing you to use `setuptools` rather than its own custom build system if you want. The result is a PEP-compliant `pyproject.toml` that I can use to build the project with `pip` as easily as I can with `pdm`.
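For what it's worth, the PEP 621 side of this is pleasantly boring. A hypothetical sketch of what such a `pyproject.toml` might look like for a `setuptools`-plus-`pybind11` project (the names and version pins are illustrative, and the C++ extension module itself would still need to be declared to `setuptools` separately, for example in a small `setup.py`):

```toml
[build-system]
requires = ["setuptools>=64", "pybind11>=2.10"]
build-backend = "setuptools.build_meta"

[project]
name = "hexea"
version = "0.1.0"
description = "A library for the games of Hex and Y"
requires-python = ">=3.8"
```

Because the `[build-system]` table is standard, any PEP 517 front end – `pip`, `build`, or `pdm` – can build the project from this one file.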
(It's worth noting here that `pdm` was originally designed as a test implementation of the then-in-draft, now-rejected PEP 582, which would have given Python `npm`-style local packages instead of virtualenvs. It can still do that, PEPs be damned, but I haven't made use of it and don't really see a need to. That said, I also wouldn't be surprised if the project went away soon, as its raison d'être seems to have gone the way of Betamax. It may be that the best thing for me to do would be to eschew these command-line convenience tools altogether, create a `git` project template that meets my needs, and get used to typing `python -m venv .venv` and the like. Or add a bunch of aliases to my terminal.)
Why do we have all these damned architectures anyway?
I haven't yet addressed one of the biggest pain points of a package that needs binary wheels: the "s" in "wheels". Yes, you need more than one, because computers do this dumb thing where they have lots of different processors with different instruction sets, so C compilers generate completely different outputs based on the machine the code will be running on. And even two machines with the exact same processor – hell, the same machine with the same processor, if you dual-boot – will need different binaries depending on whether you're running MacOS, or Windows, or Linux, or some crap nobody cares about.1
I do most of my development on an Intel Mac, and it was quite easy to create a wheel using Python's `build` utility. It says it's "universal", which used to mean it would work on 680x0 and PowerPC, then meant it would work on PowerPC and Intel, and now means it'll work on Intel or Apple Silicon. Unfortunately I don't have access to any Apple Silicon on which to test it, so I'll just have to take their word. (I do have an old PowerPC Mac Mini in the basement, but I'm reasonably sure the wheel won't work there. In fact I'm not positive Python 3 will work there.)
Linux, however, was another issue. I repeated the same steps I used to build on the Mac, and I wound up with a wheel… that only worked on one specific version of Linux. Yes, you can end up with different wheels if you build on Debian vs. Fedora vs. Arch vs. whatever. It turns out there's something called a `manylinux` wheel that you need if you want to target multiple Linux distributions, because of course there is. Of course the Linux world couldn't make anything easy. At least there's a `manylinux` utility that you can run in a Docker image (ugh) to build such a wheel. As far as I can tell, it works by building the wheels on a Docker image based on a version of Linux so ancient that most (I'm guessing not all, because it's Linux) modern versions are backwards-compatible with it. At least I can run Docker on my Mac, so I can build both Mac and Linux wheels in one place.
That leaves only Windows. I haven't yet managed to create a Windows wheel, though I've also devoted the least effort to getting that to work. This isn't because of any principled anti-Windows or anti-Microsoft stance; I just haven't gotten around to it. Also, I think I might have to install Visual Studio to make it work, and that's a whole thing.
Of course, the ideal setup would be to have some sort of continuous integration system that, whenever I update my repo, would build wheels for all three major targets, so I wouldn't have to worry about all the manual steps. Unfortunately, there are a number of problems with this. One, all the major CI choices seem to be designed specifically to work with GitHub, and maybe GitLab or Bitbucket if you're lucky. I use Codeberg, which is small, community-driven, and nowhere near as feature-rich as the big players. For my principled decision to resist handing version control over to a corporate oligopoly, I am rewarded with something called Woodpecker CI, which I could probably get to work with `manylinux`, but it would require a lot of manual configuration. Also, I have no idea whether it could handle MacOS or Windows builds without running on those systems. I'm sure there's some sort of cross-compiler workflow I could use. I'm also sure it's painful as hell.
The fact is, my project isn’t big enough and doesn’t iterate fast enough for manual builds to be all that big of a deal, so I’m just going to carry on, blissfully ignorant of CI, until such time as I have a project that actually needs it.
What uses four backticks in the morning, two backticks at noon, and three backticks in the evening?
There's one more part of the ordeal that I haven't addressed: documentation. I'm for it. I really hate it when projects view documentation as an afterthought, or seem to believe that their code is so brilliant and self-explanatory that you can learn all you need to know by reading the source code and unit tests. After all, natural human languages are for dumb woke humanities majors, not big manly code ninjas, right? Utter nonsense. If your code isn't documented, it isn't complete, just as surely as if you had no unit tests or left features half-implemented. Humans need to know how to use your library, and making them crawl through your code because you couldn't be bothered to write a couple of sentences wastes everyone's time.
There are, of course, many different tools for adding documentation to a Python project, but one of the oldest and most common is Sphinx. I found it to be fairly straightforward to use, and its tooling that lets you create documentation automatically from Python docstrings is a godsend. Unfortunately, it uses a format called reStructuredText which everyone hates. I haven’t used it a ton, but as far as I can tell there’s nothing inherently bad about it; it’s just that it’s not Markdown, and everyone already knows how to use Markdown. There are Sphinx plugins out there that allow the use of different markup syntaxes within docstrings, but I found that it really wasn’t that difficult to just force myself to use reStructuredText.
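Once you get used to it, writing a Sphinx-ready docstring really isn't much work. An illustrative example of the reStructuredText field lists that autodoc picks up (the function name and types here are made up, not `hexea`'s actual API):

```python
def random_playout(board):
    """Play uniformly random moves until the game is decided.

    :param board: the starting position; it is not modified
    :type board: Board
    :returns: the marker of the winning player
    :rtype: Marker
    :raises ValueError: if the board is already full
    """
```

With `sphinx.ext.autodoc` enabled, fields like these get rendered as a tidy parameter table in the generated HTML, so the docstring does double duty as in-editor help and published documentation.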
My resulting documentation wasn’t as beautiful as a lot of documentation out there, but it did what it needed to do. I’m aware that there are theme and style plugins out there for Sphinx, and maybe someday I’ll add them, but I was happy just to get it working. The next step was actually publishing it. Much of the Python world likes to use Read the Docs to host their documentation. I looked into it, but it looks like (once again) it’s designed to be used with version control websites that aren’t Codeberg. There might be a way to publish my docs there with a fair amount of manual wrangling, but I decided that, if I was going to do things manually anyway, I might as well just publish the docs on my own server. Which I did. Again, this is something continuous integration would probably help with, but which I’m not yet sufficiently motivated to set up. Maybe next time. Until then, I’m happy to run Sphinx locally and copy the files to my web server manually.
So here I am: I have a nontrivial Python project with C++ extensions and semi-auto-generated documentation published to PyPI. Was it fun? No! Will I do it again? Probably!
Now that I’ve made all the mistakes the first time around, I have some idea what I can do next time to make things a little easier.
- Don’t use binary extensions. If it’s possible to do it in pure Python, do it; it will make your life easier. Of course, often, that’s not an option.
- Create a good project template. Something that sets up a `pyproject.toml` and basic directory structure, including a `docs/` directory and Sphinx configuration. I considered using `pyscaffold`, but I think it might be overkill for my purposes; shades of FizzBuzz Enterprise Edition. Creating my own template with the bare minimum I need is probably the better way to go.
- Figure out a good simple CI solution. Maintaining anything but the simplest project without CI is a headache waiting to happen, even more so than figuring out how to use an off-brand CI tool that doesn't have the backing of a major corporation. If I were running a big project with any sort of funding, I'd probably bite the damned bullet and move to one of the big platforms. But I'm sure there are ways I could improve my workflow and stay on Codeberg. That's probably the next big addition to `hexea`.
- Keep dev tooling simple. Next time, no `pyenv`, and as little as possible that isn't an official Python solution. Install the latest stable Python and target that. Use `pip`, all with `python3 -m` rather than the shell commands. Only complicate when you need to.
- Yeah, okay, maybe learn `tox`. Maybe someday I'll decide I want to target older versions of Python, so I'll need a way to test without abusing `pyenv` or Docker containers or something. I hear `tox` is awesome, but I've yet to try it. Next time?
- Don't do it alone. I launched `hexea` completely by myself, because I misjudged how simple the project would be (scope creep is real!) and because I didn't know if anyone other than myself would even be interested. Next time I launch something that even threatens to be moderately complex, I'll announce it early and try to get help from more experienced people. I don't need to become an expert in Sphinx or Woodpecker CI or whatever if I can bring an existing expert on board. Of course, this would require me to rejoin some form of social media to get the word out…
- Don’t panic! This is supposed to be fun! It’s ok for a passion project to be missing features or docs or binary wheels in the early stages. If fighting with Sphinx makes you want to abandon the project, don’t. Set that aside and work on the parts that bring you joy. If you’re not afraid to ask for help (see above point), you can probably find someone else who absolutely loves Sphinx and thinks RST is way better than Markdown. Remember, kid, we’re all in it together.
I can freely insult OpenBSD because nobody who uses it has ever installed Python, or indeed any language other than C or Emacs Lisp. Also nobody has ever installed Plan 9 at all; there’s just a troll farm that posts ChatGPT-authored blog entries about it on HackerNews. ↩︎
Last modified on 2023-10-01