Most of my work involves significant amounts of typing and I’m an unrepentant nerd with virtuous laziness. So I use a piece of software called TextExpander to automate the production of text snippets: things like dates, long words or phrases, and templates for note-taking. This is a category of software I’ve kept an eye on for some time.
I used to use the more intimidating and powerful Keyboard Maestro, which is too much for me most of the time. Recently, I found Espanso, an open source text expander written in the Rust programming language. As a supporter and strategic beneficiary of Free and Open Source software, I found it immediately attractive.
Checking out Espanso made me dig into—for the second time in the last six months—the Rust ecosystem. In this context, an ecosystem means the intersection of people, built software, open source projects, and approaches that make it possible to build working software in a given programming language. Rust began in 2006 as a personal project and blossomed with the sponsorship of the Mozilla Foundation in 2009.
Here is a technical explanation of why Rust has become so popular of late, but the rough translation is that it’s perceived to be more secure and more performant, and to have a much easier ecosystem for managing dependencies between a given project and the other software projects it employs.
Let’s unpack those for the non-technical. When we talk about a programming language being ‘secure,’ we mean that its basic building blocks—the way it processes instructions to produce an outcome—are less susceptible to being used in ways that allow unwanted intrusion into the resulting software. And when we talk about being ‘performant,’ we mean the extent to which the program is not wasteful in using the resources (chiefly, memory and storage) of the computer on which it runs.
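For the curious, a big part of Rust’s security story is its ownership model: the compiler tracks which piece of code owns each value and rejects whole classes of memory bugs before the program ever runs. This is a minimal sketch of the idea (the function name is mine, purely for illustration):

```rust
// Borrowing a reference instead of taking ownership: `measure` can
// read the string but never owns or frees it.
fn measure(text: &str) -> usize {
    text.len()
}

fn main() {
    let s = String::from("hello"); // `s` owns this heap allocation
    let len = measure(&s);         // lend `s` out; ownership stays here
    println!("{} is {} bytes", s, len); // still valid: we only lent it

    let t = s; // ownership MOVES to `t`. Using `s` after this line
               // would be a compile-time error, not a runtime crash —
               // that is the "secure building blocks" claim in action.
    println!("{}", t);
}
```

In a language like C, the analogous mistake (using memory after handing it off or freeing it) compiles fine and fails at runtime, often exploitably; in Rust it never compiles at all.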
But it’s the last point about managing dependencies which is fascinating, in an old software nerd way, and bears further translation.
The first language I used for commercial purposes, working what’s called a side hustle today while still in college, was dBase III. But I like to joke that the first computer language I used ‘in anger’ was Perl. Perl, as a programming language, was unique and pivotal in many ways but the most important, in my humble opinion, was the creation of CPAN, the Comprehensive Perl Archive Network.
CPAN allowed Perl developers to make use of a broad collection of ‘software packages.’ Think of these as pre-configured bits of code that perform a very specific and (hopefully) well-defined purpose. For example, there are packages for performing date calculations, or building a graph from a data set, or converting files from one format to another.
Like many critical advances in computing, packages were based on ideas about data abstraction created by an insufficiently recognized technology leader—who unsurprisingly happens to be an insufficiently recognized woman in technology—named Barbara Liskov. She came up with the idea of code modules upon which packages are based. (True, she did receive a Turing Award, but how many code bros are aware of Barbara’s contributions? I digress.)
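Liskov’s core idea—hide a module’s internal representation behind a small public interface—is something every modern language bakes in. A minimal sketch in Rust (the `Counter` type is a made-up example, not from any real library):

```rust
// Data abstraction in the Liskov sense: the representation of
// `Counter` is private to its module, so callers can only use the
// public interface, and the module's author is free to change the
// internals later without breaking anyone.
mod counter {
    pub struct Counter {
        value: u64, // private: invisible outside this module
    }

    impl Counter {
        pub fn new() -> Counter {
            Counter { value: 0 }
        }

        pub fn increment(&mut self) {
            self.value += 1;
        }

        pub fn get(&self) -> u64 {
            self.value
        }
    }
}

fn main() {
    let mut c = counter::Counter::new();
    c.increment();
    c.increment();
    // `c.value` here would be a compile error: the field is private.
    println!("count = {}", c.get());
}
```

A package is essentially this same bargain scaled up: you get a documented interface, and you agree not to care how the inside works.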
So software packages existed long before CPAN, but what made the Perl + CPAN setup unique was the sheer number and variety of package contributions it enjoyed, and the relative ease by which programmers could plug these packages into their software. The ‘physical’ distribution of software packages, which was a fairly arcane craft, was much more easily handled within the code. It became possible to snap together packages like Lego™ blocks for making surprisingly complex forms.
Successive popular programming languages did not fail to notice the benefits of CPAN in this regard. Perl’s ‘successor’ language, PHP, built essentially the same thing in PEAR. The next generation of popular languages, Ruby and Python, took packages a step further with RubyGems and pip/PyPI, respectively. In addition to the large repository of software Lego™ blocks, they took a page out of the operating systems world and connected the package repository to a package manager. Think of a package manager as a piece of software that automatically makes sure your Legos™ fit together, even when new blocks or new versions of existing blocks come out. And if you need some other Legos™ to make yours work, it’ll grab the right versions of them, too.
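The heart of that “make sure your Legos™ fit” job is a compatibility check: given a version a package asks for and a version that’s available, do they fit? This toy function (a sketch of my own, not any real package manager’s internals) implements the common ‘caret’ convention used by tools like Cargo and npm—same major version, and no older than the version requested:

```rust
// Toy compatibility check: does `available` satisfy a caret
// requirement like "^1.2.3"? Under that convention, 1.4.0 fits
// (same major version, newer), but 2.0.0 (breaking change) and
// 1.2.2 (too old) do not. Real resolvers also special-case 0.x
// versions and solve this across an entire dependency graph.
fn caret_compatible(required: (u64, u64, u64), available: (u64, u64, u64)) -> bool {
    let (req_major, req_minor, req_patch) = required;
    let (avail_major, avail_minor, avail_patch) = available;
    avail_major == req_major && (avail_minor, avail_patch) >= (req_minor, req_patch)
}

fn main() {
    assert!(caret_compatible((1, 2, 3), (1, 4, 0)));  // newer minor: fits
    assert!(!caret_compatible((1, 2, 3), (2, 0, 0))); // new major: breaking
    assert!(!caret_compatible((1, 2, 3), (1, 2, 2))); // older: too old
    println!("all checks passed");
}
```

The hard part isn’t this check—it’s running it across hundreds of packages at once, where each choice of version constrains every other one.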
It’s not possible to overstate the extent to which this is a complex and time-consuming problem, largely because, unlike real Legos™, software packages are constantly changing. Software packages may be running on different types of computers with different versions of different operating systems and other different packages installed, which themselves expect other software packages to be installed—sometimes with specific versions or combinations of versions required to function. Not only do they not always work together, but figuring out why they do not work together—or worse, why they work together in unexpected ways—is a fractal problem. And that problem is: dependency.
There’s a price technical people pay for the ease of all of the Lego™ block-snapping we perform to power the software underpinning our lives. It’s that we are—all of us and all of the software we write—utterly dependent upon one another. I hope the irony of a bunch of mostly introverted people building a massive system of interpersonal dependency as the basis for our current civilization is not lost on anyone. The size of this irony is only dwarfed by the amazing fact that all of these dependencies are—for the most part—managed through automation without obvious catastrophe for most users of the software, most of the time. We take this for granted and get frustrated when something takes too long to load.
Cargo and crates.io appear to have devised a better solution to dependency management. Here’s a technical but non-esoteric explanation, if you’re interested in further detail.
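To give a flavor of how low the friction is: pulling a package from crates.io into a Rust project is a one-line declaration in the project’s Cargo.toml manifest. A sketch (using chrono, a widely used date/time crate, as the example):

```toml
# Cargo.toml — declaring a dependency.
# "0.4" is a caret-style requirement: Cargo may pick any compatible
# 0.4.x release, and it records the exact version it chose in
# Cargo.lock so every rebuild uses the same set of blocks.
[dependencies]
chrono = "0.4"
```

The lockfile is a large part of the “better solution”: the fit between blocks is computed once, written down, and reproduced identically on every machine.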
Those who’ve worked with me before know I’m a True Believer in the Theory of Constraints. (Thanks MSM!) My particular implementation, a game called “Find the Constraint,” argues that the greatest leverage in improving a human/technology system comes from the ruthless identification of the single largest constraint in the system and running constant experiments targeted at its elimination. If the constraint on software development at the dawn of the Internet Age was the widespread availability of reusable software components, the solution to that constraint developed over the ensuing twenty years—package repositories and managers—swallowed its own tail and created the next constraint.
The meta-translation here is that the choices we make as individuals in a system with dependencies—both in software and the wider world—affect the collective outcomes whether we see them or not. Choices have ramifications, whether we try to manage them, externalize them, or choose to ignore them. And software dependencies in critical systems, like any other dependency, are dangerous when either overly centralized or overly distributed. You get hydraulic despotism in the case of the former. And in the case of the latter, you get, as William S. Burroughs put it, “inept, frightened pilots at the controls of a vast machine they cannot understand, calling in experts to tell them which buttons to push.”
This post appeared originally in Issue #1 of the Translation Layer newsletter.