Software Dependencies

Most of my work involves significant amounts of typing and I’m an unrepentant nerd with virtuous laziness. So I use a piece of software called TextExpander to automate the production of text snippets: things like dates, long words or phrases, and templates for note-taking. This is a category of software I’ve kept an eye on for some time.

I used to use the more intimidating and powerful Keyboard Maestro, which is too much for me, most of the time. Recently, I found Espanso, an open source text expander written in the Rust programming language. As a supporter and strategic beneficiary of Free and Open Source software, I found it immediately attractive.

Checking out Espanso made me dig into—for the second time in the last six months—the Rust ecosystem. In this context, an ecosystem means the intersection of people, built software, open source projects, and approaches that make it possible to build working software in a given programming language. Rust began in 2006 as a personal project and blossomed with the sponsorship of the Mozilla Foundation in 2009.

Here is a technical explanation of why Rust has become so popular of late, but the rough translation is that it’s perceived to be more secure and more performant, and to have a much easier ecosystem for managing dependencies between a given project and the other software projects it employs.

Let’s unpack those for the non-technical. When we talk about a programming language being ‘secure,’ we mean that its basic building blocks, the way it processes instructions to produce an outcome, are less susceptible to being used in ways that allow unwanted intrusion into the resulting software. And when we talk about being ‘performant,’ we mean the extent to which the program is not wasteful in using the resources (chiefly, memory and storage) of the computer on which it runs.

But it’s the last point about managing dependencies which is fascinating, in an old software nerd way, and bears further translation.

The first language I used for commercial purposes, working what’s called a side hustle today while still in college, was dBase III. But I like to joke that the first computer language I used ‘in anger’ was Perl. Perl, as a programming language, was unique and pivotal in many ways but the most important, in my humble opinion, was the creation of CPAN, the Comprehensive Perl Archive Network.

CPAN allowed Perl developers to make use of a broad collection of ‘software packages.’ Think of these as pre-configured bits of code that perform a very specific and (hopefully) well-defined purpose. For example, there are packages for performing date calculations, or building a graph from a data set, or converting files from one format to another.

Like many critical advances in computing, packages were based on ideas about data abstraction created by an insufficiently recognized technology leader—who unsurprisingly happens to be an insufficiently recognized woman in technology—named Barbara Liskov. She came up with the idea of code modules upon which packages are based. (True, she did receive a Turing Award, but how many code bros are aware of Barbara’s contributions? I digress.)

So software packages existed long before CPAN, but what made the Perl + CPAN setup unique was the sheer number and variety of package contributions it enjoyed, and the relative ease with which programmers could plug these packages into their software. The ‘physical’ distribution of software packages, which was a fairly arcane craft, was much more easily handled within the code. It became possible to snap together packages like Lego™ blocks for making surprisingly complex forms.

Successive popular programming languages did not fail to notice the benefits of CPAN in this regard. Perl’s ‘successor’ language, PHP, built essentially the same thing in PEAR. The next generation of popular languages, Ruby and Python, took packages a step further with gems/bundler and pip/PyPI. In addition to the large repository of software Lego™ blocks, they took a page out of the operating systems world and connected the package repository to a package manager. Think of a package manager as a piece of software that automatically makes sure your Legos™ fit together, even when new blocks or new versions of existing blocks come out. And if you need some other Legos™ to make yours work, it’ll grab the right versions of them, too.

It’s not possible to overstate the extent to which this is a complex and time-consuming problem, largely because unlike real Legos™, software packages are constantly changing. Software packages may be running on different types of computers with different versions of different operating systems and other different packages installed, which themselves expect other software packages to be installed—sometimes with specific versions or combinations of versions required to function. Not only do they not always work together, but figuring out why they do not work together, or worse, why they work together in unexpected ways, is a fractal problem. And that problem is: dependency.
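To make the problem concrete, here is a toy sketch of what a package manager’s resolver has to do: find one set of versions that satisfies everyone’s constraints at once. The package names, versions, and the resolution strategy here are all invented for illustration—no real package manager works in so few lines—but the shape of the problem is the same.

```python
# A toy illustration of the dependency-resolution problem.
# All package names, versions, and version ranges are invented.

# Each package version declares the version ranges it needs from others:
# index = {package: {version: {dependency: (min_ok, max_ok)}}}
index = {
    "graphs":  {2: {"dates": (1, 2)}, 3: {"dates": (2, 3)}},
    "reports": {1: {"dates": (1, 1), "graphs": (2, 3)}},
    "dates":   {1: {}, 2: {}, 3: {}},
}

def satisfiable(root, version, chosen=None):
    """Depth-first search for one consistent set of versions.

    Returns a {package: version} dict if one exists, else None.
    (Real resolvers also backtrack over earlier choices; this sketch
    does not, which is part of why the real problem is so hard.)
    """
    chosen = dict(chosen or {})
    if root in chosen:
        return chosen if chosen[root] == version else None
    chosen[root] = version
    for dep, (lo, hi) in index[root][version].items():
        for candidate in sorted(index[dep], reverse=True):
            if lo <= candidate <= hi:
                result = satisfiable(dep, candidate, chosen)
                if result is not None:
                    chosen = result
                    break
        else:
            return None  # no version of `dep` fits: a dependency conflict
    return chosen

# Note that asking for graphs 3 silently forces a newer `dates` than
# reports 1 can tolerate; the resolver has to discover that and settle
# on graphs 2 instead.
print(satisfiable("reports", 1))
```

Even in this toy, choosing the newest version of everything fails: `reports` pins `dates` to version 1, which rules out the newest `graphs`. Multiply that interaction by thousands of packages and constant new releases, and the fractal nature of the problem comes into view.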

There’s a price technical people pay for the ease of all of the Lego™ block-snapping we perform to power the software underpinning our lives. It’s that we are—all of us and all of the software we write—utterly dependent upon one another. I hope the irony of a bunch of mostly introverted people building a massive system of interpersonal dependency as the basis for our current civilization is not lost on anyone. The size of this irony is only dwarfed by the amazing fact that all of these dependencies are—for the most part—managed through automation without obvious catastrophe for most users of the software, most of the time. We take this for granted and get frustrated when something takes too long to load.

Which brings us back to the Rust programming language. Rust is in the process, perhaps, of supplanting in technical esteem the pre-eminent programming language of the moment (for the moment): Javascript. One of the main reasons for this is that Javascript’s package management ecosystem is, to put it mildly, difficult to navigate. After a long hiatus from being directly involved in writing production code, I helped architect and build a set of medium-complex Javascript applications this year. I can tell you from painful, personal experience that the Lego™ Dependency Tax in the Land of Javascript is very, very high. More importantly, there is a certain anarchic unpredictability in the dependencies, as demonstrated by an infamous incident in which the deletion of a single package ‘broke’ the Internet back in 2016. Rust’s package management tools, cargo and crates.io, appear to have devised a better solution to dependency management. Here’s a technical but non-esoteric explanation, if you’re interested in further detail.

Those who’ve worked with me before know I’m a True Believer in the Theory of Constraints. (Thanks MSM!) My particular implementation, a game called “Find the Constraint,” argues that the greatest leverage in improving a human/technology system comes from the ruthless identification of the single largest constraint in the system and running constant experiments targeted at its elimination. If the constraint on software development in the dawn of the Internet Age was the widespread availability of reusable software components, the solution to that constraint developed over the ensuing twenty years—package repositories and managers—swallowed its own tail and created the next constraint.

The meta-translation here is that the choices we make as individuals in a system with dependencies—both in software and the wider world—affect the collective outcomes whether we see them or not. Choices have ramifications, whether we try to manage them, externalize them, or choose to ignore them. And software dependencies in critical systems, like any other dependency, are dangerous when either overly centralized or overly distributed. You get hydraulic despotism in the case of the former. And in the case of the latter, you get, as William S. Burroughs put it, “inept, frightened pilots at the controls of a vast machine they cannot understand, calling in experts to tell them which buttons to push.”


This post appeared originally in Issue #1 of the Translation Layer newsletter.

Digital Marketing Technology: A Tale of Two Approaches

For the last nine years, our team at Leapfrog has worked on an integrated technology platform that’s designed to solve a fairly unique problem in digital marketing: the acquisition of customers in high-consideration, complex sales in direct-response categories like Financial Services, Home Services, For-Profit Education and Telecommunications. Unlike physical goods, where a single leading online retailer has set the pattern across industries, digital marketing for services has some unique challenges, especially the fact that differentiating between products is more challenging for consumers, the ordering process can be more involved, and the number of legacy systems tends to be a bit higher.

That alone has made for an interesting nine years, with a modicum of technical challenge and fun problems to solve. And the way we’ve solved those problems has been pretty intriguing in its own right, with a set of agile approaches, and a team-development focus—development of the *team*, not just development of the software. But what’s made it much more interesting is the who of the software.

Unlike the majority of software being built today, which is built for end-users in a software-as-a-service mode, we built a form of what used to be called “expert systems.” And what’s even less common is that we built it—co-developed it, really—with a set of knowledgeable users with a very specific understanding of how the domain works, and a well-defined methodology that the software empowers and enables.

Now certainly there are some industries, like healthcare for example, where people are building software that their colleagues operate. But in the world of digital marketing technology, AdTech and MarTech, it’s a far less common thing. And part of the reason is because the standard approach of having the software eat the world has taken firm root. There are so many software-as-a-service providers in the AdTech/MarTech space that the visuals have become impossible to parse.

And there’s a ton of value in a lot of the SaaS that’s been built for marketers. I know; we’ve integrated with a lot of it. But, like most SaaS today, its genealogy is highly defined by the market for software companies. Not the market for software, but the market for software companies. Software companies grow, generally speaking, through one of two strategies: either (A) expanding vertically, by solving for the entirety of a set of tasks up and down the supply chain of a given business challenge; or (B) expanding horizontally, by moving into adjacent business domains to deliver an all-encompassing solution for a much broader business domain.

In the world of AdTech and MarTech, think of Optimizely as a very successful example of Strategy A, and the Adobe Marketing Cloud as a very successful example of Strategy B. Both of these approaches have one thing in common: when successful they result in enabling the progressive rounds of funding which (as eloquently described here by William Janeway) are critical to the way enterprise software works today. The former, through venture capital funding; the latter, through mergers and acquisitions of software companies. In both cases, it is a set of considerations largely driven by the market for companies and the investors in those markets, which determine outcomes.

We took a different approach. Leapfrog, which is today a marketing solutions provider, started as a very straightforward pay-for-performance marketing company. This was back in 1999, when most large brands did not have the internal awareness, let alone people, software, or data, to acquire customers online. Google was not yet the AdWords-powered behemoth it is today. There were no display advertising networks. Most companies’ Web sites did not actually try to acquire customers through the full transaction process.

The problem we faced was easy to conceptualize but difficult to solve. In order to get paid, we had to prove to a client—in a repeatable, provable fashion—that it was our specific ad, which was shown to this specific consumer, which resulted in this specific page view, this specific form submit, and the back-office system events it triggered, which all resulted in that same consumer becoming a customer and paying their first month’s bill on this specific date.

The challenge for us was to create a single view of a set of events that took place across disparate and disconnected systems, most of which we did not operate, using a variety of different formats, without shared identifiers, in which the lifecycle of data was inconsistent between sources. The approach we took was one of *instrumentation*, in which we pragmatically link together those things we can control at run-time, while focusing heavily on data management practices that tie together all of the data progressively as it arrives. Not dissimilar in concept from what folks call a lambda architecture today, but with the added fun of having the system consume its own outputs and incorporate them in its next iteration.
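As a rough illustration of that instrumentation idea, progressively tying events together as they arrive, here is a minimal sketch in Python. The field names, identifiers, and matching rule are all invented for the example; the real platform’s linking logic is far more involved, but the shape—events from systems that never talk to each other, stitched into one journey by whatever identifiers happen to overlap—is the same.

```python
# A sketch of progressive event linking: records from disparate systems
# arrive over time, and we stitch them into one customer journey using
# whatever identifiers happen to overlap. All field names are invented.

journeys = []  # each journey is a merged dict of everything we know so far

def ingest(event):
    """Attach an event to an existing journey if any identifier matches,
    otherwise start a new journey. Later events enrich earlier ones."""
    link_keys = ("click_id", "session_id", "order_ref", "phone")
    for journey in journeys:
        if any(event.get(k) is not None and event.get(k) == journey.get(k)
               for k in link_keys):
            journey.update({k: v for k, v in event.items() if v is not None})
            return journey
    journeys.append(dict(event))
    return journeys[-1]

# Events arrive from systems that never talk to each other, with no
# single shared identifier across all three.
ingest({"click_id": "c-17", "session_id": "s-9"})    # the ad click
ingest({"session_id": "s-9", "order_ref": "o-501"})  # the web order
ingest({"order_ref": "o-501", "phone": "555-0100",
        "first_bill_paid": True})                    # the billing system
```

After the three events above, there is a single journey connecting the ad click all the way to the first paid bill, even though no one system ever held the whole picture.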

Hindsight makes this look like a pretty decent decision, because most of what challenges marketers today is connecting all of the disparate data together to get insights and intelligence that they can act upon. And while you’d think all of the SaaS that they have purchased would have made it easier, the reverse is true. There’s a strong incentive for SaaS providers to focus on the one thing their SaaS does really well. And unlike enterprise software vendors of days past in which whole economies were built around systems integration, SaaS companies do not benefit economically from being really thoughtful about how their application’s data fits into *all* of the other software that its customers utilize. How could they?

We took a different approach at Leapfrog. We started with a strategic problem to solve, not a functional problem, and that’s made a ton of difference. Even though the latter is easier to build a classic SaaS-style set of features around, we played a longer-term game oriented around the underpinnings of instrumentation. The ‘special sauce’ of the technology platform our team built at Leapfrog is really the years and years of data and application integration successes and failures. It’s given us a brutally clear understanding of what it takes to do integrations properly, efficiently, and scalably, whether we’re dealing with robust, well-documented marketing cloud software, legacy billing systems written in COBOL, or some erratically-delivered spreadsheets from an offshore call center where an IT guy comes by once every few weeks to kick off a manual export. Building a well-defined set of features on top of that is the (relatively) easy part.

The other thing we did was to work with a set of knowledgeable users. These were marketers inside our company, people who dug into the details, who wanted as much data as they could possibly get to drive better decision-making for our clients. And while there have been discussions over the years that we should move towards making our platform more SaaS-like, there’s a fundamental difference that tends to bring those discussions to a close: we built this software platform to make a complex set of tasks easier.

It’s a great thing in some ways—though not *all* ways—that software is eating the world. But for every business domain in which a really well-designed SaaS solution is super-helpful, there are at least two in which the underlying set of activities do *not* easily lend themselves to the sort of automation that SaaS provides. I’d argue that digital marketing, especially customer acquisition marketing, is one of them.

The actual material, the actual objects of inquiry, the information that marketers need to understand and manipulate in order to make decisions is hard enough. It’s the complex behavior of humans who themselves display a high degree of internal complexity. SaaS in digital marketing has added an additional layer of work for those marketers to perform, as they take on the management of their marketing clouds and best-of-breed solutions. The software requires technical people to implement and operate, but those technical people are busy trying to make the software run successfully. They cannot also try to make the marketing operate successfully.

So the more humane thing that our platform does for clients is to bring together the tools that connect data from all of those systems, and to manage the operational tasks associated with them. We have marketing operations folks perform the complex set of tasks to make sure things are configured and functioning properly for our clients across the parts of their stack we manage and they manage. That way our clients can focus on using that information to make decisions, which we can then feed back into the integrated stacks to implement. Their SaaS continues to do all of the specialized things that it was explicitly designed to accomplish, and we can make sure all of the pieces fit together.

As I said to a new business prospect a few weeks back, this model and approach is different. But as we’ve found, it just works better. And having built software that works better for its users, not just its investors, as important as they are, leaves us, as an engineering and product development team, deeply satisfied.

Vulgo enim dicitur, iucundi acti labores. (As is commonly said: labors completed are pleasant.)

What Is Programmatic Conversion?

The term programmatic appears often in discussions of digital marketing and advertising. It’s most commonly deployed in compound form as programmatic media or programmatic buying, referring to the collection of information about consumer preferences, publisher inventory and pricing, advertiser goals, and the use of software programs to automate the dependent processes of data collection, purchasing, ad trafficking and performance management.

The big idea here is about efficiency, both in the sense that the software can perform calculations and make decisions at a scale and speed that eclipses human media buyers, and in the sense of efficient markets. As of the first half of 2015, the use of programmatic media, or programmatic buying, has become not just all the rage, but an expected component of the digital marketer’s toolbox.

Given all the focus on efficiency, it’s curious that few of the participants engaging in programmatic media ever speak about what happens after the ad is clicked. While the value for the publisher comes from the placement of the ad, the value for the advertiser—and hopefully, the consumer—emerges from the post-ad-click activity, or what we might broadly call conversion.

Conversion is the thing that the intent behind the advertisement ultimately wants to service. It’s a sign-up, an inquiry, a registration, a subscription, a purchase. It’s the action to which we are being called. Conversion is the payoff for both the buyer and the seller, and the thing upon which all the attention and intention being captured in programmatic media ultimately depends. It’s the real money.

So where is the automation—where is the programmatic—for that part?

Some would suggest that the automation in this arena comprises activities like A/B and multivariate testing, or targeting and personalization, or the use of tag management software to confirm that conversion events are actually taking place. These are all eminently useful, even critical activities for digital marketers to pursue. And they make use of software-based data collection and implementation at a scope exceeding human capabilities. But are they programmatic?

No.

They are point solutions, albeit extremely useful ones, attempting to solve for individual elements of a broader cyclical process. Their equivalent, in the programmatic media space, would be solutions that served only publishers, or only advertisers, or even just the function of trafficking ads. We have had those for some time, and while they helped to create all kinds of local efficiency, the huge (efficient) markets for programmatic ad spending developed their size only when the automation encompassed the full ecosystem.

So let’s define what programmatic should mean in the conversion landscape. Deep breath.

Programmatic Conversion is a system in which the automated collection of data across the domains of digital media, consumer preferences, devices and behavior is used to build models and rules for actively customizing the resulting exchanges of information between a consumer and a brand in a data-driven, customized and orchestrated set of interactions intended to produce a commercial outcome. These interactions may occur on a device, in an application, in a voice call, in a store, or when the first month’s bill arrives. However—and this is the crucial bit—true programmatic conversion only occurs when the connection between the process of data collection, modeling and rule-definition and its specific implementation across consumer interactions is continuous, cyclical, and self-referential. Simply put, the system feeds itself with its own results. This is what some have called a closed loop. 
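To show the shape of that closed loop—and nothing more—here is a deliberately tiny sketch in Python. The segment names are invented and the ‘model’ is trivial (an observed conversion rate per experience), but the defining property is present: each cycle’s outcomes are fed back in, so the next decision already reflects the last result.

```python
# A minimal sketch of a closed loop: serve an experience, observe the
# outcome, feed the result back in, and let the next decision reflect it.
# Segment names and the 'model' (a raw conversion rate) are illustrative.

from collections import defaultdict

outcomes = defaultdict(lambda: {"shown": 0, "converted": 0})

def choose_experience(candidates):
    """Pick the candidate with the best observed conversion rate so far."""
    def rate(c):
        o = outcomes[c]
        return o["converted"] / o["shown"] if o["shown"] else 0.5  # prior
    return max(candidates, key=rate)

def record(candidate, converted):
    """Feed the result back into the system, closing the loop."""
    outcomes[candidate]["shown"] += 1
    outcomes[candidate]["converted"] += int(converted)

# One turn of the cycle: observe two outcomes, and the very next
# decision is already shaped by them.
record("offer_a", converted=False)
record("offer_b", converted=True)
best = choose_experience(["offer_a", "offer_b"])
```

A real system would replace the raw rate with proper models, spread decisions across media, site, call center, and billing events, and handle exploration versus exploitation; the point of the sketch is only the self-referential wiring, not the decisioning.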

The technology to do this exists across a diverse set of vendors in the business-to-consumer software-as-a-service space, and has for quite some time in the business-to-business software space. And yet for the most part, the technology exists in fairly disjointed parts, even where a ‘marketing cloud’ or ‘suite’ is purported to exist. Building a Programmatic Conversion solution is above all else about true integration, the kind that can be messy, time-consuming and requires customization, which SaaS players have little economic incentive to undertake. Programmatic Conversion requires integrating pieces of software and non-software operational processes in a highly instrumented fashion, especially the grittier aspects of digital marketing operations. These are less glamorous things like call center switch data or the provisioning of internal reference and tracking numbers in an enterprise marketing logistics system, or the accuracy of pricing and product data across a highly localized market footprint when rendered on a multitude of devices and formats. Programmatic Conversion includes observing and acting on not only the shiny objects like display and social and mobile, but the unglamorous locations where leaky pipes—a broken consumer experience—all too often emerge. It is certainly not for the faint-hearted.

At the risk of spawning a whole new category, let me clarify by saying that what it takes to make Programmatic Conversion happen is not merely software, but a team of people adopting a thought process. It’s defined by curiosity, operational diligence, and above all else, a willingness to ask questions and let the data decide. I’ve had the pleasure of working with such a team over the last several years to build a business-to-consumer Programmatic Conversion system. While we continue to iterate and improve upon what we built, it’s already delivering meaningful returns for large brands.

What connection exists between Programmatic Media and Programmatic Conversion? They are essentially the same concept, applied across two related domains. Audience insights drive efficient media campaigns, which produce efficient conversion, which generates further audience insights and media optimization, which drives further consumer behavioral insights and conversion optimization, in a continual, virtuous cycle. This is not just a concept, but a reality. At my employer Leapfrog, we have delivered remarkable value to clients by connecting our programmatic conversion platform with a partner’s programmatic media platform. We’re going to keep improving it, because we think it’s a Big Deal.

Why?

The answer to that has a lot to do with the tectonic shift towards Digital TV. Declarations of impatience notwithstanding, the big money is already betting on the $84 billion opportunity associated with introducing audience-level—maybe even household-level—targeting to what will soon cease to be ‘traditional’ TV advertising. What has yet to be described is what happens when you attach not only programmatic techniques for media buying but programmatic techniques for conversion to a market of that size. The answer is: a fundamental shift in the way marketing occurs. Brands have the ability to target consumers across devices at an unprecedented scale—and not just in terms of reach and frequency. They can actually convert—close the deal, generate revenue, make money—right there in the midst of your multi-device Digital TV experience, whether linear, over-the-top, direct or otherwise. You see it on TV, you like it, you buy it. All in one clickstream. The sheer size of the data generated will fund an entirely new generation of marketing data science. The logic appears, as they say, inevitable.

The Startup Thing
(History Lesson, Part III)

This week Leapfrog Online held its first Hackathon. It was the culmination of several efforts in the organization around making innovation more tangible, and demonstrating the capabilities of unfettered time to build. We had 14 teams, with well over three-quarters of the company participating, and every single one of the teams’ projects was green-lighted. It was, in the words of our CEO, an “unqualified success.”

We tried something different, and it worked. And the things it produced will add meaningful amounts of business value for our teams, our company and our clients. You can read our blog post on the corporate site for that part. It’s meaningful, but there’s a deeper story.

I’m a veteran of the dot-com era, which for me started in the mid-90s. I had a friend at CERN, and a friend at the National Center for Supercomputing Applications. I was in a PhD program at the University of Minnesota[1] studying the political economy of technology. The friend at CERN would send both of us stuff about what Tim Berners-Lee was hacking on from time to time. Eventually, the friend at NCSA called me up in Minneapolis to try to get me to come back to Chicago to work for this company he was starting up. That friend was Alex Zoghlin and that company was Neoglyphics, the first real Web consulting firm in Chicago.[2] I declined, but ended up coming back to Chicago anyway, when my proto-dissertation about how the Internet was going to radically change our socio-economic order was viewed with amused skepticism.

That was the summer of 1995. I started doing freelance Web consulting[3] and was shortly recruited by Playboy Enterprises to begin what would become Playboy Online. Alongside some early experiments from the New York Times, the Chicago Tribune and others, we built one of the earliest online magazines, and definitely one of the first major subscription revenue-model Web sites.

When I was at Playboy, we shared something…special. And it wasn’t about the content, or the brand, or the vibe provided by Hef.[4] It was about the people. Most of us came from a variety of semi-related pre-Internet disciplines. Writers. Designers. Nerds. Musicians. Journalists. Theater people. There was only one real software engineer who had formal training. We were all the developer, all the sysadmin, all the UX person, and whatever else was required. We did our best, and it mostly worked.

And it was intriguing that it actually worked, because not only were there no clear guidelines about *how* to do something, but there were also no clear guidelines on even *what* you should do. Various generations of imported management made assertions. Various generations of salespeople from big and small software and consulting firms made more vague, PowerPoint-based assertions. But when it came down to cases, we just had to make it up. More precisely, we thought about it, tried something, and then iterated. There was less a sense of hierarchy, less a sense of rigidity, and more a spirit of adventure in the face of adversity and uncertainty. We made it up as we went along, and it was Good.[5]

This feeling will be familiar to anyone who has participated in a startup. In fact, every day is like this in a startup. It’s an adventure. There’s a lot of fear and anxiety. There’s a lot of exultation. There’s a lot of crushing loss. It’s not really for the faint-hearted, and if it’s not like that, then maybe you’re not doing it right, or it’s not doing it right for you.

In between the time I left Playboy and started at Leapfrog Online, I did two startups. One of these, Edventions, was started by someone who knew about executing on big ideas. Irv Shapiro’s idea was ahead of its time: an intranet/extranet system in a box, with hardware, software and networking for K-8 schools. It was a fantastic idea, but the educational system was not ready for it yet. What made it great was that the Thing I experienced at Playboy was there. And it wasn’t just Irv himself, who was the first CEO I worked with that embodied the Thing, but a meaningful portion of the whole company. They believed in what we were doing, which hadn’t been done before, and it was Good.

I went elsewhere after Edventions was sold, and the Thing was totally gone. Not even part of the scene. I learned to sell, inside and outside. And then I did another start-up. This one was mine. And the horrible part was this: I didn’t create the Thing. I was so worried about the product, the next client, and not going broke that I didn’t even consider it. Terrible.

But when I sold the intellectual property from that startup to Leapfrog Online, the fact that I had been missing the Thing—not even *considering* it as important—came back in an instant. Smart people trying to do something that hadn’t been done before. And real value was being created.

Where was the Thing?

When I arrived at Leapfrog, I began pining for it. The Thing would show up in fits and starts, sometimes in big ways, sometimes in small ways, but it’s much harder to maintain that feeling—which certainly existed during LFO’s startup phase—when you get to medium size. Mid-size companies have unique leadership, strategy and talent challenges, and they’re all about scale and growth.[6] Rapid change, but the kind of change that cannot be as nimble as that of a start-up, because the company has different things to prove to its founders and investors and clients. The pressures are different and the solutions to them are about maintaining focus while building out more specialized functions. Not the most conducive to that elusive startup Thing, or so I thought.

Until this week.

I have been trying to recapture that feeling, what it was like at Playboy back in the day, for a long, long time. And during the Hackathon, I felt it. It was glorious. But that was not the best part. The best part was at the *end* of the Hackathon, when I had a chance to sit down and talk with the people I had stayed up all night with, and who really, really got a lot out of it. I realized that THEY felt a Thing—not my Playboy thing—but their *own* version of that class of thing that they could look back on and say “I felt that and I *did* that.” I realized I wasn’t really trying re-create that feeling for myself. What I *was* trying to do was to create it for THEM. And that is really what I’ve been striving for all this time.

So. Pretty please, with sugar on top. Do a Hackathon at your company. Even if you’re not a startup.

  1. Home of Gopher.
  2. Started with his dad, Gil Zoghlin, whose then-ultra bleeding edge color laser printer we utilized for various nefarious purposes back in high school.
  3. You had to know sysadmin stuff. You had to know Perl, which was really the only viable language other than C to do stuff in, and you had to have some knowledge of visual design. But most importantly, you had to explain the crazy notion that the “World Wide Web” was critical to the future viability of a mid-size insurance company. Not easy.
  4. Though producing several live pay-per-view Webcasts from the Playboy mansion certainly delivered its share of amusing anecdotes.
  5. That it actually worked, to the benefit of the parent company, seemed almost but not quite secondary.
  6. A point worthy of explanation here, since the challenges of mid-size companies, relative to startups and big corporations, do not get discussed as much in the Internet world.

Universal Truths and Cycles

Kanban Board

I came of age in a professional environment that was excessively clunky. Information technology, having had its way with the financial industry, was making its way into the inner workings of all sorts of organizations. The people, however, were still very much accustomed to the thought processes and organizational methods that emerged from the ‘productivity’ thinking of the 1960s and 1970s. The emphasis was very much on strategic planning, rigidly defined bureaucratic specialization, tremendous amounts of documentation and detail, and a privileging of process above all else. It was, from an experiential perspective, slow.

And as is often the case with disruptive technology, it is commonly the set of people who introduce it that are first forced to actually accommodate the new ways of thinking and processing information that said technology necessitates.1 And so, several forward-thinking software people were forced (really) to create the Agile Manifesto. Building software the old way, let alone getting people to use it, just did not scale.

Our teams (and my current and many past employers, including myself) make use of Agile methods. I am a believer in these methods. I believe in them because they work better than most. Like any method, they must be adapted to the practitioner’s local context to be effective, and therein lies several cycles of adaptation and experimentation, much of which is usually painful and illuminating.

Lately, I’ve been thinking a lot about how people tend to focus on the post-adaptation and post-experimentation phase of applying methods for dealing with work problems, often to the detriment of the actual problem the method was trying to solve. A wise man pointed me to the concept of orthopraxy, defined in contrast to orthodoxy, as an emphasis on conduct and actual practice as opposed to ritual and accepted belief. I’m probably butchering (or appropriating) these terms when I say that orthopraxy is about doing what works and orthodoxy is about doing what’s commonly accepted as working.

Recently, one of our teams had a Come to Rabbi™ conversation2 about their delivery vs. their clients’ expectations. The outcome of this conversation was a recognition that requirements needed to be better defined. Some of the members of this team come from Agile backgrounds, but many do not. And so the dreaded Waterfall words were uttered: “functional specification.” And the classic texts get distributed: Spolsky on Painless Functional Specifications, Fried on No Functional Spec, and Atwood on Dysfunctional Specifications. The dance begins. What works? What’s commonly accepted as working?

Given a long enough perspective, you can begin to see the same dialectic cycle which produced Agile beginning to generate the need—or at least the desire—for new approaches. This is no startling insight, just common sense and observation. Things which are radical answers to the previous cycle’s orthodoxy inevitably become the new orthodoxy. And that’s where the danger arises. Organizational structures evolve in ways that are at odds with their initial intent. Teams get stuck in patterns of orthodoxy, where the noble intent of their methods is reduced to cargo-cultish repetitive behaviors. It’s the opposite of pragmatism.

I have seen this pattern before. It was present when I worked in the world of User Experience, where the orthodoxy of what were once radical design sensibilities came into conflict with the need for new methods presented by digital products. I’m seeing it now, with increased frequency, around words like “design” and “long-term roadmap” and phrases like “what are we really trying to solve for?” when mixed with Agile development.

So the interesting thing here, going back to all the annoying “Old Guy Having Seen It All Before” talk at the beginning, is that these things have a way of working themselves out. Call out the pattern quickly and come to grips with the cracks in the edifice of orthodoxy. The sooner you focus on what works as opposed to what’s commonly accepted as working, the sooner you come to a better synthesis and move on. Until the next time, when down will go back up, forevermore!3

  1. Back in Graduate School™ we used to call this an epistemic community. []
  2. The astute reader will recall, of course, that the Savior in question was in fact, a Rabbi. []
  3. Thanks, Uncle Bob. []

The Magic of Fake Italian

Comedic sublimation serves a pivotal role in human communication. It allows for socially unappetizing or emotionally-challenging messages to be sent and received without triggering some of our most common defense mechanisms. The use of self-parody, self-deprecation, and other forms of emotional sleight-of-hand tends to reduce tension, and allows for difficult but necessary things to be said.

That’s why our team uses Fake Italian.1

Fake Italian came to my attention by way of Mr. Steve Albini, Montana native and Chicago resident. You may know him from the band Shellac, for instance, or his very fine recording studio, Electrical Audio. And less well known, but immensely entertaining and useful, his food blog. His description of Fake Italian comes from the forthcoming and highly anticipated Hargrove House documentary, “Couldn’t You Wait: The Story of Silkworm.”

Fake Italian emerged, in the modern form in which we apply it, out of the experiences shared by Albini, his bandmates, and the members of Silkworm.2 Touring with Shellac and Silkworm was the Sicilian band Uzeda and its guitarist, Agostino Tilotta, whose spoken English carried within its measured tones the seeds of communicative nirvana.

The money quote:

Fake Italian is just so much easier on you. It allows you to say things in a way that doesn’t offend anybody if you want to say something that’s critical. It makes mundane things more entertaining to talk about, and it plays with everyone’s sense of the absurd.

It’s been my experience that many people struggle with the ability to provide criticism without incurring shame, embarrassment, or wrath. Vast amounts of time are spent constructing stylized edifices of avoidance, disingenuousness and bald-faced lying. It’s actually much more damaging to someone—especially if you happen to care about them—to neglect providing genuine feedback. Still, saying those kinds of things to people is not always easy. All of this can be avoided with the simple application of Fake Italian.

§§§

Example

Alice: Hi, here’s the model on our performance over the last month. We spent a lot of time putting the initial data together, but we didn’t get a chance to really work through it in much detail. I know you were looking for some specific answers like we talked about. The data’s there, but we couldn’t really answer any of those questions. Sorry.

Bob: Wait. Uh. Um. (sputtering; face reddening)

Carol: (in Fake Italian) Alice, this model. She reminds me of a sandwich, a sandwich full of delicious flavors waiting to satisfy a person with a deep hunger. This sandwich, she was made by a thoughtful deli worker, with love and attention, early in the morning, in a room filled with the smell of freshly baked bread, on a day rich with the promise of satisfaction from a sandwich well-made. Her lettuce was once green, the deep green of the fields at harvest time as the sun peaks in the sky. Her tomatoes, with a rosy hue that shines plumply as they tumble to the counter, full of sweet and subtle flavors. Her meats, carefully carved from the most delicately roasted cuts of tender ham, turkey and the rarest and most delicate beef. Her bread, golden like the afternoon sun, and fluffy like the clouds in the sky. But instead of being served at lunch, when desire and deliciousness could come together in the heart of a famished person, this sandwich, she sat in the deli counter all the day. Nobody paid her attention; nobody saw what she could become for a hungry person. And her lettuce, it wilted. Her tomatoes became soft and chalky. Her meats began to droop and become shiny. Her bread softened into paste. She is no longer a sandwich of love, but a sandwich of…absence.

Alice: (realizing) I…uh…ah…that is…

Bob: I’m feeling a bit peckish, myself.

Carol: In the next month, we will all eat sandwiches…together!

Alice: You got it, Carol. (smiling)

§§§

So there you have it. For most organizations I’ve been in, that conversation would have gone one of two ways: acrimonious/accusatory or aloof/absent. With Fake Italian in the mix, the right message was sent, without an excess of unnecessary difficulty.

Maestro Albini3 offers wonderful examples of how to incorporate flora and fauna into your Fake Italian in the video above. Our team has found these to be extremely effective, but they are certainly not the only choices available. Just like Fake Italian itself, your imagery has to come from your heart. So if you grew up on a farm, use agricultural constructions. If you study metallurgy, talk about ores and the Mohs scale. Work in software? Talk about text editors. Some other avenues of potential Fake Italian simile, analogy and metaphor include:

– popular music
– genre motion pictures
– commonly-viewed television programs
– fiction and literature
– bodily processes

One note on delivery. Unlike your more complicated accents for the vocally impression-challenged, even poorly executed Fake Italian is comprehensible. In fact, in certain cases an earnest but hilariously incompetent Fake Italian accent defuses the tension of direct criticism so much so that the actual hard part is over after the first few words.

Try it.

  1. And my friends. And my family, though truth be told, Fake Italian is not as effective on small children, or the less subtle. []
  2. What superlatives can begin to encompass the experience of listening to and loving (yes, loving, unashamedly) Silkworm? It’s the same problem as e.g. the Minutemen. Life-saving, perhaps? []
  3. This label applied with absolutely 100% sincerity. Perfect description. []

Mid-Sized Company Growth, Specialists, Loss of Context and the Television Pilot Conversation from Pulp Fiction

When working with specialists, context is everything.

When I ran a startup, I tended to gravitate towards and hire people with a multi-disciplinary focus. I liked hiring people that wrote code but also understood interaction design. I wanted people who liked managing clients but were also obsessed with data. I liked hiring visual designers who understood marketing strategy. Those sorts of generalists are critical in a start-up, because you don’t have the capital or time to develop a wide variety of specialist functions. You pick your single area of focus and make big investments there, but rely on generalists for the non-core functions. I also think that generalists are more comfortable in the dynamic environment a start-up provides.1

When your company gets to be mid-sized, things change. You’re not quite big enough to have full-on specialist functions throughout the organization, but you’re too big to have a team consisting largely of utility players. People who were once all-in-one coders, testers, and project managers start to move up the food chain and get more focused. And in certain areas of the business where the possibility of eking out some competitive advantage exists, specialists begin to accumulate. This is entirely natural, but it has consequences.

As groups of specialists—more accurately, specialist functions—emerge, there is a concomitant loss of context. Groups of people begin to form who just do data analysis, or just do functional testing, or just write RFPs. If you hire correctly, they begin to do these things really well. But because there’s specialization, and because there’s less organic messiness in the way decisions get made, and just because there are more people, there’s a natural predilection towards increasing the efficiency of the individual specialized function. That team of search engine optimization gods look at each other and say, earnestly, “Let’s do this right.” And with the best of intentions, the organization as a whole starts to optimize for its constituent parts instead of the complete entity.

I want to reiterate that this is a natural outcome of the growth process in companies. I’ve seen it happen again and again. It’s nobody’s fault. It is, however, everyone’s responsibility to mitigate the problem. And I’m not a big believer in large, overarching organizational methodologies as a panacea. Be simple and tactical, and develop a distributed solution to the problem. So while you can expect some additional writing on the dangers of over-specialization in future posts, I want to offer here one simple, tactical suggestion that anyone, at any level of your organization, can use to defeat the Context-Loss-from-Specialization problem.

When you’re working with someone who has a very specific type of knowledge and experience, you have to provide them with an easy way to unpack their thinking from the (otherwise useful) blinders imposed by their specialization. The best way to do this is a conversation, and the name for the special kind of conversation I’ve found to be the most effective in this situation is a Television Pilot Conversation.

This is a reference to a scene in Quentin Tarantino’s film, Pulp Fiction. Check it out:

The Television Pilot Conversation has two purposes: one relatively straightforward, the other an ulterior motive. The straightforward purpose is to clearly and succinctly describe the context for the person lacking context. The more sneaky and sub-textual purpose is to remind that person that there are actual other humans out there operating with completely different sets of considerations, familiar with entire bodies of knowledge and experience, possessing skills every bit as important or impressive as theirs, that should be taken into account. Now.

Television Pilot Conversations may take place on a one-to-one basis, or in large groups. The more deadpan the delivery (Samuel L. Jackson being the ideal to strive for here), the better. The more use of humor or irony, the better. The more familiar the person on the receiving end is with this particular scene in Pulp Fiction, the better. Here’s a convenient short URL for your distribution pleasure:

http://bit.ly/tvpilot  

The beauty of the Television Pilot Conversation is that anyone in the organization can deliver it to anyone else. It knows no bureaucratic strictures. It transcends seniority, hierarchy, and age. It requires no internal branding campaign, no corporate communications, no brown bag session, no long, boring meeting. It’s context in its most concentrated form, with a format guaranteed to generate results.

Enjoy.

  1. Frankly, I just plain like being around people who mix their chocolate and peanut-butter. Amongst even the most highly specialized people, an intense second interest—cf. correlation between superb software engineers that are also talented musicians—is often a good indicator of robust capabilities elsewhere. []

Ask UXMatters

The good folks over at UXMatters have been kind enough to solicit my opinion on User Experience topics for their “Ask UXMatters” regular feature.

This week’s episode made worlds collide: my former life in the world of UX and digital product design and development and my current one in digital direct marketing. Actually, they overlap every day, but it’s subtle in the operational role I play at Leapfrog Online. Mostly.

This week’s piece is about form dropouts and the use of testing. There was another one not too long ago about building a centralized, enterprise-wide UX group in your organization, a topic near and dear to my heart.

And for those interested in ancient history, there’s a long thought piece I did for UXMatters back in 2006 about designing for bridge user experiences.

A bridge experience is one in which the user experience spans multiple communications channels, document genres, or media formats for a specific, tactical purpose. These sorts of bridge experiences, within the context of less abrasive digital marketing, constitute a significant portion of my work and the work of my team today.

Excelerate Labs and ScaleWell: New Chicago Incubators

Chicago has always been less than stellar at self-promotion when it comes to its entrepreneurial community. It’s not that there’s a lack of capital, nor a lack of talent. There’s a ton of opportunity, and there’s a ton of great work going on here. It’s just not nearly as networked as it should be, and it’s often under-reported. That seems to be changing with the introduction of two new seed capital incubators, Excelerate Labs and ScaleWell.

See coverage of Excelerate Labs in TechCrunch, and coverage of both Excelerate and ScaleWell in Crain’s Chicago Business.

Excelerate Labs is an entrepreneurial incubator program, similar to TechStars, which offers seed capital (in the low five figures) and mentorship to new startups. It’s well-connected to some of Chicago’s leading entrepreneurs and venture capitalists. ScaleWell, the brainchild of Sean Corbett (HaveMyShift.com) and Andy Angelos (GetTalkedAbout.com), is a much more organic and grassroots effort, offering $1,000 and office space. Both occupy a much-needed space in the Chicago entrepreneurial community, alongside more established organizations like the Chicagoland Entrepreneurial Center.

I’m honored to add my support to Excelerate Labs as a mentor for the Summer 2010 session.

Word from Troy Henikoff is that the participants will be announced very soon, and I’m extremely excited to meet and begin working with them.

I’ve also reached out to Sean Corbett of ScaleWell to see about helping there as well.

It’s good to give back.