The fifth product risk
In Product lines I characterised products as linear arrays—as lines—and product management as the practice of manipulating lines and making them legible to others. This line manipulation can be described thus: an original, continuous line gets discretised into parts; these parts are operated on; and the modified parts are recompiled into a modified, continuous line.
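To make that loop a little more concrete, here's a minimal sketch of discretise, operate, recompile in Python. The function names, the part size and the use of a plain numeric sequence as a stand-in for a "line" are illustrative assumptions of mine, not anything prescribed by the original framing.

```python
# A toy rendering of the discretise -> operate -> recompile loop.
# The "line" here is just a sampled numeric sequence; in practice it stands
# in for a product's continuous trajectory.

def discretise(line, part_size):
    """Split a continuous line into discrete parts."""
    return [line[i:i + part_size] for i in range(0, len(line), part_size)]

def operate(parts, fn):
    """Apply some operation to each part independently."""
    return [[fn(x) for x in part] for part in parts]

def recompile(parts):
    """Stitch the modified parts back into a single, continuous line."""
    return [x for part in parts for x in part]

if __name__ == "__main__":
    original = list(range(10))                     # the original, continuous line
    parts = discretise(original, part_size=3)      # break it into parts
    modified = operate(parts, fn=lambda x: x * 2)  # operate on each part
    new_line = recompile(modified)                 # recompile the modified line
    print(new_line)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```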
Inherent in this A to B—and perhaps even the choice of which particular B to select as one’s target—are four risks, all of which can be addressed by the posing of four questions:
Is it valuable?
Is it usable?
Is it feasible?
Is it viable?
These questions come from a 2017 Marty Cagan post, The Four Big Risks, where he describes:
“value risk (whether customers will buy it or users will choose to use it)
usability risk (whether users can figure out how to use it)
feasibility risk (whether our engineers can build what we need with the time, skills and technology we have)
business viability risk (whether this solution also works for the various aspects of our business)”
Cagan then goes on to describe how these risks are allocated amongst a product trio:
“The Product Manager is responsible for the value and viability risks, and overall accountable for the product’s outcomes.
The Product Designer is responsible for the usability risk, and overall accountable for the product’s experience – every interaction our users and customers have with our product.
The Product Lead Engineer is responsible for the feasibility risk, and overall accountable for the product’s delivery.”
This is all sensible; neat, tidy and effective. But a question is missing. A risk has been overlooked:
Is it ethical?
Value, usability, feasibility, viability: at first glance, these all seem way more pertinent to product management (and overall organisational focus) than pesky ethics. Especially in a more resource-constrained environment, where the allocation of time, energy and money to nebulous initiatives is being reined in. But on the second, and third, and nth glance, ethics starts to accumulate a lot of weight.
To demonstrate this, engage in a little Gedankenexperiment with me.
Below is a list of arbitrary products (or structures and concepts that get commonly rolled into products). For each item, estimate the extent to which the team(s) responsible considered ethicality in comparison to any or all of the four other risks or questions. Here’s the list:
The infamous New York Times unsubscribe process
Google’s cost-per-click advertising
Any webpages, apps or products using dark patterns
Nir Eyal’s hack-y hooked model
1-click e-commerce checkouts
Intercom’s adaptive Elasticsearch infra usage
Dual use AI-driven drug discovery
Open sourced LLM model weights
Buttondown, an email newsletter software
Signal, secure private messaging
WordPress’s 100 year plans
Netflix’s model-output-to-application interface
Any of the Obsidian plugins
When I engage in some crude imaginary ethnography and think about the teams behind these products, I can’t help but conclude that ethicality matters. Even if it’s not the top risk addressed or question asked, it’s not at the bottom of the pile.
Obviously, things like Signal and products directly related to existential risk from AIs and LLMs have ethicality front and centre. It’s fundamentally entangled with their integrity as a product. But what about the innocuous Buttondown? What about the neat engineering from the Netflix and Intercom teams? What about the supposedly constrained and self-contained Obsidian plugins?
The salience of ethicality in all these things is somewhere between non-zero and absolutely-the-only-thing. And it’s probably closer to the latter. Ethicality can matter a little or it can matter a lot (in positive and negative directions) but never will it not matter.
Here’s one example.
A lot of people are averse to working for actually-evil or slightly-evil organisations. Think tobacco, or defence contractors and weapon production, or betting and gambling. Ethicality matters enough to automatically disqualify participation. The gate doesn’t get passed.
Here’s another example. Some people actively seek out and deliberately elect to work on benevolent, mission-driven products. And for those who don’t exhibit much explicit ethico-phobia/philia, there still has to be an implicit and tolerable level of ethicality.
I know, this is all starting to whiff of juvenile philosophy. Back to product risks.
There are products which can be conceived, sustained and even made successful while leaving one of the four OG risks unaddressed:
Sans value? See the failed HD-DVD standard push
Sans usability? See the US health insurance apparatus
Sans feasibility? See self-driving cars
Sans viability? See the struggles of shared e-bike initiatives in metropolises
True, each of these risks is interconnected with the others. Yet, thesis: sans ethicality is a bigger no-no than any of the other absences.
So, how does one grapple with ethicality in a product context? Good question, and one I don’t have a clear answer for (yet). But what I do suspect is that the ethical risk becomes easier to wrangle when its scale is made explicit.
In many domains, there’s a common distinction between macro and micro. Macroeconomics and microeconomics, for example. But in some of the life science disciplines, there’s an intermediate scale: the meso scale. Soil science has mesofauna. Computational biology has the cellular mesoscale:
“Study of the cellular mesoscale, the scale level bridging nanometer-sized molecular structure with micron-sized cellular structure, is opening a new window on the processes of life. Most of the large-scale processes of cells are only comprehensible when seen through the lens of their cellular context. For example, the crowded nature of cells markedly changes the structure, function, and interaction of the component molecules, and transport between cellular compartments provides essential regulatory capabilities. Biomedical research also relies on detailed understanding of the cellular mesoscale, since many disease states, including atherosclerosis and Alzheimer’s disease, are a consequence of disruption of cellular processes by aberrant mesoscale structure and interaction.”
Ethicality in a product context is not micro-scale. It’s more than a single, independent decision or action. It’s also not macro-scale; the practice of product management rarely touches states and imagined communities. It’s a meso-scale concern, part of the mandate of that fundamental unit: the team, the pack.
So, a good starting point for folding ethicality into product ops? Treat it with the same care and consideration as one would value, usability, feasibility and viability. Make it, officially and explicitly, the fifth product risk. Because only once it’s acknowledged as a concern can methods for mitigating it begin to blossom and diffuse.