Lessons product management can learn from science

From peer review to inductive reasoning, there is a lot that product managers can learn from the scientific method.


Recently I watched Don’t Look Up.

A giant comet is about to hit Earth. Leonardo DiCaprio and Jennifer Lawrence play scientists, unsuccessfully trying to convince the US President of the gravity (bad joke intended) of the situation. Mark Rylance plays the head of the privately owned BASH corporation, keen to exploit the problem for his own profit. He's clearly modelled on industry figures such as Elon Musk, Steve Jobs, and Jeff Bezos.

Throughout the film, Jen and Leo criticise the BASH corporation's solution design processes. Despite coming from humble Michigan State University, their work follows the classic scientific method and, as a result, produces a better solution. In a way, you could say the premise of the film is the benefit of the scientific method over the methods that business and industry typically use to build solutions.

As a product person who often works on building solutions myself, I found the contrast between science and industry interesting. Personally, I would have liked the film to go further and give a clear explanation of what the scientific method involves. After all, none of us want to end up like the BASH corporation. So what do we need to learn?


Leo and Jen talk about peer review a lot in Don't Look Up. But what does this actually mean? Wikipedia defines peer review as "the evaluation of work by one or more people with similar competencies as the producers of the work".

Internally

Leo and Jen often dismiss the BASH corporation as lacking peer reviews, but in my experience peer reviews do exist and can sometimes work very constructively inside companies. How often have you asked your colleagues to share feedback on your ideas, or taken part in a roadmapping session with various departments?

For product people, this kind of "peer review" is a hugely important part of the process. It's what stops us from getting stuck in our own heads and creating something with unintended consequences, like the head of the BASH corporation. As UX Collective writer Nikki Anderson says:

Without feedback, we would continue doing the same thing over and over again, hoping for a better result.

However, peer review systems aren't without their problems. As social scientist and scientific whistleblower Martin writes, peers may be biased and likely to support the ideas most similar to their own. This reduces innovation, particularly when investment is at stake:

Since you don’t know who the referees are going to be, it is best to assume that they are middle-of-the-road.

In their paper "Nepotism and sexism in peer review", Wennerås and Wold also describe some of the biases that can come up in the system.

How can we get the best of both worlds? As I see it, the peer review process is helpful for deciding how official company priorities fit on the roadmap, and for interpreting the results of the work. Not only does this mean that the most resource-intensive ideas get a second opinion, it also ensures stakeholder buy-in. At the same time, it's good to give product teams an occasional wildcard for innovation that they don't need to justify while it's still in its early stages.

Ideally, if you have control over the department budget, this means holding a small sum back purely for testing and experimentation. If you don't, then on rare occasions (make them counted occasions, and choose them wisely) it can mean going straight to the top of the hierarchy with your most innovative proof of concept, bypassing all other stakeholders.

In the outside world

Beyond internal reviews, something we don't often think about is peer review of our products in the wider community. For me, this is really where the scientific community does better work. We often get stuck in our own organisational silos and fail to see the big picture. When we do talk about what we're doing, the message is often just "look how great we are", with no real substance.

Talking about results was something I learned first-hand when working for an edtech NGO. NGOs see themselves as solving sector-wide problems as well as ones specific to their own users. There is an obligation to share and comment on your work once you've done it, and research standards are high, in many cases as rigorous as those of academic institutions. By working together in this way, NGOs are generally better at finding synergies and growing the sector as a whole. There is a lot that private companies can learn from this.

Obviously, you should only share what is appropriate and non-confidential. We could all be doing more to reach out to others in the product community, though. If you manage a team, this kind of activity can also be a great professional development opportunity. (By the way, fancy "peer-reviewing" this article? Let me know your feedback in the comments below 😉.)

Sometimes, in both science and product, we need to make a jump into the unknown. When this happens, we often use inductive reasoning.

The publication Designorate gives an excellent description of how this might work. In summary, it means drawing general conclusions from the specific cases we've seen. In product, an example would be saying that because we've observed several times that green CTA buttons convert best, we should always make CTA buttons green. In science, Ellen gives some good examples of how this worked during the development of COVID-19 treatments.


So far, so good. However, it's important to remember that inductive reasoning is not the only approach we can take. In science, Karl Popper demonstrated this in the 20th century. He gave the example of swans: we think all swans are white simply because we've never seen a black one. However, if we find just one black swan, this invalidates our conclusion completely. Popper's point was that, rather than proving truth, science is about proving falsehood (and yes, this is closely linked to the popular expression "black swan").

There are a few things we can take from this. The first is the importance of having a hypothesis that can be disproved before you start testing. I know this can be hard with certain stakeholders (who may be adamant that their idea is a guaranteed win), but try to get them to tell you what their "success criteria" and their "failure criteria" would be before you begin.

For any belief you have, ask what it would take for you to change your mind (Mike Sturm)
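To make this concrete, here is a minimal sketch of what pre-registering success and failure criteria for an A/B test could look like, using the green CTA button as the running example. The numbers, thresholds, and helper function are all made up for illustration; the point is simply that both the "win" condition and the "falsified" condition are written down before anyone looks at the results.

```python
# Hypothetical sketch: pre-registered success/failure criteria for an A/B test.
# All figures and thresholds are invented for illustration.
import math


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """One-sided z-test: is variant B's conversion rate higher than A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # upper tail
    return p_b - p_a, p_value


# Agreed with stakeholders BEFORE the test starts (hypothetical thresholds):
SUCCESS = {"min_uplift": 0.005, "max_p_value": 0.05}  # ship the green button
FAILURE = {"max_uplift": 0.0}                          # hypothesis falsified, drop it

# A = current button colour, B = green button (made-up results)
uplift, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=545, n_b=10_000)

if uplift >= SUCCESS["min_uplift"] and p <= SUCCESS["max_p_value"]:
    print(f"Success criteria met (uplift={uplift:.4f}, p={p:.3f}): ship it")
elif uplift <= FAILURE["max_uplift"]:
    print(f"Failure criteria met (uplift={uplift:.4f}): the hypothesis is falsified")
else:
    print(f"Inconclusive (uplift={uplift:.4f}, p={p:.3f}): keep testing or rethink")
```

None of this needs to be sophisticated statistics. The valuable part is that the failure condition exists at all, which is exactly what Popper's idea of falsifiability asks for.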

The other thing we can learn is that product development as a field is never “finished”. Nothing is certain, and there are no absolute truths.

What does this mean in practice? To continue with the example above, it's commonly accepted that green is a good colour for CTA buttons. But this is only true given what we already know. Perhaps we only think green is the best colour because we tested it against red and blue, and never against chartreuse. Or perhaps green is only the best colour for certain types of product or scenario.

The message here is that it's important to build on what's been learned before, but to view it with a critical mind and not forget to think outside the box. And feel free to challenge anyone who comes to you with an idea that is "definitely proven" to work.
