The following is a quick stab at how I work through product development that I drafted in response to some workplace questions. I always have plans to refine, clarify, and polish these types of things, but I never get to it. Publishing now with the hope that making it live incentivizes me to come back and edit.
None of this is novel. It’s a general synthesis of product management philosophies from many of the greats. It’s also not all obvious, especially if you’re new to product and haven’t gotten your butt kicked a few times by faulty assumptions and market ignorance.
Products and features aren’t projects; they’re implementations of solutions that try to resolve specific problems. While problems are usually unique, the possible solutions are infinite. The best product teams implement the most elegant, creative, and useful solutions – the ones that offer the highest return on investment the fastest.
The word “problem” carries a negative connotation. In product, I tend to see it as neutral. There are negative problems – purchase data isn’t captured, revenue is churning, etc. But there can also be more positive problems – one specific user type is outperforming others, people from one country are more engaged than others, etc. – where the work is to learn from and capitalize on those anomalies.
I love finding problems. One of the simplest ways to do this is with reliable quantitative usage data. Getting acquainted with the data and developing a sense for how people use your product will lead you to dig into usage by user type, geography, and so on. The fun happens when you notice that… something is different. Drop-offs in funnels, disproportionate conversion rates, and spikes all signal that something may be happening at particular points in the user flow. I often think of the quantitative data as telling us the what.
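A toy sketch of what that quantitative “what” can look like in practice: compare one funnel step’s conversion rate across user segments and flag the outliers worth digging into. The segment names and numbers here are entirely invented for illustration.

```python
# Hypothetical per-segment funnel counts: (started publish flow, completed publish).
# These numbers are made up to illustrate an anomaly, not real product data.
segments = {
    "en": (12_000, 9_600),  # 80% conversion
    "es": (8_000, 2_400),   # 30% conversion - something is different here
    "de": (5_000, 3_900),   # 78% conversion
}

def conversion_rates(data):
    """Compute completed/started conversion per segment."""
    return {seg: done / started for seg, (started, done) in data.items()}

def flag_outliers(rates, threshold=0.5):
    """Flag segments converting below `threshold` of the best segment's rate."""
    best = max(rates.values())
    return [seg for seg, rate in rates.items() if rate < best * threshold]

rates = conversion_rates(segments)
print(rates)                 # per-segment conversion rates
print(flag_outliers(rates))  # -> ['es']
```

This only surfaces the anomaly – the “es” segment converting far below its peers. It says nothing about why, which is where the qualitative triangulation below comes in.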
Knowing there’s atypical (good or bad) behavior in an application is one thing, but with only the “what”, it’s quite hard to understand how to either fix problems or extrapolate beneficial user patterns. So, the next step is triangulating the quantitative data with qualitative data to see if you can understand why the behavior is atypical. I think of this triangulation process as understanding the why.
At the end of problem identification, you should come away with something like this: “[what] is or isn’t happening because [why].” Here are some simple, imaginary problems.
- [Native Spanish speakers aren’t publishing] because [the translation on the publish button is not correct]
- [A majority of new users start creating two sites in their first session] because [the onboarding tooltips we display are confusing and seem to suggest that a user should “add new site”].
- [Polldaddy users keep saying they want the app to be “like Google docs”] because [they aren’t able to intuit how to reorder questions in polls].
What’s the ultimate solution to the problem?
Once you know there is a problem (what) and have triangulated it with qualitative data (why), you should have a decent sense of where to start with solutions.
In a dreamworld with no technical or temporal constraints, how would we solve this problem? Then implement it. HAHA THAT IS A FUNNY JOKE.
Rarely is the dreamworld solution the one that’s practical to tackle out of the gate, but it’s important that you start with it. That way, you can distill the end-to-end solution into simpler terms and get a workable minimum viable solution out the door that still helps users reach their goals as fast as possible. The following illustration is one of my favorite examples. Imagine that the problem is that people want to get from point A to point B faster because they are wasting time and energy getting there by foot, sometimes not reaching point B at all. Let’s say the dreamworld solution is a car (personal politics aside). You could start working on a beautiful car today – then in three years, you’d have something beautiful to deliver! Or, you could start by solving the problem in a way that gets something to users ASAP, like a scooter – not a perfect solution, but one they can use, one you can monetize and learn from – while you also put resources and learnings back into the ultimate solution.
With the latter process, you may actually learn that users never wanted or needed a four-door sedan (what you originally thought), and instead need something else, like a convertible coupe. You just spared yourself a lot of sunk cost while building a loyal, perhaps paying, user base – all while your competitors spent three years building flying boat bicycles that no one wanted.
Are all solutions that work workable solutions?
Not a trick question – the answer is no. Just because you can build a feature or a product that addresses the problem doesn’t mean it’s the best way forward. Let’s take one of the imaginary scenarios from above as an example:
[Native Spanish speakers aren’t publishing] because [the translation on the publish button is not correct]
We could address this in many ways:
- by removing Spanish as a supported language so we don’t have to worry about faulty translations
- by hiring someone full time to look at the Spanish strings throughout the app
- by adding arrows that contextually point to the publish button that have supplemental support text
- by finding a better translation for the publish button
- or by rethinking the publish flow entirely to abstract away the need for language-specific strings, future-proofing the interface and making it more language-agnostic.
These would all speak to the problem in some capacity, but some solutions are better than others; some deliver more value to the user and the business faster than others.
So, before designing cool interfaces and projects, know what problem you’re trying to fix. Validate with real data your hunch that there is a problem – in doing so, you may find that there isn’t actually a problem, or that the problem arises for entirely different reasons than you originally thought.
Note: nothing about the aforementioned flow explicitly speaks to how to handle user-reported problems and requests. When aggregated, I tend to view these as data points in the quantitative sense. User reports that reveal trends can help you understand that there’s something going on (what), but they don’t tell us why. Sometimes listening to users wholesale leads us to solutions that solve the proposed problems, but not in the best way – for users, for the product, or for the business.