Feature Success Metrics

Every tech SaaS company has Pendo or Amplitude or a similar analytics tool. The question is whether we are using these tools effectively. Spoiler alert: I think not.

The key to using any analytics tool actually starts in the minds of the designers and product managers. (Namaste!) The most important question, and the one that never gets asked, is “What does success or failure actually look like?”

Realistically, the answer varies greatly between B2C, B2B, mobile, websites, and games. For the sake of focus, I am going to address only business applications in a browser. Specifically, new features you develop.

The depressing reality is that most features are measured bluntly. Counting clicks is the most common method. Alternatively, people sometimes define user journeys and measure drop-offs, but most B2B SaaS products do not lend themselves to that kind of funnel analysis.

Imagine measuring Excel or PowerPoint as a product. How would a journey truly measure the success of that software? I think the problem is that designers and product managers do not sit down and actually talk about success.

The sad truth is that most measures of success, including almost all PM OKRs, are focused simply on “shipping the feature”. Did you ship on time or not? That is not the success of the feature. At best, it is the success of the project manager.

I think product groups should meet at the beginning of a project and talk about what success truly looks like. What real-world outcomes would happen if we did a good job on this feature? Let’s think through some scenarios and the metrics they might move.

  • A usability enhancement might actually decrease clicks. Success would be people completing the flow at the same rate, but with fewer clicks (see the sketch after this list).
  • A value-add feature might increase the popularity of a flow, leading to more of that outcome. For example, an enhancement to dashboards might lead to more dashboards being created. Does it also lead to more viewing of dashboards?
  • Flexibility in data retrieval might lead to a decrease in exporting.
  • Micro-animations might make demos flow more smoothly and lead to increased stage 3 opportunities.
  • Easter eggs might lead to more social posts that increase awareness and loyalty among the user base.
  • In-product hints and guides might lead to fewer support calls.
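
To make that first scenario concrete, here is a minimal instrumentation sketch. It assumes Amplitude’s browser SDK (`@amplitude/analytics-browser`), since Amplitude came up earlier, but any tool with custom events works the same way. The flow and event names are hypothetical, not a real product’s schema.

```typescript
import * as amplitude from '@amplitude/analytics-browser';

amplitude.init('YOUR_API_KEY');

let clicksInFlow = 0;

// Call when the user enters the flow we care about (hypothetical example flow).
export function startReportFlow(): void {
  clicksInFlow = 0;
  amplitude.track('Report Flow Started');
}

// Call on each meaningful click inside the flow.
export function recordFlowClick(): void {
  clicksInFlow += 1;
}

// Call on successful completion. For a usability enhancement, success is the
// completion rate holding steady while the clicks property trends down.
export function completeReportFlow(): void {
  amplitude.track('Report Flow Completed', { clicks: clicksInFlow });
}
```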

This list goes on and on. Measuring clicks is not success. You have to engage the team and really think about what you hope to accomplish with your efforts. Then you should figure out how to measure that outcome. And finally, you should actually report on how you did.
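
As a sketch of that measurement step, suppose you export the events from the instrumentation above and compare the period before the release to the period after. The event shape below is hypothetical; the point is that you report completion rate and clicks per completion, not raw clicks.

```typescript
// Hypothetical shape of events exported from your analytics tool.
interface FlowEvent {
  type: 'started' | 'completed';
  clicks?: number;        // present on 'completed' events
  afterRelease: boolean;  // did the event occur after the feature shipped?
}

interface FlowStats {
  completionRate: number;      // completed / started
  clicksPerCompletion: number; // average clicks on completed flows
}

function summarize(events: FlowEvent[]): FlowStats {
  const started = events.filter(e => e.type === 'started').length;
  const completed = events.filter(e => e.type === 'completed');
  const totalClicks = completed.reduce((sum, e) => sum + (e.clicks ?? 0), 0);
  return {
    completionRate: started > 0 ? completed.length / started : 0,
    clicksPerCompletion: completed.length > 0 ? totalClicks / completed.length : 0,
  };
}

// A usability win: completionRate holds steady (or rises) while
// clicksPerCompletion drops from `before` to `after`.
function compare(events: FlowEvent[]): { before: FlowStats; after: FlowStats } {
  return {
    before: summarize(events.filter(e => !e.afterRelease)),
    after: summarize(events.filter(e => e.afterRelease)),
  };
}
```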

This last step is crucial. Often, it feels like a wheel where each feature is only important while we are building it, and then we never talk about it again. The wheel just turns, and it’s on to the next feature. If we don’t analyze the results, we will never learn and improve.

This is why many products are terrible and why many teams are depressed. We want to believe in what we are doing. This is a good way to start the healing. Define success more thoughtfully and then measure it.

Whatya think?