How about a set of metrics to determine how well a product management program is performing?
Product managers (PMs) are the individuals entrusted by their companies to make the most of the intersection of customers, internal capabilities, partners, and industry trends. The output of PM work is the product requirements document (PRD): a document detailing the product features for the next version of a product or technology, along with supporting evidence (a business case or rationale). In the typical organizational chart, the PM function sits between engineering and marketing. In small companies, the PM role is still a function of executive management. The PM function gets formalized when a growing company is in a position to release later versions of its initial products, or when new product lines are about to be released.
To start off, a large software company and a startup ought to use different sets of metrics. Metrics drive behavior and performance, so choose wisely and discuss with (executive) management! One gets what one measures and rewards, so make sure that's what you want!
In a mature business, product managers get measured by:
- Number of recorded customer visits and/or calls;
- PRD references to customer visits.
Typically, 10 visits per quarter are a good result, 7 average, and 4 below average. This is also dependent on what kind of prep goes into a visit.
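The quarterly thresholds above translate naturally into a stoplight rating. A minimal sketch, assuming the 10/7/4 bands as stated (the function name and rating labels are illustrative, not a prescribed scheme):

```python
def rate_visits(visits_per_quarter: int) -> str:
    """Map a PM's recorded customer visits/calls per quarter to a stoplight rating."""
    if visits_per_quarter >= 10:
        return "green"   # good result
    if visits_per_quarter >= 7:
        return "yellow"  # average
    return "red"         # below average (4 or fewer is clearly red)

# rate_visits(10) → "green"; rate_visits(7) → "yellow"; rate_visits(4) → "red"
```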
This is followed by a percentage metric for the product requirements document (PRD): how many features have named customer input. This can be tricky to formulate into a sensible measurement, but basically it checks that product managers go through customer visit reports and mark the source of each requirement in the PRD. It takes more management attention, but it does confirm that PMs actually get requirements from the market, not from an engineering sugar high.
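The percentage itself is simple arithmetic once requirements carry a source tag. A hedged sketch; the feature records and the `customer_source` field are assumptions for illustration, not a prescribed PRD schema:

```python
def customer_input_pct(features: list) -> float:
    """Share of PRD features traceable to a named customer visit or report."""
    if not features:
        return 0.0
    sourced = sum(1 for f in features if f.get("customer_source"))
    return 100.0 * sourced / len(features)

prd = [
    {"name": "bulk export", "customer_source": "Acme visit, Q1 report"},
    {"name": "dark mode", "customer_source": None},  # engineering-driven
]
# customer_input_pct(prd) → 50.0
```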
If customer input is a dimension of the metric, make sure PMs get customer input on the specific value (i.e., what customers will pay for it) of the feature(s) tagged as customer-driven in the PRD.
The above metrics can be used for bonus payout, but could also work as a stoplight metric for individual managers and products.
Especially in cost-sensitive environments, such as a small enterprise, make sure that things like "market share" or "sales" are NOT considered. Too often organizations focus on exactly the wrong metric. Profits are what count.
Also, at a startup, you will likely want to expand your customer base, so think about a metric there also. If any operational responsibility lies with the PM (some do, some don't), think about an appropriate metric there depending on the company's production model.
Yes, a startup will have different needs from a large company, not the least of which is strict financial discipline: managing your business for positive free cash flow. There is another metric, slightly different from profits…
Customer acquisition and retention are key. Also worth measuring is plan-versus-result conformance (i.e., did the team do what it said it would, when it said it would).
In certain contexts, it may also be worth measuring the "downstream effects" of the PM's requirements on engineering, such as the amount of rework due to spec misunderstandings (which indicates whether the specs contain sufficient detail) and the percentage of features that meet the PM's acceptance criteria on the first try (which indicates the quality of communication between PM and engineering).
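Both downstream metrics are easy to compute if each feature is tagged with its engineering effort, rework effort, and a first-pass acceptance flag. A sketch under those assumptions (the record fields are hypothetical, not a standard tracking schema):

```python
def rework_share(features: list) -> float:
    """Fraction of total engineering hours spent on rework due to spec misunderstandings."""
    total = sum(f["hours"] for f in features)
    rework = sum(f["rework_hours"] for f in features)
    return rework / total if total else 0.0

def first_pass_rate(features: list) -> float:
    """Percentage of features meeting the PM's acceptance criteria on the first try."""
    if not features:
        return 0.0
    passed = sum(1 for f in features if f["accepted_first_try"])
    return 100.0 * passed / len(features)
```

A rising rework share suggests the PRD needs more detail; a falling first-pass rate suggests the PM-engineering conversation, not just the document, needs attention.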
The availability of data will emerge as a difference between a start-up and a larger company; large companies routinely store quite a bit of information that can be useful on scorecards. It may be more of a task to find meaningful, measurable data in a start-up.
This is reminiscent (for some anyway) of measuring the "cost of quality" of a product, service, or operation. It was always really the inverse (cost of poor quality), but it gave support for investing in doing things better. The real problem was that only big companies could afford the time and money to gather and analyze the data on an ongoing basis. Perhaps the better take-away is to add the *concept* to the 5000 other things going on in the back of the PM's head. Then, if and when a situation arises that strongly, if anecdotally, reflects a large cost of quality, you have a rationale and an approach to analyze and document what could be institutionalized to make things consistently better.
A group can look in the right area for an important measure of performance, but still, in practice, generate "sterile metrics." A high visit *count* involving golf without shoptalk, superficial conversation without probing, talking only to users and not buyers (or vice versa), and other such scenarios still amounts to poor performance. Again, it's *understanding* the craft, not just launching on the buzzwords.
Further, while profit is definitely a critical (maybe the most critical) measure, in the real world it is difficult to avoid drowning in a QUARTERLY profit mentality. Excepting start-ups for the moment, growing and large companies too often rationalize (or simply drop) bug counts in their release criteria to get a product "out the door," either to book sales or to make the cut for an analyst report. It is far too easy for that to become a pattern, and the company's true brand.
And, in several industries/markets that evolve very quickly (like telecom), it is really advantageous to have two-tiered engineering: one group reads IEEE publications and scratches its head over requirements two years out, while the other group wraps and ships the current product. With a good dose of the quarterly-profit disease, it is, again, easy to jeopardize even a major advantage in the NEXT market cycle just to make the next release date, which was probably set arbitrarily anyway. However, even this (the firm's two-tiered effort) can be taken to dangerous extremes. It is still a matter of "balance."
Customer interaction and gathering market-input in product definition are critical to product success, and therefore should be part of measuring a PM's performance.
One cannot hold people accountable for that over which they have no control. In most companies, the PM does not manage the entire product team. For example, the product manager typically cannot hire and fire:
- Sales people selling the product;
- Marcom folks marketing the product;
- Engineers developing the product.
Yet the profitability of the product depends on the performance of the sales, marcom, and engineering personnel over which the product manager has no authority.
So no single person can be held accountable for product profitability unless:
- It is a one-product company, in which case the CEO has ultimate responsibility, or
- The PM has complete authority over the entire product team.