I have had three careers: one as a consultant, one as a CIO, and one as a product architect. In each of them I was challenged to help my colleagues (and at PwC, my clients) sort through and set priorities for the technology products that I oversaw through the product lifecycle: develop, maintain, retire, replace.
This might seem like a pretty straightforward process, but trust me, it wasn't and isn't. There are always limited resources, there is always a long list of deserving (and some not-so-deserving) candidates, and there are many forces (and agendas) at play in the selection of items that move to the top of the priority list. And while I was never able to nail my approach down to a strictly deterministic process, I do have some thoughts that might be helpful.
As with most prioritization efforts, there are several characteristics that need to be considered when managing the product/feature development queue. They include:
- Business Criticality—how important is the item to the business? This is a measure of whether the item is essential to business success. Certain features must be in place for a business process to operate properly. If you are offering an accounting system (which is business critical itself), you would not be very successful if you didn't offer accounts receivable and accounts payable modules. A vendor putting together an accounting module would therefore put those two features above mobility, which is (at least in my view) a nice-to-have.
- Urgency—does the item require immediate action? People sometimes mistake urgency for business criticality or stakeholder impact. Be clear: you can have an item that is urgent (if we don't implement it now, we might as well not implement it at all) but that really doesn't have much impact on the business. Too often, urgent (but not business-critical) items rise to the top of the list and delay the truly important items (in terms of contributing to business success) that aren't urgent.
- Strategic Alignment and Scope—this is a measure of how well the item aligns with where the business/product wants to go. It always amazes me how often a proposed feature might be useful but doesn't move the product in the direction the business wants to go. It may, in fact, move the product away from its long-term objective, and one has to seriously question whether the detour is worth the cost.
- Stakeholder Impact—this one is obvious. How does it help the client? The tricky part is making sure that you define the client properly. Many years ago, the Booth School of Business (then the GSB) was trying to figure out how to improve post-graduation placement of its students. It wasn't until the school recognized that the companies that came to recruit were key stakeholders that the metric improved—yes, it seems obvious now. That insight allowed the school to focus on features (one of which was a first-rate recruiting facility with associated support services) that catered to this important stakeholder group. The implementation of those features contributed significantly to improving the number and quality of offers Booth students received.
- Value—this one measures the amount of benefit per unit cost. It can be measured both in terms of value to the customer (do they feel they are getting good value for money?) and value to the business (is it getting a good ROI on the investment?). I would note that the business only does well if the customer recognizes and appreciates the value they are getting. And I can't tell you how many times I have seen good ideas (ones that seem like they would be darn profitable) get developed without anyone ever asking whether the customer would be willing to pay enough for the product to make it worth offering.
- Current Ineffectiveness—do the alternatives to the solution you are proposing work well enough? If they do, or the customers perceive that they do, customers might not feel that switching provides enough "value" to justify it. This is often the case with innovative solutions, where the customer might not believe they have an issue that needs addressing. This was the case with CPM and construction payment. It was difficult for our potential customers to understand how badly the payment process was managed until we fixed the problem and they could look back at where they had been. In other words, our potential customers didn't think they had a problem with payments. It therefore required carefully crafted marketing and sales communication, and even then it took us longer than we expected to gain acceptance; but once we did, our business really took off.
- Ability to Execute—this refers to the target customers' ability to implement the product/solution that you are proposing. At Textura, we were pretty sure that our target customers were not going to invest in on-site computers to host our application. That reluctance to invest in hardware was an important reason (though not the only one) that we implemented CPM in the cloud. So a lack of ability to execute doesn't always put the kibosh on a product or feature; it sometimes just requires clever packaging to make the product work (remember, when we originally conceived Textura, the cloud was not the thing it is today).
- Degree of Cooperation—very often you require that others participate as a prerequisite to the successful operation of your product. Textura's CPM required sophisticated interfaces to the customer's accounting system, and without some level of cooperation from the accounting system vendors, it would have taken longer and we would have been less successful than we were.
It is possible to treat this model as objective and purely quantitative. I have used it (mostly) successfully in both quantitative implementations (assigning scores) and more subjective ones (forcing discussion of candidates using these criteria). Both worked. Selecting the right one requires that you understand the orientation of your audience. IT folks tend to like the unambiguous nature of a quantitative approach (assign a numerical rating to each candidate), while product planners are generally more accepting of a softer, intuitive ranking that isn't burdened by what might be characterized as an arbitrary numerical rating. You get the point.
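For readers who lean toward the quantitative approach, it can be as simple as a weighted sum of ratings across the criteria above. The weights, the 1-5 rating scale, and the two candidate features below are purely illustrative assumptions, not a prescription; the point is only to sketch the mechanics.

```python
# Illustrative weighted-scoring sketch for ranking candidate items.
# Weights and ratings are assumptions for demonstration; tune them
# to reflect your own business, and note that they must sum to 1.0.

CRITERIA_WEIGHTS = {
    "business_criticality": 0.25,
    "urgency": 0.10,
    "strategic_alignment": 0.20,
    "stakeholder_impact": 0.15,
    "value": 0.15,
    "current_ineffectiveness": 0.05,
    "ability_to_execute": 0.05,
    "degree_of_cooperation": 0.05,
}

def score(ratings):
    """Weighted sum of 1-5 ratings, one rating per criterion."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Hypothetical candidates, echoing the accounting-system example above.
candidates = {
    "mobile access": {
        "business_criticality": 2, "urgency": 4,
        "strategic_alignment": 3, "stakeholder_impact": 3,
        "value": 2, "current_ineffectiveness": 2,
        "ability_to_execute": 4, "degree_of_cooperation": 5,
    },
    "accounts receivable": {
        "business_criticality": 5, "urgency": 3,
        "strategic_alignment": 5, "stakeholder_impact": 5,
        "value": 4, "current_ineffectiveness": 4,
        "ability_to_execute": 3, "degree_of_cooperation": 3,
    },
}

ranked = sorted(candidates, key=lambda name: score(candidates[name]),
                reverse=True)
print(ranked)  # the business-critical module outranks the nice-to-have
```

The subjective implementation skips the arithmetic entirely and simply uses the criteria names as a discussion agenda; either way, the criteria list is doing the real work.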
I note that the times I failed to use this model successfully, it was because of flawed execution, not because of flaws in the framework.
That said, it is important to consider that there are at least three distinct levels at which this prioritization should be done:
- Feature Level—features address a reasonably small (possibly atomic) piece of functionality. A feature in Word, for instance, is autosave. This feature allows you to set Word to periodically save the document you are working on, which prevents you from losing more than a few minutes' worth of work to a machine crash.
- Function Level—a function is a grouping of features that accomplish a task. In Word, spell-check is a good example of a function: it is a grouping of actions that assist the user by identifying potentially misspelled words in one of several ways. The feature (which can be turned on or off) that highlights words as you type is one of the actions grouped together to achieve a purpose—in the case of spell-check, a perfectly spelled document.
- Application Level—an application is a grouping of functions that perform a process. The application Word assists the user in writing documents. To do this, it pulls together a set of features and functions (capturing keystrokes, spell-checking, saving the file, etc.) with the goal of producing a written document that promotes communication and the exchange of ideas.
The distinction between these levels matters because items from all three often end up in the same queue, and their varying resource requirements can lead to the smaller (and sometimes less critical) ones getting done while the larger (and more critical) ones languish as they wait for sufficient resources. This is a project management issue rather than a prioritization issue, but one well worth thinking through as you set up your product development and software engineering operations.
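One simple mitigation, sketched below under assumed numbers, is to reserve a share of each planning period's capacity for each level, so that small feature work cannot crowd out a large application-level item indefinitely. The capacity figure, the 30/30/40 split, and the backlog items are all hypothetical.

```python
# Illustrative sketch: reserve planning capacity per level so large
# application-level items are not starved by a stream of small features.
# CAPACITY, the split, and the backlog items are assumed for demonstration.

CAPACITY = 100  # points per planning period (assumption)
RESERVED_SHARE = {"feature": 0.3, "function": 0.3, "application": 0.4}

# (name, level, estimated cost in points), already in priority order
backlog = [
    ("autosave tweak", "feature", 5),
    ("spell-check rewrite", "function", 30),
    ("new reporting app", "application", 40),
    ("toolbar polish", "feature", 8),
    ("yet another widget", "feature", 20),
]

def plan(backlog):
    """Greedy fill: each item draws only from its own level's budget."""
    budget = {lvl: CAPACITY * share for lvl, share in RESERVED_SHARE.items()}
    planned = []
    for name, level, cost in backlog:
        if cost <= budget[level]:
            budget[level] -= cost
            planned.append(name)
    return planned

print(plan(backlog))
# "new reporting app" fits because 40 points were held back for its level;
# "yet another widget" waits, since the feature budget is exhausted.
```

Without the reservation, a first-fit pass over the same backlog could spend the whole period on small items and leave the 40-point application waiting again.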
Finally, my experience is that:
- The decisions that are made based on analysis (quantitative or intuitive) using these criteria are a lot better than those made without them.
- As I mentioned earlier, urgency ("this client won't sign without this feature") often dominates a discussion over value for money ("how many clients are actually going to use this feature?"); and
- A structured analysis provides the less articulate members of the team support in making their arguments clear.
So, the use of this framework improves business outcomes and that is the point, isn’t it?