Sunday, July 3, 2011

More thoughts on development metrics

Each phase of software development has deliverables, and invariably all of the deliverables undergo reviews. These may take the form of a walkthrough, in which a group runs through the document together, or a one-on-one review done by an experienced team member or a technical expert. All of our development and testing projects have reviews built into their schedules. The review comments are logged and acted upon, so a review efficiency metric can be created quite easily. These metrics go by different names in different organizations: sometimes phase yield, sometimes phase review efficiency. But the gist of all of them is mostly the same, i.e. to see how good the work product was and how efficient the review was. The logs feed into metrics such as review efficiency or PCE - Phase Containment Efficiency!
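To make the idea concrete, here is a minimal sketch of how a phase containment metric is commonly computed: defects caught by the review in the same phase where they were introduced, divided by all defects introduced in that phase (caught plus escaped). The function name and the numbers are illustrative, not a standard from any particular process framework.

```python
def phase_containment(found_in_phase, escaped_to_later_phases):
    """PCE sketch: fraction of a phase's defects caught within that phase.

    found_in_phase          -- defects logged during the phase's own review
    escaped_to_later_phases -- defects from this phase found only later
    """
    total = found_in_phase + escaped_to_later_phases
    if total == 0:
        return None  # no defects attributed to this phase; metric undefined
    return found_in_phase / total

# Illustrative numbers: design review logs 18 defects, and 6 more
# design defects surface in later phases (coding, testing).
print(phase_containment(18, 6))  # 0.75
```

A higher value suggests the review caught most of the phase's defects before they leaked downstream; what counts as a logged defect, of course, is exactly the judgment call discussed below.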

I have seen some project teams meticulously maintaining review logs, and other teams which do not maintain these logs at all and manage to get away with it. For the teams which do maintain the logs, does each and every minor comment need to be recorded? Sometimes reviewers ask questions simply to learn more about the design. Does that qualify as a review comment? Moreover, when there are to-and-fro comments between the author and reviewer on the same topic, do we track each of them as a separate review remark or club them under a single one? These are the practical difficulties which come up while implementing a review log system. But a more complex scenario, which I have seen in my experience, arises when there are multiple reviewers (who belong to the client's department): some are very thorough and give a sea of comments, while others are superficial and give very few. If we use the number of review comments as a leading metric for gauging the quality of the work product, it may not portray the correct picture!
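One hedged way to soften the multiple-reviewer distortion is to scale each reviewer's comment count against that reviewer's own historical average, so a terse reviewer's few comments are not drowned out by a prolific colleague's many. This is only a sketch of the normalization idea; the reviewer names and numbers are made up for illustration, and it is no substitute for judging the substance of the comments.

```python
from statistics import mean

def normalized_comment_counts(current_counts, history):
    """Scale each reviewer's comment count by that reviewer's
    historical average comments-per-review, so counts from thorough
    and superficial reviewers become roughly comparable."""
    result = {}
    for reviewer, count in current_counts.items():
        baseline = mean(history[reviewer])  # reviewer's typical volume
        result[reviewer] = count / baseline if baseline else 0.0
    return result

# Illustrative data: one thorough reviewer, one superficial one.
history = {"thorough": [30, 28, 32], "superficial": [3, 2, 4]}
current = {"thorough": 27, "superficial": 6}
print(normalized_comment_counts(current, history))
# superficial's 6 comments are twice his usual volume, a stronger
# signal than thorough's 27, which is slightly below her baseline
```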
