Productivity and quality are two essential metrics in any software project, and vital to the life of IT companies. We must be able to answer, not with feelings or opinions but with concrete data and numbers, ideally with a single query to a database, questions such as: what was the productivity of this project? What is the quality of the application we developed, or of the new functionalities or enhancements we implemented? Are this quality and productivity, among others, better or worse than internal or external averages? Can you answer those questions with concrete numbers, and in a few minutes?
The answer should not be dozens of ad-hoc indicators (or KPIs, however motivating the word sounds), some of them different from project to project and impossible to compare externally, or sometimes even internally. The answer should be a single number applicable to all projects, in the same way that when we ask about effort deviations we expect a percentage (planned effort vs. real effort), regardless of the technology used on the project, the type of project, or its size.
If we compare the quality and productivity of different IT projects, we can begin to understand why some projects do better or worse than others. This apparently simple question (knowing the reasons) is essential for carrying out improvement actions, extremely important for bringing the estimation process to higher maturity levels, and for creating the best synergy between customers and providers, striking the best balance between functionality, quality and cost.
Productivity and quality go hand in hand. If, on one hand, we emphasise productivity without taking into account the quality of the product, we run the risk of creating products that are not as good as desired, because we made "speed" our priority. If, on the other hand, we focus only on quality, we can produce products that are not competitive, although this certainly depends on the criticality of the product. The main objective is to achieve the highest productivity and the highest quality, and to provide the best added value to customers: these objectives are synonymous with competitiveness. Companies usually consider these indicators extremely strategic and, at the same time, confidential.
Despite the initial perception that measuring a project's quality and productivity is simple, this is not always true. We need to size the software product created, or the size of the new functionalities. However, it is important not to equate the size of a product with the project's effort (this statement involves four completely different concepts: product, project, size and effort), because higher effort spent does not always mean a larger product, or vice versa. Imagine for a moment an identical software solution created by two different companies. One company may have used 10% less time than the other to develop the product, but the size of both products will be the same.
There are different methods to determine the size of a software application, of new functionalities, or of modifications to existing ones. Some of them are based on the software code created or modified: more code created or modified equals more size. This is a big mistake, and unfair. One such method is Lines of Code (LOC), apparently easy to count, until we realise that there is no standard criterion for "what" counts as a line of code. Other methods are based on statements, or even on the backfiring concept. These methods often reward those who have made the solution (from the code point of view) much more complex than required, without structuring the code properly, or who repeat the same instructions in different parts of the technical solution. Perhaps they used more lines than necessary through a lack of correct programming techniques, or simply to inflate the line count artificially (more lines in this case means more size, and more size means better apparent productivity and quality). This is why I used the word "unfair" a few lines above.
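To make the distortion concrete, here is a minimal, hypothetical Python sketch: two routines with identical behaviour whose line counts differ widely. A LOC-based size metric would credit the second version with more "size", which is exactly the unfairness described above.

```python
# Two functionally identical routines: same inputs, same outputs, very
# different LOC. A LOC-based metric rewards the bloated version.

def total_price_compact(prices):
    # Idiomatic: one line of logic.
    return sum(prices)

def total_price_bloated(prices):
    # Same behaviour, padded out to many lines.
    total = 0
    index = 0
    while index < len(prices):
        value = prices[index]
        total = total + value
        index = index + 1
    return total
```

A functional-size method would assign both routines the same size, since they deliver the same functionality to the user.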
On the other hand, we have Functional Size Measurement (FSM), defined by ISO/IEC standards and based on the functionalities that the product provides to the user, not on the physical code. Moving, as an example, from IT projects to the smartphone world: the first family of methods would measure a smartphone by the number of transistors it contains (the technical part); Functional Size would instead measure its functionalities, such as screen size and type, weight, camera specifications, memory size, battery life, operating system, sound characteristics, design, etc.
Effort combined with size gives us productivity. It is essential to compare the productivity of different types of projects: multi-site software development vs. single-location development, big projects vs. small ones, projects developed in one world region or country vs. those developed in other zones, etc. In addition, it is vital to compare this productivity with standard indicators, and to track productivity trends after implementing changes in development tools, methodology, etc. The objective, from the metrics point of view, is to detect improvement opportunities, and even to answer questions such as: by how much has productivity increased after implementing a new methodology or applying a set of changes? Are Agile projects more (or less) productive than Waterfall ones? The answers will clearly prove the ROI of changes and improvements, but this productivity must go hand in hand with quality, as previously mentioned.
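The size-and-effort combination can be sketched in a few lines of Python. The figures below are invented for illustration (functional size in function points, effort in person-hours); the point is that two projects delivering the same functional size can have different productivity because their effort differs.

```python
def productivity(size_fp: float, effort_hours: float) -> float:
    """Functional size delivered per unit of effort (FP per person-hour)."""
    return size_fp / effort_hours

# Two hypothetical projects delivering the same functional size:
p_a = productivity(size_fp=400, effort_hours=2000)  # 0.20 FP/hour
p_b = productivity(size_fp=400, effort_hours=2500)  # 0.16 FP/hour
```

Because both numerators are a functional size rather than a line count, the comparison holds across technologies and project types, which is what makes a single cross-project number possible.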
In most cases the quality indicator is associated with the standard concept of "defect density", that is, the number of defects relative to the size of the product. Obviously, if we do not have the product size (just a number), we cannot determine this quality indicator. Again, one thing is the product size; the other, completely different, is the project effort (synonymous with time).
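The defect-density definition above is a simple ratio; a minimal sketch, again with invented figures:

```python
def defect_density(defects: int, size_fp: float) -> float:
    """Standard defect density: defects per function point of product size."""
    return defects / size_fp

# Hypothetical project: 12 defects found in a 400 FP product.
dd = defect_density(defects=12, size_fp=400)  # 0.03 defects per FP
```

Note that the denominator is the product's size, not the project's effort: without a size number, the metric cannot be computed at all.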
Although defect density is a standard metric, taking only the raw number of defects and the size can produce incoherent results. It is necessary to refine the "defect" concept with adjustment factors. The first is the count itself: one person may see one defect (for example, a report with four incorrectly aligned columns) where, on other occasions, it is counted as four defects (one per column, especially if they were discovered at different times or by different people). Another is the effort needed to solve the defect: a defect fixed in 15 minutes is not the same as one that requires four days. Impact is the other important axis: some defects are aesthetic (the alignment of a column, as in the example above), while others have serious consequences (for example, a calculation error). We can include further adjustment factors, such as the project phase in which the defect was detected, the tests needed on other parts of the application or on other applications (regression testing), the unavailability of a system, and so on. All this produces the adjusted defects: information that is more coherent and comparable.
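One possible shape for such an adjustment is sketched below. The weight tables and the effort cap are invented for illustration; a real organisation would calibrate its own factors. The sketch weights each raw defect by impact, by the effort needed to fix it, and by the phase in which it was found, mirroring the three axes discussed above.

```python
# Illustrative adjustment tables -- hypothetical values, not a standard.
IMPACT_WEIGHT = {"aesthetic": 0.5, "functional": 1.0, "critical": 2.0}
PHASE_WEIGHT = {"unit_test": 0.8, "system_test": 1.0, "production": 1.5}

def adjusted_defect(impact: str, fix_hours: float, phase: str) -> float:
    """Weight one raw defect by impact, fix effort and detection phase."""
    # Cap the effort contribution so one monster fix cannot dominate.
    effort_weight = 1.0 + min(fix_hours, 32.0) / 32.0
    return IMPACT_WEIGHT[impact] * effort_weight * PHASE_WEIGHT[phase]

defects = [
    ("aesthetic", 0.25, "system_test"),  # misaligned column, 15 min to fix
    ("critical", 32.0, "production"),    # calculation error, four days to fix
]
total_adjusted = sum(adjusted_defect(*d) for d in defects)
```

Dividing this adjusted total (rather than the raw count) by the product's functional size yields a defect density that is more coherent and comparable across projects.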
Even if we sit in a modest productivity range, with the low production costs required to be competitive, the harmony between productivity (which sooner or later translates into cost) and quality is essential to provide the best value for money, with the objective of obtaining the best productivity and, at the same time, the best quality.
But for this, as mentioned, we must be able to answer with a single number questions as simple as: what was the productivity of this project? What is the quality of this new application, or of the newly implemented functionalities? By what percentage have productivity and/or quality increased after implementing a set of changes in the company? And one last question: can you answer those questions with numbers, and in a few minutes?