Functional Size: The Importance of Having Good Strategic Information
Turning data into information and using that information to manage and improve businesses, projects, processes, and products is a fascinating process. It’s often said that you can only manage what you can measure and, as a general rule, whatever is managed is improved. Famous quotes on the topic include Peter Drucker’s “If you can’t measure it, you can’t improve it” and this one from James Harrington: “If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it”.
Simply announcing that something will be measured often spurs those involved to make improvements.
In information technology (IT), one can talk about financial metrics, productivity metrics, quality metrics, lead-time metrics, and reference metrics, among others. But it’s also interesting to understand and manage the different drivers that have a positive or negative effect on such metrics. Periodically compiling and registering these metrics is a valuable ongoing task, as is the use of strategic information to make something better, i.e., for continuous improvement.
I’d like to linger on the magic word “drivers” for a moment. Hypothetically, one might say that your productivity in a specific technology under certain circumstances is, say, 1.25. You develop products at this productivity rate with an extremely low defect rate. Your competitors have lower productivity, and standard market repositories such as ISBSG suggest that your productivity rate is good. But it is vital to remember that this value of 1.25 may be affected by other factors such as project size. For example, productivity might be different if you were talking about a very small project or, conversely, a two-year project with a 400-person team.
It’s important to examine different variables
It’s important to examine different variables such as project size, team size, time restraints, the critical nature of the application/project, development at different work centres, and product complexity in order to fine-tune your analysis. All of these factors can affect productivity.
You should be able to use precise, recorded metrics to answer questions like: What is our estimated productivity for a 15,000-hour project using a specific technology in a given environment? What would change if it were a 600-hour project? Which drivers affect our productivity, quality, and lead time, and how do they affect projects? Some of these drivers are internal, whereas others are external and depend on other parties such as the client. You have to understand and manage both, particularly the drivers under your control. You may have an answer to these questions because you’re thinking something along the lines of, “I’m an experienced project manager and I’ve got everything under control”. But the key words here are “precise, recorded metrics”. Do you have precise, recorded metrics that reflect reality?
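To make that last question concrete, here is a minimal sketch, in Python, of the kind of answer a repository of recorded metrics makes possible. All field names and figures below are invented for illustration; a real repository (your own history, or an external one such as ISBSG) would hold far more projects and far more drivers.

```python
# Illustrative sketch only: project records, field names and figures are invented.
# Each record holds a completed project's functional size (function points),
# actual effort (hours), technology and one example driver (team size).
projects = [
    {"fp": 1200, "hours": 14500, "tech": "Java", "team_size": 25},
    {"fp": 110,  "hours": 650,   "tech": "Java", "team_size": 4},
    {"fp": 95,   "hours": 540,   "tech": "Java", "team_size": 3},
    {"fp": 1350, "hours": 16200, "tech": "Java", "team_size": 30},
]

def productivity(p):
    """Function points delivered per hour of effort."""
    return p["fp"] / p["hours"]

def estimate_productivity(tech, planned_hours, tolerance=0.5):
    """Average productivity of past projects in the same technology and a
    similar effort band (here, within +/- 50% of the planned effort)."""
    similar = [
        p for p in projects
        if p["tech"] == tech
        and abs(p["hours"] - planned_hours) <= tolerance * planned_hours
    ]
    return sum(productivity(p) for p in similar) / len(similar) if similar else None

print(estimate_productivity("Java", 15000))  # large-project band
print(estimate_productivity("Java", 600))    # small-project band
```

The point is the principle rather than the code: the answer for a 15,000-hour project and for a 600-hour project comes from recorded size, effort, and drivers in comparable projects, not from recollection.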
Each client operates in their own specific world
Seasoned IT departments, i.e. the ones that work with well-defined procedures and more or less the same technologies, environments and products, can answer these questions involving tier-3 metrics fairly easily. But it would be much more difficult for IT companies with dozens or hundreds of clients, each with their own procedures, documentation requirements, standards, and development environments, to answer these questions precisely. Each client operates in their own specific world, even though the technology being used is the same.
This is where function points come in. It’s not possible to have a complete set of IT metrics without knowing the functional size of an IT project or product. This functional size is expressed in function points.
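For readers less familiar with the technique, the sketch below outlines how an unadjusted function point count is built up from the five IFPUG component types, each weighted by its assessed complexity. The inventory is invented; the weights are the standard IFPUG complexity weights, quoted here purely for illustration.

```python
# Standard IFPUG complexity weights per component type (low / average / high).
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},   # external inputs
    "EO":  {"low": 4, "average": 5,  "high": 7},   # external outputs
    "EQ":  {"low": 3, "average": 4,  "high": 6},   # external inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "average": 7,  "high": 10},  # external interface files
}

# Invented inventory for a small hypothetical application:
# (component type, assessed complexity, number of components)
inventory = [
    ("EI", "low", 6), ("EI", "average", 3),
    ("EO", "average", 4), ("EQ", "low", 5),
    ("ILF", "average", 2), ("EIF", "low", 1),
]

unadjusted_fp = sum(WEIGHTS[kind][complexity] * count
                    for kind, complexity, count in inventory)
print(unadjusted_fp)  # 90 unadjusted function points for this inventory
```

A real count applies detailed rules to classify each component’s complexity; what matters here is that the size is measured from what the software delivers, not from how long it took to build.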
It can sometimes be hard to grasp that a 4,000-hour IT project can be smaller, in terms of the delivered end product, than a 1,000-hour project. It’s important not to confuse the effort a project requires with the size of the product it delivers: the two do not always move together, and larger size does not automatically mean greater effort, nor smaller size less effort.
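A purely hypothetical calculation makes the point: with different productivity rates, the longer project can easily be the smaller one.

```python
# Hypothetical figures, chosen only to illustrate that effort and size can diverge.
project_a_hours, project_a_fp = 4000, 320   # complex environment, low productivity
project_b_hours, project_b_fp = 1000, 400   # simple environment, high productivity

print(project_a_fp / project_a_hours)  # 0.08 function points per hour
print(project_b_fp / project_b_hours)  # 0.40 function points per hour
```

Project A absorbs four times the effort yet delivers fewer function points than project B; only by measuring size separately from effort does that become visible.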
Here’s an interesting example: if you produce oranges, you may record the number of kilos of oranges you produce and the time/effort needed to pick 1,000 kilos of oranges. You may also know that, under certain circumstances, picking 1,000 kilos of oranges can be faster or slower; factors such as the dampness of the soil and the size of the orange trees may make it possible to pick more or fewer oranges in a certain period of time. In this case, you are talking about size (kilos of oranges), effort (how much time, or how many people multiplied by time), productivity (e.g. kilos picked per day), and the drivers that affect productivity, such as the size of the orange trees. If the orange trees are big, people have to climb the trees and productivity is lower. Naturally, this raises a whole host of questions which you should be prepared to answer with strategic information. For instance, is it better to have big orange trees that produce more oranges per tree or small orange trees that enable higher productivity at picking time? You have to strike a balance between quality, productivity, market strategy, and profit margin. Picking oranges from bigger trees may be less productive but may yield a higher-quality product that results in a faster turnover and a wider profit margin. Who knows? You need more information to manage for an optimal outcome.
The concepts of size, effort, productivity, and productivity drivers
The concepts of size, effort, productivity, and productivity drivers have been used for centuries. It would be difficult to find a company of any size producing shoes, cars, or anything else that would take more than a minute to answer questions like: How many shoes or cars did your company manufacture last year? Did you manufacture more or fewer shoes or cars than in previous years?
Now try asking a software company or many IT departments the same questions: How much software did your company develop last year? Did you develop more or less software than in previous years? The responses you hear may refer to revenue figures, the number of projects, the number of employees working on those projects, or the hours spent on IT activities. You might get an answer couched in financial terms or in reference to effort. What you won’t get is a direct answer to the question: How much software did your company develop last year?
The only way to answer that question is to quantify the product. At a car company, the answer would be the number of vehicles built; on a farm, it would be the number of kilos of produce harvested; in a footwear factory, it would be the number of pairs of shoes manufactured. But even in these examples, it would still be necessary to add a second variable to fine-tune this information, such as the kind of vehicle being built. After all, manufacturing 1,000 vans is not the same as manufacturing 1,000 luxury cars.
This article was originally published in MetricViews, a semi-annual publication of IFPUG (the International Function Point Users Group). IFPUG is the global leader in promoting the measurement and management of software development and maintenance activities through the use of function point analysis and other software measurement techniques. You can find the complete article here.