Four methods for software effort estimation
Joost Schalken-Pinkster
Understanding the size and effort of a software project early on is a difficult problem. Several different methods exist, but no method is perfect. In this article we present an overview of the four methods most mentioned in the literature: 1) expert opinion-based estimation, 2) top-down estimation, 3) bottom-up estimation and 4) estimation using a parametric or algorithmic model.
Expert estimation
Expert estimation means that an expert estimates how much effort a project requires. The advantage of asking somebody other than the project manager to estimate a project is that some experts have deep knowledge of the problem at hand. Many experts use intuition and/or previous experience to estimate the project, which is usually more time-efficient than any of the other methods. The problem with expert estimates is that their reliability is typically unknown (unless the organisation tracks the performance of its estimators) and that the estimates are not objective. This last aspect is important when discussions arise.
Note that there is a big difference between asking an independent expert for an estimate and asking the person who has to perform the task. Asking the person who has to do the work probably yields more information, because that person has an incentive to estimate accurately.
Top-down estimation
Top-down, analogy-driven estimation methods use experience from the past to make estimates for the future. They require examples of completed IT projects to base new estimates upon. Top-down estimation can be done in multiple ways. The easiest way is to find three similar previous projects and take the average of their effort. A better, more advanced way is to use more information when selecting the most similar projects. One starts with a database of completed projects. For each completed project, the effort that was required to complete it is listed together with information about the aspects that influenced the costs of that project. These aspects are also known as effort drivers. An estimate for a new project is derived by comparing the new project to all old projects and selecting the (effort of the) project that is most similar. The advantage of analogy-driven estimation methods is that their basis is more objective and repeatable than expert estimation.
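The analogy-driven approach can be sketched as a nearest-neighbour lookup over effort drivers. The project database, the driver names (`size_fp`, `team`) and all numbers below are illustrative assumptions, not data from the article:

```python
import math

# Hypothetical database of completed projects: effort drivers plus
# the actual effort each project took (all values are made up).
completed = [
    {"size_fp": 120, "team": 4, "effort_days": 310},
    {"size_fp": 200, "team": 6, "effort_days": 520},
    {"size_fp": 90,  "team": 3, "effort_days": 240},
    {"size_fp": 150, "team": 5, "effort_days": 400},
]

def estimate_by_analogy(new_project, database, k=3):
    """Average the effort of the k completed projects most similar to
    the new project, measured by distance over the effort drivers."""
    def distance(old):
        return math.dist(
            (new_project["size_fp"], new_project["team"]),
            (old["size_fp"], old["team"]),
        )
    nearest = sorted(database, key=distance)[:k]
    return sum(p["effort_days"] for p in nearest) / len(nearest)

print(estimate_by_analogy({"size_fp": 130, "team": 4}, completed))
```

With `k=3` this reduces to the simple "average three similar projects" variant; a real implementation would normalise the drivers so that one driver does not dominate the distance.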
Bottom-up estimation methods
Bottom-up estimation methods take a project definition and examine what activities or deliverables need to be completed in order to achieve the project's objective. One keeps breaking up the project activities or deliverables into smaller sub-activities or partial deliverables until each sub-activity or partial deliverable requires less than two weeks of effort. After this decomposition, in which the project is broken into smaller bits, each individual part (be it activity or deliverable) is estimated. These estimates are then combined into an overall estimate. The advantage of bottom-up estimation methods is that their estimates are more open to inspection (and more understandable) than an expert's holistic estimate. Also, since estimation errors tend to cancel each other out, a bottom-up estimate is typically more reliable than an expert's single estimate of an entire project. A disadvantage is that the method is generally less objective than a parametric estimation method, since the decomposition contains subjective assessments of what needs to be done for the project. It can also take a lot of work to produce a good bottom-up estimate.
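The decompose-and-sum step can be sketched with a small work breakdown. The activity names and day counts below are hypothetical; the point is only that every leaf task stays under two weeks (10 working days) and the totals roll up:

```python
# Hypothetical work breakdown: each leaf task is estimated at under
# two weeks (10 working days). Names and numbers are illustrative.
work_breakdown = {
    "Design": {"Data model": 4, "UI sketches": 3},
    "Build": {"Backend API": 9, "Frontend": 8, "Integration": 5},
    "Test": {"Test plan": 2, "Test execution": 6},
}

def bottom_up_estimate(breakdown):
    """Sum the leaf-task estimates into an overall project estimate."""
    total = 0
    for activity, tasks in breakdown.items():
        subtotal = sum(tasks.values())
        print(f"{activity}: {subtotal} days")
        total += subtotal
    return total

print("Total:", bottom_up_estimate(work_breakdown), "days")
```

Because each leaf estimate is visible, a reviewer can challenge individual numbers rather than one opaque total, which is exactly the inspectability advantage described above.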
Parametric estimation methods
Parametric estimation methods use a model or algorithm that takes as input some aspects of the project (such as the required functionality and the expected quality). The model (a formula) or algorithm (a series of computational steps) then produces an estimate based on those inputs alone. The advantage of parametric estimation methods is that they tend to be more objective than their counterparts. When properly used and sufficiently calibrated, these methods can produce highly reliable estimates. Unfortunately their use is more complex and often more time-consuming than the other estimation methods. Function Point Analysis is one of the most commonly used methods; Cocomo is another. We discussed both methods in our course on software project management, and many students found them difficult to apply. The reason is that one needs fairly detailed requirements or specifications in order to apply these methods. In our lectures we explained how to do Function Point Analysis per user story, using the academic paper "Planning Agile Software Projects with Reduced Guess Estimation" (Buğra Kocatürk, Jean-Marc Desharnais). In our view, this is a very agile way to do FPA.
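As a concrete illustration of a parametric model, Basic COCOMO (Boehm, 1981) estimates effort in person-months from program size in thousands of lines of code (KLOC), using published coefficients for three project classes. This is only the simplest COCOMO variant; it ignores the cost drivers that the fuller models add:

```python
# Basic COCOMO coefficients (a, b) for the three standard project modes,
# as published by Boehm: effort (person-months) = a * KLOC ** b.
COCOMO_PARAMS = {
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

def cocomo_effort(kloc, mode="organic"):
    """Estimate effort in person-months from size in KLOC."""
    a, b = COCOMO_PARAMS[mode]
    return a * kloc ** b

# Example: a 32 KLOC organic project.
print(round(cocomo_effort(32, "organic"), 1), "person-months")
```

The formula shows why calibration matters: the coefficients were fitted to a particular historical data set, and an organisation gets reliable estimates only after refitting them to its own completed projects.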
In our experience, none of these four methods are very precise when they are applied early on in a project. The most important advice is therefore to not spend much time on the first estimate, and to re-estimate during the project.
This article is part of a series that summarizes our university course on software project management. The series consists of (1) creating a vision, (2) project scope, (3) managing non-functional requirements, (4) effort estimation, (5) planning and scheduling, (6) organisational change, (7) risk management and an overview of project management books and articles.
Image: Scale by Domiriel – Creative Commons
Dr. Joost Schalken-Pinkster obtained a Ph.D. in software engineering in 2007. Since then he has worked continuously in IT as an architect, management consultant and lecturer. Besides working at ICT Institute, Joost is a lecturer at Utrecht Applied University, where he focuses on code construction, software design and software architecture.