This school of thought takes the basic view that there is a set of measures that can be developed, validated, and subsequently applied everywhere. As the most traditional approach, it has made the software engineering community very active in developing a large number of measures for various purposes. Fenton and Neil (2000) state that the literature contains thousands of proposed measures, most of which have not been validated and have never been used in practice. While the statement may be intentionally provocative and exaggerated, there is no doubt that the number of proposed measures is very large, and for many of them no use outside the environment in which they were originally developed has been reported.
Proponents of this approach include consultants, such as Putnam and Myers (1997), and the Airlie council, a group of software engineering experts that developed a set of project management measures intended for application to any and all projects (Brown 1996). The well-known international quality standards, such as ISO/IEC 9126 and the newer ISO/IEC 2502n series, see, e.g., (Bøegh 2008), also take the fixed-set-of-measures approach. Using quality standards to evaluate and measure software quality has, however, been found rather problematic, and one study even concluded that ISO/IEC 9126 is “not suitable for measuring design quality of software products” (Al-Kilidar et al. 2005).
Despite being widely used in other fields, in particular in quality management for manufacturing, this approach is problematic when applied to software development, since it assumes that all development efforts are similar and thus can use the same measures. While all software development undertakings share characteristics, such as the reliance on a skilled workforce and a creative component, factors such as the type of software, team characteristics, process characteristics, and organizational characteristics all affect the set of measures that should be applied. For example, it is unlikely that a five-person project developing a simple Web application would need and use the same measures as a 300-person project developing real-time critical software for a telecommunications switching system.
However, there are also clear benefits to having a predefined, fixed set of measures. For example, building tool support is much simpler, and comparison and benchmarking become possible. It is also beneficial from an organizational point of view, since measures do not have to be redeveloped for each new development effort; instead, the organization relies on heavy reuse of measures. Taking the SPMN control panel (Brown 1996) as an example, deploying it as is across a large number of organizations and projects is in many ways much simpler than any approach or toolset that requires both the definition of new measures and support for such arbitrary measures. After successful deployment, projects can be readily benchmarked using a standard set of measures, provided that the users are able to interpret the measures correctly. Thus, while probably an impossible goal at the general level, it is worthwhile to strive for a standardized set of measures that can be reused, with qualification, within the same organization.
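The tooling and benchmarking benefits can be illustrated with a minimal sketch. All names and measures below are hypothetical, not taken from the SPMN control panel or any standard: the point is only that when every project reports the same fixed keys, validation and cross-project comparison reduce to generic code.

```python
# Hypothetical fixed measure set shared by all projects in an organization.
FIXED_MEASURES = {"defect_density", "schedule_deviation", "effort_variance"}

def validate(report: dict) -> None:
    """Reject a project report that deviates from the fixed measure set."""
    missing = FIXED_MEASURES - report.keys()
    extra = report.keys() - FIXED_MEASURES
    if missing or extra:
        raise ValueError(f"missing={missing}, extra={extra}")

def benchmark(reports: dict) -> dict:
    """Rank projects on each measure (ascending); this generic comparison
    is possible only because all reports share the same keys."""
    for report in reports.values():
        validate(report)
    return {m: sorted(reports, key=lambda p: reports[p][m])
            for m in FIXED_MEASURES}

# Two illustrative projects with made-up values.
reports = {
    "web_app":   {"defect_density": 0.8, "schedule_deviation": 0.1,
                  "effort_variance": 0.2},
    "switch_sw": {"defect_density": 0.3, "schedule_deviation": 0.4,
                  "effort_variance": 0.5},
}
ranking = benchmark(reports)
print(ranking["defect_density"])  # projects ordered by defect density
```

A toolset supporting arbitrary, per-project measures would instead need schema management and measure-specific interpretation logic, which is the extra cost the fixed-set approach avoids.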
Al-Kilidar, H., Cox, K., and Kitchenham, B. The use and usefulness of the ISO/IEC 9126 quality standard. In Proceedings of the International Symposium on Empirical Software Engineering, 2005.
Bøegh, J. A new standard for quality requirements. IEEE Software, 25(2):57–63, 2008.
Brown, N. Industrial-strength management strategies. IEEE Software, 13(4):94–103, 1996.
Fenton, N. and Neil, M. Software metrics: A roadmap. In Proceedings of the Conference on The Future of Software Engineering, ICSE’00, 2000.
Putnam, L. and Myers, W. Industrial Strength Software: Effective Management Using Measurement. Los Alamitos, CA, USA, 1997.