Evaluation of COTS Products: Some Thoughts on the Process

NEWS AT SEI

Author

David J. Carney

This library item is related to the following area(s) of work:

System of Systems

This article was originally published in News at SEI on: September 1, 1998

This column will be devoted to discussions about the use of commercial off-the-shelf software (generally called "COTS") in defense and government systems. In the first two columns, this one and the succeeding one, I will speculate about the process of evaluating commercial software: supposing that we had the technical apparatus to measure the key attributes of a software component (which is at least debatable), how would we go about doing it? When would it be done? Who would get the job? These are questions that fall under the heading of process, and these considerations are as significant for COTS-based development as they are for more traditional ways of building systems.

Introduction

We are presently witnessing a widespread movement by government and industry organizations toward use of COTS products either as stand-alone solutions or as components in complex, heterogeneous systems. This trend results from the realization that using preexisting software products can be a means of lowering development costs, shortening the time of system development, or simply maintaining currency with the rapid changes in software technology that are taking place today.

However, in choosing to make use of commercial components, an organization must immediately deal with the problem of assessing or evaluating these products. The technical literature has examples that describe some specific techniques for assessing a commercial product, its attributes, and its fitness for use in a given context.

In addition to considering techniques in themselves, however, we also observe the need to define some of the more general process-related issues that arise when evaluating COTS products. For example, whose job is it to do this? How do the traditional notions of evaluation differ from COTS evaluation? What new activities might be implied when COTS products are under evaluation? What implications exist for the relationships between these activities, or for their sequence? To begin to answer some of these questions, I will consider some preliminary notions about the process of COTS evaluation. I will examine and make some suggestions about the nature of evaluation, the overall process, and the major roles that must be filled to accomplish it. In my next column, I will examine how these ideas might be instantiated in an actual evaluation exercise.

The Meaning of "COTS Evaluation"

At the outset, we first must constrain the domain of discourse. While evaluation activities are pertinent to any selection of a COTS product, they are especially important when the product will be a component in a complex, heterogeneous system and when the constraints native to the product and its vendor must be harmonized with the constraints of the system that incorporates it. Most of the descriptions found herein are based on the assumption of such a system.

Even in the restricted context of complex, heterogeneous systems, the term evaluation is used with a great many meanings by various people. One common understanding of evaluation makes it roughly synonymous with acceptance testing. Another common (though quite different) understanding of evaluation is that it refers to assessing software through such mechanisms as benchmark tests. Still another understanding of the term considers that evaluation is the activity performed at the close of a project, to determine its success and to capture the good and bad lessons that have been learned. None of these definitions can claim to be the correct one, but all are in common use, and all have some measure of validity.

In our experience, this diversity of meaning and understanding often gives rise to confusion and frustration. The way to alleviate this problem is precision. The very term COTS, for instance, centers our interest on commercial products. But even that is not sufficiently precise, since our major interest is evaluation of commercial products for the purpose of deciding whether to select one for use. Thus, COTS evaluation in this view is a decision aid, a notion that is quite different from evaluation as acceptance testing. The distinction is critical, and I intend the term evaluation to have only this restricted meaning. For the remainder of this column, therefore, I claim a very precise focus: given a set of commercial products, and a need to decide whether to select one for use, evaluation is the overall term I use for the broad collection of activities performed toward that end.

The Permeative Nature of COTS Evaluation

One consequence of this view is that COTS evaluation is not a cleanly separable action, but is more permeative: it exists in multiple forms and at subtly different levels. For example, let us leave the domain of software and consider an everyday model of "off-the-shelf evaluation," where a consumer magazine is consulted to help choose an automobile. Presumably, the overall process will be to (1) assemble some list of candidate cars, (2) apply some evaluation techniques using the information available in the magazine, and (3) make a selection. What I claim as the "permeative" characteristic is that although evaluation appears to be happening only in the second step, it is also happening, with varying degrees of explicitness, throughout (1) and (3); in essence, evaluation activities pervade the entire process.

Suppose, for instance, that the fictional car buyer has consulted the issue containing reports on the latest foreign cars, seeking guidance on the new imports. We might think that this step is somehow "pre-evaluative," and occurs in advance of any real evaluation process. But consider how many selections (e.g., decisions about inclusion and omission) have already occurred. Used cars have been rejected. Domestic cars have been rejected. Both of these decisions have presumably occurred through some sort of evaluation process. And, unless the fictional purchaser has unlimited funds, there are implicit criteria that exist—cost, fuel economy, safety features, and so forth. Some of these constraints and criteria have been included, and others excluded, and they are in some way prioritized. All of these decisions are, whether implicitly or explicitly, based on some form of evaluation.

Thoughts About the Overall Process

The phrase overall process is somewhat misleading, since while I am concerned with the process aspect of COTS evaluation, my intention is not to define a specific process to accomplish this. On the contrary, as with any important process consideration, an actual evaluation process will depend on a large number of variables that are particular to the organization performing the evaluation: the kind of problem, the specific needs of users, the products under examination, and so forth.

At the most abstract level, there are three large-scale tasks that are involved when COTS products are evaluated:

  1. Plan the evaluation.
  2. Design the evaluation instrument.
  3. Apply the evaluation instrument.

(I use the phrase evaluation instrument to refer to the collection of constraints, strategies, and techniques that together constitute the mechanism for evaluation. In a very informal evaluation, this "instrument" may be nothing other than an individual’s intuition. In a formal, costly, and complex evaluation of life-critical software, this "instrument" will likely have extensive documentation and be the object of considerable debate and refinement.)
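
To make the notion of an evaluation instrument a bit more concrete, the following is a minimal sketch, in Python, of how such an instrument might be represented in a formal evaluation. The weighted-criteria model and every name in it are assumptions made purely for illustration, not a prescribed structure:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Criterion:
        """One evaluation criterion: its name, its relative priority, and
        the assessment technique used to score a product against it."""
        name: str
        weight: float                    # relative priority among criteria
        assess: Callable[[str], float]   # technique: maps a product to a raw score

    @dataclass
    class EvaluationInstrument:
        """The constraints, strategies, and techniques that together make
        up the mechanism for evaluation."""
        constraints: list[str]           # e.g., "must run on the target platform"
        criteria: list[Criterion]

    # A deliberately tiny instrument, for illustration only:
    instrument = EvaluationInstrument(
        constraints=["must interoperate with the existing database"],
        criteria=[Criterion("cost", weight=3.0,
                            assess=lambda product: 0.0)],  # placeholder technique
    )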

Each of these three broad tasks has within it a collection of constituent activities. This common collection will usually be carried out in any specific process that is used, though the order of execution, the scope of each activity, and even whether all of them are performed will vary with circumstance. In effect, these activities are the primitive building blocks of an evaluation process, and the three large-scale tasks provide generalized areas within which the specific activities are performed. These constituent activities might be something like the following:

Plan the evaluation.

  • Define the problem.
  • Define the outcomes of the evaluation.
  • Assess the decision risk.
  • Identify the decision maker.
  • Identify resources.
  • Identify the stakeholders.
  • Identify the alternatives.
  • Assess the nature of the evaluation context.

Design the evaluation instrument.

  • Specify the evaluation criteria.
  • Build a priority structure.
  • Define the assessment approach.
  • Select an aggregation technique.
  • Select assessment techniques.
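
As an illustration of two of these activities, building a priority structure and selecting an aggregation technique, here is a minimal Python sketch of perhaps the simplest possible instantiation: raw priorities normalized into weights, and scores aggregated by weighted sum. This is only one plausible choice, assumed for illustration; real evaluations may use quite different techniques (pairwise comparison, multi-attribute utility methods, and so on), and the criteria named here are hypothetical.

    def build_priority_structure(raw_priorities: dict[str, float]) -> dict[str, float]:
        """Turn raw, unscaled priorities into weights that sum to 1.0."""
        total = sum(raw_priorities.values())
        return {name: value / total for name, value in raw_priorities.items()}

    def weighted_sum(scores: dict[str, float], weights: dict[str, float]) -> float:
        """One simple aggregation technique: the weighted sum of the
        per-criterion scores for a single product."""
        return sum(weights[name] * score for name, score in scores.items())

    # For example, an evaluator who values cost three times as much as
    # vendor support, and performance twice as much:
    weights = build_priority_structure(
        {"cost": 3.0, "performance": 2.0, "vendor support": 1.0})
    # weights is now {"cost": 0.5, "performance": 0.333..., "vendor support": 0.166...}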

Apply the evaluation instrument.

  • Obtain products.
  • Build a measurement infrastructure.
  • Perform assessment.
  • Aggregate data.
  • Form recommendations.
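
To show how these last activities might fit together, the self-contained sketch below aggregates some invented assessment data and ranks the candidates to form a recommendation. The products, scores, and weights are fabricated purely for illustration; in practice the scores would come from the assessment techniques selected when the instrument was designed.

    # Hypothetical data: normalized criterion weights (the priority structure)
    # and per-product scores on a 0-to-10 scale, as might come from
    # benchmarks, demonstrations, or hands-on trials.
    weights = {"cost": 0.50, "performance": 0.33, "vendor support": 0.17}

    assessments = {
        "Product A": {"cost": 7.0, "performance": 9.0, "vendor support": 4.0},
        "Product B": {"cost": 9.0, "performance": 6.0, "vendor support": 8.0},
        "Product C": {"cost": 5.0, "performance": 8.0, "vendor support": 7.0},
    }

    # Aggregate the data (again by weighted sum) and rank the candidates.
    totals = {
        product: sum(weights[c] * score for c, score in scores.items())
        for product, scores in assessments.items()
    }
    ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)

    # Form recommendations: the ranking is a decision aid for the
    # decision maker, not the decision itself.
    for product, total in ranked:
        print(f"  {product}: {total:.2f}")
    print(f"Leading candidate: {ranked[0][0]}")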

The Roles That Must Be Filled

If we are concerned with tasks and process steps, we must also consider who will perform them. There are three essential roles in the COTS evaluation process:

  1. decision maker
  2. analyst
  3. stakeholders

The decision maker is the person (or persons) who has both the authority and the need to select a COTS product. While it is common for there to be a single decision maker, it is equally common for this role to be diffuse, and performed either by a committee or by some other distributed entity.

The analyst is the person (or persons) who designs and executes the various activities that constitute a COTS product evaluation. The scope of this role is related to the scope of the evaluation. For a large or complex evaluation process, there will generally be some notion of a "lead analyst" whose work is primarily conceptual and analytical, with the actual hands-on tasks being done by other analysts.

The stakeholders are all of the persons who share a problem or need, and who will benefit in some manner if a commercial product can alleviate that problem or need.

These rather disconnected thoughts have not arrived at any sweeping conclusions. However, by identifying in the abstract the key activities and roles that are involved with COTS evaluation, we can now begin to construct portions of actual processes and eventually arrive at some useful understanding of "how to do" COTS evaluation. In the next column, I will take some existing evaluation practices and examine them in comparison to the activities and roles described here. Stay tuned.

About the Author

David Carney is a member of the technical staff in the Dynamic Systems Program at the SEI. Before coming to the SEI, he was on the staff of the Institute for Defense Analyses in Alexandria, Va., where he worked with the Software Technology for Adaptable, Reliable Systems program and with the NATO Special Working Group on Ada Programming Support Environment. Before that, he was employed at Intermetrics, Inc., where he worked on the Ada Integrated Environment project.

The views expressed in this article are the author's only and do not represent directly or imply any official position or view of the Software Engineering Institute or Carnegie Mellon University. This article is intended to stimulate further discussion about this topic.
