Software Architecture Evaluation in the DoD Systems Acquisition Context

NEWS AT SEI

Authors

Lawrence G. Jones

Rick Kazman

This library item is related to the following area(s) of work:

Software Architecture

This article was originally published in News at SEI on: December 1, 1999

Many modern defense systems rely heavily on software to achieve system functionality. Because software architecture is a major determinant of software quality, it follows that software architecture is critical to the quality of any software-intensive system. For a DoD acquisition organization, the ability to evaluate software architectures before these architectures are realized as finished systems can substantially reduce the risk that the delivered systems will not meet their quality goals. This column presents the basic principles of applying a software architecture evaluation in the DoD system acquisition context.

What Is Software Architecture?

The software architecture of a program or computing system is the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships among them. [Bass 98]

It is important to understand that there is no such thing as the architecture of a system--that is, there is no single artifact that one can definitively point to as the architecture. There are, however, many relevant and important views of an architecture depending on the stakeholders and the system properties that are of interest. If we consider the analogy of the architecture of a building, various stakeholders such as the construction engineer, the plumber, and the electrician all have an interest in how the building is to be constructed. Although they are interested in different components and different relationships, each of their views is equally valid and is necessary to ensure that they will function properly together. Thus, all views are necessary to fully represent and fully analyze the architecture of the building. The analogy holds for a software architecture, but in this case the stakeholders might include the development organization, the end user, the system maintainer, the operator, and the acquisition organization. Each of these stakeholders has an important interest in different system properties. We will elaborate on the importance of software architecture to the delivered system and these various stakeholders next.

Why Is Software Architecture Important?

The point is often made that the DoD buys systems, not software, so why should the DoD concern itself with software architectures? Simply stated, almost all modern systems, including modern defense systems, rely heavily on software to achieve critical functionality. Thus, many important system quality goals--security, availability, modifiability, performance, and so forth--are achieved through software. The software architecture is a major determinant of software quality and thus of system quality. So, even though the DoD is buying a system, the software and in particular the software architecture are of paramount importance in determining whether the DoD gets the level of system qualities required. These interrelationships are depicted in Figure 1 (adapted from [Fisher 98]).

Figure 1: The Relationships Among System Quality Requirements and Software Architectures

It is also important to understand that architectures allow or preclude nearly all of the quality attributes of large, complex systems. For example, if your system has stringent performance requirements, then you must pay attention to things such as component interactions, communication mechanisms, scheduling policies, and component deadlines. If you have modifiability goals for your system, then you need to pay attention to the encapsulation properties of your components. If reliability is important, then the architecture must provide schemes for redundancy, restart, and failover. The list of such system-wide architectural concerns and strategies goes on and on. All of these approaches to achieving system quality are architectural in nature, having to do with the decomposition of the total system into parts and the ways in which those parts communicate and cooperate with each other. While a "good" architecture cannot guarantee a successful implementation (i.e., an implementation that meets its quality goals), a "bad" architecture can certainly preclude one, as shown in the many case studies in Software Architecture in Practice [Bass 98].
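
To make the modifiability point concrete, the short sketch below is purely illustrative: the choice of Python and every name in it (such as TrackStore) are our own assumptions, not artifacts of any particular system. It shows a component hidden behind a small interface; because client code depends only on that interface, the storage technology can be replaced without changes rippling through the rest of the system.

from abc import ABC, abstractmethod


class TrackStore(ABC):
    """Architectural seam: clients see only this interface."""

    @abstractmethod
    def save(self, track_id: str, position: tuple) -> None: ...

    @abstractmethod
    def load(self, track_id: str) -> tuple: ...


class InMemoryTrackStore(TrackStore):
    """One concrete choice; swapping it for a relational or
    object-oriented database would touch no client code."""

    def __init__(self) -> None:
        self._tracks: dict = {}

    def save(self, track_id: str, position: tuple) -> None:
        self._tracks[track_id] = position

    def load(self, track_id: str) -> tuple:
        return self._tracks[track_id]


def update_track(store: TrackStore, track_id: str, position: tuple) -> None:
    # Client code is written against the abstraction, which is what
    # gives the architecture its modifiability.
    store.save(track_id, position)

The same reasoning applies to the other attributes listed above: the mechanisms that achieve them live in the structure of the system, not in any individual algorithm.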

Additionally, architectural decisions are among the earliest design decisions made. If an inappropriate architectural choice is made, the consequences are profound. Studies show that the cost to fix an error found during the requirements or early design phases is orders of magnitude less than the cost to fix the same error found during deployment or maintenance [Boehm 81]. Thus, it makes economic sense to take steps to ensure the quality of a software architecture. Next we will describe an approach that has proven successful in improving the quality of a software architecture.

Software Architecture Evaluation

The SEI has been developing the Architecture Tradeoff Analysis Method (ATAM) for the past two years [Kazman 99]. This method not only permits evaluation of specific architecture quality attributes but also allows engineering tradeoffs to be made among possibly conflicting quality goals. The ATAM draws its inspiration and techniques from three areas: the notion of architectural styles, the quality attribute analysis communities, and the Software Architecture Analysis Method (SAAM), which was the predecessor to the ATAM [Kazman 96]. The ATAM is intended to analyze an architecture with respect to its quality attributes, not its functional correctness.

The ATAM involves a wide group of stakeholders including managers, developers, maintainers, testers, reusers, end users, and customers. It is meant to be a risk-mitigation method, a means of detecting areas of potential risk within the architecture of a complex, software-intensive system. This focus has several implications, including

  • The ATAM can be done early in the software development life cycle.
  • It can be done inexpensively and quickly (because it is assessing architectural design artifacts).
  • It need not produce detailed analyses of any measurable quality attribute of a system (such as latency or mean time to failure) to be successful; instead, it identifies trends in which some architectural parameter is correlated with a measurable quality attribute of interest.

What we aim to do in the ATAM, in addition to raising architectural awareness and improving the level of architectural documentation, is to record any risks, sensitivity points, and tradeoff points that we find when analyzing the architecture. Risks are architecturally important decisions that haven't been made (for example, the architecture team hasn't decided what scheduling discipline to use or whether to use a relational or object-oriented database), or decisions that have been made but whose consequences are not fully understood (for example, the architecture team has decided to include an operating system portability layer, but is not sure what functions should go into this layer). Sensitivity points are parameters in the architecture to which some measurable quality attribute is highly correlated. For example, it might be determined that overall throughput in the system is highly correlated to the throughput of one particular communication channel, and availability in the system is highly correlated to the reliability of that same communication channel. Finally, a tradeoff point is found in the architecture when a parameter of an architectural construct is host to more than one sensitivity point where the measurable quality attributes are affected differently by changing that parameter. For example, if increasing the speed of the communication channel mentioned above improves throughput but reduces its reliability, then the speed of that channel is a tradeoff point.
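
Purely as an illustration of how an evaluation team might record such findings (the Python form and every name below are our own assumptions, not an SEI-prescribed artifact), the sketch that follows captures the communication-channel example from the preceding paragraph: one parameter, the channel speed, carries two sensitivity points whose quality attributes move in opposite directions, which is exactly what makes it a tradeoff point.

from dataclasses import dataclass, field


@dataclass
class SensitivityPoint:
    parameter: str   # architectural parameter the attribute depends on
    attribute: str   # measurable quality attribute of interest
    effect: str      # how changing the parameter moves the attribute


@dataclass
class TradeoffPoint:
    parameter: str
    sensitivities: list = field(default_factory=list)


# Risks: architecturally important decisions not yet made, or made but
# whose consequences are not yet understood.
risks = [
    "Scheduling discipline not yet chosen",
    "Contents of the OS portability layer not yet understood",
]

# The channel example: raising its speed improves throughput but
# reduces reliability (and hence availability).
throughput = SensitivityPoint("channel.speed", "system throughput", "higher speed raises it")
reliability = SensitivityPoint("channel.speed", "channel reliability", "higher speed lowers it")
channel_speed = TradeoffPoint("channel.speed", [throughput, reliability])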

To use the ATAM effectively in the DoD, it is necessary to understand the special characteristics of the DoD acquisition environment.

The DoD Acquisition Management Process Context

DoD 5000.2R prescribes a high-level acquisition process known as the DoD Acquisition Management Process. It serves as the overall roadmap for program execution and includes mandatory acquisition procedures and specific guidance for acquisition programs [Bergey 99]. Although the DoD management process is primarily directed toward major system-acquisition programs, it is intended to serve as a general model for all DoD acquisition programs.

In particular, the contractual process that must be followed is prescribed in the Defense Federal Acquisition Regulation Supplement1 [DFARS 98]. This contractual process includes three important phases: the pre-award phase, the award phase, and the post-award phase. During the pre-award phase, the acquisition organization prepares and issues a request for proposal (RFP) and interested bidders may respond. During the award phase, source selection occurs. During the source-selection process, the acquisition organization evaluates proposals, obtains best and final offers from bidders, and selects a winning bidder. During the post-award phase, the government administers the contract, monitoring the technical progress and performance of the winning bidder. Depending on the scope of the contract, the winning bidder may or may not be responsible for support of the developed system following delivery and deployment.

We will use these phases as a basis for describing different points at which the ATAM might be effectively applied in a DoD or government acquisition.

Applying Architecture Evaluation within the DoD Acquisition-Management Process

Pre-Award and Award Phases for a System-Development Contract

Two major activities that take place in the contractual pre-award and award phases are generation of an RFP and source selection, respectively. Release of the RFP defines the official beginning of the solicitation period. After the solicitation formally closes, source selection commences with proposal evaluation and ends with a contract award. Specifying an ATAM-based architecture evaluation can be an effective means of evaluating the technical risks associated with a proposed software architecture. The results can be used as part of the technical evaluation criteria for source selection. The requirement to perform an ATAM-based architecture evaluation must be appropriately integrated into the RFP and the source-selection plan.

Post-Award Contract Administration and Performance Phase for a System-Development Contract

After contract award, an ATAM might be used for (at least) four purposes:

  • Select an architecture from among several candidate architectures.
  • Assist in architecture refinement once an architecture has been chosen.
  • Ensure that the chosen architecture is properly documented and communicated.
  • Assist in early evaluation of architectural designs to reduce program risks.

Other ATAM Contractual Applications

The ATAM could also be applied in an acquisition to upgrade an existing system after it has been operationally deployed and is in its post-development/support life-cycle phase. Additionally, an ATAM could be applied to a legacy system to evaluate and improve how well its architecture would support a set of proposed upgrades. Many legacy systems have only implicit software architectures: they were either never properly documented or the documentation has not been kept up to date with the system. In such cases, an architecture extraction and reconstruction activity would have to precede the ATAM to redocument the architecture [Kazman 98].
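
The reconstruction work cited above used its own tooling; the fragment below is only a hypothetical sketch of one small step such an activity might include, under the assumption (ours, for illustration) that the legacy code base is written in Python. It recovers a crude module-dependency view by scanning import statements, giving evaluators one "uses" structure to compare against whatever documentation survives.

import ast
from pathlib import Path


def module_dependencies(source_root: str) -> dict:
    """Map each Python file under source_root to the modules it imports."""
    deps = {}
    for path in Path(source_root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        imported = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imported.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imported.add(node.module)
        deps[str(path)] = imported
    return deps


if __name__ == "__main__":
    # "src" is a placeholder path for the legacy source tree. A real
    # reconstruction effort would aggregate these file-level edges into
    # architectural components rather than print a flat listing.
    for module, imports in sorted(module_dependencies("src").items()):
        print(module, "->", ", ".join(sorted(imports)))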

Conclusions

While the ATAM is still in the developmental phase, it has already proven its ability to significantly improve software architectures in several pilot projects in a software development environment. The next challenge is to codify the application of ATAM principles in an acquisition environment. ATAM principles have been effectively applied to a limited extent in source selection, and the initial results are promising. The SEI is collaborating with several acquisition organizations on the use of the ATAM, to help them transition the process into their own organizations and to help them include appropriate language in an RFP to make architectural evaluation an integral part of evaluating proposals. As experience is gained in this area, we will continue to share our lessons learned.

About the Authors

Lawrence G. Jones is a senior member of the technical staff in the Product Line Systems Program of the Software Engineering Institute (SEI) of Carnegie Mellon University. In addition to his product line duties, he is also a member of the Capability Maturity Model Integration (CMMI) team. Before joining the SEI, he served in the United States Air Force in a variety of software development, management, and education positions. He is also the former chair of the Computer Science Department at the U.S. Air Force Academy. He holds a PhD in computer science from Vanderbilt University and master’s and bachelor’s degrees in industrial engineering from the University of Arkansas.

Rick Kazman is a senior member of the technical staff at the SEI, where he is a technical lead in the Architecture Tradeoff Analysis Initiative. He is also an adjunct professor at the Universities of Waterloo and Toronto. His primary research interests within software engineering are software architecture, design tools, and software visualization. He is the author of more than 50 papers and co-author of several books, including a book recently published by Addison-Wesley entitled Software Architecture in Practice. Kazman received a BA and MMath from the University of Waterloo, an MA from York University, and a PhD from Carnegie Mellon University.

References

[Bass 98]
Bass, L., Clements, P., Kazman, R. Software Architecture in Practice. Addison-Wesley, Reading, MA, 1998.

[Bergey 99]
Bergey, J., Fisher, M., Jones, L. The DoD Acquisition Environment and Software Product Lines. CMU/SEI-99-TN-004, May 1999, Pittsburgh, PA.

[Boehm 81]
Boehm, B. Software Engineering Economics, Prentice-Hall, Englewood Cliffs, NJ, 1981.

[DFARS 98]
Defense Federal Acquisition Regulation Supplement, 1998 Edition [online]. August 17, 1998.

[Fisher 98]
Fisher, M. Software Architecture Awareness and Training for Software Practitioners; U.S. Army CECOM Course. June 1998, Pittsburgh, PA.

[Kazman 96]
Kazman, R., Abowd, G., Bass, L., Clements, P., "Scenario-Based Analysis of Software Architecture", IEEE Software, 13:6, Nov. 1996, 47-55.

[Kazman 98]
Kazman, R., Carriere, S. J., "Playing Detective: Reconstructing Software Architecture from Available Evidence", Automated Software Engineering, 6:2, April 1999, 107-138.

[Kazman 99]
Kazman, R., Barbacci, M., Klein, M., Carriere, S. J., Woods, S. G. "Experience with Performing Architecture Tradeoff Analysis," Proceedings of the 21st International Conference on Software Engineering (ICSE 21), (Los Angeles, CA), May 1999, 54-63.

1 The Defense Federal Acquisition Regulation Supplement (DFARS) is the DoD implementation and supplementation of the Federal Acquisition Regulation (FAR).

The views expressed in this article are the authors' only and do not represent directly or imply any official position or view of the Software Engineering Institute or Carnegie Mellon University. This article is intended to stimulate further discussion about this topic.
