The Evolution of Quality Attribute Workshops as an Architecture-Evaluation Technique

NEWS AT SEI

Author

Mario R. Barbacci

This library item is related to the following area(s) of work:

Software Architecture

This article was originally published in News at SEI on: September 1, 2002

In previous columns, I described initial experiences applying Quality Attribute Workshops (QAWs) to evaluate the implications of system-design decisions. This column provides an update on the development of the method and presents lessons learned from applying it in four different U.S. government acquisition programs. Most of these lessons were integrated into the method incrementally, as described in a recent SEI technical report [1].

QAWs provide a method for analyzing a system’s architecture against a number of critical quality attributes, such as availability, performance, security, interoperability, and modifiability, that are derived from mission or business goals. The QAW does not assume the existence of a software architecture. It was developed to complement the Architecture Tradeoff Analysis Method (ATAM) in response to customer requests for a method to identify important quality attributes and clarify system requirements before there is a software architecture to which the ATAM could be applied. The QAW analysis is conducted by applying a set of test cases to a system architecture, where the test cases include questions and concerns elicited from stakeholders associated with the system. In this column, I describe the activities in the QAW method, how it has been adapted to specific customer needs, and several lessons learned during the evolution of the process.

The QAW process, shown in Figure 1, can be organized into four distinct groups of activities: (1) scenario generation, prioritization, and refinement; (2) test case development; (3) analysis of test cases against the architecture; and (4) presentation of the results. The first and last segments of the process occur in facilitated one-day meetings. The middle segments are undertaken independently by those developing or analyzing the test cases, and may involve experimentation that continues over an extended period of time.

Figure 1: The QAW Process

The process is iterative in that the test-case architecture analyses might lead to the development of additional test cases or to architectural modifications. Architectural modifications might prompt additional test-case analyses, and so forth.

The first activity in the QAW process is to generate, prioritize, and refine scenarios. In the QAW, a scenario is a statement about some anticipated or potential use or behavior of the system (see sidebar 1). Scenarios are generated in a brainstorming, round-table session and capture stakeholders’ concerns about how the system will do its job. Only a small number of scenarios can be refined during a one-day meeting, so stakeholders must prioritize the scenarios generated previously by using a voting process. Next, the stakeholders refine the top three or four scenarios to provide a better understanding of their context and detail (see sidebar 2). The result of this meeting is a prioritized list of scenarios and the refined description of the top three or four scenarios on that list.
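
The prioritization step is essentially a weighted vote: each stakeholder distributes a fixed number of votes across the brainstormed scenarios, and the tally determines which scenarios move on to refinement. As a rough illustration only, the following Python sketch tallies such votes; the scenario texts, stakeholder ballots, and vote counts are invented for the example and are not taken from the QAW report.

    from collections import Counter

    # Hypothetical brainstormed scenarios (invented for illustration).
    scenarios = {
        "S1": "A remote site loses network connectivity during a data upload.",
        "S2": "A new sensor type is added to the system after initial deployment.",
        "S3": "An operator requests a status summary during peak load.",
    }

    # Each stakeholder distributes a fixed number of votes across scenario IDs.
    ballots = [
        ["S1", "S1", "S3"],  # stakeholder A
        ["S2", "S3", "S3"],  # stakeholder B
        ["S1", "S3", "S2"],  # stakeholder C
    ]

    tally = Counter(vote for ballot in ballots for vote in ballot)

    # Prioritized list; only the top few scenarios are refined in the one-day meeting.
    prioritized = sorted(scenarios, key=lambda sid: tally[sid], reverse=True)
    print([(sid, tally[sid]) for sid in prioritized])   # [('S3', 4), ('S1', 3), ('S2', 2)]
    print("Selected for refinement:", prioritized[:2])

In practice the voting is done by hand during the facilitated meeting; the sketch merely makes the selection rule explicit.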

The next activity in the QAW process is to transform each refined scenario from a statement and list of organizations, participants, quality attributes, and questions into a well-documented test case. The test cases may add assumptions and clarifications to the context, add or rephrase questions, group the questions by topic, and so forth (see sidebar 3). Responsibility for developing the test cases depends on how the method is being applied; the task might fall to the sponsor/acquirer or to the development team.
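
To suggest the kind of information a documented test case carries (the actual template appears in sidebar 3 of the report and is not reproduced here), the following sketch renders a test case as a small data structure; the field names and example values are assumptions made for illustration.

    from dataclasses import dataclass

    @dataclass
    class QuestionGroup:
        """Questions grouped by topic, as elicited from stakeholders."""
        topic: str
        questions: list[str]

    @dataclass
    class TestCase:
        """Hypothetical shape of a QAW test case derived from a refined scenario."""
        scenario: str                     # the refined scenario statement
        context: str                      # assumptions and clarifications added to the scenario
        quality_attributes: list[str]     # e.g., availability, performance, security
        question_groups: list[QuestionGroup]

    # Invented example values, not taken from the QAW report.
    tc = TestCase(
        scenario="A remote site loses network connectivity during a data upload.",
        context="The site must operate autonomously for up to 48 hours; uploads resume on reconnect.",
        quality_attributes=["availability", "performance"],
        question_groups=[
            QuestionGroup("availability", ["What is the availability of the upload capability?"]),
            QuestionGroup("performance", ["How long does it take to synchronize queued data?"]),
        ],
    )
    print(tc.quality_attributes)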

The test-case architecture analysis is intended to clarify or confirm specific quality attribute requirements and might identify concerns that would drive the development of the software architecture. Some of the test cases could later be used as “seed scenarios” in an ATAM evaluation (e.g., to check if a concern identified during the test-case analysis was addressed by the software architecture). The results of analyzing a test case should be documented with specific architectural decisions, quality attribute requirements, and rationale (see sidebar 4).
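
A minimal sketch of such a documented result follows, again with hypothetical field names and invented values; the report's sidebar 4 shows the actual format.

    from dataclasses import dataclass

    @dataclass
    class AnalysisResult:
        """Hypothetical record of one test-case architecture analysis."""
        test_case_id: str
        architectural_decisions: list[str]         # decisions made or confirmed during the analysis
        quality_attribute_requirements: list[str]  # requirements clarified or newly identified
        rationale: str                             # why the decisions are expected to satisfy the requirements
        open_concerns: list[str]                   # candidate "seed scenarios" for a later ATAM evaluation

    # Invented example values, not taken from the QAW report.
    result = AnalysisResult(
        test_case_id="TC-03",
        architectural_decisions=["Queue uploads locally and replay them after reconnect."],
        quality_attribute_requirements=["Resume interrupted uploads within five minutes of reconnect."],
        rationale="Local queuing decouples data capture from network availability.",
        open_concerns=["Queue capacity during extended outages is unspecified."],
    )
    print(result.test_case_id, result.open_concerns)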

The results presentation is the final activity in the QAW process. It is a one- or two-day meeting attended by facilitators, stakeholders, and the architecture team. It provides an opportunity for the architecture team to present the results of its analysis and to demonstrate that the proposed architecture can handle the test cases correctly.

Tailoring QAW

The application of the method can be tailored to the needs of a specific acquisition strategy and might include incorporating specific documents or sections of documents into the request for proposals (RFP) or contract [2].

Application Before Acquisition

In one application, the QAW method was used in a pre-competitive phase for a large system. The stakeholders came from laboratories and facilities with different missions and requirements. An architecture team (with members from the various facilities) was building the architecture for a shared communications system before a development contract was awarded, and it tailored the QAW process as follows:

  • Stakeholders from different facilities held separate meetings to generate, prioritize, and refine scenarios.
  • The architecture team turned these refined scenarios into test cases and analyzed the proposed architecture against them.
  • The architecture team then presented the results of the analysis first to a review team, and later to the original stakeholders.

Application During Solicitation and Proposal-Evaluation Phases

Figure 2 illustrates a common acquisition strategy. Starting with an initial request for proposals, the acquisition organization evaluates proposals from multiple contractors and chooses one to develop the system.

Figure 2: Common Acquisition Strategy

In another application of the QAW method, the QAW activities took place during the competitive selection and were customized as follows:

  • Before the competitive solicitation phase, scenario-generation meetings were conducted at three different facilities. These were representative of groups of users with similar needs and responsibilities (e.g., technicians, supervisors, analysts at headquarters).
  • Early in the competitive solicitation phase and before the release of the RFP, the acquirer conducted bidders’ conferences to inform potential bidders about the need for conducting architecture analysis.
  • The acquirer developed several test cases for each type of user, drafted sections of the RFP to incorporate architecture-analysis requirements, and included the test cases as government-furnished items (GFIs) in the RFP proper.
  • As part of their proposals, the bidders were expected to conduct an architecture analysis of the RFP test cases, present their results to the acquirer, and write reports consisting of the results of their analysis, their response to requests for clarification, risk-mitigation plans for the risks identified during the presentation, and any new or revised architecture representations.

Application During a Competitive Fly-Off

Figure 3 illustrates a “rolling down select,” a different acquisition strategy. Starting with an initial request for proposals, the acquisition organization awards contracts to a small number of contractors to conduct a “competitive fly-off.” In this phase, the contractors work on a part of the system, still competing for award of the complete contract. At the end of the phase, the contractors submit updated technical proposals, including additional details, and the acquirer makes a final “down select,” or selection of one of the competing contractors.

Figure 3: Acquisition Strategy Using Competitive Fly-Off

In this application, the QAW method was used during the Competitive Fly-Off phase (with three industry teams competing) of the acquisition of a large-scale Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) system. In this case, the QAW process was customized as follows:

  • The scenario-generation meetings were conducted with each contractor separately. As a result of these meetings, participants gained an understanding of the process, a list of prioritized scenarios, and a set of refined high-priority scenarios.
  • A government technical assessment team (TAT) used these scenarios to develop a number of test cases. Changes were made to hide the identity of the teams and extend the coverage of the scenarios over a set of assets, missions, and geographical regions. An example was developed to make the process more understandable, and copies were distributed to all industry teams.
  • The contractors performed the analysis and presented the results in a dry-run presentation. There was large variation in these presentations, ranging from a single test case analyzed in great detail to all test cases analyzed in insufficient detail. Each contractor was then told how well it had done and how it could improve its analysis, which allowed it to correct flaws before completing the analysis in a final presentation of the results.

Lessons Learned

Scenario Generation and Refinement

The scenario-generation meeting is a useful communication forum to familiarize stakeholders with the activities and requirements of other stakeholders. In several cases, the developers were unaware of requirements brought up by those with responsibility for maintenance, operations, or acquisition. In one case, potential critics of the project became advocates by virtue of seeing their concerns addressed through the QAW process. We also learned that the facilitation team has to be flexible and adapt to the needs of the customer, as the following observations indicate:

  • The approach relies on identifying the right stakeholders and asking them to do some preparatory reading and to attend the meeting for a day. In one case, the task of inviting these stakeholders fell to the architecture team, which created some awkward situations. The hosts of the meeting need a way of attracting the right people, such as invitations explaining the advantages of participating and recommendations from upper management to generate interest in attending.
  • The process of generating scenarios in a brainstorming session is usually inclusive, but the process for refining the high-priority scenarios might not be. Some stakeholders might feel left out of the refining effort if other, more vocal stakeholders dominate the process. It is the responsibility of the facilitators to make sure that everyone can contribute. The template describing specific details to be identified during the scenario refinement was a great improvement over the initial refinement exercises, because it kept the stakeholders focused on the task at hand and avoided diversions.
  • Some of the scenarios or questions generated during the refinement might not be focused on quality attributes. This is usually an indicator that the issues involved are “hot buttons” for some of the stakeholders. Although we normally try to focus the scenarios on quality attributes, the underlying issues could be important, and on occasion, we have allowed the scenarios and questions to stand.
  • The scenarios generated in a meeting can be checked against system requirements in two ways. First, unrefined scenarios can be checked for whether they relate to existing requirements. If they do, requirements details might be used to refine the scenarios. Second, refined scenarios can lead to a better understanding of some requirements. Undocumented requirements can be discovered by both means.
  • The scenarios generated in a meeting can be checked against the expected evolution of the system. In projects planning a sequence of releases, the scenarios should specify the release to which they apply, ensuring that the projected deployment of assets and capabilities match the scenarios and test cases.

Developing the Test Cases

Building the test cases from the refined scenarios takes time and effort.

In one case, the QAW facilitators did not extract sufficient information during the refinement session to build the test cases, and they had to organize additional meetings with domain experts to better define the context and quality-attribute questions. An unintended consequence was that the resulting test-case context was far more detailed than if it had been generated during the scenario-refinement session. As a result, only portions of the larger test-case context were relevant to the test-case questions. We learned that an extremely detailed test-case context is not worthwhile: it takes too long to develop, may be hard to understand, and does not lead to focused questions. A test-case context should not be more than a few sentences long.

Since the software architecture is not yet in place, the questions and expected responses should not force design decisions on the development team. Hence, the questions must be quite general, and the expected responses may suggest architectural representations (for example, “what is the availability of this capability?”) but not design solutions (for example, “use triple modular redundancy for high availability”).

Analyzing the Architecture Using Test Cases

The test-case architecture analysis might reveal flaws in the architectures and cause the architecture team to change the design. The test cases generated by the QAW process often extend the existing system requirements.

In one case, the new requirements seemed to challenge the requirements-elicitation effort and raised concerns within the architecture team. A typical comment was, “The system wasn’t meant to do that.” Some judgment must be made as to which test cases can be handled and at which phase of system deployment. While this can lead to extended arguments within the team, it is a useful exercise, since these concerns must be resolved eventually.

In another case, the stakeholders were concerned because the process only analyzed a few test cases out of a large collection of scenarios. They wanted to know what was to be done with the remaining scenarios. This issue should be resolved before the scenario-generation meeting. One approach is to analyze the architecture incrementally against an ever-expanding set of test cases and, if necessary, adjust the architecture in each increment. However, this approach is constrained by budgets, expert availability, and participants’ schedules.

Results Presentation

As in the scenario-generation meeting, participants are provided with a handbook before the meeting. The handbook includes the test cases and provides a test-case analysis example so that participants know what to expect at the meeting. In some applications of the QAW, we have conducted the results presentation in two phases: first as a rehearsal, and then as a full-scale presentation. The following observations are derived from conducting a number of QAWs:

  • In one case, the initial example given in the participants’ handbook was too general. This reduced the level of buy-in from participants. We corrected this by developing another example with the right level of detail.
  • A dry-run presentation should be conducted when the architecture team making the presentation is unsure about (a) the level of detail required; (b) the precision expected from its answers to the test-case questions; (c) how to incorporate other analysis results (for example, reliability, availability, and maintainability analysis; or network loading analysis); or (d) what additional architectural documents might be needed.
  • The full-scale presentation takes place after “cleaning up” the results of the dry-run presentation. Concerns that arise in the full-scale presentation have to be addressed as potential threats to the architecture.

The process for conducting QAWs is solidifying as we continue to hold them with additional customers, in different application domains, and at different levels of detail. The approach looks promising. The concept of checking for flaws in the requirements before committing to development should reduce rework in building the system.

References

[1] Barbacci, M. et al. Quality Attribute Workshops, 2nd Edition (CMU/SEI-2002-TR-019). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2002.

[2] Bergey, J. & Wood, W. Use of Quality Attribute Workshops (QAWs) in Source Selection for a DoD System Acquisition: A Case Study (CMU/SEI-2002-TN-013). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2002.

About the Author

Mario Barbacci is a Senior Member of the staff at the Software Engineering Institute (SEI) at Carnegie Mellon University. He was one of the founders of the SEI, where he has served in several technical and managerial positions, including Project Leader (Distributed Systems), Program Director (Real-time Distributed Systems, Product Attribute Engineering), and Associate Director (Technology Exploration Department). Prior to joining the SEI, he was a member of the faculty in the School of Computer Science at Carnegie Mellon University.

His current research interests are in the areas of software architecture and distributed systems. He has written numerous books, articles, and technical reports and has contributed to books and encyclopedias on subjects of technical interest.

Barbacci is a Fellow of the Institute of Electrical and Electronic Engineers (IEEE) and the IEEE Computer Society, a member of the Association for Computing Machinery (ACM), and a member of Sigma Xi. He was the founding chairman of the International Federation for Information Processing (IFIP) Working Group 10.2 (Computer Descriptions and Tools) and has served as chair of the Joint IEEE Computer Society/ACM Steering Committee for the Establishment of Software Engineering as a Profession (1993-1995), President of the IEEE Computer Society (1996), and IEEE Division V Director (1998-1999).

Barbacci is the recipient of several IEEE Computer Society Outstanding Contribution Certificates, the ACM Recognition of Service Award, and the IFIP Silver Core Award. He received bachelor’s and engineer’s degrees in electrical engineering from the Universidad Nacional de Ingenieria, Lima, Peru, and a doctorate in computer science from Carnegie Mellon.

The views expressed in this article are the author's only and do not represent directly or imply any official position or view of the Software Engineering Institute or Carnegie Mellon University. This article is intended to stimulate further discussion about this topic.
