MSTAR® and the IBM Rational Unified Process (RUP®)
Mosaic’s Structured Testing and Assessment Repository (MSTAR®) is a proven testing methodology providing state-of-the-art testing processes. The IBM Rational Unified Process (RUP®) is a comprehensive software development methodology incorporating state-of-the-art development processes. Much interest has been expressed in how these two powerful methodologies fit together. This paper describes how MSTAR® complements and extends the power of RUP®.
EXECUTIVE SUMMARY
RUP® places a high value on testing early and continuously to measure and provide feedback on the developed product’s quality. RUP® includes a testing discipline that is responsible for iteratively performing the required testing. This discipline is supported by quality concepts and presents testing as a critical risk management technique. While the RUP® testing discipline represents a good iterative testing approach for software quality, it does not provide the depth of testing guidance needed to ensure that testing consistently meets its quality objectives across all projects.
MSTAR® shares RUP®’s vision of early and continuous testing and is supported by similar quality concepts. MSTAR® encompasses industry-recognized testing activities refined through years of experience focused on developing testing expertise. With this expertise, MSTAR® offers a depth of guidance, best practices and support that transforms RUP®’s testing discipline into a comprehensive, consistent and highly effective risk management tool. The following comparison summarizes the testing support provided by RUP® and MSTAR® for the key testing activities; it reflects RUP®’s most current testing discipline.
TESTING ACTIVITIES

Testing Organization

RUP®:
- Four test stages are conceptually described: Unit, Integration, System and Acceptance
- Unit test is included in the Developer Component Implementation activity
- The other test levels are not explicitly included in any activity but are assumed to follow the generic iterative testing workflow

MSTAR®:
- Test levels are defined based on responsibilities, with objective entry/exit criteria
- Standard test levels are defined: Unit, System, Acceptance and Pilot
- Customized test levels are supported based on project needs; an Integration test level is a common custom test level
- Test levels are integrated with the development process

Plan Test

RUP®:
- The Test Strategy artifact defines the test approach based on test motivators (quality risks)
- A Test Idea List identifies test conditions
- No other explicit test planning activity is included

MSTAR®:
- Multiple test plans are developed based on risk
- The Master Test Strategy provides the overall plan and establishes a foundation for all the test levels
- Detailed Test Plan(s) are developed for each test level
- Test conditions are measurably based on requirements

Design Tests

RUP®:
- No specific activity to design tests is included
- The Structure the Test Implementation activity identifies Test Suites and the Test Scripts that will be needed
- Test Suites are implemented by combining Test Scripts
- Test Ideas are conceptually used to define tests
- Maintaining traceability is included but discussed without specifics
- There is no reference to test data
- Activities mention manual testing but lean toward test automation

MSTAR®:
- Super Scenarios, Scenarios and Test Cases are developed based on testable requirements and proven testing techniques
- Different types of testing are included based on the need to test different types of requirements
- Test coverage is managed by maintaining traceability of scenarios to requirements
- Test data is managed outside the scenarios in Data Profiles to promote reusability
- An architectural approach to automation supports seamless automation of manual Test Scenarios

Execute and Evaluate Tests

RUP®:
- Test Suites are executed
- A Test Log is captured
- Problems are entered as Change Requests
- The Test Evaluation Summary artifact includes test execution coverage based on executed Test Scripts, defect analysis and the tester’s assessment of risk and quality

MSTAR®:
- Automated and manual Super Scenarios are executed
- Problems are entered as Defects
- Guidelines, templates and samples are included for Test Schedules, Execution Checklists, Test Execution Logs, Defect Reports, Test Coverage Reports and Test Status Reports
- Objective coverage measures and risk assessments are enabled through the process for sizing software based on testable requirements

Test Management

RUP®:
- The Assess and Advocate Quality activity includes review of testing artifacts to identify and manage risk
- The testing work plan is prepared as part of the Project Management discipline
- Various size measures and testing metrics are conceptually suggested

MSTAR®:
- Guidelines and templates for the Testing Work Plan are provided
- Testing estimating guidelines are provided
- The process for estimating system size based on testable requirements provides consistent size measurement throughout the software life cycle
- Sample reports provide testing metrics for objective risk management
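The MSTAR® practice of managing test coverage through traceability of scenarios to requirements can be illustrated with a minimal sketch. All names here (requirement IDs, scenario names, the `coverage_report` function) are hypothetical and are not part of MSTAR® or RUP® tooling; the sketch only shows how a traceability matrix yields an objective coverage measure.

```python
# Illustrative sketch only: a simple requirements-to-scenario traceability
# matrix. Names are hypothetical, not MSTAR(R) or RUP(R) artifacts.

def coverage_report(requirements, traceability):
    """Return covered requirements, uncovered requirements and coverage %."""
    covered = {r for r in requirements if traceability.get(r)}
    uncovered = set(requirements) - covered
    pct = 100.0 * len(covered) / len(requirements) if requirements else 0.0
    return covered, uncovered, pct

requirements = ["REQ-001", "REQ-002", "REQ-003"]
traceability = {
    "REQ-001": ["Scenario-A", "Scenario-B"],  # requirement -> covering scenarios
    "REQ-002": ["Scenario-C"],
    # REQ-003 has no covering scenario yet, so it appears as untested
}

covered, uncovered, pct = coverage_report(requirements, traceability)
print(f"Coverage: {pct:.0f}% ({len(covered)} of {len(requirements)} requirements)")
print("Untested:", sorted(uncovered))
```

A matrix like this makes coverage objective: the percentage is computed from testable requirements rather than from a tester’s impression of how much has been exercised.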
As the preceding comparison illustrates, MSTAR® and RUP® support the same quality testing workflow, but MSTAR® completes and extends the RUP® process with comprehensive guidelines, templates and samples based on industry testing experience.
To read the entire paper, please email us. White papers are available for both the current version and the prior (2002) version of RUP®.