Integrating Systems Engineering and Test & Evaluation in System of Systems Development

Dr. Judith Dahmann, The MITRE Corporation, McLean, VA, USA, jdahmann at mitre.org
Kathy Smith, GBL Systems, Camarillo, CA, USA, ksmith at gblsys.com

Abstract - This paper presents an approach to integrated systems engineering (SE) and test and evaluation (T&E) for systems of systems (SoS), based on work underway by the National Defense Industrial Association Systems Engineering Division Systems of Systems and Developmental Test and Evaluation Committees [1]. The paper focuses on how to approach T&E for SoS, given the challenges of large scale SoS development, as a continuous improvement process that provides information on SoS capabilities and limitations for end users and feedback to the SoS and system SE teams toward SoS evolution.

Keywords: system of systems; system of systems engineering; systems engineering artifacts; test and evaluation

I. INTRODUCTION

Application of systems engineering to systems of systems (SoS) has been gaining momentum both in defense and non-defense sectors. In the US Department of Defense, there is increasing attention to risks introduced by system interdependencies and increasing emphasis by the Military Services and the Joint Staff on applying systems engineering to SoS and mission capabilities. On the non-defense side, the European Commission [2] has launched several new infrastructure projects to develop processes and tools to support SoS planning, development and integration in civil applications. Work on SoS SE has moved from general principles and guidance to more focused implementation approaches. As the practice has matured, there has been growing interest in how to address some of the challenges related to SoS, particularly in the application of test and evaluation to SoS. This paper is based on the work of the National Defense Industrial Association (NDIA) Systems Engineering (SE) Division SoS and Test Committees [1].
From the initiation of the SoS committee in 2008, members identified SoS and T&E as an area of high interest, and initially focused on understanding the challenges of SoS and T&E. In August 2010, the SoS and Test Committees joined in a workshop to address these challenges. A recommendation from the workshop was to develop a best practices model for T&E for SoS. A team was formed and, over the following year, work was undertaken to review the issues facing T&E of SoS given the driving characteristics of SoS, current guidance on SE and T&E, and experience with SoS and SoS T&E. The results of this work are reflected in this paper [3].

II. BACKGROUND AND SCOPE

While SoS have been defined in various ways [4,5,6], the key characteristic of an SoS is the independence of the systems which comprise it. For the purposes of this paper, an SoS is defined as a set or arrangement of systems that results when independent and useful systems are integrated into a larger system that delivers unique capabilities [7]. The focus of this paper is on acknowledged SoS [8]. Acknowledged SoS have recognized objectives, a designated manager, and resources for the SoS; however, the constituent systems retain their independent ownership, objectives, funding, development and sustainment approaches. This paper also applies to mission-level SoS, where multiple platforms, information technology (IT) and other systems are brought together to meet larger mission-level capability objectives. Other SoS, such as platform-level SoS (integration of separately developed systems on a submarine, for instance) and IT-based SoS (where information across an SoS is managed as an enterprise asset), share many of the issues addressed here, but they have their own specific considerations as well. A potential next step in this research is to look more closely at how this approach applies to other SoS types and domains. The literature on T&E for SoS has focused on several areas. In defense acquisition, work has addressed testing systems in their operational environments in the context of the larger SoS [9-13].
This has included methodologies, tools and environments to support end-to-end testing of the SoS. Much of this has focused on military command and control and network centric warfare applications, as well as on other government domains with large scale SoS challenges, such as air traffic control systems [14]. This literature addresses issues of complexity and emergent behavior, and has investigated ways to construct or leverage realistic environments to conduct end-to-end test events to generate data to evaluate the behavior and performance of systems and SoS. This paper acknowledges the importance of this growing body of knowledge, but it recognizes that due to the characteristics of many SoS today, it is not realistic to expect to conduct full end-to-end SoS testing each time changes are made in a complex SoS, even with the use of

simulation environments and other advances in testing technologies.

T&E of SoS can focus in several areas. The focus of this paper is on the application of T&E during the SoS evolution process to address planned and unplanned changes in constituent systems. This can be viewed as SoS developmental T&E or Verification and Validation (V&V) and is done as part of SoS evolution, not as a separate process. T&E can also be viewed as feedback on fielded SoS to assess overall SoS performance and identify operational problems and emergent (unexpected) behavior. This typically draws on data from operationally realistic environments, and it may be periodic and not tied directly to SoS update cycles. Finally, T&E is also needed when testing individual systems in an SoS context. The approach discussed in this paper may apply to this aspect, but the focus in developing the approach has been at the SoS versus the system level.

So, this paper examines the nature of SoS and SoS SE to identify ways to achieve the goal of T&E given the recognized challenges of large scale SoS during the evolution of an SoS. The overall goal of T&E is to reduce risk by providing crucial information to decision makers [7]. The T&E approach for SoS presented here is characterized by several core ideas:
- Integrate T&E with SE throughout the evolution of an SoS;
- Focus T&E on risk, both in the planning of the SoS and in its implementation;
- Employ a variety of sources of evidence, including prior T&E results, data from SoS analysis, and SoS test events as needed.
The objective of the approach is to provide the data needed to achieve the goal of T&E within the constraints of large scale SoS.

III. CHARACTERISTICS OF SOS AND CHALLENGES FOR T&E

SoS present unique development challenges which also impact T&E [15]. These result from several factors. Most acknowledged mission SoS are comprised of multiple, existing systems which maintain managerial and operational independence and hence have their own development and upgrade cycles.
SoS evolution is a product of changes in constituent systems which are typically implemented asynchronously across an SoS. Since the systems typically continue to support other users concurrently with their role in an SoS, there is natural resistance to delaying a system's deployment until other systems' developments have completed and an SoS test can be conducted. In addition, systems may be making changes independently from the SoS to meet other user needs. Consequently, it can be very difficult to align changes in systems across an SoS and, in effect, there may be changes in a large SoS taking place fairly continuously. These are challenges which face SE of SoS in development of a sound SoS architecture and in implementing a disciplined approach to evolution of an SoS. Similarly, they affect the ability to conduct T&E as typically practiced in system acquisition, where T&E is used to support acceptance and deployment decisions. Finally, mission SoS can be large and complex, which makes the challenge of constructing end-to-end tests costly and time consuming. This complexity poses serious challenges for constructing and validating full SoS test environments, even when employing simulation, which can reduce the costs and time for testing elements of an SoS.

IV. IMPLEMENTATION OF SOS SE: WAVE MODEL

Most SoS development is evolutionary, with updates to the SoS accruing over time based on changes in constituent systems. A model of implementation of the evolution of an SoS is shown in Figure 1 [16]. This wave model for systems engineering of an SoS builds on Dombkins' work on wave planning for complex systems management [17]. It is characterized by multiple overlapping iterations of SoS evolution supported by ongoing SoS analysis, which provides the basis for each iteration or wave of SoS evolution. Continuous monitoring of the external environment is key for SoS SE, since SoS management controls only a small part of the SoS environment. Finally, the evolution of the SoS architecture is also important.
The architecture of an SoS provides a persistent framework for the SoS evolution over time; however, the architecture is typically implemented incrementally and may itself evolve. The wave model is described in more detail elsewhere [16]. Table 1 displays a description of each step along with the associated key SoS SE artifacts [18]. The wave model provides the framework for examining the role of T&E in SoS evolution.

[Figure 1. SoS SE Wave Model Elements [16]: Initiate SoS; Conduct SoS Analysis; Develop/Evolve SoS Architecture; then repeated, overlapping waves of Plan SoS Update and Implement SoS Update, with Continue SoS Analysis and monitoring of the External Environment throughout.]

V. T&E IN SOS EVOLUTION

This discussion is grounded in the principle that applying effective T&E to an SoS is based on effective application of SE at the SoS level. As a result, SoS SE and T&E share key common elements, making it somewhat difficult to tell where SE stops and T&E begins. SoS objectives and metrics serve as the basis for requirements on constituent systems and for evaluating SoS objectives. As such, SoS SE provides a starting framework to address T&E challenges, including approaches to accommodating asynchronous system development and test and to developing architecture approaches which shelter the SoS from changes in systems.
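The wave-model rhythm described above, overlapping increments that field whatever constituent-system changes are ready while late changes roll into the next wave, can be sketched in a few lines of Python. This is an illustrative sketch only, not an artifact of the NDIA work; the names (Change, SosBaseline, wave_iteration) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    system: str           # constituent system making the change
    description: str
    planned_by_sos: bool  # False = made independently of the SoS

@dataclass
class SosBaseline:
    increment: int = 0
    fielded: list = field(default_factory=list)

def wave_iteration(baseline, backlog, ready):
    """One wave of SoS evolution (cf. Table 1): plan the next SoS update
    from the constituent changes whose systems are ready this cycle,
    implement it, and carry the rest over to the next wave."""
    this_wave = [c for c in backlog if c.system in ready]       # Plan SoS Update
    carry_over = [c for c in backlog if c.system not in ready]
    baseline.fielded.extend(this_wave)                          # Implement SoS Update
    baseline.increment += 1
    return baseline, carry_over

# Asynchronous development: not every system is ready in every wave.
backlog = [Change("Sensor-A", "new track format", True),
           Change("C2-Node", "capacity upgrade", False),
           Change("Shooter-B", "interface update", True)]
sos = SosBaseline()
sos, backlog = wave_iteration(sos, backlog, ready={"Sensor-A", "Shooter-B"})
sos, backlog = wave_iteration(sos, backlog, ready={"C2-Node"})
# All three changes are fielded over two overlapping increments; no system
# is delayed waiting for a single all-up SoS test.
```

The point of the sketch is the carry-over list: the wave loop never blocks on a constituent that is not ready, which is the model's answer to asynchronous system development schedules.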

In this section, the key activities in each step of the wave model are identified as context for discussion of T&E. The T&E activities are overlaid on the SE activities, showing where shared or common activities occur and how T&E may affect or supplement them to ensure effective T&E for the SoS.

A. Initiation

In this paper we assume that the Initiate SoS step has taken place, there is an organization responsible for the SoS and there is SE support. However, as in acknowledged SoS, the systems which support the SoS objectives maintain management and operational independence, and typically continue to support their original users. As described in An Implementer's View of SE for SoS [16], Initiate SoS provides the foundational information to start the SoS SE process, including an understanding of SoS objectives, key users, user roles and expectations, and core systems supporting SoS capabilities. Information important to the execution of this element includes a statement of top-level objectives for the SoS (SoS capability objectives), a description of how systems in the SoS will be employed in an operational setting (SoS CONOPS) and programmatic and technical information about systems that affect SoS capability objectives (systems information).

TABLE 1. STEPS IN THE WAVE MODEL AND ASSOCIATED SOS SE ARTIFACTS

Conduct SoS Analysis: Provides an analysis of the SoS "as is" and the basis for SoS evolution by establishing an initial baseline and developing initial plans for the SoS engineering efforts.
Artifacts: SoS Capability Objectives; SoS CONOPS; Constituent System Info; Technical Baselines; Performance Measures & Methods; Performance Data; Requirements Space; Risks & Mitigations; Plan for SoS SE; SoS SE Planning Elements.

Develop and Evolve SoS Architecture: Develops and evolves the persistent technical framework for addressing SoS evolution and a migration plan identifying risks and mitigations.
Artifacts: SoS Master Plan; SoS Agreements; SoS Architecture; Allocated baseline; Risks and mitigations.

Plan SoS Update: Evaluates the priorities, options and backlogs to define the plan for the next SoS upgrade cycle.
Artifacts: SoS Agreements; Implementation, Integration & Test Plans; Integrated Master Schedule (IMS); updated Master Plan, Technical Baselines, and Requirements Space.

Implement SoS Update: SoS SE team monitors systems implementation and test and leads SoS integration and test.
Artifacts: SoS Test Report; Technical Plans; Requirements Space; Performance Data; System Test Reports; IMS; Technical Baselines.

B. SoS Analysis

Conduct SoS Analysis provides an analysis of the SoS "as is" and the basis for SoS evolution by establishing an initial baseline and developing initial plans for the SoS engineering efforts. Artifacts important to this step are shown in Table 1. These are developed based on a set of SoS SE analysis activities focused on characterizing the SoS. These include:
- Understand the operational context and develop a concept of operations (CONOPS) which includes key steps in the process and constraints on those steps.
This provides some form of an activity sequence (business process model, set of mission threads, or use cases), including conditions, players and performance objectives.
- Develop the functional baseline by defining the specific tasks for each component of the activity sequence to further delineate the functionality supporting the capability objective.
- Define the current system or product baseline by identifying systems supporting the capability objectives, aligning them to the components and functionality needs, with data on current performance.
- Develop a baseline functional architecture for the SoS by identifying the key functions to be supported across the activity sequence, including performance objectives.

What is the relationship of these activities and artifacts to T&E of the SoS? First, looking at the SE activities, it is clear that many of these are key for SE and they are also critical elements of T&E. Capability and performance objectives and methods provide a foundation for T&E. CONOPS, mission threads and tasks are all needed elements for structuring T&E activities. Beyond this, recognizing that most acknowledged SoS are comprised of existing, fielded systems, understanding current system performance requires an understanding and assessment of available evidence from various sources, including system T&E results. Systematic development and analysis of this data is core to SoS analysis. Further, since an SoS typically brings systems together in new ways, there may not be data available to understand key aspects of current performance of the SoS. In cases where more data is needed, additional testing may be required to support the SoS analysis. In sum, SoS analysis establishes T&E foundations for the SoS. Further, T&E feedback from constituent systems is an important input to the SoS analysis.
C. SoS Architecture Development and Evolution

Develop and Evolve SoS Architecture focuses on creating the persistent technical framework for addressing SoS evolution and developing a migration plan, identifying risks and mitigations. As described in An Implementer's View of SE for SoS [16], the architecture provides a shared representation of the technical framework and is used to inform and document decisions and guide SoS evolution. It

defines the way in which the contributing, constituent systems work together. Implementation of the architecture will add to the SoS requirements space, with changes needed in systems both in interfaces and in functionality when key to cross-cutting issues. Architecture development and evolution is conducted based on a set of logical activities:
- Delineate end-to-end SoS capabilities;
- Identify how current systems support the capability objectives;
- Align systems with functional needs by identifying how specific systems support the capability objectives and align to functionality needs;
- Identify and evaluate alternative approaches to organizing and augmenting systems to meet SoS needs, building on knowledge gained through SoS analysis.

The evaluation of architecture alternatives is based on a set of considerations which include support for functional and performance objectives, impact on constituent systems, costs of implementation and sustainment, and risks. The products are the selection of an architecture and plans for implementation, including changes needed in constituent systems.

How does T&E factor into SoS architecture development and evolution? First, as in SoS analysis, data on attributes and performance of systems, typically drawn from system T&E, is a key contributor to identification and analysis of architecture approaches. Further, because the components of the SoS typically include existing systems, T&E methods and tools can contribute to the assessment of alternative architectures through application of various approaches, including live, virtual and constructive environments, to assess alternatives against desired architecture objectives. Finally, T&E considerations may suggest important architecture characteristics.
For example, if elements of the SoS are known to have long lead time test and certification requirements (e.g., airborne platforms) and there is a need for rapid improvements in SoS performance, an architecture which shelters these constituent systems from changes would be a consideration. Similarly, if some systems are known to be very dynamic, an objective of the architecture may be to isolate changes in those systems from impact on other systems in the SoS. The robustness of the architecture enables continued support for SoS objectives in the context of change in these systems, and limits risk of impacts on the SoS. It may also facilitate more efficient T&E, as discussed below.

D. Planning SoS Updates

In Plan SoS Update the focus is on evaluating the priorities, options and backlogs to define the plan for the next SoS upgrade cycle. The artifacts generated during this step, listed in Table 1, provide the technical plans for the next increment of SoS evolution and the supporting schedules and agreements. SE activities in this step include:
- Identify areas to be addressed in the next iteration, based both on areas with shortfalls in SoS performance and on the feasibility of making changes in the associated systems, given their development plans;
- Evaluate options for addressing needs, working with the relevant systems to identify and assess alternative ways to address the needs and identify the selected approach, which allocates incremental capabilities and hence new requirements to constituent systems;
- Develop plans for system and SoS development, integration and test; this includes the SoS-level Integrated Master Schedule (IMS), which focuses on key synchronization points among system-level plans and additions to system plans for SoS development and test, not on aggregation of system-level plans at the SoS level;
- Identify and address risks and mitigations, which include systems changes and their potential impacts.

How does T&E factor into planning an SoS update?
First, as both the activities and artifacts indicate, T&E planning is a core part of the planning for the SoS update. Second, given the earlier discussion of barriers to conducting full SoS testing with each change in a constituent of the SoS, this step is critical in identifying areas of risk which warrant T&E attention. To plan for T&E, expected changes in systems are identified. This includes both changes planned as part of the SoS as well as changes planned by the system for other reasons. Using the SoS architecture as the technical framework, the potential impact of these changes can be assessed. How does a planned change affect other systems in the SoS, given the CONOPS and system dependencies? Where will these changes be experienced? Are other components of the SoS equipped to handle the changes? Where are the risks and how can these be assessed? How can they be mitigated? Changes with no impact on other parts of the SoS (e.g., improved quality of sensor feeds but no change in format, interfaces, volume, etc.) may not need attention beyond T&E at the system level. Changes with potential impact, and hence risk to other systems and the SoS, will require T&E attention. This identification of risk based on unintended or adverse impacts of the planned changes on the SoS or other systems is a key part of the SE activities in the SoS update process as well. In SoS SE update planning, an important factor in determining what changes will be made in an iteration is ensuring that once these changes are made, the SoS (and the constituent systems) will continue to perform as well as before, or better. At a minimum, these should operate within an acceptable level of performance. Depending on the architecture, changes in one system (added capacity, for example) could affect systems downstream if they don't have the ability to handle the impact of the change (an increased load). In addition, a change in an interface or functionality to support the SoS could impact other systems which depend on the current features.
This is an area of shared equity for SE and T&E. It needs to be evaluated during the update planning process because the results may change the update plans if unacceptable impacts are identified which cannot be addressed during implementation. Even in areas where the results suggest that the changes will have no impact on other parts of the SoS, additions to systems testing may be needed to verify

that the implementation does not include unanticipated changes beyond what was specified in the plans and which could increase the risk of undesirable impacts. For those areas where there is confidence of success but manageable risks remain, the focus is on T&E during implementation.

In sum, a critical part of planning an SoS update is the analysis of changes and risks to identify the areas to be addressed by T&E. Changes in the systems in the SoS are identified (both planned by the SoS and planned independently by the constituents) and together SE and T&E address the following questions:
- What are the potential impacts of these changes? What are the risks? What evidence is there that these changes will not adversely impact other systems and SoS mission objectives? A recently developed method to assess these potential ripple effects is known as Functional Dependency Network Analysis (FDNA) [19].
- What data is needed and how can this data be obtained? Can this be done as part of the system tests? Are added test events needed? How are these incorporated into the overall SoS plan and IMS?
- What testing tools and environments are needed to address the specific challenges? Do we need test drivers to address asynchronous development? What environments can be created or employed to address specific risks?

E. Implementation

Implement SoS Update involves actions by both the systems and the SoS. The systems implement and test changes while the SoS team monitors progress and conducts SoS integration and testing, resulting in a new SoS product baseline. T&E is a key part of SoS implementation. Systems making updates conduct T&E at the system level. Based on the assessment of impacts of changes on other parts of the SoS, added T&E elements have been included in the system test plans. SoS T&E includes monitoring the implementation of the planned system testing, conducting added testing to address SoS risks, evaluating the results, and recommending changes in plans as needed. SoS capability results are identified (both planned and unplanned).
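The screening logic in the planning questions above amounts to a reachability check over the SoS dependency graph: a change that stays behind its interfaces needs only system-level T&E, while a change that crosses an interface and has downstream dependents warrants SoS-level attention. The sketch below is a deliberate simplification under assumed names (dependents, triage); it is not the FDNA formulation, which [19] develops with dependency strength and criticality parameters.

```python
from collections import deque

def downstream(dependents, changed):
    """Systems reachable from a changed system over 'feeds' edges
    (dependents[x] lists the systems that consume x's outputs)."""
    seen, queue = set(), deque([changed])
    while queue:
        for nxt in dependents.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def triage(dependents, changes):
    """Split (system, crosses_interface) changes into those needing only
    system-level T&E and those needing SoS-level T&E attention."""
    system_only, sos_attention = [], []
    for system, crosses_interface in changes:
        if crosses_interface and downstream(dependents, system):
            sos_attention.append(system)
        else:
            system_only.append(system)
    return system_only, sos_attention

dependents = {"Sensor-A": ["Fusion"], "Fusion": ["C2-Node"]}
changes = [("Sensor-A", False),   # better feed quality, same format
           ("Fusion", True)]      # message schema change crosses an interface
print(triage(dependents, changes))  # → (['Sensor-A'], ['Fusion'])
```

The example mirrors the sensor-feed case in the text: the quality-only improvement stays at system-level T&E, while the schema change ripples to the downstream consumer and is flagged for SoS T&E.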
T&E questions surrounding the implementation step include the following: Does SoS performance meet expectations for this increment? What are the potential impacts of any shortfall or differences on the next increment? What are the risks? What evidence is there that these changes will need to be regression tested in the next increment? The answers to these questions become the basis for capabilities and limitations information provided to end users, and serve as feedback to the SE and T&E teams for the SoS and systems toward continued evolution of the SoS.

VI. DISCUSSION

This paper presents the first step in an ongoing effort to examine ways to achieve the goals of T&E given the challenges of large acknowledged, mission-level SoS. The approach centers on integrated SE and T&E, with a focus on early assessment of risk and interdependencies both as part of SoS planning and analysis and as the driver for focusing T&E for an iteration of SoS evolution. The work leaves open a number of important questions and opportunities for future work. First, what are the implications of this approach for SoS analysis and architecture methods and tools? What approaches enable a foundational understanding of the constituent systems and their relationships to provide the analytic ability to assess impacts of changes? Second, how applicable is this approach to other SoS types or domains? What are the differences in these types and domains, and what do these mean for extending this approach to these other problem areas? Third, what has been the experience with use of the critical aspects of this approach by current SoS efforts? How can this experience be used to assess, adapt and mature the approach? Finally, while this paper has focused on a tractable approach to addressing risk within increments of SoS evolution, it leaves open the question of how to assess the end-to-end behavior of the SoS in context.
Past work [15] has suggested the need for continuous feedback from field experience from operations or use in training or operational exercises. This type of T&E after deployment is key to understanding the degree to which the SoS is meeting user capability needs, and it can be critical to understanding unanticipated aspects (emergent behavior) of an SoS in operation. There has been discussion of this need and there are operational examples, but this remains an area which warrants research attention.

VII. SUMMARY

The approach presented in this paper began with recognition of SoS T&E constraints. Full T&E to address the impact on the SoS of all changes in constituent systems is typically not feasible, given the size of many SoS (including the number of different constituent systems) and the dynamic nature of constituent systems and their independent, asynchronous development schedules. This is certainly the case for conventional live testing and is often the case even with approaches using various forms of virtual and constructive simulation. As a result, the approach focuses T&E specifically on areas of risk critical to SoS and constituent system success. It begins with an examination of the changes which have been made in the SoS, and identifies areas critical to success and places where changes could have adverse impacts. Risk is assessed using evidence from a range of sources, including live test where the risks warrant it. Evidence can be based either on activity at the SoS level, or on roll-ups of system-level activities. T&E actions can be explicit verification testing, application of models and simulations, use of linked integration facilities, and analysis of results of system-level operational test and evaluation. T&E results provide continuous improvement feedback to end users in the form of capabilities and limitations rather than as test criteria for deployment, and to SE teams of both the SoS and systems on progress and issues.
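The evidence-focused posture described in this summary can be sketched as a simple roll-up: each identified risk area is matched against evidence sources already on hand (system test reports, SoS analysis, simulation runs), and an added test event is scheduled only where no source covers the risk. All names and the coverage rule here are invented for this sketch; an actual program would weigh evidence quality, not just presence.

```python
def plan_evidence(risk_areas, evidence):
    """Reuse existing evidence per risk area; schedule an added test
    event only for risks that no current source covers."""
    reuse, added_events = {}, []
    for risk in risk_areas:
        sources = [src for src, covered in evidence.items() if risk in covered]
        if sources:
            reuse[risk] = sources          # existing evidence suffices
        else:
            added_events.append(risk)      # warrants a new SoS test event
    return reuse, added_events

# Evidence rolled up from system-level and SoS-level activity.
evidence = {
    "system test reports": {"interface format", "message latency"},
    "SoS analysis":        {"load under surge"},
    "simulation runs":     {"message latency"},
}
reuse, added = plan_evidence(
    ["interface format", "load under surge", "end-to-end track continuity"],
    evidence)
# Only 'end-to-end track continuity' lacks evidence and triggers a test event.
```

This is the constraint-driven core of the approach: test events are the scarce resource, so they are spent only where the existing body of evidence leaves a risk uncovered.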

References

[1] Team members include Judith Dahmann (MITRE) and Rob Heilmann (OSD) (co-leads), John R. Palmer (Boeing), Kathy Smith (GBL), Jim Buscemi (GBL), Ed Romero (Navy), Paola Pringle (Navy), William Riski (Booz Allen Hamilton), Keith A. Taggart (SPEC), Laura Feinerman (MITRE), Kent Pickett (MITRE), Chris Scrapper (SAIC), George Rebovich (MITRE), P. Michael Guba (INTEROPTIK), Elizabeth Wilson (Raytheon).
[2] A. Konstantellos, "Prerequisites and challenges for a European Systems of Systems R&D initiative," Proceedings of the IEEE Systems of Systems Engineering Conference, Albuquerque, June.
[3] J. Dahmann, R. Heilmann et al., "Best practices model for SoS SE and T&E: A report from the NDIA SE and T&E Committees," presentation at the NDIA SE Conference, San Diego, October.
[4] M. Maier, "Architecting principles for systems-of-systems," Systems Engineering, Vol. 1, No. 4 (1998).
[5] A. P. Sage and C. D. Cuppan, "On the systems engineering and management of systems of systems and federations of systems," Information, Knowledge, Systems Management, Vol. 2, No. 4 (2001).
[6] M. Jamshidi, "Introduction to systems of systems," in Systems of Systems Engineering: Innovations for the 21st Century, Wiley.
[7] DoD, Defense Acquisition Guidebook, Washington, D.C.: Pentagon, February.
[8] J. Dahmann and K. Baldwin, "Understanding the current state of US defense systems of systems and the implications for systems engineering," Proceedings of the IEEE Systems Conference, Montreal, Canada, April.
[9] E. Bjorkman et al., "Testing in a joint environment," ITEA Journal, 2009; 30.
[10] J. Columbi, "Interoperability test and evaluation: A systems of systems field study," Crosstalk, November.
[11] S. Conley, "Test and evaluation strategies for network-enabled systems," ITEA Journal, 2009.
[12] K. Meiners, "Net-Centric ISR," presentation at the NDIA SE Conference, San Diego, October.
[13] J.
Goodenough, "V&V of SoS," presentation at the Federal Aviation Administration Verification and Validation Summit, Atlantic City, NJ, October.
[14] M. Molz, "Initiatives in V&V," presentation at the Federal Aviation Administration Verification and Validation Summit, Atlantic City, NJ, October.
[15] J. Dahmann, G. Rebovich, J. Lane and R. Lowry, "Systems of systems test and evaluation challenges," Proceedings of the IEEE Systems of Systems Engineering Conference, Loughborough, UK, June.
[16] J. Dahmann, G. Rebovich, J. Lane and R. Lowry, "An Implementer's View of Systems Engineering for Systems of Systems," Proceedings of the IEEE Systems Conference, Montreal, Canada, May.
[17] D. Dombkins, Complex Project Management, Booksurge Publishing, South Carolina.
[18] J. Dahmann, G. Rebovich, J. Lane and R. Lowry, "Systems engineering artifacts for SoS," Proceedings of the IEEE Systems Conference, San Diego, CA, April 5-8.
[19] P. R. Garvey and C. A. Pinto, "Introduction to Functional Dependency Network Analysis," Proceedings of the Second International Symposium on Engineering Systems, Massachusetts Institute of Technology, June 2009.
