Architecture Assessments; Needs and Experiences

White Paper resulting from the Architecture Forum meeting of April 14-15, 2009 (Washington DC, USA)

Edited by:
Dr. Gerrit Muller, Embedded Systems Institute
Mr. Eirik Hole, Stevens Institute of Technology

Input was provided by the following participants in the Architecture Forum:

    John Bagley       Raytheon
    Rolf Siegers      Raytheon
    Mans Bjuggren*    Micronic Laser Systems AB
    Martin Simons*    Daimler
    Rob Cloutier      Stevens Institute of Technology
    Arne Skogstad     FFI
    Eirik Hole        Stevens Institute of Technology
    Dinesh Verma      Stevens Institute of Technology
    Kenneth Kung      Raytheon
    Jim Williams      Federal Aviation Administration
    Gerrit Muller     Buskerud University College/ESI

    * contribution via internet connection

Published on November 20, 2009

1. Introduction

According to the literature, architecture assessments are generally undertaken as a risk reduction measure, to ensure that the current architectural approach and design are aligned with the needs and objectives of the critical stakeholders. Some of the benefits claimed by the SEI (http://www.sei.cmu.edu/architecture/tools/atam/index.cfm) are: identification of risks early in the life cycle, clarified quality attribute requirements, improved architecture documentation, a documented basis for architectural decisions, and increased communication among stakeholders.

The Architecture Forum members discussed architecture assessments based on the following questions:

1. Does your company have a need for architecture assessments?
2. Why, what triggers this need?
3. What scope do architecture assessments cover?
4. Who is doing the assessment? E.g. internal or external assessors.
5. What does your company do with the assessment outcome?

This discussion provided a baseline of the state of practice of architecture assessments. The next step was to elaborate, by using a hypothetical case, the preferred assessment format and duration, and the required assessors and participants. The following questions were explicitly addressed:

1. What process, method, or techniques do you propose? E.g. TOGAF [TOGAF 2009], ATAM [Kazman 2000], ARID [Clements 2002].
2. What are you looking at?
3. What are you looking for?
4. What is the assessment result?

A list of assessment questions was generated after reflection on the hypothetical case. This list is a starting point for creating assessment checklists.

2. From assessment trigger to outcome

All participants answered the questions formulated in the introduction for their part of the company. We have processed these answers to consolidate the current state of practice of architecture assessments.

Needs and triggers

The members who were present all indicated a need for architecture assessments. Companies typically want to know:

- Is our current architecture fit for future needs?
- Is the architecture still well aligned with strategic objectives?
- Can our architecture adapt to occurring changes or paradigm shifts, e.g. network centric warfare?
- Is our ongoing investment justifiable and explainable to independent assessors?

It was observed that assessments are either pro-active (e.g. at process gates) or re-active, often triggered by some (perceived) problem. Depending on the trigger, assessments can be planned, avoiding (big) project conflicts, or disruptive. One of the challenges of an assessment is to ensure that the assessment is real, and to avoid participants only going through the motions because some external force asks for an assessment.

Interestingly, the announcement of an assessment triggers the creation of documentation before the assessment. In other words, the assessment already has impact before it actually takes place. A side effect of assessments is that overview and awareness of cross-cutting concerns are created. Often the need for overview and awareness is an implicit, unspoken need.

Scope

Architecture assessments are performed at different levels, from components up to an entire portfolio. The scope depends mostly on the initial need that triggered the assessment. In general the scope is relatively broad, e.g.: Do we achieve the appropriate level of re-usability across the product line? Does the system align with the enterprise? Scoping of the assessment might be done in two steps, e.g. zooming in on a subset of issues after a global overview.

The heavy focus on requirements triggered by Systems Engineering processes narrows the focus of architects and other involved stakeholders. An assessment should look sufficiently broadly.

Principle 8.1 Architecture assessments must be broad enough and not be limited to requirements.

Who are the assessors?

Quite often internal people (architects, managers, designers, or engineers) form the assessment team. External reviewers are used only in larger organizations with more established architecting processes. For example, in one organization the Vice-President of Engineering and Program Managers are typically part of the assessment team.

During the discussion it was stated that architecture should be assessed by architects. What, then, could be the role of non-architects? The participants supported the importance of having architects review architecture. In fact, a requirement for the assessment team is that it contains sufficient "system thinkers" or "big picture people": people who are able to relate specific detailed issues to the much broader context. We tend to call these people architects. That does not exclude non-architects, such as managers or experts, as members of the assessment team. In some cases the presence of the right expert or manager as an assessment team member might even be crucial. However, we consider the main task of the assessment team to be assessing the architecture as a whole, the big picture, with attention for (threatening) details.

Note that the assessment result is often reported to non-architects, for instance the business owner.

Principle 8.2 Architecture assessors have to be system thinkers with an eye for detail.

It was pointed out that stakeholders can also contribute significantly to an assessment; customers especially can make a significant contribution. Ownership of the assessment and the assessment results is important. Ownership helps to achieve stakeholder involvement. A question that popped up in the same discussion is: do we need to speak the same language?

Principle 8.3 Architecture assessments mitigate the risk of inbreeding by involving independents.

The "who" questions (who assesses, who is the assessment for, and who are the architecture descriptions for) triggered lots of discussion. There is a tension between:

- architects facilitate communication and reasoning across stakeholders; this requires understandability and accessibility;
- contemporary systems have so many concepts and technologies that the descriptions cannot be simplified to be accessible and understandable for everyone.

Principle 8.4 Every part of the description of an architecture should be understandable by the directly related stakeholders. The high level description of an architecture should be understandable by non-architects.

What is done with the assessment outcome?

Most commonly, the outcome of the assessment is consolidated in a presentation of limited size, e.g. 20 slides. The outcome is communicated upward, e.g. to the management team, as well as downward, e.g. to the engineering team itself. Quite often the follow-up is delegated to the team that has been reviewed. One of the companies used as a rule-of-thumb that 90% of the findings are handled by the reviewed team; for the most severe 10% of issues, follow-up is ensured by entering them in the appropriate tracking systems. In exceptional cases the outcome of an assessment caused an ongoing tendering process to be aborted.
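The 90/10 rule-of-thumb can be made concrete in a few lines of code. The following Python sketch is purely illustrative (the severity scale, finding texts, and the triage function are our own invention, not taken from any participating company); it routes the most severe fraction of the findings into a tracking system and delegates the rest to the reviewed team.

    # Illustrative sketch only: routes assessment findings per the 90/10
    # rule-of-thumb described above. Severity scale and names are invented.

    def triage_findings(findings, tracked_fraction=0.10):
        """Split findings: the most severe fraction is formally tracked,
        the rest is delegated to the reviewed team."""
        ranked = sorted(findings, key=lambda f: f["severity"], reverse=True)
        cutoff = max(1, round(len(ranked) * tracked_fraction))
        return ranked[:cutoff], ranked[cutoff:]

    findings = [
        {"id": "F1", "severity": 9, "text": "No end-to-end timing budget"},
        {"id": "F2", "severity": 4, "text": "Outdated interface document"},
        {"id": "F3", "severity": 2, "text": "Inconsistent naming in models"},
    ]
    tracked, delegated = triage_findings(findings)
    print("enter into tracking system:", [f["id"] for f in tracked])
    print("delegate to reviewed team:", [f["id"] for f in delegated])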

Forward Architecting

Architecting is primarily a forward oriented activity, in which a solution is shaped by pro-active analysis and synthesis. Assessments are reactive: architecture assessments tend to detect issues that have not been covered sufficiently by the architecting effort. All processes, tools, and checklists that are proposed for architecture assessments are presumably also beneficial for the forward architecting activity. Ideally an assessment is only a formality, since the forward architecting activity coped with all architectural challenges pro-actively.

3. Elaborating a hypothetical case

The course System Architecture and Design from Stevens Institute uses a submarine case in the assessment topic. We divided the participants into two teams and asked them to make an architecture assessment proposal for that case, for a 3-day assessment. The proposal addressed:

- the process or method to be used
- the format of the assessments
- the participants
- the content to be discussed/assessed

Figures 1 and 2 show the results of the two teams. Team 1 produced a three day agenda. The agenda alternates review team meetings, plenary sessions, and smaller working sessions. Team 1 let the review team meeting start at 12pm; the actual review itself starts at 3pm on the first day. The morning is reserved for traveling, assuming that not all reviewers and participants are at the same site. Two hours are reserved for the team to meet each other and to prepare for the actual review. One hour is used plenary to explain the assessment process itself.

Day I
    AM           travel
    12-2pm       Review Team Meeting: reconnect; general context vs skills, domain
    2-3pm        Process overview (all)
    3-3:30pm     break
    3:30-5:30pm  Business/mission overview (with break), done by customer, program manager, or engineering domain expert
    5:30pm-...   Review Team Meeting

Day II
    8:30-noon    Technical architecture overview (with break), done by the chief architect: present trade-off decisions, key artifacts, key architecture drivers
    noon-3pm     Review Team Meeting (with break): define way forward; checklist tailoring; team member tasking; identify SMEs for interviews
    3-5:30pm     Interviews, research, data gathering

Day III
    8:30-9am     Review Team Meeting
    9am-noon     Continue interviews
    noon-2pm     Build outbrief
    2-3pm        Review outbrief with chief architect; tailor
    3-4pm        Formal outbrief (all)
    4pm-...      travel

Figure 1. Agenda proposal for architecture assessment by team 1.

The review starts with a business/mission review. This is followed, the next day, by a technical architecture overview. The chief architect is the main presenter, who presents trade-off decisions, key artifacts (tangible byproducts produced during architecting), and key architecture drivers. The whole morning is reserved for this technical architecture overview. The early afternoon is used by the review team to reflect on the first part of the assessment and to define the way forward. For example, based on the information obtained so far, the checklists may be tailored, more specific tasks may be allocated to team members, and Subject Matter Experts (SMEs) to be interviewed are identified. The remainder of the afternoon and the following morning are used for data gathering by interviewing and delving into artifacts. At noon on the third day the assessment team starts to build the presentation with findings and recommendations: the outbrief. The outbrief is reviewed and tailored with the chief architect. From 3 to 4pm the outbrief is presented plenary.

Team 2 created a coarse agenda, but added additional viewpoints on the people, the way-of-working, the output, and the tools: see Figure 2. The session of team 2 elaborated the idea that the assessment team itself makes high level models during the second day. Day 1 is used to get into the problem, and to find artifacts and SMEs. The high level model is the means to discuss the architecture, and to identify potential gaps and risks.

1st Day
    - understand the (perceived) problem context; who can answer this?
    - find artifacts associated with the architecture and one level above
    - formulate initial thoughts
    - invite Subject Matter Experts (for the "perceived" problem) to discuss the (operational) architecture

2nd Day
    - assessment team makes a high level architecture model (for the artifacts given)
    - assessment team needs to listen to the 1st day architecture discussion + invite SMEs

People
    - assessment team
    - submarine team: chief architect, systems engineer, technical customer representative

Way of working
    - assessment team discussion on the technical architecture in light of the perceived problem; if any problems are found, identify gaps + risks

Output
    - report on findings + recommendations
    - outbrief to everyone who participated in the assessment

Tools
    - definitions + measures; tailor them for this assessment
    - "SysML" tool for the 2nd day model; use existing models, if available
    - spreadsheets, templates

Figure 2. Assessment approach by team 2.

Day 3 is used to finish the model and discussions, to create a report, and to do a plenary outbrief.

Reflection on case results

The forum explicitly reflected on the case results as presented by both teams. A major issue that popped up is the amount of time that should or can be invested in architecture assessments; as one participant remarked: "We invented Cadillacs." Three days is perceived as a huge investment that is only warranted for large programs. The major cost of the investment is the time of critical resources, both from the project or program itself and from the assessors. In practice it is hard to line up all these resources at the same time. On the other hand, tens of person-days of assessment investment might save tens of person-years of development and engineering work when new risks are uncovered. From noon to noon was proposed as an alternative assessment duration. Several members have good experiences with that format, in which one afternoon, one evening, and one morning are available for the assessment.

We also looked at our original questions:

1. What process, method, or techniques do you propose? E.g. TOGAF, ATAM, ARID.
2. What are you looking at?
3. What are you looking for?
4. What is the assessment result?

What process, method, or techniques do you propose?

In general the mood during the forum meeting was that having some assessment processes, procedures, and guidelines for doing assessments is beneficial. The assessment proposals from the teams did not explicitly follow one of the initially mentioned methods, although both teams used elements from TOGAF and ATAM: looking at qualities, expecting certain artifacts, and using checklists.

The question popped up whether there are other methods or techniques that could be used for assessments. Examples mentioned were fault tree analysis, simulations, and FMEA-like techniques for reliability, safety, and security. These specific examples were seen as techniques that should be carried out by the project or program team itself; the assessment team might assess the outcome of such analyses (a small illustration follows below).

The team 2 presentation triggered an extensive discussion about tools and modeling. Does architecture assessment require any other tools than architecting does? Can any assessment tool also be used for forward architecting, e.g. checklists? The team 2 presentation also raised questions about what the assessment team can or may do, and what the responsibility of the project or program team is. Making high level models definitely is a responsibility of the project or program team. However, when these models are lacking or inadequate, the assessment team gains a lot of insight by making the models on the spot.
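As a small illustration of the FMEA-like techniques mentioned above, the Python sketch below computes a Risk Priority Number (RPN = severity x occurrence x detection) for a few invented failure modes of the submarine case. The failure modes, scales, and numbers are assumptions for illustration only; the project or program team would produce such a table, and the assessment team would review it.

    # Illustrative FMEA-like sketch: the project team would produce a table
    # like this; the assessment team reviews it. All values are invented.
    # RPN (Risk Priority Number) = severity x occurrence x detection,
    # each rated here on a 1..10 scale.

    failure_modes = [
        # (failure mode,             severity, occurrence, detection)
        ("sonar data link loss",            8,          3,         4),
        ("threat-response timeout",         9,          2,         6),
        ("operator display freeze",         6,          4,         3),
    ]

    for mode, sev, occ, det in sorted(
            failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
        print(f"RPN {sev * occ * det:4d}  {mode}")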

What are you looking at?

The consensus is that you look at architecture related artifacts of the system and its context, and at people. Examples of artifacts are the Concept of Operations (ConOps), requirements, the architecture description, and trade-off decisions. Quantification and metrics are seen as important things to look at. In the case of people, the assessment team looks at the competencies of the people involved, but also at behavior, needs, and emotions. Scenarios provide specific examples and a means to look in more depth.

What are you looking for?

This can be checklist based, for example a checklist with issues the assessors should look for:

- gaps and risks
- concerns of the assessment owner
- the critical parts of an architecture (e.g. internal and external interfaces, key performance parameters)
- the most relevant issues to be assessed, e.g.:
  - how well does the architecture address the requirements?
  - how adaptable/extendable is the architecture?
  - how well do the requirements address the needs?
  - concordance (coupling, dependencies)

These items are quite generic, so an assessor needs to be open-minded: what you look at and look for might be highly situational.

What is the assessment result?

The assessment result is a limited size presentation that is presented to and shared with a broad set of stakeholders. The identified issues are mostly provided to the project or program team. It is the responsibility of the project or program team to address these issues, or to ignore them based on their own judgment.

4. What questions to ask during an assessment

[Figure 3: a diagram classifying assessment questions into the categories assessment support; governance/architecting (who architects, who decides); system in context (usage context: enterprise & users; system: requirements, design, realization, technology; creation; life cycle: business and life cycle context); architecture fundamentals; and documentation (artifacts, tools).]

Figure 3. Questions to ask during an assessment can be classified into the categories assessment support, governance, system in context, and documentation.

The last step in the assessment discussion was a brainstorm to identify which questions to ask during an assessment. The idea behind this list is that it can be transformed into a checklist for architecture assessments. Figure 3 shows a classification of these questions. This classification was introduced after the brainstorm. Note that architects in the defense industry are quite used to DoDAF and frequently refer to the view number rather than the descriptive label. The OV-n (Operational View) and AV-n (All View) numbers used are defined in the Acronyms and Abbreviations section at the end.

Assessment support:
- What artifacts do you have, and how many? Knowing this number helps to plan the review. A checklist with 60 process items can be used: done, planned, or will not be done.

Figure 4. Assessment support question.

Figure 4 shows that the brainstorm resulted in one question that helps the assessment team in planning the assessment itself. This question can be posed before the actual assessment workshop (an illustrative sketch of such a checklist follows after Figure 5).

Governance/architecting:
- How is the architecture controlled? How is the architecture packaged? How do you communicate it through the project, e.g. via a repository, meetings, change notifications, et cetera? In other words, assess the architecting as well.
- What is the customer involvement in the operational view?
- What is necessary to define the architecture, e.g. a list of artifacts or a realization plan?
- What is your equivalent of an AV-1 (the architect's contract)? Show me your plan.

Figure 5. Governance and architecting questions.

The assessment of an architecture most often also touches the organizational context of the architecture: how is the architecture created, who is responsible, and many more governance questions; see Figure 5. Many process, people, and governance related issues pop up as a side effect of the architecture assessment.
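To suggest how the brainstormed questions could be turned into a working checklist, here is a minimal, purely illustrative Python sketch: the categories follow the Figure 3 classification, the statuses follow the Figure 4 idea (done, planned, or will not be done), and the individual entries are invented examples rather than the forum's actual list.

    # Illustrative sketch of an assessment checklist: categories follow the
    # Figure 3 classification, item statuses follow the Figure 4 idea
    # (done / planned / will not be done). Entries are invented examples.

    checklist = {
        "assessment support": [
            ("artifact inventory available?", "done"),
        ],
        "governance/architecting": [
            ("AV-1 equivalent (architect's contract) present?", "planned"),
            ("architecture change notification process defined?", "planned"),
        ],
        "system in context": [
            ("top-5 stakeholder concerns documented?", "done"),
            ("end-to-end timing budget defined?", "will not be done"),
        ],
        "documentation": [
            ("traceability ConOps -> requirements -> design?", "planned"),
        ],
    }

    for category, items in checklist.items():
        print(category)
        for question, status in items:
            print(f"  [{status:>16}] {question}")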

Usage context:
- What are the hottest issues customers want to be addressed? For example: how will the system handle a load of xxx transactions?
- What is the scope, what is the operational goal/problem? For example: usability problems for military operators, for instance ground soldiers with little education. Who are the intended operators?
- What are the top 5 concerns of the stakeholders?
- Interoperability: what other systems (or architectures) are influenced by this architecture? Context diagram or OV-2.
- Understanding the operational context: are there any scenarios or missions used to validate the architecture? E.g. behavioral aspects or qualities.

Life cycle context:
- How are you going to test and integrate?

Figure 6. Questions from the context of the system.

Many architecture assessment related questions are targeted towards understanding the context of the system: see Figure 6. The fit of the system in its context is one of the primary yardsticks of an architecture.

System:
- Show me a behavioral model, a first level white box model: for example an OV-5 activity diagram, a functional block diagram, a state diagram, or a simulation. Does it connect to the requirements, does it make sense?
- How are ilities that are not present in the requirements addressed? This is a measure of craftsmanship.
- What is the key subset of requirements that is driving the architecture, e.g. as a presentation, a spreadsheet, or a dump of the requirements database, but a small set of 1 to 20 items?
- What are the key performance parameters and related technical measures? Look for end-to-end numbers.
- Did the architects think about non-behavioral requirements; how are these documented, how are these allocated? For example: show me the timing budget for e.g. the threat response (an illustrative sketch of such a budget follows after Figure 8).

Figure 7. Questions about the system and its design and realization.

Another major set of questions is targeted at the system itself, its requirements, and how these requirements are addressed by the design and realization; see Figure 7.

Architecture fundamentals:
- What are the (5 to 10) architecture principles?
- How do you address security? E.g. physical before virtual, do not harm, et cetera.
- What are the main architectural trade-offs; what approach, criteria, or strategy is used?

Figure 8. Architecture questions.

Figure 8 shows a number of questions that are directly related to the architecture and are more overarching than the questions in Figures 5 and 6.
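As an illustration of the timing budget question in Figure 7, the Python sketch below allocates an assumed end-to-end threat-response requirement over a few invented subsystems and checks the remaining margin; all names and numbers are hypothetical.

    # Illustrative timing-budget sketch for a threat-response chain.
    # Subsystem names and numbers are invented; the point is that budget
    # allocations should add up to (or stay under) the end-to-end requirement.

    END_TO_END_BUDGET_S = 2.0   # assumed threat-response requirement

    allocation_s = {
        "sensor detection":       0.4,
        "threat classification":  0.6,
        "decision/command":       0.3,
        "weapon/countermeasure":  0.5,
    }

    total = sum(allocation_s.values())
    margin = END_TO_END_BUDGET_S - total
    for step, t in allocation_s.items():
        print(f"{step:<24} {t:.1f} s")
    print(f"{'total':<24} {total:.1f} s  (margin {margin:+.1f} s)")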

Documentation:
- Show me the artifacts.
- Show me the bookkeeping: how do the ConOps and requirements trace to the system design? Is a traceability matrix, e.g. in DOORS, present?
- Show me the OV-1, and then the models (look at what is there from DoDAF).

Figure 9. Documentation questions.

Figure 9 shows the documentation related questions. These questions are more focused on the availability of artifacts.

5. Conclusions

Many organizations have a need for architecture assessments. Some organizations have the assessments embedded and schedule assessments pro-actively, for example at one of the process gates. In other organizations some trigger is needed to ask for an assessment. The scope of such assessments can range from small, e.g. component level, to large, e.g. enterprise level.

We concluded that architecture assessments ought to be done by architects, complemented with stakeholders, especially customers. The use of assessment methods and techniques, such as checklists, is seen as beneficial. It was observed that all methods and techniques developed for assessments are also applicable to forward architecting.

The assessment team looks primarily at artifacts and people, while looking for assessment owner concerns, vulnerabilities, gaps, risks, and an understanding of the most relevant architectural issues. The assessment result is shared with a broad group of people: upward with management teams, downward with engineering teams, and where appropriate with external stakeholders.

We made an initial list of questions that can grow and be transformed into an architecture assessment checklist. These questions address assessment support, governance and architecting, the context of the system, the system and its design and realization, the overarching architecture, and the documentation.

Acronyms and Abbreviations

    ARID    Active Reviews for Intermediate Designs
    ATAM    Architecture Trade-off Analysis Method
    AV      All Views
    AV-1    Overview and Summary Information
    ConOps  Concept of Operations
    DoDAF   Department of Defense Architecture Framework
    FMEA    Failure Mode and Effects Analysis
    OV      Operational View
    OV-1    High-Level Operational Concept Graphic
    OV-2    Operational Node Connectivity Description
    OV-5    Operational Activity Model
    SysML   Systems Modeling Language
    TOGAF   The Open Group Architecture Framework

Acknowledgements

The initial presentation by Jim Williams from the FAA stimulated our discussions.

Literature

[Kazman 2000] R. Kazman, M. Klein, and P. Clements. ATAM: Method for Architecture Evaluation. http://www.sei.cmu.edu/publications/documents/00.reports/00tr004.html, 2000.

[TOGAF 2009] The Open Group. Open Group Architecture Framework (TOGAF), ISBN 978-90-8753-230-7, http://www.opengroup.org/architecture/togaf9-doc/arch/index.html, 2009.

[Clements 2002] P. Clements, R. Kazman, and M. Klein. Evaluating Software Architectures: Methods and Case Studies. Addison-Wesley, Reading, MA, 2002.