The Role of Assumptions in Knowledge Engineering


to appear in International Journal of Intelligent Systems (IJIS), 13(7)

The Role of Assumptions in Knowledge Engineering

Dieter Fensel (1) and V. Richard Benjamins (2)

(1) University of Karlsruhe, Institute AIFB, Karlsruhe, Germany, dieter.fensel@aifb.uni-karlsruhe.de
(2) Artificial Intelligence Research Institute (IIIA), Spanish Council for Scientific Research (CSIC), Campus UAB, Bellaterra, Barcelona, Spain, richard@iiia.csic.es, and Dept. of Social Science Informatics (SWI), University of Amsterdam, Roetersstraat 15, 1018 WB Amsterdam, The Netherlands, richard@swi.psy.uva.nl

Abstract. Problem-solving methods are means to describe the inference process of knowledge-based systems. In recent years, a number of these problem-solving methods have been identified that can be reused for building new systems. However, problem-solving methods require specific types of domain knowledge and introduce specific restrictions on the tasks they can solve. These requirements and restrictions are assumptions that play a key role in reusing problem-solving methods, in acquiring domain knowledge, and in defining the problem that can be tackled by the knowledge-based system. In this paper, we discuss the different roles assumptions play in the development process of knowledge-based systems and provide a survey of assumptions used in diagnostic problem solving. We show how such assumptions introduce target and bias for goal-driven machine learning and knowledge discovery techniques.

1 INTRODUCTION

In recent years, problem-solving methods (PSMs) have become quite successful in describing the reasoning behavior of knowledge-based systems ([Chandrasekaran, 1986], [Marcus, 1988], [Puppe, 1993], [Schreiber et al., 1993], [Schreiber et al., 1994], [Eriksson et al., 1995], [Steels, 1990], [Terpstra et al., 1993], [Angele et al., 1996]). On the one hand, PSMs refine generic inference strategies and search methods to task- and domain-specific circumstances. On the other hand, they are not designed for one specific application problem. Instead, they are usable for a family of similar problems: similar in terms of the goals that should be achieved and similar in the type of knowledge that is required as a resource for the reasoning process. Libraries of PSMs are described in [Benjamins, 1995], [Breuker & Van de Velde, 1994], [Chandrasekaran et al., 1992], [Motta & Zdrahal, 1996], and [Puppe, 1993].

One of the first problem-solving methods for knowledge-based systems (KBSs) is heuristic classification (see Figure 1). [Clancey, 1985] identified it as a generic reasoning pattern of several expert systems applied to different problems. It consists of three main inference steps:

- a data abstraction step that abstracts concrete values like body-temperature = 39.2 degrees Celsius to the value "high fever";

- a heuristic match step that uses these abstract descriptions to heuristically establish some possible solution classes;
- a refinement step that should find a final solution by discrimination.

Each of these inference steps requires specific knowledge types as a resource. A data abstraction step can only be performed if hierarchical knowledge over data is available, and a refinement step of solutions can only be done if hierarchical knowledge over solution classes is available.

[Fig. 1 Heuristic classification [Clancey, 1985]: patient data are abstracted to patient abstractions, which are heuristically matched to disease classes, which are refined to diseases.]

Describing PSMs by their inference steps, knowledge types, and inference structures that determine the data and knowledge flow between the inferences has become a common style in knowledge engineering. Examples for diagnosis are provided by [Benjamins, 1995] and for planning by [Barros et al., 1997]. Basically, these descriptions decompose the entire inference process into more elementary substeps. [Van de Velde, 1988] and [Akkermans et al., 1993] proposed describing the competence of a PSM in addition to its decompositional description. Such competence descriptions define the goals that can be achieved by a PSM independent of how these goals are achieved. Thus, competence descriptions carry over the idea of functional specifications from software engineering to PSMs: a functional specification of a software artefact describes what the software system does without referring to the way it achieves its functionality [Fensel, 1995c]. Examples of such competence descriptions can be found in [Fensel & Groenboom, 1997], [Fensel & Schönegge, 1997b], and [Fensel & Schönegge, submitted]. However, establishing the competence of a PSM requires the definition of a control flow that fixes the execution order of the inference steps of a PSM [Fensel et al., to appear], and a notion of the functionality that is provided by the domain knowledge. The competence definition of a PSM like heuristic classification critically depends on the competence of its hierarchical and heuristic match knowledge. Statements about the absolute or relative correctness and completeness of the method can only be made in terms of assumptions about the absolute or relative correctness and completeness of the domain knowledge. These assumptions are therefore more precise characterizations of the knowledge types and the competence of a PSM.
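Before turning to these assumptions, the following sketch makes the decomposition style concrete: heuristic classification rendered as three chained Python functions over toy knowledge bases. The abstraction threshold, the heuristic association, and the refinement hierarchy are invented for illustration and are not taken from [Clancey, 1985].

```python
# A minimal sketch of heuristic classification as three chained inferences.
# All knowledge bases below are invented toy examples.

ABSTRACTION_RULES = [          # hierarchical knowledge over data
    ("body_temperature", lambda v: v >= 38.5, "high fever"),
]
HEURISTIC_MATCH = {            # heuristic associations
    "high fever": {"infection"},
}
REFINEMENTS = {                # hierarchical knowledge over solution classes
    "infection": ["viral infection", "bacterial infection"],
}

def abstract(data):
    """Data abstraction: concrete values -> qualitative abstractions."""
    return {label for attr, test, label in ABSTRACTION_RULES
            if attr in data and test(data[attr])}

def match(abstractions):
    """Heuristic match: abstractions -> candidate solution classes."""
    classes = set()
    for a in abstractions:
        classes |= HEURISTIC_MATCH.get(a, set())
    return classes

def refine(classes, rule_out):
    """Refinement: discriminate among the subclasses of each candidate."""
    return [sub for c in classes for sub in REFINEMENTS.get(c, [c])
            if sub not in rule_out]

# a patient with body temperature 39.2 degrees Celsius
solutions = refine(match(abstract({"body_temperature": 39.2})),
                   rule_out={"viral infection"})
print(solutions)   # ['bacterial infection']
```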

In consequence, current work on PSMs pays much more attention to these assumptions ([Benjamins & Aben, 1997], [Fensel, 1995a], [Fensel et al., 1996], [Wielinga et al., 1995], [Benjamins & Pierret-Golbreich, 1996], [Benjamins et al., 1996], [Fensel & Benjamins, 1996], [Fensel & Straatman, to appear], [O'Hara & Shadbolt, 1996], [Motta & Zdrahal, 1996], [Breuker, 1997], [Fensel & Schönegge, submitted]).

In this paper, we take a closer look at the assumptions of PSMs. In Section 2, we introduce a general framework for specifying KBSs at a conceptual level that takes into account the important role of assumptions. We also sketch the twofold role assumptions can play and express the relationship between these two roles as the law of conservation of assumptions (cf. [Benjamins et al., 1996]). In Section 3, we provide an extensive survey of assumptions used in diagnostic problem solving. This survey provides the empirical base for our argument and delivers numerous illustrations of our point. In Section 4, we describe the role assumptions play in knowledge acquisition: we describe methods for assumption verification, assumption identification, and knowledge acquisition guided by assumptions, and discuss the role that existing verification, machine learning, and knowledge discovery techniques can play in these processes. Finally, we provide conclusions and future work.

2 THE LAW OF CONSERVATION OF ASSUMPTIONS

Most papers on problem-solving methods focus on the description of reasoning strategies and discuss their underlying assumptions as a side aspect. We take a complementary point of view and focus on these underlying assumptions, as they play several important roles:

- Assumptions are necessary to characterise the precise competence of a problem-solving method, in terms of the tasks that can be solved by it and in terms of the domain knowledge that is required by it.
- Assumptions are necessary to enable tractable problem solving and economic system development for complex problems. First, assumptions reduce the worst-case or average-case complexity of computation [Fensel & Straatman, to appear]. Second, assumptions may reduce the cost of the system development process by simplifying the problem that must be solved by the system [Fensel, 1997b].
- Finally, assumptions have to be made to ensure a proper interaction of the problem solver with its environment.

In the following, we first discuss the different elements of a description of a KBS, and then sketch their proper relationships and the process of deriving them.

2.1 The Four Elements in Specifying KBSs

In [Fensel & Groenboom, 1997], we provided the different aspects of a specification of a knowledge-based system, which are related by assumptions (see Figure 2): a task definition defines the problem to be solved by the KBS; the PSM defines the reasoning process of the knowledge-based system; and a domain model describes the domain knowledge of the knowledge-based system. Each of these three elements is described independently to enable the reuse of task descriptions in different domains, the reuse of PSMs for different tasks and domains, and the reuse of domain knowledge for different tasks and PSMs. Therefore, a fourth element of a specification of a KBS is an adapter that is necessary to adjust the three

other (reusable) parts to each other and to the specific application problem.

The task definition specifies the goals that should be achieved in order to solve a given problem; they are functionally specified as an input-output relation. A task definition also defines assumptions about the domain knowledge. Even such a simple task as selecting the maximal element of a set requires a preference relation as domain knowledge, and assumptions are used to define the requirements on such a relation (e.g., transitivity, symmetry, etc.).

The reasoning of a knowledge-based system can be described by a problem-solving method. A PSM consists of three parts. First, a definition of the functionality defines the competence of the PSM independent of its realisation. Second, an operational description defines the dynamic reasoning process; it describes how the competence can be achieved in terms of the reasoning steps and their dynamic interaction (i.e., the knowledge and control flow). The third part of a PSM concerns assumptions about the domain knowledge: each inference step requires a specific type of domain knowledge with specific characteristics.

The description of the domain model introduces the domain knowledge as it is required by the PSM and the task definition. Three elements are needed to define a domain model. The first is a description of properties of the domain knowledge at a meta-level. This meta-knowledge characterises properties of the domain knowledge and is the counterpart of the assumptions on domain knowledge made by the other parts of a KBS specification: assumptions made about domain knowledge by those parts must be stated as properties of the domain knowledge. The second element of a domain model concerns the domain knowledge and case data necessary to define the task in the given application domain and to carry out the inference steps of the chosen problem-solving method. The third element is formed by external assumptions that link the domain knowledge with the actual domain. These external assumptions capture the implicit and explicit assumptions a modeler made while building a domain model of the real world.

[Fig. 2 The different elements of a specification of a knowledge-based system: the task definition with goals (T_G) and assumptions (T_A); the problem-solving method with competence (PSM_C), operational specification (PSM_O), and assumptions (PSM_A); the adapter with signature mappings (A_M) and assumptions (A_A); and the domain model with meta-knowledge (D_M), domain knowledge plus case data (D_K), and external assumptions (D_A).]
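Read as data structures, the four elements of Figure 2 could be sketched as follows. The field names mirror the labels of the figure; the Python types are our own illustrative simplification, not a normative specification language.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TaskDefinition:
    goals: Callable[[Any, Any], bool]        # T_G: the input-output relation
    assumptions: list[str]                   # T_A: e.g. "preference is transitive"

@dataclass
class ProblemSolvingMethod:
    competence: Callable[[Any, Any], bool]   # PSM_C: what the method achieves
    operational_spec: Callable[[Any], Any]   # PSM_O: how it reasons (control flow)
    assumptions: list[str]                   # PSM_A: required domain knowledge

@dataclass
class DomainModel:
    meta_knowledge: list[str]                # D_M: stated properties of D_K
    knowledge: dict[str, Any]                # D_K: domain knowledge and case data
    external_assumptions: list[str]          # D_A: link between D_K and the world

@dataclass
class Adapter:
    signature_mappings: dict[str, str]       # A_M: terminology mappings
    assumptions: list[str]                   # A_A: extra assumptions for the fit
```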

The description of an adapter maps the different terminologies of task definition, PSM, and domain model onto each other, collects the assumptions of task and PSM, and may introduce further assumptions that have to be made to relate the competence of a PSM with the functionality introduced by the task definition. Because the adapter ties the three other parts of a specification together and establishes their relationship in a way that meets the specific application problem, those parts can be described independently and selected from libraries. The consistent combination and adaptation of the three components to the specific aspects of the given application (because they should be reusable, they need to abstract from such specifics) must be provided by the adapter.

2.2 The Law of Conservation of Assumptions

When establishing the proper relationship between PSM and task, one usually requires correctness and completeness of the PSM relative to the goals of the task:

Correctness requires that each output derived by the PSM also fulfils the goal of the task:

∀i,o (PSM_A(i) ∧ PSM_C(i,o) → TASK_A(i) ∧ TASK_G(i,o))

simplified: ∀i,o (PSM(i,o) → TASK(i,o))

Completeness requires that the PSM provides an output for each input for which the goal of the task can be fulfilled:

∀i (TASK_A(i) ∧ ∃o₁ TASK_G(i,o₁) → PSM_A(i) ∧ ∃o₂ PSM_C(i,o₂))

simplified: ∀i (∃o₁ TASK(i,o₁) → ∃o₂ PSM(i,o₂))

It need not be the same output, because the task may not be a function (i.e., several outputs may be possible). However, a perfect match is unrealistic in many cases. In general, most problems tackled with KBSs are inherently complex and intractable (see e.g. [Bylander, 1991], [Bylander et al., 1991], and [Nebel, 1996]).¹ A PSM has to describe not just a realization of the functionality, but one that takes into account the constraints of the reasoning process and the complexity of the task. The way PSMs achieve an efficient realization of functionality is by making assumptions [Fensel & Straatman, to appear]. These assumptions put restrictions on the context of the PSM, such as the domain knowledge and the possible inputs of the method, or on the precise definition of the goal to be achieved when applying the PSM. Notice that such assumptions can work in two directions to achieve this result. First, they can restrict the complexity of the problem, that is, weaken the task definition in such a way that the competence of the PSM is sufficient to realize the task. Second, they can strengthen the competence of the PSM by assuming (extra) domain knowledge.

1. Exceptions are classification problems, which often have known polynomial time complexity (see [Goel et al., 1987]).
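For finite input and output spaces, the two proof obligations can be checked by brute force. The following sketch uses toy stand-ins for the PSM and task relations; it is meant only to make the logical reading of the (simplified) formulas concrete.

```python
from itertools import product

def correct(psm, task, inputs, outputs):
    # forall i,o: PSM(i,o) -> TASK(i,o)
    return all(task(i, o) for i, o in product(inputs, outputs) if psm(i, o))

def complete(psm, task, inputs, outputs):
    # forall i: (exists o1: TASK(i,o1)) -> (exists o2: PSM(i,o2))
    return all(any(psm(i, o) for o in outputs)
               for i in inputs
               if any(task(i, o) for o in outputs))

# toy stand-ins: the task accepts any even output, the PSM always proposes 0
inputs, outputs = range(3), range(4)
task = lambda i, o: o % 2 == 0
psm = lambda i, o: o == 0
print(correct(psm, task, inputs, outputs),
      complete(psm, task, inputs, outputs))   # True True
```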

Weakening: reducing the desired functionality of the system, and therefore the complexity of the problem, by introducing assumptions about the precise task definition. An example of this type of change is to no longer require an optimal solution but only an acceptable one, or to make the single-fault assumption in model-based diagnosis.

Strengthening: introducing assumptions about the domain knowledge (or the user of the system) which reduce the functionality or the complexity of the part of the problem that must be solved by the PSM. In terms of complexity analysis, the domain knowledge or the user of the system is used as an oracle that solves complex parts of the problem. Such requirements therefore strengthen the functionality of the method.

Both strategies are complementary. Informally:

TASK − Assumption_weakening = PSM + Assumption_strengthening

That is, the sum of both types of assumptions may be constant. Decreasing the strength of one assumption type can be compensated by increasing the strength of the other type (see Figure 3), i.e.

TASK − PSM = Assumption_weakening + Assumption_strengthening

This is called the law of conservation of assumptions in [Benjamins et al., 1996].

[Fig. 3 The two effects of assumptions: teleological assumptions close the gap between goal and PSM from above by weakening the goal; ontological assumptions close it from below by strengthening the PSM.]

More formally, the two types of assumptions appear at different places in the implications that define the relationship between PSM and task:

Adapted Correctness:
∀i,o (Assumption_strengthening ∧ PSM(i,o) → (Assumption_weakening → TASK(i,o)))

Adapted Completeness:
∀i (∃o₁ TASK(i,o₁) ∧ Assumption_weakening → (Assumption_strengthening → ∃o₂ PSM(i,o₂)))

Recalling that an implication is true if the premise is false or if both premise and conclusion are true, this twofold impact can be explained easily. Assumptions weaken the implication either by strengthening its premise or by weakening its conclusion.² The first type of assumption is used to weaken the goal that must be achieved by the PSM, and the second type is used to improve the effect that can be achieved by the method by requiring external sources for its reasoning process. Therefore, we call the first type teleological assumptions (Assumptions_teleological) and the second type ontological assumptions (Assumptions_ontological).

2. A formula α is weaker than a formula β iff every model of β is also a model of α, i.e., β ⊨ α and ⊨ β → α.
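The twofold effect can be demonstrated on a toy example of our own (it does not appear in the cited literature): a method that only ever repairs one fault is incorrect in general, but the adapted obligation holds once the single-fault condition is added either as an ontological assumption (strengthening the premise) or as a teleological assumption (weakening the conclusion). For simplicity, the assumption predicates below range over inputs only.

```python
from itertools import product

def adapted_correct(psm, task, a_str, a_weak, inputs, outputs):
    # forall i,o: (A_strengthening(i) and PSM(i,o)) -> (A_weakening(i) -> TASK(i,o))
    return all((not a_weak(i)) or task(i, o)
               for i, o in product(inputs, outputs)
               if a_str(i) and psm(i, o))

# toy diagnosis flavour: the input is the number of simultaneous faults
inputs, outputs = range(4), ["fix-one", "fix-all"]
task = lambda i, o: o == ("fix-one" if i <= 1 else "fix-all")
psm = lambda i, o: o == "fix-one"     # the method only ever repairs one fault
single_fault = lambda i: i <= 1       # the assumption, read either way
always = lambda i: True

print(adapted_correct(psm, task, always, always, inputs, outputs))        # False
print(adapted_correct(psm, task, single_fault, always, inputs, outputs))  # True: ontological reading
print(adapted_correct(psm, task, always, single_fault, inputs, outputs))  # True: teleological reading
```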

Both types of assumptions serve the same purpose of closing the gap between the PSM and the task goal that should be achieved by it, but they achieve this by moving in opposite directions (see Figure 3). In the second case of Figure 3, the PSM makes fewer assumptions about available domain knowledge. This must be compensated by stronger teleological assumptions, i.e., by decreasing the actual goal that can be achieved by the method. These relationships make it natural to view the sum of the effects of both types of assumptions as constant. The roles of the two types of assumptions (i.e., the directions of their influence) remain different: ontological assumptions are required to define the functionality of a PSM, i.e., they extend the effect that can be achieved by its operational specification; teleological assumptions are required to close the gap between this functionality and a given goal, weakening the goal in cases where it is beyond the scope of the functionality of the PSM.

Besides their different directions, both types of assumptions have something in common, which leads to the natural question whether they are interchangeable: the composed outcome of their joint effort is constant. The question arises whether and how the weakening or strengthening of ontological assumptions can be compensated by strengthening or weakening the teleological assumptions, and vice versa. This question is essential for the applicability of a PSM to a given task and domain. The knowledge requirements of a PSM can be weakened or strengthened according to the available domain knowledge, the effort required to derive and model further knowledge, and the (teleological) assumptions that can be made to reduce the goal that must be achieved. Teleological assumptions have to be made if the (ontological) assumptions about available domain knowledge cannot be satisfied to an extent that would enable the achievement of the goals as originally specified. The applicability problem for PSMs is therefore essentially a question of the relationship between these two types of assumptions.

We illustrate this with an example from the area of diagnosis with component models. Component-based diagnosis with multiple faults is in the worst case exponential in the number of components ([Bylander et al., 1991]): every element of the power set of the set of annotated components is a possible hypothesis. If one is interested in problem solving not in principle but in practice, further assumptions have to be introduced that decrease the worst-case, or at least the average-case, behavior. A drastic way to reduce the complexity of the diagnostic task is the single-fault or N-fault assumption (SFA) [Davis, 1984], which reduces the complexity to polynomial in the number of components. If the single-fault assumption holds, the incorrect behavior of the device is completely explainable by one failing component. Interestingly, the same assumption can be interpreted either as a requirement on domain knowledge or as a restriction of the delivered functionality: the SFA either defines strong requirements on the provided domain knowledge, or significantly restricts the diagnostic problems that can correctly be handled by the diagnostic system.
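The complexity effect of the SFA is easy to quantify. The following sketch, for an invented 20-component device with one "broken" mode per component, contrasts the power-set hypothesis space with the spaces under the single-fault and N-fault assumptions.

```python
from math import comb

n = 20                                  # components of an invented device
multi_fault_hypotheses = 2 ** n         # every subset of components may be broken
single_fault_hypotheses = n             # exactly one component is broken

# under the N-fault assumption: at most N components are broken
n_fault_hypotheses = lambda N: sum(comb(n, k) for k in range(1, N + 1))

print(multi_fault_hypotheses)           # 1048576
print(single_fault_hypotheses)          # 20
print(n_fault_hypotheses(2))            # 210: polynomial in n for fixed N
```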

If the SFA has to be satisfied by the domain knowledge, then each possible fault has to be represented as a single entity. In principle this causes complexity problems for the domain knowledge, as each fault combination (i.e., each combination of faulty components) has to be represented. However, additional domain knowledge can be used to restrict the exponential growth. [Davis, 1984] discusses an example of a representation change where a 4-fault case (i.e., 15 different combinations of faults) is transformed into a single fault: a chip with four ports can cause faults on each port; when we know that the individual ports never fail, but only the chip as a whole, a fault on four ports can be represented as one fault of the chip. Even without such a representation change, we do not necessarily have to represent all possible fault combinations. We could, for example, exclude all combinations that are not possible or not likely in the specific domain (expert knowledge).

Instead of formulating the requirement above on the domain knowledge, one can also weaken the task definition by this assumption. This means that the competence of the PSM meets the task definition under the assumption that only single faults occur; that is, only in cases where a single fault occurs does the method work correctly and completely. It turns out that the same assumption can be viewed either as a requirement on domain knowledge or as a restriction of the goal of the task. It is therefore not an internal property of an assumption that decides its status; rather, it is the functional role the assumption plays during system development or problem solving that creates this distinction. Formulating it as a requirement asks for substantial effort in acquiring domain knowledge during system development; formulating it as a restriction asks for additional external effort during problem solving whenever a given case does not fulfil the restriction and cannot be handled properly by the limited system.

3 ASSUMPTIONS IN DIAGNOSTIC PROBLEM SOLVING

The first diagnostic systems built were heuristic systems, in the sense that they contained compiled knowledge which linked symptoms directly to hypotheses (usually through rules). With these systems, only foreseen symptoms can be diagnosed, and heuristic knowledge that links symptoms with possible faults needs to be available. One of the main principles underlying model-based diagnosis [Davis, 1984] is the use of a domain model (called Structure, Behavior, Function (SBF) models in [Chandrasekaran, 1991]). Heuristic knowledge that links symptoms with causes is no longer necessary in these systems. The domain model is used to predict the desired device behavior, which is then compared to the observed behavior; a discrepancy indicates a symptom. General reasoning techniques such as constraint satisfaction or truth maintenance can be used to derive diagnoses that explain the actual behavior of the device using its model. Because the reasoning part is represented separately from the domain knowledge, it can be reused for different domains. This paradigm of model-based diagnosis gave rise to the development of general approaches to diagnosis, such as constraint suspension [Davis, 1984], DART [Genesereth, 1984], GDE [de Kleer & Williams, 1987], and several extensions to GDE (GDE+ [Struss & Dressler, 1989], Sherlock [de Kleer & Williams, 1989]).
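The model-based cycle — predict from the model, compare with observations, report discrepancies as symptoms — can be sketched as follows. The two-inverter model and the observations are invented; real systems replace the direct simulation by constraint propagation or truth maintenance. Note that `symptoms` is domain-independent, while `MODEL` carries all the domain knowledge, which is what makes the reasoning part reusable.

```python
def inverter(x):
    """Behavioral description of one component type."""
    return 1 - x

# invented device model: two predictions derived from the input observation
MODEL = {"out1": lambda obs: inverter(obs["in"]),
         "out2": lambda obs: inverter(inverter(obs["in"]))}

def symptoms(observations):
    """Discrepancies between predicted and observed behavior."""
    return {p for p, predict in MODEL.items()
            if p in observations and predict(observations) != observations[p]}

print(symptoms({"in": 0, "out1": 1, "out2": 1}))   # {'out2'}: a symptom
```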

In this section, we focus on the assumptions underlying these approaches to diagnostic problem solving. First, we discuss assumptions that are necessary to relate the task definition of a diagnostic system with its real-world environment (see Section 3.1): assumptions on the available case data, the required domain knowledge, and the problem type. Second, we discuss assumptions introduced to reduce the complexity of the reasoning process necessary to execute the diagnostic task (see Section 3.2); such assumptions are introduced to improve either the worst-case complexity or the average-case behavior of problem solving. Third, we sketch further assumptions that are related to the appropriate interaction of the problem solver with its environment (see Section 3.3).

3.1 Assumptions Necessary to Define the Diagnostic Task

In model-based diagnosis (cf. [de Kleer et al., 1992]), the definition of the task of the KBS requires a system description of the device under consideration and a set of observations, where some indicate normal and others abnormal behavior. The goal of the task is to find a diagnosis that, together with the system description, explains the observations. In the following, we discuss four different aspects of such a task definition and the assumptions related to each of them. The four aspects are: identifying abnormalities, identifying the causes of these abnormalities, defining hypotheses, and defining diagnoses.

3.1.1 Identifying Abnormalities

Identification of abnormal behavior is necessary before a diagnostic process can be started to find explanations for the abnormalities. This identification task requires three kinds of knowledge, of which two are related to the type of input and one to the interpretation of possible discrepancies (see [Benjamins, 1993]):

- observations of the behavior of the device must be provided to the diagnostic reasoner;
- a behavioral description of the device must be provided to the diagnostic reasoner;
- knowledge concerning the (im)preciseness of the observations and the behavioral description, as well as comparison knowledge (thresholds, etc.), is necessary to decide whether a discrepancy is significant.

Other required knowledge concerns the interpretation of missing values, and whether an observation can have several values (i.e., its value type). The relevant assumptions state that the two types of inputs (i.e., observations and behavioral descriptions) must be reliable; otherwise, a discrepancy could be explained by a measuring fault or a modelling fault. In other words, these assumptions guarantee that if a prediction yields a behavior different from the observed behavior of the artefact, then the artefact has a defect [Davis & Hamscher, 1988]. These assumptions are also necessary for the meta-level decision whether a diagnostic problem exists at all (i.e., whether there is an abnormality in the system behavior). This decision relies on a further assumption: the no design error assumption [Davis, 1984], which says that if no fault occurs, then the device must be able to achieve the desired behavior. In other words, the discrepancy must be the result of a faulty situation in which some parts of the system are defective; it cannot be the result of a situation where the system works correctly but cannot achieve the desired functionality because it was not designed for it. If this assumption does not hold, one has a design problem and not a diagnostic problem.
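A minimal sketch of the third knowledge type: comparison knowledge appears as an explicit tolerance per observed variable, below which a deviation is attributed to imprecision rather than to a defect. The pump example and its numbers are invented.

```python
def significant_discrepancies(observations, behavioral_description, tolerance):
    """Observed variables whose deviation from the predicted value exceeds
    the tolerance; smaller deviations are attributed to imprecision."""
    predicted = behavioral_description(observations)
    return {var: (predicted[var], obs)
            for var, obs in observations.items()
            if var in predicted and abs(predicted[var] - obs) > tolerance[var]}

# a pump predicted to deliver 10.0 l/min, with a 0.5 l/min tolerance
model = lambda obs: {"flow": 10.0}
print(significant_discrepancies({"flow": 9.9}, model, {"flow": 0.5}))  # {}
print(significant_discrepancies({"flow": 6.0}, model, {"flow": 0.5}))  # {'flow': (10.0, 6.0)}
```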

3.1.2 Identifying Causes

Another purpose of the system description is the identification of possible causes of faulty behavior. This cause-identification knowledge must be reliable [Davis & Hamscher, 1988]; in other words, the knowledge used in model-based diagnosis is assumed to be a correct and complete description of the artefact — correct and complete in the sense that it enables the derivation of correct and complete diagnoses if discrepancies appear.³ Depending on the type of device model and diagnostic method, these assumptions take different forms. In the following, we restrict our attention to component-oriented device models that describe a device in terms of components, their behaviors (a functional description), and their connections.⁴ The set of all possible hypotheses is the power set of the set of annotated components {mode_1,1(c_1), mode_1,2(c_1), ..., mode_n,m_n(c_n)}, where the annotation mode_j,i(c_j) describes that component c_j is in its i-th mode.

[Davis, 1984] has pointed out that one should be aware of the underlying assumptions of such a diagnostic approach and listed a number of them. First, the localised failure of function assumption: the device must be decomposable into well-defined and localised entities (i.e., components) that can be treated as causes of faulty behavior. Second, these components have a functional description that provides the (correct) output for their possible inputs. If this functional description is local, that is, it does not refer to the functioning of the whole device, the no function in structure assumption [de Kleer & Brown, 1984] is satisfied. Several diagnostic methods also expect the reverse of the functional descriptions, that is, rules that derive the expected input from the provided output (called inference rules in [Davis, 1984]). If only correct functional descriptions are available, fault behavior is defined as any behavior other than the correct one. The fault behavior of components can be constrained by including fault models, that is, functional descriptions of the components for the case that they are broken (cf. [de Kleer & Williams, 1989], [Struss & Dressler, 1989]). If one assumes that these functional descriptions are complete (the complete fault knowledge assumption), then components can be considered innocent if none of their fault descriptions is consistent with the observed faulty behavior. A result of using fault models is that all kinds of non-specified and physically impossible behaviors of a component are excluded as diagnoses. For example, using fault models, it becomes impossible to conclude that the fault "one of two light bulbs is not working" is explained by a defective battery that provides no power and a defective lamp that lights without electricity (cf. [Struss & Dressler, 1989]).

Further assumptions related to the functional descriptions of components are the no fault masking and the non-intermittency assumptions. The former states that the defect of an individual or composite component, or of the entire device, must be visible through changed outputs (cf. [Davis & Hamscher, 1988], [Raiman, 1992]). According to the latter, a component that gets identical inputs at different points in time must produce

3. A typical problem of diagnosis without knowledge about fault models (i.e., with incomplete knowledge) is that the reasoner provides, in addition to the right diagnoses, also wrong ones. The result is complete but not correct because the provided domain knowledge is not complete.
4. It is a critical modelling decision what to view as a component and which types of interactions are represented (cf. [Davis, 1984]). Several points of view are possible for deciding what is regarded as a component: different levels of physical representation result in different entities; the independent entities used in the manufacturing process of the artefact could be used as components; or functional units of the artefact could be seen as components.

identical outputs; in other words, the output is a function of the input (cf. [Raiman et al., 1991]). [Raiman et al., 1991] argue that intermittency results from incomplete input specifications of components, but that it is impossible to get rid of it (it is impossible to represent all required additional inputs in a complete way).

A third assumption underlying many diagnostic approaches is the no faults in structure assumption (cf. [Davis & Hamscher, 1988]), which manifests itself in different variants according to the particular domain. The assumption states that the interactions of the components are correctly and completely modelled. It gives rise to three classes of more specific assumptions. First, the no broken interactions assumption states that the connections between the components work correctly (e.g., no wires between components are broken).⁵ If this is too strong, the assumption can be weakened by representing the connections themselves as components too. Second, the no unexpected directions assumption (or existence of a causal pathway assumption, [Davis, 1984]) states that the directions of the interactions are correctly and completely modelled. For example, a light bulb gets power from a battery, and there is no interaction in the opposite direction. Third, the no hidden interactions assumption (cf. [Böttcher, 1996]) assumes that there are no non-represented interactions (i.e., a closed-world assumption on connections). A bridge fault [Davis, 1984] is an example of a violation of this assumption in the electronic domain; electronic devices whose components unintendedly interact through heat exchange are another example [Böttcher, 1996]. In the worst case, all potential unintended interaction paths between components must be represented [Preist & Welhalm, 1990]. The no hidden interactions assumption is critical, since most models (like design models of the device) describe correctly working devices, and unexpected interactions are therefore precisely what is not mentioned. A refinement of this assumption is that there are no assembly errors (i.e., no cases where every individual component works correctly but they have been wired up incorrectly).

3.1.3 Defining Hypotheses

In addition to knowledge that is required to identify a discrepancy and knowledge that provides hypotheses used to explain these discrepancies, one requires further knowledge to decide which type of explanation is required. [Console & Torasso, 1992] distinguish two types of explanations: weak explanations, which are consistent with the observations (no contradiction can be derived from the union of the device model, the observations, and the hypothesis), and strong explanations, which imply the observations (the observations can be derived from the device model and the hypothesis). Both types of explanation can be combined by dividing the observations into two classes: observations that need to be explained by deriving them from a hypothesis, and observations that need only be consistent with the hypothesis. In this case one requires knowledge that allows the set of observations to be divided. The decision which type of explanation to use can only be made based on assumptions about the environment in which the KBS is used.

3.1.4 Defining Diagnoses

Having established observations, hypotheses, and an explanatory relation that relates hypotheses with observations, one must establish the notion of diagnosis.

5. It is possible to represent the interactions between components as possible hypotheses, but this leads to new problems (see Section 3.1.5).
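The distinction of [Console & Torasso, 1992] between weak and strong explanations from the previous subsection can be made concrete over finite propositional worlds. The one-bulb model below is an invented miniature: `weak_explanation` checks consistency, `strong_explanation` checks derivability.

```python
from itertools import product

def weak_explanation(hypothesis, model, observations, worlds):
    """Consistent: some world satisfies model + hypothesis + observations."""
    return any(model(w) and hypothesis(w) and observations(w) for w in worlds)

def strong_explanation(hypothesis, model, observations, worlds):
    """Derivable: every world satisfying model + hypothesis satisfies the observations."""
    return all(observations(w) for w in worlds if model(w) and hypothesis(w))

worlds = [dict(zip(("broken", "lit"), bits))
          for bits in product([False, True], repeat=2)]
model = lambda w: w["lit"] != w["broken"]   # the bulb lights iff it is not broken
obs = lambda w: not w["lit"]                # we observe: no light
broken = lambda w: w["broken"]
anything = lambda w: True                   # the empty hypothesis

print(weak_explanation(broken, model, obs, worlds))      # True
print(strong_explanation(broken, model, obs, worlds))    # True
print(weak_explanation(anything, model, obs, worlds))    # True: merely consistent
print(strong_explanation(anything, model, obs, worlds))  # False: does not imply obs
```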

Not every hypothesis that correctly explains all observations needs to be a desired diagnosis. One could accept only parsimonious hypotheses as diagnoses (cf. [Bylander et al., 1991]): a hypothesis H is parsimonious if H is an explanation and there exists no other explanation H′ with H′ < H. One has to make assumptions about the desired diagnoses (cf. [McIlraith, 1994]) in order to define the partial order (<) on hypotheses — for example, whether the diagnostic task is concerned with finding all components that are necessarily faulty to explain the system behavior, or with finding all components that are necessarily correct to explain the system behavior. In the first case, we aim at economy in repair, whereas in safety-critical applications (e.g., nuclear power plants) one should obviously choose the second case. As shown by [McIlraith, 1994], the assumptions about the type of explanation relation (i.e., consistency versus derivability) and about the explanations (i.e., the definition of parsimony) also make strong commitments on the domain knowledge (the device model) used to describe the system. If we ask for a consistent explanation with minimal sets of faulty components (i.e., H₁ < H₂ if H₁ assumes fewer components to be faulty than H₂), we need knowledge that constrains the normal behavior of components; otherwise we would simply derive all components as correct. If we ask for a consistent explanation with minimal sets of correct components (i.e., H₁ < H₂ if H₁ assumes fewer components to be correct than H₂), we need knowledge that constrains the abnormal behavior of components; otherwise we would simply derive all components as faulty. The definition of parsimonious hypotheses introduces a preference on hypotheses. This could be extended by defining further preferences on diagnoses to select an optimal one (e.g., by introducing assumptions related to the probability of faults). Again, knowledge about preferences must be available to define a preference function and a corresponding ordering.

3.1.5 Summary

Figure 4 summarises the assumptions discussed above and groups them according to their purpose. All these assumptions are necessary to relate the definition of the functionality of the diagnostic system to the diagnostic problem (i.e., the task) to be solved and to the domain knowledge that is required to define the task. Table 1 provides an explanation of each assumption along with the role it plays (function), what it is about (case data, domain knowledge, or task), and some references where it is discussed in more detail.

Table 1: Assumptions in component-oriented diagnosis (cd = case data, dk = domain knowledge, t = task).

- existence of observations — Observations must be provided to the diagnostic system. About: cd. Function: necessary for detecting discrepancies. References: [Benjamins, 1993].
- reliability of observations — The provided observations must be reliable. About: cd. Function: necessary for assuming that the discrepancy must be explained by a diagnosis. References: [Benjamins, 1993], [Davis & Hamscher, 1988].
- existence of a behavioral description — The desired system behavior must be known to the diagnostic reasoner. About: dk. Function: necessary for detecting discrepancies. References: [Benjamins, 1993].
- reliability of the behavioral description — The description of the system must be reliable. About: dk. Function: necessary for assuming that the discrepancy must be explained by a diagnosis. References: [Benjamins, 1993], [Davis & Hamscher, 1988].
- existence of knowledge to identify discrepancies — Knowledge is required to compare the observations with the behavioral description. About: dk. Function: necessary for interpreting discrepancies. References: [Benjamins, 1993].
- reliability of the discrepancy identification knowledge — The knowledge used to detect discrepancies must be reliable. About: dk. Function: necessary for interpreting discrepancies correctly. References: [Benjamins, 1993].
- no design error — The discrepancy between expected and actual behavior does not result from the (incorrect) design of the device. About: t. Function: the behavioral discrepancy is a fault and not just an impossibility. References: [Davis, 1984].
- existence of a set of components — The device can be decomposed into a set of components. About: dk. Function: the entire device can be decomposed into smaller units that constitute the device. References: [Davis, 1984], [Davis & Hamscher, 1988].
- localized failure of function, no function in structure — Faulty components can be identified as causes. About: dk. Function: the reasons for faulty behavior do not have to be constructed but can be selected from a finite set. References: [Davis, 1984], [de Kleer & Brown, 1984].
- existence of a set of annotations (i.e., of component modes) — Components may have several behavioral modes, which need to be provided. About: dk. Function: the diagnostic reasoner can select from the behavioral modes provided for each component. References: [Struss & Dressler, 1989], [de Kleer et al., 1992].
- completeness of the set of annotations (= complete fault knowledge) — All possible modes of the components are known. About: dk. Function: used to infer the mode of a component if all other behaviors do not (even partially) explain the fault. References: [Struss & Dressler, 1989], [de Kleer et al., 1992].
- existence of input-output descriptions of the components — This knowledge defines the input-output behavior of the components. About: dk. Function: required to detect the faulty behavior of components and to derive the overall behavior of the complete device. References: [de Kleer & Williams, 1987], [Davis & Hamscher, 1988].
- existence of output-input descriptions of the components — This knowledge defines the output-input relation of the components. About: dk. Function: can be used to derive additional discrepancies. References: [Davis, 1984], [Raiman, 1989].
- existence of functional descriptions of the faulty behavior of components — This knowledge defines the input-output behavior of the components in case they are broken. About: dk. Function: required to identify the different possible faults of a component. References: [de Kleer & Williams, 1987], [Struss & Dressler, 1989].
- complete behavioral descriptions (complete fault models) — All possible behaviors of a component are modelled by its functional description. About: dk. Function: used to completely constrain the possible behavior of a component. References: [de Kleer & Williams, 1987], [Struss & Dressler, 1989].
- no fault masking — A fault of a component is visible in its behavior and in the behavior of the entire device. About: cd & dk. Function: necessary for detecting faulty components. References: [Davis, 1984], [Davis & Hamscher, 1988], [Raiman, 1992].
- non-intermittency — The output of a component is a function of the input (e.g., the behavior does not change over time). About: cd. Function: necessary for interpreting the discrepancy between an observation and an output of a behavioral description of a component. References: [Davis, 1984], [Raiman et al., 1991].
- existence of a model of the component interactions — The possible interactions between components are known to the reasoner. About: dk. Function: required to derive the overall behavior of the system and the local inputs of components from the local outputs of other components. References: [Davis, 1984], [Davis & Hamscher, 1988].
- no faults in structure — Faulty components are the only causes. About: dk. Function: only components need to be treated as possible causes of the faulty behavior. References: [Davis, 1984], [Davis & Hamscher, 1988].
- no broken interactions — The interactions, i.e., the connections, work properly. About: dk. Function: only components need to be treated as possible causes, and the interaction model describes the real interactions. References: [Davis, 1984], [Davis & Hamscher, 1988].
- no unexpected directions — The directions of the interactions are as represented. About: dk. Function: only components need to be treated as possible causes, and the interaction model describes the real interactions. References: [Davis, 1984].
- no hidden interactions (closed-world assumption) — There are no interactions that are not represented in the model. About: dk. Function: only components need to be treated as possible causes, and the interaction model describes the real interactions. References: [Davis, 1984], [Böttcher, 1996].
- no assembly error — The components are not wired incorrectly. About: dk. Function: only components need to be treated as possible causes, and the interaction model describes the real interactions. References: [Davis & Hamscher, 1988], [Böttcher, 1996].
- type of explanation relation (type of hypotheses) — Need an observation only be consistent with the hypothesis, or must it be derivable from it? About: dk & t. Function: the problem solving is either constraint satisfaction or abductive inference. References: [Console & Torasso, 1992], [de Kleer et al., 1992], [ten Teije & van Harmelen, 1996].
- classification of observations — It introduces a distinction between observations that describe normal and abnormal behavior. About: dk. Function: in abductive inference only the abnormal behavior must be explained. References: [Console & Torasso, 1992].
- type of explanation (type of diagnosis) — Should the set of faulty components contain all components that need to be faulty, or all that could be faulty? About: dk & t. Function: the diagnosis is used for an economic repair process versus safety-critical monitoring. References: [McIlraith, 1994].
- preference knowledge on diagnoses — It defines preferences between diagnoses. About: dk. Function: necessary for selecting the diagnoses with the highest preference. References: [de Kleer & Williams, 1987], [Davis & Hamscher, 1988].

All these assumptions are necessary to relate a model of the device to the actual device under concern. There is no such thing as an assumption-free representation: every model, every representation contains simplifying assumptions [Davis & Hamscher, 1988]. If the assumptions are too strong, one could consider weakening them.⁶

[Fig. 4 Assumptions in component-oriented diagnosis, grouped by purpose: assumptions for identifying abnormalities (existence and reliability of the observations, of the behavioral description, and of the discrepancy identification knowledge; no design error); assumptions for identifying causes (existence of a set of components; localized failure of function; no function in structure; existence, correctness, and completeness of the annotations and functional descriptions, including output-input relations and complete fault models; no fault masking; non-intermittency; a description of the component interactions with no faults in structure: no broken interactions, no unexpected directions, no hidden interactions such as heat exchange between electronic devices, no assembly error); assumptions for defining hypotheses (consistency versus derivability; classification of observations); and assumptions for defining diagnoses (an order for parsimony; preferences; fault probabilities).]

However, this raises another problem of model-based diagnosis, namely its high complexity or intractability. This is discussed in the following section.

6. For example, the assumptions can be weakened by representing all desired interactions as components (e.g., wires) that could fail; by representing additional possibilities of interaction (e.g., electronic devices can interact via heat exchange) [Böttcher, 1996]; by representing all potential unintended interaction paths between components [Preist & Welhalm, 1990]; or by representing additional inputs to get rid of intermittency [Raiman et al., 1991]. Each of these weakenings significantly increases the computational complexity of the problem-solving process.

3.2 Assumptions Necessary to Define an Efficient Problem Solver

Besides the assumptions that are necessary to define the diagnostic task, further assumptions are necessary because of the complexity of model-based diagnosis. Component-based diagnosis is in the worst case exponential in the number of annotated components ([Bylander et al., 1991]): every element of the power set of the set of annotated components is a possible hypothesis. As we are interested in problem solving not in principle but in practice, further assumptions have to be introduced that decrease the worst-case, or at least the average-case, behavior.

3.2.1 Reducing the Worst-Case Complexity: The Single-Fault Assumption

A drastic way to reduce the complexity of the diagnostic task is the single-fault or N-fault assumption [Davis, 1984], which reduces the complexity to polynomial in the number of components. If the single-fault assumption holds, the incorrect behavior of the device is completely explainable by one failing component. As already mentioned in Section 2.2, this assumption either defines strong requirements on the provided domain knowledge or significantly restricts the diagnostic problems that can correctly be handled by the diagnostic system. In the first case, each possible fault has to be represented as a single entity; in the second case, the method works only in cases where a single fault occurs.

3.2.2 Reducing the Average-Case Behavior: The Minimality Assumption of GDE

As the single-fault assumption might be too strong for several applications, either as a requirement on the domain knowledge or as a restriction on the task, [Reiter, 1987] and [de Kleer & Williams, 1987] provide approaches able to deal with multiple faults. However, this re-introduces the complexity problems of model-based diagnosis. To deal with this problem, GDE [de Kleer & Williams, 1987] exploits the minimality assumption, which reduces, in practical cases, the exponential worst-case behavior to a complexity that grows with the square of the number of components. In GDE, this assumption helps to reduce the complexity in two ways. First, a conflict is a set of components that cannot all work correctly given the provided domain knowledge and the observed behavior. Under the minimality assumption, each super-set of a conflict is also a conflict, and all conflicts can be represented by the minimal conflicts. Second, a hypothesis contains at least one component of each conflict. Every super-set of such a hypothesis is again a hypothesis; therefore, diagnoses can be represented by the minimal diagnoses. The minimality assumption requires that diagnoses are independent or monotonic (see [Bylander et al., 1991]): a diagnosis that assumes more components to be faulty explains more observations.

A drastic way to ensure that the minimality assumption holds is to neglect any knowledge about the behavior of faulty components; any behavior that is not correct is then considered a fault. A disadvantage of this is that physical rules may be violated (i.e., existing knowledge about faulty behavior is ignored). We already mentioned the example provided in [Struss & Dressler, 1989], where a fault (one of two bulbs does not light) is explained by a broken battery that does not provide power and a broken bulb that lights without power. Knowledge about how components behave when they are faulty (so-called fault models) can be used to constrain the set of diagnoses derived by the system. On the other hand, it increases the complexity of the task: if m possible fault behaviors are provided for a component, this leads to m+1 possible states per component instead of two (correct and faulty).
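The construction GDE exploits can be sketched by brute force: the minimal diagnoses are the minimal hitting sets of the minimal conflicts. The conflicts below are invented, computing them from a device model is a separate step, and the power-set enumeration is exponential where GDE works incrementally.

```python
from itertools import chain, combinations

def minimal_diagnoses(conflicts, components):
    """Every diagnosis must contain at least one component of each conflict;
    keep only the subset-minimal ones."""
    candidates = [set(s) for s in chain.from_iterable(
        combinations(components, k) for k in range(len(components) + 1))]
    diagnoses = [d for d in candidates if all(d & c for c in conflicts)]
    return [d for d in diagnoses if not any(d2 < d for d2 in diagnoses)]

# invented conflicts: sets of components that cannot all be working correctly
conflicts = [{"A", "B"}, {"B", "C"}]
print(minimal_diagnoses(conflicts, ["A", "B", "C"]))   # [{'B'}, {'A', 'C'}]
```

In practice, [Reiter, 1987] computes such hitting sets with a tree construction rather than by enumerating the power set.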

The maximum number of candidates thus increases from 2^n to (m+1)^n. A similar extension of GDE that includes fault models is the Sherlock system (cf. [de Kleer & Williams, 1989]). With fault models, it is no longer guaranteed that every super-set of the faulty components that constitute a diagnosis is also a diagnosis, and therefore the minimality assumption as such cannot be exploited. In Sherlock, a diagnosis does not only contain faulty components (implicitly assuming that all other, unmentioned components are correct); it contains a set of components assumed to work correctly and a set of components assumed to be faulty. A conflict is now a set of correct and faulty components that is inconsistent with the provided domain knowledge and the observations. To accommodate this situation, [de Kleer et al., 1992] extend the concept of minimal diagnoses to kernel diagnoses and characterise the conditions under which the minimality assumption still holds. The kernel diagnoses are given by the prime implicants of the minimal conflicts. Moreover, the minimal sets of kernel diagnoses sufficient to cover every diagnosis correspond to the irredundant sets of prime implicants⁷ of all minimal conflicts. These extensions cause drastic additional effort, because there can be exponentially more kernel diagnoses than minimal diagnoses, and finding irredundant sets of prime implicants is NP-hard. Therefore, [de Kleer et al., 1992] characterise two assumptions under which the kernel diagnoses are identical to the minimal diagnoses. This is the case if all conflicts contain only faulty components; then there is again only one irredundant set of minimal diagnoses (the set containing all minimal diagnoses). The two assumptions that can ensure these properties are the ignorance of abnormal behavior assumption and the limited knowledge of abnormal behavior assumption. The ignorance of abnormal behavior assumption excludes knowledge about faulty behavior and thus characterises the original situation of GDE. The limited knowledge of abnormal behavior assumption states that the knowledge of abnormal behavior does not rule out any diagnosis indicating a set of faulty components if there exists a valid diagnosis indicating a subset of them as faulty, and if the additional components assumed faulty are not inconsistent with the observations and the system description.⁸ The latter assumption is a refinement of the former; that is, the truth of the ignorance of abnormal behavior assumption implies the truth of the limited knowledge of abnormal behavior assumption.

A similar type of assumption is used by [Bylander et al., 1991] to characterise different complexity classes of component-based diagnosis. In general, finding one or all diagnoses is intractable. The independence and monotonicity assumptions, which have the same effect as the limited knowledge of abnormal behavior assumption, require that each super-set of a diagnosis indicating a set of faulty components is also a diagnosis.⁹ In this case, the worst-case complexity of finding one minimal diagnosis grows with the square of the number of components. However, the task of finding all minimal diagnoses remains NP-hard in the number of components. This corresponds to the fact that the minimality assumption of GDE (i.e., the ignorance of abnormal behavior and limited knowledge of abnormal behavior assumptions), which searches for all diagnoses, does not change the worst-case but only the average-case behavior.

7. See [McCluskey, 1956]. An implicant is a conjunction of positive and negative literals. Without fault models, minimal hypotheses contain only negative literals (¬ok(c_i)). In the case of fault models, the hypotheses contain positive and negative literals (ok(c_i) and ¬ok(c_i)). Therefore, minimality cannot simply be defined by set inclusion of the literals of a conjunction.
8. [McIlraith, 1994] generalizes these assumptions for the dual case of diagnosing a minimal set of components proven to be correct and applies them to characterizing minimal abductive diagnoses.
9. More precisely, the explanatory power of a hypothesis increases monotonically when faulty or correct components are added.


More information

An image-based reasoning model for rock interpretation

An image-based reasoning model for rock interpretation To appear in Proceedings of the IJCAI-03 Workshop on Intelligent Computing in Petroleum Industry (ICPI-03), Acapulco, Mexico, August 2003. An image-based reasoning model for rock interpretation Luís A.

More information

Towards a Conceptual Framework for Expert System Validation

Towards a Conceptual Framework for Expert System Validation Towards a Conceptual Framework for Expert System Validation Pedro Meseguer IIIA CEAB-CSIC Cami Sta. Barbara, 17300 Blanes (Girona) SPAIN pedro@ceab.es Abstract. In this paper we address a number of fundamental

More information

Evaluating Quality-in-Use Using Bayesian Networks

Evaluating Quality-in-Use Using Bayesian Networks Evaluating Quality-in-Use Using Bayesian Networks M.A Moraga 1, M.F. Bertoa 2, M.C. Morcillo 3, C. Calero 1, A. Vallecillo 2 1 Alarcos Research Group Institute of Information Technologies & Systems. Dept.

More information

The Illusion of Certainty

The Illusion of Certainty The Illusion of Certainty Grady Campbell CMU Software Engineering Institute 4301 Wilson Blvd., Suite 200 Arlington, VA 22203 703-908-8223 ghc@sei.cmu.edu Abstract Acquisition policy and, even more so,

More information

CLASS/YEAR: II MCA SUB.CODE&NAME: MC7303, SOFTWARE ENGINEERING. 1. Define Software Engineering. Software Engineering: 2. What is a process Framework? Process Framework: UNIT-I 2MARKS QUESTIONS AND ANSWERS

More information

WE consider the general ranking problem, where a computer

WE consider the general ranking problem, where a computer 5140 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 11, NOVEMBER 2008 Statistical Analysis of Bayes Optimal Subset Ranking David Cossock and Tong Zhang Abstract The ranking problem has become increasingly

More information

Modeling the responsibility relationship in the REA Business Ontology using Petri Nets

Modeling the responsibility relationship in the REA Business Ontology using Petri Nets Modeling the responsibility relationship in the REA Business Ontology using Petri Nets Hans Weigand 1, Paul Johannesson 2, Birger Andersson 2 1 University of Tilburg, P.O.Box 90153, 5000 LE Tilburg, The

More information

Software Quality. A Definition of Quality. Definition of Software Quality. Definition of Implicit Requirements

Software Quality. A Definition of Quality. Definition of Software Quality. Definition of Implicit Requirements Definition of Software Quality Software Quality The Ultimate Goal of Software Engineering Software must conformance to explicit and implicit requirements if it is to be considered to be of good quality.

More information

A Goals-Means Task Analysis method 1 Erik Hollnagel

A Goals-Means Task Analysis method 1 Erik Hollnagel A Goals-Means Task Analysis method 1 Erik Hollnagel (This is a text recovered from a report to ESA-ESTEC in a project on Task Analysis Methods 1991. ) 1. The Logic Of Task Analysis The purpose of a task

More information

Examining and Modeling Customer Service Centers with Impatient Customers

Examining and Modeling Customer Service Centers with Impatient Customers Examining and Modeling Customer Service Centers with Impatient Customers Jonathan Lee A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF APPLIED SCIENCE DEPARTMENT

More information

POINT ELASTICITY VERSUS ARC ELASTICITY: ON DIFFERENT APPROACHES TO TEACHING ELASTICITY IN PRINCIPLES COURSES

POINT ELASTICITY VERSUS ARC ELASTICITY: ON DIFFERENT APPROACHES TO TEACHING ELASTICITY IN PRINCIPLES COURSES POINT ELASTICITY VERSUS ARC ELASTICITY: ON DIFFERENT APPROACHES TO TEACHING ELASTICITY IN PRINCIPLES COURSES Dmitry Shishkin, Georgia Gwinnett College Andrei Olifer, Georgia Gwinnett College ABSTRACT While

More information

Software Safety Assurance What Is Sufficient?

Software Safety Assurance What Is Sufficient? Software Safety Assurance What Is Sufficient? R.D. Hawkins, T.P. Kelly Department of Computer Science, The University of York, York, YO10 5DD UK Keywords: Software, Assurance, Arguments, Patterns. Abstract

More information

On Optimal Multidimensional Mechanism Design

On Optimal Multidimensional Mechanism Design On Optimal Multidimensional Mechanism Design YANG CAI, CONSTANTINOS DASKALAKIS and S. MATTHEW WEINBERG Massachusetts Institute of Technology We solve the optimal multi-dimensional mechanism design problem

More information

Introduction to Artificial Intelligence. Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST

Introduction to Artificial Intelligence. Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST Introduction to Artificial Intelligence Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST Chapter 2 Rule-based expert systems Demonstration of rule-based expert system MEDIA ADVISOR: a demonstration

More information

Shewhart and the Probability Approach. The difference is much greater than how we compute the limits

Shewhart and the Probability Approach. The difference is much greater than how we compute the limits Quality Digest Daily, November 2, 2015 Manuscript 287 The difference is much greater than how we compute the limits Donald J. Wheeler & Henry R. Neave In theory, there is no difference between theory and

More information

Verification and Validation

Verification and Validation System context Subject facet Usage facet IT system facet Development facet Validation Core activities Elicitation Negotiation Context of consideration Execution of RE activities Created requirements artefacts

More information

PMT A LEVEL ECONOMICS. ECON1/Unit 1 Markets and Market Failure Mark scheme June Version 0.1 Final

PMT A LEVEL ECONOMICS. ECON1/Unit 1 Markets and Market Failure Mark scheme June Version 0.1 Final A LEVEL ECONOMICS ECON1/Unit 1 Markets and Market Failure Mark scheme 2140 June 2014 Version 0.1 Final Mark schemes are prepared by the Lead Assessment Writer and considered, together with the relevant

More information

A RFBSE model for capturing engineers useful knowledge and experience during the design process

A RFBSE model for capturing engineers useful knowledge and experience during the design process A RFBSE model for capturing engineers useful knowledge and experience during the design process Hao Qin a, Hongwei Wang a*, Aylmer Johnson b a. School of Engineering, University of Portsmouth, Anglesea

More information

7 Conclusions. 7.1 General Discussion

7 Conclusions. 7.1 General Discussion 146 7 Conclusions The last chapter presents a final discussion of the results and the implications of this dissertation. More specifically, this chapter is structured as follows: The first part of this

More information

Mission Planning Systems for Earth Observation Missions

Mission Planning Systems for Earth Observation Missions Mission Planning Systems for Earth Observation Missions Marc Niezette Anite Systems GmbH Robert Bosch StraJ3e 7 Darmstadt, Germany Marc.Niezette@AniteSystems.de Abstract This paper describes two different

More information

REASONING ABOUT CUSTOMER NEEDS IN MULTI-SUPPLIER ICT SERVICE BUNDLES USING DECISION MODELS

REASONING ABOUT CUSTOMER NEEDS IN MULTI-SUPPLIER ICT SERVICE BUNDLES USING DECISION MODELS REASONING ABOUT CUSTOMER NEEDS IN MULTI-SUPPLIER ICT SERVICE BUNDLES USING DECISION MODELS Sybren de Kinderen, Jaap Gordijn and Hans Akkermans The Network Institute, VU University Amsterdam, The Netherlands

More information

A Knowledge-Based Framework for Quantity Takeoff and Cost Estimation in the AEC Industry Using BIM

A Knowledge-Based Framework for Quantity Takeoff and Cost Estimation in the AEC Industry Using BIM A Knowledge-Based Framework for Quantity Takeoff and Cost Estimation in the AEC Industry Using BIM S. Aram a, C. Eastman a and R. Sacks b a College of Architecture, Georgia Institute of Technology, USA

More information

10.2 Correlation. Plotting paired data points leads to a scatterplot. Each data pair becomes one dot in the scatterplot.

10.2 Correlation. Plotting paired data points leads to a scatterplot. Each data pair becomes one dot in the scatterplot. 10.2 Correlation Note: You will be tested only on material covered in these class notes. You may use your textbook as supplemental reading. At the end of this document you will find practice problems similar

More information

e Government Are Public Data really Open and Clear to Citizens?

e Government Are Public Data really Open and Clear to Citizens? Workshop: Government and the Internet: Participation, Expression and Control European University Institute, 8 9 March 2011 e Government Are Public Data really Open and Clear to Citizens? Maria Angela Biasiotti,

More information

Software Quality. Unit 6: System Quality Requirements

Software Quality. Unit 6: System Quality Requirements Software Quality Unit 6: System Quality Requirements System Requirements Best products, from users point of view, are those which have been developed considering organizational needs, and how product is

More information

TIMETABLING EXPERIMENTS USING GENETIC ALGORITHMS. Liviu Lalescu, Costin Badica

TIMETABLING EXPERIMENTS USING GENETIC ALGORITHMS. Liviu Lalescu, Costin Badica TIMETABLING EXPERIMENTS USING GENETIC ALGORITHMS Liviu Lalescu, Costin Badica University of Craiova, Faculty of Control, Computers and Electronics Software Engineering Department, str.tehnicii, 5, Craiova,

More information

Towards problem solving methods in multi-agent systems

Towards problem solving methods in multi-agent systems University of Wollongong Research Online Faculty of Engineering and Information Sciences - Papers: Part A Faculty of Engineering and Information Sciences 2009 Towards problem solving methods in multi-agent

More information

Quality Control and Reliability Inspection and Sampling

Quality Control and Reliability Inspection and Sampling Quality Control and Reliability Inspection and Sampling Prepared by Dr. M. S. Memon Dept. of Industrial Engineering & Management Mehran UET, Jamshoro, Sindh, Pakistan 1 Chapter Objectives Introduction

More information

Applying PSM to Enterprise Measurement

Applying PSM to Enterprise Measurement Applying PSM to Enterprise Measurement Technical Report Prepared for U.S. Army TACOM by David Card and Robert MacIver Software Productivity Consortium March 2003 SOFTWARE PRODUCTIVITY CONSORTIUM Applying

More information

THE RELATIONSHIP BETWEEN FUNCTIONS AND REQUIREMENTS FOR AN IMPROVED DETECTION OF COMPONENT LINKAGES

THE RELATIONSHIP BETWEEN FUNCTIONS AND REQUIREMENTS FOR AN IMPROVED DETECTION OF COMPONENT LINKAGES INTERNATIONAL DESIGN CONFERENCE - DESIGN 2008 Dubrovnik - Croatia, May 19-22, 2008. THE RELATIONSHIP BETWEEN FUNCTIONS AND REQUIREMENTS FOR AN IMPROVED DETECTION OF COMPONENT LINKAGES P. Boersting, R.

More information

A Logic-Oriented Wafer Fab Lot Scheduling Knowledge-Based System

A Logic-Oriented Wafer Fab Lot Scheduling Knowledge-Based System A Logic-Oriented Wafer Fab Lot Scheduling Knowledge-Based System LIANG-CHUNG HUANG 1, SHIAN-SHYONG TSENG 1,2,*, YIAN-SHU CHU 1 1 Department of Computer Science National Chiao Tung University 1001 Ta Hsueh

More information

Understanding UPP. Alternative to Market Definition, B.E. Journal of Theoretical Economics, forthcoming.

Understanding UPP. Alternative to Market Definition, B.E. Journal of Theoretical Economics, forthcoming. Understanding UPP Roy J. Epstein and Daniel L. Rubinfeld Published Version, B.E. Journal of Theoretical Economics: Policies and Perspectives, Volume 10, Issue 1, 2010 Introduction The standard economic

More information

The LexiCon: structuring semantics

The LexiCon: structuring semantics The LexiCon: structuring semantics Author: Kees Woestenenk Institution: STABU foundation E-mail: kwoestenenk@stabu.nl Abstract: The ISO/PAS 12006-3:2000 Framework for object oriented information exchange

More information

Introduction to Software Testing

Introduction to Software Testing Introduction to Software Testing Introduction Chapter 1 introduces software testing by : describing the activities of a test engineer defining a number of key terms explaining the central notion of test

More information

Systems Engineering (SE)

Systems Engineering (SE) Topic Outline Underpinnings of Systems Engineering Requirements: foundation for Systems Engineering work Introduction to Systems Engineering design methodologies Designing systems for their life cycle

More information

Belize 2010 Enterprise Surveys Data Set

Belize 2010 Enterprise Surveys Data Set I. Introduction Belize 2010 Enterprise Surveys Data Set 1. This document provides additional information on the data collected in Belize between August and October 2011 as part of the Latin America and

More information

DESIRE: MODELLING MULTI-AGENT SYSTEMS IN A COMPOSITIONAL FORMAL FRAMEWORK *

DESIRE: MODELLING MULTI-AGENT SYSTEMS IN A COMPOSITIONAL FORMAL FRAMEWORK * DESIRE: MODELLING MULTI-AGENT SYSTEMS IN A COMPOSITIONAL FORMAL FRAMEWORK * FRANCES M.T. BRAZIER Department of Mathematics and Computer Science, Vrije Universiteit Amsterdam De Boelelaan 1081a, 1081 HV

More information

The Bahamas 2010 Enterprise Surveys Data Set

The Bahamas 2010 Enterprise Surveys Data Set I. Introduction The Bahamas 2010 Enterprise Surveys Data Set 1. This document provides additional information on the data collected in the Bahamas between April 2011 and August 2011 as part of the Latin

More information

The Quality Paradigm. Quality Paradigm Elements

The Quality Paradigm. Quality Paradigm Elements The Quality Paradigm We shall build good ships here; at a profit if we can, at a loss if we must, but always good ships. motto used at Newport News Shipbuilding Quality Paradigm Elements Defining the nature

More information

This chapter illustrates the evolutionary differences between

This chapter illustrates the evolutionary differences between CHAPTER 6 Contents An integrated approach Two representations CMMI process area contents Process area upgrades and additions Project management concepts process areas Project Monitoring and Control Engineering

More information

Test Management: Part I. Software Testing: INF3121 / INF4121

Test Management: Part I. Software Testing: INF3121 / INF4121 Test Management: Part I Software Testing: INF3121 / INF4121 Summary: Week 6 Test organisation Independence Tasks of the test leader and testers Test planning and estimation Activities Entry and exit criteria

More information

Finding Compensatory Pathways in Yeast Genome

Finding Compensatory Pathways in Yeast Genome Finding Compensatory Pathways in Yeast Genome Olga Ohrimenko Abstract Pathways of genes found in protein interaction networks are used to establish a functional linkage between genes. A challenging problem

More information

ASSESSMENT OF THE ISI SIMULATION PART OF THE ENIQ PILOT STUDY

ASSESSMENT OF THE ISI SIMULATION PART OF THE ENIQ PILOT STUDY EUROPEAN COMMISSION DG-JRC Institute for Advanced Materials Joint Research Centre ASSESSMENT OF THE ISI SIMULATION PART OF THE ENIQ PILOT STUDY December 1999 ENIQ Report nr. 17 EUR 19025 EN Approved by

More information

Multiagent Systems: Spring 2006

Multiagent Systems: Spring 2006 Multiagent Systems: Spring 2006 Ulle Endriss Institute for Logic, Language and Computation University of Amsterdam Ulle Endriss (ulle@illc.uva.nl) 1 Combinatorial Auctions In a combinatorial auction, the

More information

Efficient Business Service Consumption by Customization with Variability Modelling

Efficient Business Service Consumption by Customization with Variability Modelling Efficient Business Service Consumption by Customization with Variability Modelling Michael Stollberg and Marcel Muth SAP Research, Chemnitzer Str. 48, 01187 Dresden, Germany (michael.stollberg,marcel.muth)@sap.com

More information

Software Reliability

Software Reliability www..org 55 Software Reliability Sushma Malik Assistant Professor, FIMT, New Delhi sushmalik25@gmail.com Abstract Unreliability of any product comes due to the failures or presence of faults in the system.

More information

Before You Start Modelling

Before You Start Modelling Chapter 2 Before You Start Modelling This chapter looks at the issues you need to consider before starting to model with ARIS. Of particular importance is the need to define your objectives and viewpoint.

More information

Content - Overview Motivation - Introduction. Part 1. Overview. Definition: Planning. Planning Plan Management

Content - Overview Motivation - Introduction. Part 1. Overview. Definition: Planning. Planning Plan Management Content - Overview Motivation - Introduction Diagnostic-Therapeutic Cycle Medical Therapy Planning Guideline Repositories Guideline Development (CBO) Instruments for Quality of Guidelines Agree Instrument

More information

The Job Assignment Problem: A Study in Parallel and Distributed Machine Learning

The Job Assignment Problem: A Study in Parallel and Distributed Machine Learning The Job Assignment Problem: A Study in Parallel and Distributed Machine Learning Gerhard Weiß Institut für Informatik, Technische Universität München D-80290 München, Germany weissg@informatik.tu-muenchen.de

More information

Requirements Verification and Validation

Requirements Verification and Validation SEG3101 (Fall 2010) Requirements Verification and Validation SE502: Software Requirements Engineering 1 Table of Contents Introduction to Requirements Verification and Validation Requirements Verification

More information

Principles of Verification, Validation, Quality Assurance, and Certification of M&S Applications

Principles of Verification, Validation, Quality Assurance, and Certification of M&S Applications Introduction to Modeling and Simulation Principles of Verification, Validation, Quality Assurance, and Certification of M&S Applications OSMAN BALCI Professor Copyright Osman Balci Department of Computer

More information

Darshan Institute of Engineering & Technology for Diploma Studies Rajkot Unit-1

Darshan Institute of Engineering & Technology for Diploma Studies Rajkot Unit-1 Failure Rate Darshan Institute of Engineering & Technology for Diploma Studies Rajkot Unit-1 SOFTWARE (What is Software? Explain characteristics of Software. OR How the software product is differing than

More information

Solutions Manual. Object-Oriented Software Engineering. An Agile Unified Methodology. David Kung

Solutions Manual. Object-Oriented Software Engineering. An Agile Unified Methodology. David Kung 2 David Kung Object-Oriented Software Engineering An Agile Unified Methodology Solutions Manual 3 Message to Instructors July 10, 2013 The solutions provided in this manual may not be complete, or 100%

More information

Abstract. Introduction

Abstract. Introduction Enhancing Resource-Leveling via Intelligent Scheduling: Turnaround & Aerospace Applications Demonstrating 25%+ Flow-Time Reductions Robert A. Richards, Ph.D. Project Manager, Stottler Henke Associates,

More information

L2 The requirement study. Requirement Engineering. Fang Chen

L2 The requirement study. Requirement Engineering. Fang Chen L2 The requirement study Fang Chen Requirement Engineering Requirement are ubiquitous part of our lives Understand the requirement through communication People are hard to understand! Requirement Creation

More information

mywbut.com Software Reliability and Quality Management

mywbut.com Software Reliability and Quality Management Software Reliability and Quality Management 1 Software Reliability Issues 2 Specific Instructional Objectives At the end of this lesson the student would be able to: Differentiate between a repeatable

More information

WEB SERVICES COMPOSING BY MULTIAGENT NEGOTIATION

WEB SERVICES COMPOSING BY MULTIAGENT NEGOTIATION Jrl Syst Sci & Complexity (2008) 21: 597 608 WEB SERVICES COMPOSING BY MULTIAGENT NEGOTIATION Jian TANG Liwei ZHENG Zhi JIN Received: 25 January 2008 / Revised: 10 September 2008 c 2008 Springer Science

More information

Certified Business Analysis Professional - Introduction

Certified Business Analysis Professional - Introduction Certified Business Analysis Professional - Introduction COURSE STRUCTURE Business Analysis Monitoring and Planning Module 1 Elicitation and Collaboration Module 2 Requirement Lifecycle Management Module

More information

THE EFFECTS OF FULL TRANSPARENCY IN SUPPLIER SELECTION ON SUBJECTIVITY AND BID QUALITY. Jan Telgen and Fredo Schotanus

THE EFFECTS OF FULL TRANSPARENCY IN SUPPLIER SELECTION ON SUBJECTIVITY AND BID QUALITY. Jan Telgen and Fredo Schotanus THE EFFECTS OF FULL TRANSPARENCY IN SUPPLIER SELECTION ON SUBJECTIVITY AND BID QUALITY Jan Telgen and Fredo Schotanus Jan Telgen, Ph.D. and Fredo Schotanus, Ph.D. are NEVI Professor of Public Procurement

More information

2 The Action Axiom, Preference, and Choice in General

2 The Action Axiom, Preference, and Choice in General Introduction to Austrian Consumer Theory By Lucas Engelhardt 1 Introduction One of the primary goals in any Intermediate Microeconomic Theory course is to explain prices. The primary workhorse for explaining

More information

Statistical Sampling in Healthcare Audits and Investigations

Statistical Sampling in Healthcare Audits and Investigations Statistical Sampling in Healthcare Audits and Investigations Michael Holper SVP Compliance and Audit Services Trinity Health Stefan Boedeker Managing Director Berkley Research Group LLC HCCA Compliance

More information

Project Summary. Acceptanstest av säkerhetskritisk plattformsprogramvara

Project Summary. Acceptanstest av säkerhetskritisk plattformsprogramvara Project Summary Acceptanstest av säkerhetskritisk plattformsprogramvara 2 AcSäPt Acceptanstest av säkerhetskritisk plattformsprogramvara The Project In this report we summarise the results of the FFI-project

More information

Introduction to software testing and quality process

Introduction to software testing and quality process Introduction to software testing and quality process Automated testing and verification J.P. Galeotti - Alessandra Gorla Engineering processes Engineering disciplines pair construction activities activities

More information

Intelligent Agents. Multi-Agent Planning. Ute Schmid. Applied Computer Science, Bamberg University. last change: 17. Juli 2014

Intelligent Agents. Multi-Agent Planning. Ute Schmid. Applied Computer Science, Bamberg University. last change: 17. Juli 2014 Intelligent Agents Multi-Agent Planning Ute Schmid Applied Computer Science, Bamberg University last change: 17. Juli 2014 Ute Schmid (CogSys) Intelligent Agents last change: 17. Juli 2014 1 / 38 Working

More information

AIRBORNE SOFTWARE VERIFICATION FRAMEWORK AIMED AT AIRWORTHINESS

AIRBORNE SOFTWARE VERIFICATION FRAMEWORK AIMED AT AIRWORTHINESS 27 TH INTERNATIONAL CONGRESS OF THE AERONAUTICAL SCIENCES AIRBORNE SOFTWARE VERIFICATION FRAMEWORK AIMED AT AIRWORTHINESS Yumei Wu*, Bin Liu* *Beihang University Keywords: software airworthiness, software

More information

The Four Levels of Requirements Engineering for and in Dynamic Adaptive Systems

The Four Levels of Requirements Engineering for and in Dynamic Adaptive Systems The Four Levels of Requirements Engineering for and in Dynamic Adaptive Systems Daniel M. Berry, U Waterloo Betty H.C. Cheng, Michigan State U Ji Zhang, Michigan State U 2005 D.M. Berry, B.H.C. Cheng,

More information

SOFTWARE FAILURE MODES EFFECTS ANALYSIS OVERVIEW

SOFTWARE FAILURE MODES EFFECTS ANALYSIS OVERVIEW SOFTWARE FAILURE MODES EFFECTS ANALYSIS OVERVIEW Copyright, Ann Marie Neufelder, SoftRel, LLC, 2010 amneufelder@softrel.com www.softrel.com This presentation may not be copied in part or whole without

More information

Generating Value Models using Skeletal Design Techniques

Generating Value Models using Skeletal Design Techniques Generating Value Models using Skeletal Design Techniques Iván S. Razo-Zapata, Ania Chmielowiec, Jaap Gordijn, Maarten van Steen, and Pieter De Leenheer VU University Amsterdam De Boelelaan 1081 1081 HV,

More information

LOGISTICAL ASPECTS OF THE SOFTWARE TESTING PROCESS

LOGISTICAL ASPECTS OF THE SOFTWARE TESTING PROCESS LOGISTICAL ASPECTS OF THE SOFTWARE TESTING PROCESS Kazimierz Worwa* * Faculty of Cybernetics, Military University of Technology, Warsaw, 00-908, Poland, Email: kazimierz.worwa@wat.edu.pl Abstract The purpose

More information

Indian Res. J. Ext. Edu. 12 (3), September, Maize AGRIdaksh: A Farmer Friendly Device

Indian Res. J. Ext. Edu. 12 (3), September, Maize AGRIdaksh: A Farmer Friendly Device Indian Res. J. Ext. Edu. 12 (3), September, 2012 13 Maize AGRIdaksh: A Farmer Friendly Device V.K. Yadav 1, Sudeep Marwaha 2, Sangit Kumar 3, P. Kumar 4, Jyoti Kaul 5, C.M. Parihar 6 and P. Supriya 7 1.

More information

PRODUCTIVITY AND ACCURACY IN PLASTIC INJECTION MOULD QUOTATION

PRODUCTIVITY AND ACCURACY IN PLASTIC INJECTION MOULD QUOTATION 5th International DAAAM Baltic Conference "INDUSTRIAL ENGINEERING ADDING INNOVATION CAPACITY OF LABOUR FORCE AND ENTREPRENEURS" 20 22 April 2006, Tallinn, Estonia PRODUCTIVITY AND ACCURACY IN PLASTIC INJECTION

More information

Requirement Engineering. L3 The requirement study. Change is constant. Communication problem? People are hard to understand!

Requirement Engineering. L3 The requirement study. Change is constant. Communication problem? People are hard to understand! Requirement Engineering L3 The requirement study Fang Chen Requirement are ubiquitous part of our lives Understand the requirement through communication Requirement Creation Communication problem? People

More information

Agent Based Reasoning in Multilevel Flow Modeling

Agent Based Reasoning in Multilevel Flow Modeling ZHANG Xinxin *, and LIND Morten * *, Department of Electric Engineering, Technical University of Denmark, Kgs. Lyngby, DK-2800, Denmark (Email: xinz@elektro.dtu.dk and mli@elektro.dtu.dk) 1 Introduction

More information

An Agent-Based Concept for Problem Management Systems to Enhance Reliability

An Agent-Based Concept for Problem Management Systems to Enhance Reliability An Agent-Based Concept for Problem Management Systems to Enhance Reliability H. Wang, N. Jazdi, P. Goehner A defective component in an industrial automation system affects only a limited number of sub

More information

PRACTICAL EXPERIENCES FROM RUNNING A LARGE SCADA/EMS/BMS PROJECT P FORSGREN

PRACTICAL EXPERIENCES FROM RUNNING A LARGE SCADA/EMS/BMS PROJECT P FORSGREN 21, rue d'artois, F-75008 Paris http://www.cigre.org D2-101 Session 2004 CIGRÉ PRACTICAL EXPERIENCES FROM RUNNING A LARGE SCADA/EMS/BMS PROJECT P FORSGREN J-O LUNDBERG * Royal Institute of Technology Svenska

More information

MARK SCHEME for the October/November 2015 series 9708 ECONOMICS

MARK SCHEME for the October/November 2015 series 9708 ECONOMICS CAMBRIDGE INTERNATIONAL EXAMINATIONS Cambridge International Advanced Level MARK SCHEME for the October/November 2015 series 9708 ECONOMICS 9708/42 Paper 4 (Data Response and Essays Supplement), maximum

More information

A Unified Theory of Software Testing Bret Pettichord 16 Feb 2003

A Unified Theory of Software Testing Bret Pettichord 16 Feb 2003 A Unified Theory of Software Testing Bret Pettichord 16 Feb 2003 This paper presents a theory, or model, for analyzing and understanding software test techniques. It starts by developing a theory for describing

More information

Platform-Based Design of Heterogeneous Embedded Systems

Platform-Based Design of Heterogeneous Embedded Systems Platform-Based Design of Heterogeneous Embedded Systems Ingo Sander Royal Institute of Technology Stockholm, Sweden ingo@kth.se Docent Lecture August 31, 2009 Ingo Sander (KTH) Platform-Based Design August

More information

Bridging the Gap between Business Strategy and Software Development

Bridging the Gap between Business Strategy and Software Development Bridging the Gap between Business Strategy and Software Development Victor R. Basili University of Maryland and Fraunhofer Center - Maryland Why Measurement? What is not measurable make measurable. Galileo

More information

Platform-Based Design of Heterogeneous Embedded Systems

Platform-Based Design of Heterogeneous Embedded Systems Platform-Based Design of Heterogeneous Embedded Systems Ingo Sander Royal Institute of Technology Stockholm, Sweden ingo@kth.se Docent Lecture August 31, 2009 Ingo Sander (KTH) Platform-Based Design August

More information

3. Theoretical Background of Competency based Recruitment and Selection 1

3. Theoretical Background of Competency based Recruitment and Selection 1 3. Theoretical Background of Competency based Recruitment and Selection 1 In this chapter we discuss the concepts and models that will come up hereinafter and the advantages and challenges of their application

More information

A Fine-Grained Analysis on the Evolutionary Coupling of Cloned Code

A Fine-Grained Analysis on the Evolutionary Coupling of Cloned Code A Fine-Grained Analysis on the Evolutionary Coupling of Cloned Code Manishankar Mondal Chanchal K. Roy Kevin A. Schneider Software Research Laboratory, Department of Computer Science, University of Saskatchewan,

More information