PATROL: a comprehensive reputation-based trust model. Ayman Tajeddine, Ayman Kayssi, Ali Chehab* and Hassan Artail


Int. J. Internet Technology and Secured Transactions, Vol. 1, Nos. 1/2, 2007

Electrical and Computer Engineering Department, American University of Beirut, Beirut, Lebanon
ast03@aub.edu.lb ayman@aub.edu.lb chehab@aub.edu.lb hartail@aub.edu.lb
*Corresponding author

Abstract: In this paper, we present PATROL, a general and comprehensive reputation-based trust model for distributed computing. The proposed model is an enhancement of our previous model, TRUMMAR, and aims at a truly unique model that incorporates most concepts essential to trust-based decisions. Among the concepts upon which the trust model is based are reputation values, direct experiences, trust in the credibility of a host to give recommendations, decay of information with time based on a dynamic decay factor, first impressions, similarity, popularity, activity, cooperation between hosts, and a hierarchy of host systems. The simulations performed on this model confirm its correctness and its adaptability to different environments and situations.

Keywords: trust; reputation; distributed computing; network security.

Reference to this paper should be made as follows: Tajeddine, A., Kayssi, A., Chehab, A. and Artail, H. (2007) 'PATROL: a comprehensive reputation-based trust model', Int. J. Internet Technology and Secured Transactions, Vol. 1, Nos. 1/2, pp.108-129.

Biographical notes: A. Tajeddine received his Bachelor's Degree and his Master's Degree in Computer and Communications Engineering (CCE) from the American University of Beirut (AUB). His research interests are network security and mobile agents.

A. Kayssi received his BE with Distinction in 1987 from the American University of Beirut, Lebanon, and the MSE in 1989 and PhD in 1993 from the University of Michigan, Ann Arbor, USA, all in Electrical Engineering. He is a Professor and Chairman of Electrical and Computer Engineering at the American University of Beirut. His research and teaching interests are in the areas of internet engineering, wireless networking, CAD for VLSI, and modelling and simulation.

A. Chehab received his Bachelor's Degree in EE from the American University of Beirut (AUB) in 1987, his Master's Degree in EE from Syracuse University, and his PhD Degree in ECE from the University of North Carolina at Charlotte. From 1989 to 1998, he was a Lecturer in the ECE Department at AUB, which he later rejoined as an Assistant Professor. His research interests are VLSI design and test, mobile agents and information security.

Copyright 2007 Inderscience Enterprises Ltd.

H. Artail worked as a System Development Supervisor at the Scientific Labs of DaimlerChrysler, Michigan, before joining AUB. At DaimlerChrysler, he worked for 11 years in the field of software and system development for vehicle testing applications, covering the areas of instrument control, computer networking, distributed computing, data acquisition and data processing. He obtained a BS and MS in Electrical Engineering from the University of Detroit in 1985 and 1986, respectively, and a PhD from Wayne State University. His research is in the areas of internet and mobile computing, distributed systems, ad hoc networks, data management, and computer and network security.

1 Introduction

One of the most critical issues in distributed systems is security. For an entity to interact with another, it should trust the other entity to ensure the correctness and credibility of its responses. The ideal solution to this concern is an environment that is fully trusted by all its entities. However, because such a solution cannot be achieved, research has focused on trust and reputation as means to secure distributed systems.

Our proposed approach requires that a host asks about the reputation of a target host that it wants to interact with. It calculates a reputation value based on its previous experiences and the reputation values gathered from other hosts, and then decides whether or not to interact with the target host. The initiator also evaluates the credibility of the hosts providing reputation values by estimating the Similarity (Sim), Activity (Act), Popularity (Pop) and Cooperation (Co) of the queried hosts. Moreover, each host uses different dynamic decay factors that depend on the consistency of the interaction results of a certain host.

The rest of the paper is organised as follows: Section 2 surveys previous work in the area of trust and reputation. Section 3 presents the proposed trust model, PATROL, with all its parameters. The simulation results are presented in Section 4. Section 5 discusses the system overhead and presents some security issues. Finally, Section 6 presents some conclusions.

2 Related work

Trust and reputation mechanisms have been proposed in various fields such as distributed computing, agent technology, grid computing, economics and evolutionary biology. In this section, we review recent work on reputation-based trust.

Xiong and Liu (2004) present a reputation-based trust supporting framework. They introduce three basic parameters and two adaptive parameters, and incorporate the concepts of a trust value and the similarity with oneself to compute credibility and satisfaction.

Wang and Vassileva (2003) propose a Bayesian network-based trust model that uses reputation built on recommendations. They differentiate between two types of trust: trust in the host's capability to provide the service, and trust in the host's reliability in providing recommendations.

Selcuk et al. (2004) proposed a reputation-based management protocol in which the reliability of peers is calculated from the outcomes of past interactions and saved in trust vectors.

Gilbert et al. (2004) focused on the Grid and developed a trust model based on reputation systems and feedback mechanisms. The paper distinguishes three types of reputation systems, namely positive systems, negative systems and hybrid systems, and stresses the advantage of hybrid systems for maintaining data integrity.

Dewan (2004) stresses the need to introduce external motivation for peers to cooperate and be trustworthy, and recommends the use of digital reputations, which represent the online transaction history of a host.

Bearly and Kumar (2004) propose a Bayesian network based, user-extensible and universal trust set. The paper develops a general model using activity-related information to understand the behaviour and position of a peer.

Falcone and Castelfranchi (2004) study two main aspects of trust dynamics: direct experiences, and the situation in which an entity is trusted.

Yu and Singh (2002) present a mathematical theory of evidence to evaluate and spread trustworthiness ratings of agents.

Mui et al. (2002) propose a computational model based on reciprocity, trust and reputation. In other work, Mui et al. (2003) also propose an intuitive classification summarising different notions of reputation that have been studied across diverse disciplines.

Cubaleska and Schneider (2002) propose a method for a posteriori identification of malicious hosts to build a trust policy.

Tran and Cohen (2003) propose a reputation-oriented reinforcement learning algorithm for buying agents in electronic market environments, taking into account that the quality of a good offered by different selling agents may differ and that a selling agent may alter the quality of its goods.

Falcone et al. (2004) present a random trustier, a statistical one and a cognitive one, and compare their effectiveness based on direct experiences and some environmental features.

The TRUMMAR model was developed by Derbas et al. (2004) for mobile agents. TRUMMAR forms the basis for our present work and is reviewed in Section 3.3.

In Table 1, we summarise the different features incorporated in the models discussed above. We also show how the model proposed in this paper, PATROL, is comprehensive and includes all the features listed in the table. Note that PATROL is an extended version of our previous reputation-based trust model described by Tajeddine et al. (2005).

Table 1 Summary comparison of previous work with the proposed model. The compared features are: hierarchy of trust, position of member in community, prior-derived reputation, trust propagation, first impression, result of interaction, fixed decay factor, dynamic decay factor, trusting vs. suspicious, similarity, activity, popularity, cooperation, number of interactions, and fuzzy techniques. Each of the surveyed models [1]-[14] supports only a subset of these features, while PATROL incorporates all of them.

3 PATROL: a comprehensive reputation-based trust model

In this section we present PATROL, an enhancement of our previous trust model, TRUMMAR, for the calculation of reputation values and the determination of trust decisions. PATROL is a unique model that incorporates concepts that are essential to determining trust-based decisions.

3.1 The model flow

The procedure used in PATROL is best explained with an example. Consider the case where a host X wants to interact with another host Y. First, host X calculates the time since it last inquired about the reputation of Y. If this time is greater than a predefined time interval T_A, host X asks the other hosts in the system that are cooperative (i.e., whose cooperation value is greater than the cooperation threshold) about what they know concerning Y (through their reputation vectors). The queried hosts decay their saved reputation values and send them, along with their reputation vectors, to host X, which in turn calculates the model parameters (explained in Section 3.2): the similarity, activity, popularity and cooperation of the queried hosts. Host X then decays its old reputation values and incorporates all this information to calculate the reputation value of host Y. Note that if the time calculated at the beginning by host X does not exceed T_A, X directly decays its old reputation values, skipping all the steps in between.

After the reputation of Y is calculated, a decision has to be made by host X: if the reputation of Y is less than an absolute mistrust level (φ), Y will not be trusted and no interaction will take place; otherwise, if the reputation of Y is greater than an absolute trust level (θ), host X will trust and interact with host Y. However, if the reputation of host Y is between φ and θ, it is considered to be in the grey probabilistic region. In this case, host X will probabilistically decide whether to trust host Y or not.

After the interaction takes place, host X calculates the Result of Interaction (RI), an indicator of how well or badly host Y performed during this interaction. Finally, host X calculates the new decay factor (τ) for host Y based on the difference in Y's reputation values between successive interactions. Figure 1 illustrates the flow of events in PATROL.

Figure 1 Flowchart of PATROL

3.2 Trust metrics

In PATROL, we differentiate between two types of trust that affect the final decision of a host on whether or not to interact with a target host. The first type is trust in the competence of a host, and the second is trust in the host's credibility in giving trusted advice (Dewan, 2004).

The first type of trust is in the competence of a host to perform a specific task up to the expectations of the initiator host. After every interaction, the initiator host evaluates the interaction results and saves the level of competency of the target host.

This saved value is used as the direct experience in the host's own trust decisions, and as a feedback reputation value to other hosts to be incorporated in their trust decisions.

The second type of trust is the confidence in a host's consistency and credibility in giving trusted advice and feedback. This type is based on several traits:

Activity. The activity of a host depends on its total number of interactions with other hosts. This trait reflects the activity level of a host and how reliable and up-to-date its information is.

Similarity. The similarity of a host with another host is based on the similarity of their reputation values. The closer their values, the closer their evaluation procedures, and thus the more credible one's recommendations will be to the other.

Popularity. A host's popularity is based on the number of interactions other hosts have had with this specific host. A host's popularity determines the degree to which this host is well liked and accepted among other hosts.

Cooperation. The cooperation of a host depends on the number of times this host interacted as compared to the total number of times it was asked for a service. This trait determines the willingness of a host to interact and provide services to others.

3.3 The computational model (Derbas et al., 2004)

In this section, we review the TRUMMAR model, on which PATROL is based. Consider again the situation where host X wants to interact with host Y in order to accomplish a certain task. Host X will not interact unless it is sure that host Y is trustworthy, or in other words, that host Y will not alter, corrupt, manipulate, delete, or delay the exchanged data. In order to find out whether host Y is trustworthy, host X calculates a reputation value for host Y as a combined result of previous reputation information calculated and stored by host X, and of inquiries about host Y's reputation from neighbours of host X, friends of host X, and other hosts willing to volunteer reputation information about host Y.

The first step in making decisions is to trust one's own information, i.e., the previous information available at host X about the reputation of the destination, host Y. Host X then goes further out to trusting other hosts on its own network that are under the same administrative control (neighbours). These hosts are assumed to be as vulnerable to an attack as host X. Friends, hosts from different networks that are under different but trusted administrative control, follow neighbours in this hierarchy of trust. Finally, stranger hosts that are willing to volunteer information come last in this hierarchy of trust.

We define the term interaction as a process that involves a source interacting with a desired destination to accomplish a certain task, and define the RI as the degree of success in accomplishing this task. The reputation value is calculated based on the following formula:

rep_{Y/X, before_int} = A·rep_{Y/X} + B·(Σ_i α_i·rep_{Y/X_i})/(Σ_i α_i) + C·(Σ_j β_j·rep_{Y/X_j})/(Σ_j β_j) + D·(Σ_l δ_l·rep_{Y/Z_l})/(Σ_l δ_l)    (1)

where:

rep_{Y/X, before_int} is the value being calculated now for the reputation of Y at X before their interaction since, as we will see later, reputation values change with time.

rep_{Y/X} is the last calculated reputation of Y with respect to X, modified to account for the time interval since the last time host X inquired about host Y's reputation.

(Σ_i α_i·rep_{Y/X_i})/(Σ_i α_i) is the weighted average of the reputations of Y as reported by the neighbours of X (the X_i).

(Σ_j β_j·rep_{Y/X_j})/(Σ_j β_j) is the weighted average of the reputations of Y as reported by the friends of X (the X_j).

(Σ_l δ_l·rep_{Y/Z_l})/(Σ_l δ_l) is the weighted average of the reputations of Y as reported by strangers (the Z_l) in the host space that volunteer reputation information about Y.

α_i, β_j and δ_l are weighting factors that depend on the reputation of the individual neighbours, friends and strangers in the host space, respectively. These numbers are calculated as shown in Section 3.5 and are based on the factors listed in Section 3.2.

A, B, C and D are weighting factors for, respectively, X's own previous reputation of Y, the reputation of Y with respect to neighbours of X, the reputation of Y with respect to friends of X, and the reputation of Y with respect to strangers in the host space. These factors are empirically determined constants that should satisfy the constraint A > B > C > D.

Note that reputation values are restricted to values between 0 and k, where k is the predefined maximum reputation value, such that 0 < rep_{Y/X} < k. To achieve this condition, the constraint A + B + C + D = 1 should be satisfied.

As time progresses, a host's reputation with respect to other hosts changes to an unknown state if little or no interaction occurs between them, meaning that reputation information is lost with time. It is important to have the reputation of a host (whether good or bad) converge to a neutral value as time passes with no interactions. Thus, a bad host does not remain bad for life, and a good host is not considered good forever. To achieve this, the reputation values are modified with time according to the following formula:

rep_{Y/Z}(t) = final_value + (initial_value(t_0) − final_value)·e^(−(t − t_0)/τ)    (2)

where:

the initial value is the reputation value at time t = t_0; the final value is a neutral value; t_0 is the last time host Z computed the reputation of host Y; and τ is the decay factor that determines how quickly or slowly reputation information becomes invalid. This value is calculated on a per-host basis, as shown in Section 3.6.

The decay equation relies on a single time constant to model the change of reputation information with time. The exponential form was chosen empirically; other decay modes, such as a saturating straight line, can be used as well.

The step that follows the calculation of the reputation of a certain host is to determine trust, by associating with the host the label trustworthy or untrustworthy. This is determined by introducing two threshold values, θ and φ, referred to as the absolute trust and absolute mistrust thresholds, respectively. Three cases are considered:

If rep_{Y/X} ≥ θ, Y can be trusted.
If rep_{Y/X} ≤ φ, Y cannot be trusted.
If φ < rep_{Y/X} < θ, Y can be considered either trustworthy or untrustworthy, depending on how paranoid or trusting host X is.

If host X decides that host Y is trustworthy, it will interact with Y in order to accomplish the specified task. When the interaction is over, X recalculates the reputation of Y using:

rep_{Y/X}(0) = ξ·rep_{Y/X, before_int} + (1 − ξ)·RI    (3)

RI is the result of the interaction as perceived by X. It is a number in the same range as a reputation value, and can be determined according to different criteria such as the time it takes to complete the specified task, the correctness and accuracy of the results, etc. ξ is a parameter, typically below 0.5, which gives more weight to the result of the interaction that just took place and less weight to the previous reputation.

We note that no matter how large the number of hosts in the system becomes, we can limit the number of hosts contacted for reputation information to a percentage of the total number of hosts. If the number of hosts is below a defined threshold, we request information from all hosts in the system. However, if it exceeds this threshold, we can select a percentage of the hosts in the system (e.g., 10%). This process can be taken a step further by defining an upper limit to the number of hosts, thus controlling the number of requests being issued in case the system grows to a very large size.
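To make the computational core concrete, the following minimal C++ sketch pulls together equations (1)-(3) and the threshold rule (C++ being the language of the simulation in Section 4). The type and function names, and the linear bias used for the grey region, are illustrative assumptions rather than the exact simulation code; the model leaves the probabilistic choice to each host.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Report { double weight, reputation; };  // e.g., (alpha_i, rep_{Y/X_i})

// Weighted average of one group's reports; per-group normalisation keeps each
// term in [0, k], so A + B + C + D = 1 keeps the total in [0, k].
double groupAverage(const std::vector<Report>& g) {
    double num = 0, den = 0;
    for (const auto& r : g) { num += r.weight * r.reputation; den += r.weight; }
    return den > 0 ? num / den : 0;
}

// Equation (1): combine own history with neighbours, friends and strangers.
double repBeforeInt(double ownRep, const std::vector<Report>& neighbours,
                    const std::vector<Report>& friends,
                    const std::vector<Report>& strangers,
                    double A, double B, double C, double D) {
    return A * ownRep + B * groupAverage(neighbours)
         + C * groupAverage(friends) + D * groupAverage(strangers);
}

// Equation (2): decay towards a neutral value with time constant tau.
double decayed(double initial, double neutral, double t, double t0, double tau) {
    return neutral + (initial - neutral) * std::exp(-(t - t0) / tau);
}

// Threshold rule: mistrust below phi, trust above theta, probabilistic between,
// biased towards the closer threshold (linear bias is our assumption).
bool decideToTrust(double rep, double phi, double theta, std::mt19937& rng) {
    if (rep <= phi) return false;
    if (rep >= theta) return true;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < (rep - phi) / (theta - phi);
}

// Equation (3): fold the result of interaction RI into the reputation.
double afterInteraction(double repBefore, double RI, double xi) {
    return xi * repBefore + (1.0 - xi) * RI;  // xi < 0.5 favours the fresh RI
}
```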

3.4 First Impression (FI)

Consider, as a special case, a host Z that is newly added to the system. In this case, the other hosts will not have interacted with it and thus will have no idea about its competence and trustworthiness. For this special case, hosts will subject host Z to an initial test period during which they send it nonessential test data with known results and expected times of completion. All services and tasks fulfilled by this host are considered unreliable during this period. The hosts interacting with host Z keep calculating Z's reputation values and checking its trustworthiness until the reputation information stabilises and the FI value is set. Note that this initial test period is host dependent, i.e., every host has its own specific period to test the trustworthiness of a newly added host. The duration of this period depends on how paranoid or trusting a host is. This way, newly added hosts cannot guess the duration of the test period, show good behaviour for its duration, and then misbehave after other hosts start to trust them.

If a certain host Q does not want to wait for this test period to complete, it can set a predefined random FI value for host Z. This random FI will depend on how trusting or paranoid host Q is. This aspect of PATROL is the same as in TRUMMAR and is consistent with Falcone's statement (Falcone et al., 2003) that interaction-based models must assume an initial default reputation, which may result in unfair losses to the trusting host. These losses are reduced in PATROL by subjecting new hosts to a preliminary test period, as explained above.

3.5 Weighting factors

In PATROL, every reputation value obtained from another host is multiplied by the corresponding weighting factor (α_i, β_j, or δ_l). These weighting factors represent the trust in the capability of a host to give a valid and dependable recommendation about other hosts. The weighting factors depend on the similarity (Sim), the activity (Act) and the popularity (Pop) of a certain host. The value of α_i, for example, is calculated as follows (exactly the same equation is used for the other weighting factors β_j and δ_l):

α_i = a·Sim + b·Act + c·Pop    (4)

where a, b and c are factors that give the relative importance of a specific parameter with respect to the others. These values are host specific and have to be used consistently in all the calculations of the weighting factors.

3.5.1 The similarity value

The similarity value determines the similarity of two hosts in their evaluation procedures and their reputation values. The more similar two hosts are, the more credible their recommendations will be with respect to each other. The similarity value between two hosts is calculated from the difference of their reputation information (represented as vectors) as follows:

Sim = 1 − (Σ_i (u_i − v_i)²) / (25·n), over all common hosts i    (5)

where u_i is the reputation value of host i in the initiator host's reputation vector, v_i is the reputation value of host i in the target host's reputation vector, and n is the total number of hosts appearing in the reputation vectors of both the initiator and target hosts.

Note that the factor 25 is used to normalise similarity, since the values of u_i and v_i can be between 0 and k (we use k = 5, so (u_i − v_i)² ≤ 25). Therefore, similarity assumes values between 0 and 1, with 1 being given to the most similar host.

3.5.2 The activity value

The activity value reflects the level of activity of a certain host in the past interval of time T_I. The more active a host is, the more up-to-date and accurate its reputation values are, from its direct experiences and from its received reputation values. Activity is calculated as the fraction of all interactions in the past time interval T_I that the host performed; however, to keep hosts from giving false information, we calculate the total number of interactions of a host from the other hosts' reputation vectors. In other words, to calculate the number of interactions done by host X, we add up the interactions coming from host X as recorded at every other host. The activity of a host X is then:

Act(X) = Int_from_X / Int_from_hosts    (6)

where Int_from_X is the sum of all the interactions done by host X in the past time interval T_I, as reported by other hosts, and Int_from_hosts is the sum of all the interactions done by all the hosts in the past time interval T_I. Note that activity assumes values between 0 and 1, with 1 being a limit value that can be approached when a host is interacting much more frequently than other hosts.

3.5.3 The popularity value

The popularity value is a measure of how much a host is liked and how much its services are asked for in the system. The popularity of a host is calculated as the fraction of all interactions that other hosts have done with this specific host (Int_with):

Pop(X) = Int_with_X / Int_with_hosts    (7)

where Int_with_X is the sum of all the interactions done with host X in the past time interval T_I, and Int_with_hosts is the sum of all the interactions done by all the hosts in the past time interval T_I. Note also that popularity can have values between 0 and 1, with 1 being a limit value that can be approached if a host is being interacted with much more frequently than other hosts.
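The weighting-factor calculation of equations (4)-(7) can be sketched as follows, assuming k = 5 (so each squared difference is at most 25) and the weights a = b = 0.4 and c = 0.2 used in Section 4. The map-based representation of reputation vectors and the function names are assumptions for illustration.

```cpp
#include <map>
#include <string>

using RepVector = std::map<std::string, double>;  // host id -> reputation value

// Equation (5): similarity over the hosts common to both reputation vectors,
// normalised by 25 = k^2 with k = 5.
double similarity(const RepVector& mine, const RepVector& theirs) {
    double sumSq = 0.0;
    int n = 0;
    for (const auto& [host, u] : mine) {
        auto it = theirs.find(host);
        if (it == theirs.end()) continue;  // only hosts present in both vectors
        double d = u - it->second;
        sumSq += d * d;
        ++n;
    }
    return n > 0 ? 1.0 - sumSq / (25.0 * n) : 0.0;
}

// Equations (6) and (7): fractions of the system-wide interaction counts in T_I.
double activity(long intFromX, long intFromAllHosts) {
    return intFromAllHosts > 0 ? double(intFromX) / intFromAllHosts : 0.0;
}
double popularity(long intWithX, long intWithAllHosts) {
    return intWithAllHosts > 0 ? double(intWithX) / intWithAllHosts : 0.0;
}

// Equation (4): the credibility weight given to a recommender.
double weightingFactor(double sim, double act, double pop,
                       double a = 0.4, double b = 0.4, double c = 0.2) {
    return a * sim + b * act + c * pop;
}
```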

3.6 Decay factor

Each host (including host X) keeps a decay factor for every host it is interacting with. The decay factor τ_Y is a dynamic value specific to a host Y. Initially, the decay factors are set to a nominal value, and then each τ_Y changes (between predefined maximum and minimum values, τ_max and τ_min) based on the consistency of host Y's reputation values. Every time host X interacts with host Y, X compares the reputation values of host Y after the preceding interaction and after the current interaction, denoted by Old and New, respectively. When Old and New are close to each other, i.e., the difference New − Old is less than a certain threshold (chosen empirically to be k/5 = 1) in absolute value, host Y is giving consistent results; thus X increases its remembrance time of Y's reputation values by increasing the decay factor τ_Y. On the other hand, whenever host Y gives fluctuating results and |New − Old| is larger than 1, host X penalises host Y for its inconsistency in reputation values: if the new reputation value is lower than the old one, host X increases τ_Y in order to remember the bad interaction for a longer time; if the new value is higher than the old one, host X decreases τ_Y in order not to hastily trust and remember the good interaction of this fluctuating host.

The decay factor in PATROL is calculated according to the following empirically chosen procedure:

calculate the difference D = New − Old    (8)
if |D| ≤ 1 then τ_Y = τ_Y · round(5 − 4·|D|)    (9)
if D > 1 then τ_Y = τ_Y / (2·round(D))    (10)
if D < −1 then τ_Y = τ_Y · round(|D|)    (11)
limit the new value of τ_Y to the range [τ_min, τ_max].

When |D| ≤ 1, the constants 5 and 4 keep the multiplicative factor between 1 and 5. In equation (11), since |D| > 1 but |D| < 5, the multiplicative factor is kept in the same range [1, 5]. For the case where D > 1, τ_Y is divided by a factor between 2 and 10. These constants depend on the specific choice of the values of k, τ_min and τ_max (equal to 5, 1000 cycles, and 10,000 cycles, respectively, in our implementation).
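This procedure transcribes directly into code. The sketch below uses k = 5, τ_min = 1000 and τ_max = 10,000 cycles as above; the function name is an illustrative assumption.

```cpp
#include <algorithm>
#include <cmath>

// Equations (8)-(11), with tau clamped to [tau_min, tau_max].
double updateDecayFactor(double tau, double newRep, double oldRep) {
    const double tauMin = 1000.0, tauMax = 10000.0;
    const double D = newRep - oldRep;                 // equation (8)
    if (std::fabs(D) <= 1.0)
        tau *= std::round(5.0 - 4.0 * std::fabs(D));  // (9): consistent host, factor in [1, 5]
    else if (D > 1.0)
        tau /= 2.0 * std::round(D);                   // (10): sudden improvement, forget faster
    else                                              // D < -1
        tau *= std::round(std::fabs(D));              // (11): sudden drop, remember longer
    return std::clamp(tau, tauMin, tauMax);
}
```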

3.7 Gathering/saving reputation values

Each host maintains a table similar to the one shown in Table 2. The table contains the following information: a unique host identifier, which may be an IP address, a URL, etc.; the status of each host as a neighbour, a friend, or a stranger; the calculated reputation value of each host; the calculated weighting factors α_i, β_j, δ_l; the decay factors τ_i; the number of interactions the host has done with every other host in the last time interval T_I (this time interval is common to all hosts in the system); the number of interactions every other host has done with this host in the last time interval T_I; and the cumulative number of attempts the host made to interact with every other host in the last time interval T_I. This last number is needed to calculate the cooperation value.

Table 2 Reputation table (columns: Host ID; Status; Reputation value; Weighting factor; Decay factor (τ_i); Interactions with (Int_with); Interactions from (Int_from); Attempts (Att))

As mentioned in Section 3.3, every time a host X wants to interact with another host Y, it asks other hosts for their reputation vectors. The reputation vector is a subset of the reputation table, containing the identifier, the reputation values, and the number of interactions with and from a host during the last T_I. However, host X will not ask for reputation vectors more than once every time interval T_A (see Section 3.7.1). In addition, host X will only ask the hosts whose cooperation value (defined in Section 3.7.2) is above the cooperation threshold, according to the flowchart in Figure 2.

Figure 2 Hosts inferred based on their cooperation

If the number of attempts to host M in the last time interval T_I is below a certain threshold, the cooperation value is considered inaccurate and host M is queried anyway. However, when the number of attempts exceeds the threshold, host X has gathered enough information to calculate a valid cooperation value for host M: if the cooperation value is above the cooperation threshold, host M is queried; otherwise it is not.

Whenever asked for their reputation vectors, the hosts decay all their reputation values based on their specific τ_i and send their reputation vectors to host X. Host X then calculates the weighting factors, the reputation values, and the decay factors τ_i of all the queried hosts. Finally, host X checks the reputation value of host Y and decides whether to interact with it or not.

3.7.1 The T_A time interval

If host X wants to interact with another host within the next time interval T_A, then in order to reduce network traffic it will not gather reputation vectors from all the hosts again, because these vectors are already saved at host X with minimal changes. This time interval T_A is predefined at each host based on how up-to-date it needs its data to be. There is a trade-off between sending more information on the network and getting the most recent reputation information.
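The query-selection rule of Figure 2, combined with the cooperation value of equation (12) defined in Section 3.7.2 below, can be sketched as follows; the struct layout and threshold parameters are illustrative assumptions.

```cpp
#include <cstdint>

// Per-host statistics kept in the reputation table over the last T_I.
struct PeerStats {
    std::int64_t intWith = 0;   // interactions the host completed with us (Int_with)
    std::int64_t attempts = 0;  // times we asked it for a service (Att)
};

// Equation (12): cooperation as the fraction of answered attempts;
// 0.5 is the default when there is no history with that host.
double cooperation(const PeerStats& s) {
    return s.attempts > 0 ? double(s.intWith) / double(s.attempts) : 0.5;
}

// Figure 2: query the host if its cooperation estimate is still too noisy
// (few attempts), or if it has proven cooperative enough.
bool shouldQuery(const PeerStats& s, std::int64_t attemptThreshold,
                 double coopThreshold) {
    if (s.attempts < attemptThreshold) return true;  // not enough data: query anyway
    return cooperation(s) >= coopThreshold;          // only query cooperative hosts
}
```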

3.7.2 The cooperation value

The cooperation value reflects the willingness of a host to cooperate and give services to other hosts. This value depends on the number of times a host responded to an interaction relative to the total number of interactions it was asked for in the last T_I (referred to as Att in equation (12)). The cooperation of a host Y with respect to host X is calculated from direct experiences as follows:

Co(Y/X) = Int_with_Y / Att_Y    (12)

Cooperation can take values between 0 and 1, with 1 being most cooperative. The default value is 0.5 when a host has never attempted to interact with that specific host. The cooperation value decreases the possibility of hosts ignoring inquiries about reputation vectors; thus cooperation helps save network bandwidth and the time a host would otherwise wait for an uncooperative host.

4 Simulation verification of PATROL

4.1 Simulation setup

In order to evaluate our trust model and prove its effectiveness and reliability in distributed computing, we simulated a network of ten randomly interacting hosts (two hosts at a time) using Microsoft Visual C++. Only one interaction can occur during any one cycle. The reputation value can range from 0 to k = 5, with values below φ = 1 considered bad interactions and values above θ = 4 considered good interactions. For reputation values falling in the probabilistic region between φ and θ, the choice, which is affected by how trusting or paranoid host X is, is based on a probabilistic approach where the decision to trust or not is biased by how close the reputation value is to θ or φ, respectively.

Hosts X and Y are set as bad hosts, giving results of interactions randomly distributed between 0 and φ, while all other hosts are good hosts, giving results of interactions between θ and k. The constants of our model are shown in Table 3. Note that in the simulation, all hosts are considered neighbours under the same administrative domain; thus there is no need for the weighting factors C and D. We used a neutral FI value indicating no knowledge about the other hosts (FI = 2.5) and an initial decay factor τ_0 = 5000 cycles for all hosts. In addition, we set a = b = 0.4 and c = 0.2 to give more weight to similarity and activity at the expense of popularity.

Table 3 Constant values in the simulation (parameters: A, B, θ, φ, FI, ξ, a, b, c, τ_0, τ_min, τ_max; the values given in the text are θ = 4, φ = 1, FI = 2.5, a = b = 0.4, c = 0.2, τ_0 = 5000 cycles, τ_min = 1000 cycles and τ_max = 10,000 cycles)

4.2 General results

We ran the simulation for 20,000 cycles and set the activity level to be relatively high in the first 10,000 cycles and low in the last 10,000 cycles.

The simulation results show that the reputation values of good hosts quickly increase and settle in the absolute trust region (above θ). However, as the activity level decreases in the last 10,000 cycles, their reputation values start decaying towards the value of 2.5. Bad hosts experience the opposite effect: their reputation values decrease below φ, and thus no interactions take place with them until their reputation decays back into the probabilistic trust region above φ. In this region, a host M may decide to interact with the bad host X; however, host X consistently provides bad service, causing its reputation value at host M to go back below φ. In the last 10,000 cycles, as the activity level drops, the reputation values of bad hosts also tend to increase towards the neutral value of 2.5, thus giving a chance for a bad host to prove itself worthy of trust.

4.3 Decay factor variations

Another aspect that we evaluated using simulation was the variation of the decay factor. As mentioned earlier, the decay factor depends on the difference between the new reputation value after the current interaction and the old reputation value after the previous interaction (without being decayed). When a host is considered not trustworthy, no interaction occurs and thus the decay factor does not change.

For this experiment, we changed the characteristics of three hosts, namely T, Q and S. Host T is usually a good host, performing only 10% bad interactions, whereas Host Q is usually a bad host, performing good interactions 10% of the time. Host S is an inconsistent host giving fluctuating interaction results, i.e., 50% good and 50% bad.

For the first case, as shown in Figures 3 and 4, Host T initially made a very good interaction and received a high reputation value. As a result, the decay factor of T decreased from 5000 cycles (the initial value) to 1250 cycles, because the reputation value of Host T increased from the initial value of 2.5 to around 4. T then continued making good interactions, and the decay factor increased to 3750 and then to the maximum of 10,000 cycles, where it settles for as long as Host T keeps performing good interactions. At cycle 6087, Host T performs a bad interaction that decreases its reputation value to around 0.8. With the decay factor of Host T at the maximum value of 10,000 cycles, Host M punishes Host T for its bad interaction and keeps its decay factor at 10,000 cycles.

Figure 3 Reputation of a usually good host (after interactions)

Figure 4 The decay factor variations of a usually good host

Subsequently, Host T directly returns to its normal good behaviour and starts receiving high reputation values again. At first, Host M questions the stability of Host T and decreases its decay factor to around 1666 cycles. Then, as Host T maintains stable high reputation values, Host M regains its trust in Host T and keeps increasing its decay factor until it reaches the maximum again.

In the second case, shown in Figures 5 and 6, Host Q starts off by performing a bad interaction that decreases its reputation value into the absolute mistrust region and increases its decay factor from 5000 to 10,000 cycles. At cycle 3986, Host Q performs a good interaction that raises its reputation value to around 3.6. Here Host M is sceptical and questions the intentions of Host Q, and thus decreases its decay factor to around 1666 cycles. However, Host Q returns to its bad interactions, and Host M increases its decay factor to the maximum value again. At cycle 11214, Host Q performs a very good interaction again, raising its reputation value to around 4.1. Again, Host M decreases the decay factor of Host Q, this time to 1250 cycles. As a result, at the next attempt to interact with Q, at cycle 12684, the reputation value of Q has decayed quickly to around 2.5 and Q is considered not trustworthy. Finally, Host Q returns to its bad nature and its decay factor starts to increase again.

Figure 5 Reputation of a usually bad host (after interactions)

Figure 6 The decay factor variations of a usually bad host

In the third case, shown in Figures 7 and 8, Host S alternately gives good and bad interactions. Host M increases the decay factor when the interaction is bad and decreases it when the interaction is good, as Host M cannot trust the stability and intentions of Host S.

Figure 7 Reputation of an alternating host (after interactions)

Figure 8 The decay factor variations of an alternating host

At the start, Host S makes a bad interaction with M, increasing its decay factor to the maximum value of 10,000 cycles.

Directly after that, Host S makes some good interactions, raising its reputation value to above 4. As a result, Host M decreases the decay factor of S, then raises it back to the maximum, and benefits from the good services of Host S. At cycle 3275, Host S performs a bad interaction, decreasing its reputation value to around 0.8. As a result, Host M keeps the decay factor of S at the maximum and does not interact with it on the next attempt. On the following attempt, Host S performs a good interaction, causing Host M to decrease its decay factor to around 1666 cycles, thus decaying the reputation value of Host S quickly. When Host S then performs a bad interaction, Host M increases the decay factor to 5000 cycles and stops interacting with Host S for about 3700 cycles, until its reputation value moves up (decays) out of the absolute mistrust region. Similar observations can be made in the following cycles between Host S and Host M.

4.4 Similarity, activity and popularity

In this part, we evaluated the effect of similarity, activity and popularity on the weighting factors α_i, β_j and δ_l. We changed some of the parameters of one of the hosts (host N) and studied their effects on N's weighting factor with respect to host M, comparing the results with those of host O with respect to M.

We first study the effect of similarity. We let host N give interaction results between 3 and 4 for good interactions and between 1 and 2 for bad ones, as opposed to the rest of the hosts, which give good values between 4 and 5 and bad values between 0 and 1. This experiment accounts for the case where different hosts have different evaluation procedures. We noticed that host M likes host O more than host N, and this was clear from the average similarity value that M gives to each: N received an average similarity value of around 0.85, while O received a value higher than 0.9. This difference is reflected in the weighting factor α_i that M assigns to hosts N and O. As shown in Figure 9, the average α that M assigned to N was 0.4, while that assigned to O was around 0.43, which means that M has more trust in the credibility of O's recommendations.

Figure 9 Effect of RI values

The second parameter we studied was the activity of host N. Here we decreased N's activity by decreasing its interactions, in order to account for the case where a host is not as active as the others and thus does not have the most recent reputation values.

In this case, host O had an activity value slightly higher than 0.1, which is the average value for 10 hosts interacting uniformly. Host N, however, had a much lower activity value that never exceeded 0.007, reflecting the much smaller number of interactions that host N performs compared to the others. Again, this is clearly reflected in the weighting factor α_i that M assigned to N and O. From Figure 10, we can see that M has a higher trust in O's recommendations, reflected in an average α value of 0.42, as opposed to a lower value for N. In this case, M is giving a higher weight to the more active host, which will have more recent and up-to-date reputation information.

Figure 10 Effect of activity

In the third case, we decreased the popularity factor of host N by decreasing the probability of other hosts interacting with it. The more a host is interacted with, the more it is considered reliable and trustworthy. Host N had popularity values of at most 0.016, much less than the average popularity values of host O. This difference is also reflected in the weighting factor α_i, although to a lesser degree: we see from Figure 11 that M gives N an α value of around 0.4 and gives O a slightly higher average. This smaller effect of the popularity factor is due to the fact that popularity is multiplied by a smaller factor (c) than the a and b factors that multiply similarity and activity, respectively, in the calculation of the weighting factor α_i. Note that the decrease in the alpha value of O at the end is due to the decreased activity of the whole system in the last 5000 cycles.

Figure 11 Effect of popularity

4.5 Comprehensive case

Finally, we repeated the simulation with all the above parameters changed simultaneously in host N. Since host N has different evaluation criteria, is less sociable, and is less popular than the other hosts, host M gave it a weighting factor of less than 0.3, as compared to the higher value of more than 0.4 that M gave to the α of O (Figure 12). This difference in the weighting factor, reaching 0.2 at some instances, reflects the additional trust that host M has in the credibility of the recommendations of host O as compared to host N.

Figure 12 Combined effect of weighting factor parameters

To evaluate the effect of the dynamic α calculation as compared to a constant assignment of the α values, we ran the preceding simulation twice: first with the α values calculated as indicated above, then with constant α values of 0.5. We counted the number of times host M wanted to interact with one of the bad hosts (X or Y) while their reputation values were in the probabilistic region. This way, we can estimate how well host M is protected against malicious hosts. In the constant-α simulation, host M wanted to interact with hosts X or Y 36 times, out of which the bad host had a reputation value in the absolute mistrust region eight times. With the proposed model, better results are obtained: of the 35 times M wanted to interact with X or Y, the bad host had a reputation value in the absolute mistrust region ten times. Thus, with our model, we obtained an increase of 6.6 percentage points in definite mistrust decisions (8 out of 36 is 22% and 10 out of 35 is 28.6%) for every host.

To obtain more general results, we repeated the calculations for all the hosts wanting to interact with malicious hosts and got similar results. In the constant-α case, 26% of the times any host wanted to interact with X or Y (42 out of 164 times), the malicious hosts had reputation values in the absolute mistrust region. In the dynamic calculation case, a higher percentage was achieved: 31% of the times any host wanted to interact with X or Y (87 out of 282 times), the bad hosts were identified directly by reputation values below the absolute mistrust level. Thus, with the dynamic calculation, we were able to achieve better results in protecting hosts from interacting with malicious hosts. While these improvements may seem minor, they are beneficial second-order effects on the model's general performance.

4.6 Variations of T_A

In this section, we study the effect of varying the waiting time T_A, within which a host is not allowed to query other hosts again about their reputation values. To obtain clearer results, we decreased the interaction level of hosts to allow reputation values to decay before the time of the next query. This decrease causes a decrease in the final reputation value of the target host, i.e., its reputation value will barely reach the absolute trust region.

In this experiment, Host M tries to interact with Host N five times. In the second and third interactions, M considered N not trustworthy and did not interact with it. This is the reason for the downward slope at the second point of the plots in Figure 13.

Figure 13 Effect of T_A variation

The first time M wanted to interact with N, we notice that in the plots with a lower T_A value, M has a higher and more representative reputation value of N. As T_A increases, the reputation value of N tends towards the neutral value of 2.5. In the decay period between the second and fourth points, we notice that when T_A is equal to zero, there is no decrease in the reputation value of N, because Host M is being updated regularly by the other hosts in the system. As T_A increases, the decrease in the reputation value of N becomes more significant. After the fourth point, Host M considers Host N trustworthy again, which is clear from the increase in the reputation value of N in the graph. We notice that the plots of lower T_A are still closer to the truth than those with higher T_A.

It is clear from Figure 13 that the plots with lower T_A values are more stable and almost horizontal at the final value, unlike the fluctuations of the high-T_A plots around the true value. It is also noticeable that the plots for T_A = 5000 and above almost overlap; this is expected because at T_A = 5000, Host M is almost acting alone in the system, especially since this system is stable, i.e., there are no sudden changes in the personalities of hosts. So Host M depends on its own interactions more than on the rest of the system. At lower values of T_A, Host M gains much from other hosts in the system, as they may have acquired more accurate reputation values of Host N.

5 System overhead and security issues

5.1 System overhead

In order to estimate the communication overhead of PATROL, we evaluate the size of the data that travels over the network to exchange the reputation information. The vector contains the host identifier (IP address), the reputation values, and the number of interactions with and from a host during the last T_I. An IP address is four bytes long; the reputation values are in the range 0 to k = 5 and can be quantised to 256 levels, thus requiring a single byte; and two bytes are needed for each of the two interaction counts, to store a maximum of 65,536 interactions within the period T_I. The reputation vector entry for each host is therefore 9 bytes. With the overhead of packet headers, a total message size of around 64 bytes is needed. If we consider that every host inquires about all the other reputation vectors every T_A seconds, which is a worst-case scenario, the minimum allowed T_A for an overhead bandwidth utilisation of 100 kbps or 1 Mbps is as shown in Table 4.

Table 4 Allowed T_A for a given overhead (columns: number of hosts; minimum T_A in seconds for an overhead of 100 kbps; minimum T_A in seconds for an overhead of 1 Mbps)

5.2 Data acquisition

For the similarity value, a host X may want to increase its similarity with another host M. To do so, host X alters its reputation values to coincide with M's values, or it can simply return M's own vector to M. This way, M will see X as perfectly similar to it (a perfect match); however, X will not benefit from this attack since, effectively, M is only increasing the weighting factor of its own opinion and, in a way, discarding X's real opinions. This effect may become harmful as the number of hosts answering M with its own reputation vector increases, because the colluding hosts will be excluding M from the exchange of reputation in the host community.

As for activity and popularity, they are calculated using values reported by other hosts, and thus a host cannot improve its activity or popularity unless it is one of many colluding hosts. However, in PATROL, a host collects reputation vectors from a subset of the whole community; thus the effect of colluding hosts decreases. An additional improvement, which decreases the effect of altered data, may be to calculate activity and popularity based on the number of hosts interacted with, rather than the number of interactions.
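As a back-of-the-envelope check of the Section 5.1 figures, reading the 64-byte value as the size of one reputation-vector message, and taking the worst case as each of N hosts querying the other N − 1 hosts once per T_A, the minimum T_A for a target overhead bit rate can be computed as follows. This is an illustrative sketch under those assumptions, not the exact accounting behind Table 4.

```cpp
#include <cstdio>
#include <initializer_list>

// Minimum T_A (seconds) so that N*(N-1) messages of 64 bytes per T_A
// stay within the target overhead bit rate.
double minTA(int numHosts, double targetBitsPerSecond) {
    const double messageBits = 64.0 * 8.0;  // 64-byte message incl. packet headers
    double bitsPerRound = double(numHosts) * (numHosts - 1) * messageBits;
    return bitsPerRound / targetBitsPerSecond;
}

int main() {
    for (int n : {10, 100, 1000})
        std::printf("N=%4d: T_A >= %10.2f s at 100 kbps, %10.2f s at 1 Mbps\n",
                    n, minTA(n, 100e3), minTA(n, 1e6));
    return 0;
}
```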

5.3 System failure

We test the effect of dishonest hosts on the system in order to find the limit at which PATROL fails. We repeat the simulation of 10 hosts, where X and Y are bad hosts. The reputation of host Y with respect to host M is calculated as the number of dishonest hosts increases. As shown in Figure 14, if all hosts are honest, rep_{Y/M} is in the absolute mistrust region, where it should be. However, as the number of dishonest hosts increases, the average rep_{Y/M} starts increasing until it reaches an average of around 4, with peaks to almost 4.5, when all eight queried hosts are lying to host M about Y's reputation. Even though rep_{Y/M} is mostly in the probabilistic region, we consider the system to have failed if the reputation value exceeds 2.5. The system therefore fails when the number of dishonest hosts is more than four out of eight. Consequently, the system can endure 50% dishonest hosts before it fails.

Figure 14 System failure

Table 5 shows the percentage of interactions that host M performs with Y, after considering Y to be trustworthy, out of the total number of attempts. When all the queried hosts in the system are honest, M interacts with Y in almost 23% of the attempts. As the number of dishonest hosts in the system increases, this percentage also increases until it reaches almost 80%. One can see from the table that when the number of dishonest hosts exceeds four, the probability of M interacting with Y exceeds 60%; thus we consider that the system has failed. Even with four dishonest hosts (50% of the queried hosts), the system is almost at the edge of failure, because M interacted with Y almost 46% of the time.

Table 5 Percentage of interactions from M to Y (columns: number of dishonest hosts; attempts; interactions; percent interactions)

6 Conclusions

We presented in this paper PATROL, a new model for reputation-based trust that is general and comprehensive and can be used in any distributed infrastructure. The model is unique in integrating various concepts that are important to the calculation of reputation values and the corresponding trust decisions. Among the incorporated concepts are reputation values, direct experiences, trust in the credibility of hosts to give recommendations, dynamic decay of information with time, first impressions, similarity, popularity, activity and cooperation between hosts.


More information

A Systematic Approach to Performance Evaluation

A Systematic Approach to Performance Evaluation A Systematic Approach to Performance evaluation is the process of determining how well an existing or future computer system meets a set of alternative performance objectives. Arbitrarily selecting performance

More information

Choosing Trust Models for Different E-Marketplace Environments 1

Choosing Trust Models for Different E-Marketplace Environments 1 Choosing Trust Models for Different E-Marketplace Environments 1 Athirai Aravazhi Irissappane, Phd student, IMI & SCE, NTU Supervisor: Assistant Professor Jie Zhang Co-Supervisor: Professor Nadia Magnenat

More information

Determining the Effectiveness of Specialized Bank Tellers

Determining the Effectiveness of Specialized Bank Tellers Proceedings of the 2009 Industrial Engineering Research Conference I. Dhillon, D. Hamilton, and B. Rumao, eds. Determining the Effectiveness of Specialized Bank Tellers Inder S. Dhillon, David C. Hamilton,

More information

THE RELIABILITY AND ACCURACY OF REMNANT LIFE PREDICTIONS IN HIGH PRESSURE STEAM PLANT

THE RELIABILITY AND ACCURACY OF REMNANT LIFE PREDICTIONS IN HIGH PRESSURE STEAM PLANT THE RELIABILITY AND ACCURACY OF REMNANT LIFE PREDICTIONS IN HIGH PRESSURE STEAM PLANT Ian Chambers Safety and Reliability Division, Mott MacDonald Studies have been carried out to show how failure probabilities

More information

Performance management using Influence Diagrams: The case of improving procurement

Performance management using Influence Diagrams: The case of improving procurement Performance management using Influence Diagrams: The case of improving procurement Mohammad Hassan Abolbashari, Elizabeth Chang and Omar Khadeer Hussain School of Business University of New South Wales

More information

On Optimal Tiered Structures for Network Service Bundles

On Optimal Tiered Structures for Network Service Bundles On Tiered Structures for Network Service Bundles Qian Lv, George N. Rouskas Department of Computer Science, North Carolina State University, Raleigh, NC 7695-86, USA Abstract Network operators offer a

More information

Lecture 18: Toy models of human interaction: use and abuse

Lecture 18: Toy models of human interaction: use and abuse Lecture 18: Toy models of human interaction: use and abuse David Aldous November 2, 2017 Network can refer to many different things. In ordinary language social network refers to Facebook-like activities,

More information

Justifying Simulation. Why use simulation? Accurate Depiction of Reality. Insightful system evaluations

Justifying Simulation. Why use simulation? Accurate Depiction of Reality. Insightful system evaluations Why use simulation? Accurate Depiction of Reality Anyone can perform a simple analysis manually. However, as the complexity of the analysis increases, so does the need to employ computer-based tools. While

More information

Increasing Wireless Revenue with Service Differentiation

Increasing Wireless Revenue with Service Differentiation Increasing Wireless Revenue with Service Differentiation SIAMAK AYANI and JEAN WALRAND Department of Electrical Engineering and Computer Sciences University of California at Berkeley, Berkeley, CA 94720,

More information

The Impact of Rumor Transmission on Product Pricing in BBV Weighted Networks

The Impact of Rumor Transmission on Product Pricing in BBV Weighted Networks Management Science and Engineering Vol. 11, No. 3, 2017, pp. 55-62 DOI:10.3968/9952 ISSN 1913-0341 [Print] ISSN 1913-035X [Online] www.cscanada.net www.cscanada.org The Impact of Rumor Transmission on

More information

Guided Study Program in System Dynamics System Dynamics in Education Project System Dynamics Group MIT Sloan School of Management 1

Guided Study Program in System Dynamics System Dynamics in Education Project System Dynamics Group MIT Sloan School of Management 1 Guided Study Program in System Dynamics System Dynamics in Education Project System Dynamics Group MIT Sloan School of Management 1 Solutions to Assignment #12 January 22, 1999 Reading Assignment: Please

More information

Guided Study Program in System Dynamics System Dynamics in Education Project System Dynamics Group MIT Sloan School of Management 1

Guided Study Program in System Dynamics System Dynamics in Education Project System Dynamics Group MIT Sloan School of Management 1 Guided Study Program in System Dynamics System Dynamics in Education Project System Dynamics Group MIT Sloan School of Management 1 Solutions to Assignment #12 January 22, 1999 Reading Assignment: Please

More information

Quality Control and Reliability Inspection and Sampling

Quality Control and Reliability Inspection and Sampling Quality Control and Reliability Inspection and Sampling Prepared by Dr. M. S. Memon Dept. of Industrial Engineering & Management Mehran UET, Jamshoro, Sindh, Pakistan 1 Chapter Objectives Introduction

More information

Supporting Peer-To-Peer Collaboration Through Trust

Supporting Peer-To-Peer Collaboration Through Trust Supporting Peer-To-Peer Collaboration Through Trust Nathan Griffiths and Shanghua Sun Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK nathan@dcs.warwick.ac.uk Abstract Distributed

More information

EFFICACY OF ROBUST REGRESSION APPLIED TO FRACTIONAL FACTORIAL TREATMENT STRUCTURES MICHAEL MCCANTS

EFFICACY OF ROBUST REGRESSION APPLIED TO FRACTIONAL FACTORIAL TREATMENT STRUCTURES MICHAEL MCCANTS EFFICACY OF ROBUST REGRESSION APPLIED TO FRACTIONAL FACTORIAL TREATMENT STRUCTURES by MICHAEL MCCANTS B.A., WINONA STATE UNIVERSITY, 2007 B.S., WINONA STATE UNIVERSITY, 2008 A THESIS submitted in partial

More information

Security and internet banking

Security and internet banking Security and internet banking how satisfied are users with internet banking security? Alfred Johansson Högskolan i Halmstad Abstract In this paper, the security among internet banking and the perception

More information

THE WORLD OF ORGANIZATION

THE WORLD OF ORGANIZATION 22 THE WORLD OF ORGANIZATION In today s world an individual alone can not achieve all the desired goals because any activity requires contributions from many persons. Therefore, people often get together

More information

Spreadsheets in Education (ejsie)

Spreadsheets in Education (ejsie) Spreadsheets in Education (ejsie) Volume 2, Issue 2 2005 Article 5 Forecasting with Excel: Suggestions for Managers Scott Nadler John F. Kros East Carolina University, nadlers@mail.ecu.edu East Carolina

More information

Passenger Batch Arrivals at Elevator Lobbies

Passenger Batch Arrivals at Elevator Lobbies Passenger Batch Arrivals at Elevator Lobbies Janne Sorsa, Juha-Matti Kuusinen and Marja-Liisa Siikonen KONE Corporation, Finland Key Words: Passenger arrivals, traffic analysis, simulation ABSTRACT A typical

More information

Limits of Software Reuse

Limits of Software Reuse Technical Note Issued: 07/2006 Limits of Software Reuse L. Holenderski Philips Research Eindhoven c Koninklijke Philips Electronics N.V. 2006 Authors address: L. Holenderski WDC3-044; leszek.holenderski@philips.com

More information

Analysing Clickstream Data: From Anomaly Detection to Visitor Profiling

Analysing Clickstream Data: From Anomaly Detection to Visitor Profiling Analysing Clickstream Data: From Anomaly Detection to Visitor Profiling Peter I. Hofgesang and Wojtek Kowalczyk Free University of Amsterdam, Department of Computer Science, Amsterdam, The Netherlands

More information

Analysis of Shear Wall Transfer Beam Structure LEI KA HOU

Analysis of Shear Wall Transfer Beam Structure LEI KA HOU Analysis of Shear Wall Transfer Beam Structure by LEI KA HOU Final Year Project report submitted in partial fulfillment of the requirement of the Degree of Bachelor of Science in Civil Engineering 2013-2014

More information

A Learning Algorithm for Agents in Electronic Marketplaces

A Learning Algorithm for Agents in Electronic Marketplaces From: AAAI Technical Report WS-02-10. Compilation copyright 2002, AAAI (www.aaai.org). All rights reserved. A Learning Algorithm for Agents in Electronic Marketplaces Thomas Tran and Robin Cohen Department

More information

Optimization of the NAS Battery Control System

Optimization of the NAS Battery Control System Optimization of the NAS Battery Control System Background PG&E has purchased a 4MW, 28MWh sodium-sulfur (NAS) battery to be installed October, 2010 in San Jose at Hitachi headquarters. The site was chosen

More information

Efficiency and Robustness of Binary Online Feedback Mechanisms in Trading Environments with Moral Hazard

Efficiency and Robustness of Binary Online Feedback Mechanisms in Trading Environments with Moral Hazard Efficiency and Robustness of Binary Online Feedback Mechanisms in Trading Environments with Moral Hazard Chris Dellarocas MIT Sloan School of Management dell@mit.edu Introduction and Motivation Outline

More information

An Approach to Predicting Passenger Operation Performance from Commuter System Performance

An Approach to Predicting Passenger Operation Performance from Commuter System Performance An Approach to Predicting Passenger Operation Performance from Commuter System Performance Bo Chang, Ph. D SYSTRA New York, NY ABSTRACT In passenger operation, one often is concerned with on-time performance.

More information

Mixed-integer linear program for an optimal hybrid energy network topology Mazairac, L.A.J.; Salenbien, R.; de Vries, B.

Mixed-integer linear program for an optimal hybrid energy network topology Mazairac, L.A.J.; Salenbien, R.; de Vries, B. Mixed-integer linear program for an optimal hybrid energy network topology Mazairac, L.A.J.; Salenbien, R.; de Vries, B. Published in: Proceedings of the 4th International Conference on Renewable Energy

More information

A Network For Rewarding Truth Whitepaper Draft Version

A Network For Rewarding Truth Whitepaper Draft Version A Network For Rewarding Truth Whitepaper Draft Version TRUTHEUM NETWORK 3 Background 3 System Overview 4 Request For Truth 4 Voting 4 Reward After 5 No Revealer 5 Potential Attacks 5 Voting Attack 5 Non-voting

More information

Since the introduction of the 1985

Since the introduction of the 1985 IComparing the 1985 HCM and the ICU Methodologies BY JOHN F GOULD Since the introduction of the 1985 High way Capacity Manual, there has been much discussion concerning its application. This is especially

More information

The State of the Art in Trust and Reputation Systems: A Framework for Comparison

The State of the Art in Trust and Reputation Systems: A Framework for Comparison The State of the Art in Trust and Reputation Systems: A Framework for Comparison Zeinab Noorian 1 and 2 University of New Brunswick, Faculty of Computer Science 1 z.noorian@unb.ca, 2 ulieru@unb.ca Abstract

More information

WEB SERVICES COMPOSING BY MULTIAGENT NEGOTIATION

WEB SERVICES COMPOSING BY MULTIAGENT NEGOTIATION Jrl Syst Sci & Complexity (2008) 21: 597 608 WEB SERVICES COMPOSING BY MULTIAGENT NEGOTIATION Jian TANG Liwei ZHENG Zhi JIN Received: 25 January 2008 / Revised: 10 September 2008 c 2008 Springer Science

More information

Usage-sensitive Pricing in Multi-service Networks

Usage-sensitive Pricing in Multi-service Networks Usage-sensitive Pricing in Multi-service Networks Yuhong Liu and David W. Petr Aug.2, 2000 1 Contents Pricing schemes for multi-service networks Influence of Pricing on PVC vs. SVC Service Preference Service

More information

Forecasting Introduction Version 1.7

Forecasting Introduction Version 1.7 Forecasting Introduction Version 1.7 Dr. Ron Tibben-Lembke Sept. 3, 2006 This introduction will cover basic forecasting methods, how to set the parameters of those methods, and how to measure forecast

More information

Approaches to Communication Organization Within Cyber-Physical Systems

Approaches to Communication Organization Within Cyber-Physical Systems Approaches to Communication Organization Within Cyber-Physical Systems Ilya I. Viksnin,Nikita D. Schcepin, Roman O. Patrikeev, Andrei A. Shlykov, Igor I. Komarov Saint-Petersburg National Research University

More information

Research on Optimization of Delivery Route of Online Orders

Research on Optimization of Delivery Route of Online Orders Frontiers in Management Research, Vol. 2, No. 3, July 2018 https://dx.doi.org/10.22606/fmr.2018.23002 75 Research on Optimization of Delivery Route of Online Orders Zhao Qingju School of Information Beijing

More information

Optimizing appointment driven systems via IPA

Optimizing appointment driven systems via IPA Optimizing appointment driven systems via IPA with applications to health care systems BMI Paper Aschwin Parmessar VU University Amsterdam Faculty of Sciences De Boelelaan 1081a 1081 HV Amsterdam September

More information

Ant Colony Optimisation

Ant Colony Optimisation Ant Colony Optimisation Alexander Mathews, Angeline Honggowarsito & Perry Brown 1 Image Source: http://baynature.org/articles/the-ants-go-marching-one-by-one/ Contents Introduction to Ant Colony Optimisation

More information

Business Intelligence, 4e (Sharda/Delen/Turban) Chapter 2 Descriptive Analytics I: Nature of Data, Statistical Modeling, and Visualization

Business Intelligence, 4e (Sharda/Delen/Turban) Chapter 2 Descriptive Analytics I: Nature of Data, Statistical Modeling, and Visualization Business Intelligence, 4e (Sharda/Delen/Turban) Chapter 2 Descriptive Analytics I: Nature of Data, Statistical Modeling, and Visualization 1) One of SiriusXM's challenges was tracking potential customers

More information

Consumer Referral in a Small World Network

Consumer Referral in a Small World Network Consumer Referral in a Small World Network Tackseung Jun 1 Beom Jun Kim 2 Jeong-Yoo Kim 3 August 8, 2004 1 Department of Economics, Kyung Hee University, 1 Hoegidong, Dongdaemunku, Seoul, 130-701, Korea.

More information

SOCIAL MEDIA MINING. Behavior Analytics

SOCIAL MEDIA MINING. Behavior Analytics SOCIAL MEDIA MINING Behavior Analytics Dear instructors/users of these slides: Please feel free to include these slides in your own material, or modify them as you see fit. If you decide to incorporate

More information

A LATENT SEGMENTATION MULTINOMIAL LOGIT APPROACH TO EXAMINE BICYCLE SHARING SYSTEM USERS DESTINATION PREFERENCES

A LATENT SEGMENTATION MULTINOMIAL LOGIT APPROACH TO EXAMINE BICYCLE SHARING SYSTEM USERS DESTINATION PREFERENCES A LATENT SEGMENTATION MULTINOMIAL LOGIT APPROACH TO EXAMINE BICYCLE SHARING SYSTEM USERS DESTINATION PREFERENCES Ahmadreza Faghih-Imani, McGill University Naveen Eluru, University of Central Florida Introduction

More information

Bidding for Sponsored Link Advertisements at Internet

Bidding for Sponsored Link Advertisements at Internet Bidding for Sponsored Link Advertisements at Internet Search Engines Benjamin Edelman Portions with Michael Ostrovsky and Michael Schwarz Industrial Organization Student Seminar September 2006 Project

More information

Weighing the Benefits of a Paperless Office

Weighing the Benefits of a Paperless Office Weighing the Benefits of a Paperless Office The complete decision-making guide for real-estate business owners ramu@paperlesspipeline.com www.paperlesspipeline.com page 1 of 11 Weighing the Benefits of

More information

TOLERANCE ALLOCATION OF MECHANICAL ASSEMBLIES USING PARTICLE SWARM OPTIMIZATION

TOLERANCE ALLOCATION OF MECHANICAL ASSEMBLIES USING PARTICLE SWARM OPTIMIZATION 115 Chapter 6 TOLERANCE ALLOCATION OF MECHANICAL ASSEMBLIES USING PARTICLE SWARM OPTIMIZATION This chapter discusses the applicability of another evolutionary algorithm, named particle swarm optimization

More information

The Robustness Of Non-Decreasing Dynamic Pricing Laura van der Bijl Research Paper Business analytics Aug-Oct 2017 Prof.

The Robustness Of Non-Decreasing Dynamic Pricing Laura van der Bijl Research Paper Business analytics Aug-Oct 2017 Prof. The Robustness Of Non-Decreasing Dynamic Pricing Laura van der Bijl Research Paper Business Analytics Aug-Oct 2017 The Robustness Of Non-Decreasing Dynamic Pricing Laura van der Bijl Laura.vanderbijl5@gmail.com

More information

Introduction to E-Business I (E-Bay)

Introduction to E-Business I (E-Bay) Introduction to E-Business I (E-Bay) e-bay is The worlds online market place it is an inexpensive and excellent site that allows almost anyone to begin a small online e-business. Whether you are Buying

More information

SENSOR NETWORK SERVICE INFRASTRUCTURE FOR REAL- TIME BUSINESS INTELLIGENCE

SENSOR NETWORK SERVICE INFRASTRUCTURE FOR REAL- TIME BUSINESS INTELLIGENCE SENSOR NETWORK SERVICE INFRASTRUCTURE FOR REAL- TIME BUSINESS INTELLIGENCE A. Musa and Y. Yusuf Institute of Logistics and Operations Management University of Central Lancashire, Preston 2 Our interest

More information

An Empirical Study on Customers Satisfaction of Third-Party Logistics Services (3PLS)

An Empirical Study on Customers Satisfaction of Third-Party Logistics Services (3PLS) International Conference on Education, Management and Computing Technology (ICEMCT 2015) An Empirical Study on Customers Satisfaction of Third-Party Logistics Services (3PLS) YU LIU International Business

More information

A Dynamics for Advertising on Networks. Atefeh Mohammadi Samane Malmir. Spring 1397

A Dynamics for Advertising on Networks. Atefeh Mohammadi Samane Malmir. Spring 1397 A Dynamics for Advertising on Networks Atefeh Mohammadi Samane Malmir Spring 1397 Outline Introduction Related work Contribution Model Theoretical Result Empirical Result Conclusion What is the problem?

More information

DEVELOPMENT OF A NEURAL NETWORK MATHEMATICAL MODEL FOR DEMAND FORECASTING IN FLUCTUATING MARKETS

DEVELOPMENT OF A NEURAL NETWORK MATHEMATICAL MODEL FOR DEMAND FORECASTING IN FLUCTUATING MARKETS Proceedings of the 11 th International Conference on Manufacturing Research (ICMR2013), Cranfield University, UK, 19th 20th September 2013, pp 163-168 DEVELOPMENT OF A NEURAL NETWORK MATHEMATICAL MODEL

More information

P2P reputation management: Probabilistic estimation vs. social networks

P2P reputation management: Probabilistic estimation vs. social networks Computer Networks 50 (2006) 485 500 www.elsevier.com/locate/comnet P2P reputation management: Probabilistic estimation vs. social networks Zoran Despotovic *, Karl Aberer Ecole Polytechnique Fédérale de

More information

Training Transfer. What we know about what works and what doesn t? Dr Shaun Ridley, Deputy Executive Director

Training Transfer. What we know about what works and what doesn t? Dr Shaun Ridley, Deputy Executive Director Training Transfer What we know about what works and what doesn t? Dr Shaun Ridley, Deputy Executive Director Australian Institute of Management - Western Australia Copyright AIM WA 2011 RETURN ON INVESTMENT

More information

A RFID Explicit Tag Estimation Scheme for Dynamic Framed- Slot ALOHA Anti-Collision

A RFID Explicit Tag Estimation Scheme for Dynamic Framed- Slot ALOHA Anti-Collision A RFID Explicit Tag Estimation Scheme for Dynamic Framed- Slot ALOHA Anti-Collision Author Pupunwiwat, Prapassara, Stantic, Bela Published 2010 Conference Title 6th International Conference on Wireless

More information

Operations and Supply Chain Management Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Operations and Supply Chain Management Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Operations and Supply Chain Management Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture - 24 Sequencing and Scheduling - Assumptions, Objectives and Shop

More information

Simulating a Real-World Taxi Cab Company using a Multi Agent-Based Model

Simulating a Real-World Taxi Cab Company using a Multi Agent-Based Model LIMY Simulating a Real-World Taxi Cab Company using a Multi Agent-Based Model Matz Johansson Bergström Department of Computer Science Göteborgs Universitet Email:matz.johansson@gmail.com or matz.johansson@chalmers.se

More information

Bayesian Network Based Trust Management

Bayesian Network Based Trust Management Bayesian Network Based Trust Management Yong Wang,2, Vinny Cahill, Elizabeth Gray, Colin Harris, and Lejian Liao 3 Distributed Systems Group, Department of Computer Science, Trinity College Dublin, Dublin

More information

There has been a lot of interest lately

There has been a lot of interest lately BENCHMARKING SALES FORECASTING PERFORMANCE MEASURES By Kenneth B. Kahn Although different companies use different measures to evaluate the performance of forecasts, mean absolute % error (MAPE) is the

More information

An Evolutionary Model for Constructing Robust Trust Networks

An Evolutionary Model for Constructing Robust Trust Networks An Evolutionary Model for Constructing Robust Trust Networks Siwei Jiang Jie Zhang Yew-Soon Ong School of Computer Engineering Nanyang Technological University, Singapore, 639798 {sjiang, zhangj, asysong}@ntu.edu.sg

More information

Business Intelligence, 4e (Sharda/Delen/Turban) Chapter 2 Descriptive Analytics I: Nature of Data, Statistical Modeling, and Visualization

Business Intelligence, 4e (Sharda/Delen/Turban) Chapter 2 Descriptive Analytics I: Nature of Data, Statistical Modeling, and Visualization Business Intelligence Analytics and Data Science A Managerial Perspective 4th Edition Sharda TEST BANK Full download at: https://testbankreal.com/download/business-intelligence-analytics-datascience-managerial-perspective-4th-edition-sharda-test-bank/

More information

CeDEx Discussion Paper Series ISSN Discussion Paper No Johannes Abeler, Juljana Calaki, Kai Andree and Christoph Basek June 2009

CeDEx Discussion Paper Series ISSN Discussion Paper No Johannes Abeler, Juljana Calaki, Kai Andree and Christoph Basek June 2009 Discussion Paper No. 2009 12 Johannes Abeler, Juljana Calaki, Kai Andree and Christoph Basek June 2009 The Power of Apology CeDEx Discussion Paper Series ISSN 1749 3293 The Centre for Decision Research

More information

Lab: Response Time Analysis using FpsCalc Course: Real-Time Systems Period: Autumn 2015

Lab: Response Time Analysis using FpsCalc Course: Real-Time Systems Period: Autumn 2015 Lab: Response Time Analysis using FpsCalc Course: Real-Time Systems Period: Autumn 2015 Lab Assistant name: Jakaria Abdullah email: jakaria.abdullah@it.uu.se room: 1235 Introduction The purpose of this

More information

THE RATIONAL METHOD FREQUENTLY USED, OFTEN MISUSED

THE RATIONAL METHOD FREQUENTLY USED, OFTEN MISUSED THE RATIONAL METHOD FREQUENTLY USED, OFTEN MISUSED Mark Pennington, Engineer, Pattle Delamore Partners Ltd, Tauranga ABSTRACT The Rational Method has been in use in some form or another at least since

More information

Bayesian-based Preference Prediction in Bilateral Multi-issue Negotiation between Intelligent Agents

Bayesian-based Preference Prediction in Bilateral Multi-issue Negotiation between Intelligent Agents Bayesian-based Preference Prediction in Bilateral Multi-issue Negotiation between Intelligent Agents Jihang Zhang, Fenghui Ren, Minjie Zhang School of Computer Science and Software Engineering, University

More information

ordering and gradual information disclosure

ordering and gradual information disclosure Enhancing comparison shopping agents through ordering and gradual information disclosure Chen Hajaj, Noam Hazon, David Sarne Auton Agent Multi-Agent Syst (2017) 31:696-714 Team Dishonest Agents Arna Ganguly,

More information

BEGINNER S GUIDE TO ISO 9001 : Quality Management System Requirements Explained

BEGINNER S GUIDE TO ISO 9001 : Quality Management System Requirements Explained BEGINNER S GUIDE TO ISO 9001 : 2015 Quality Management System Requirements Explained What is ISO 9001 : 2015? Why use it? ISO 9001 is an internationally recognised standard in quality. It is a guide to

More information

Using Application Response to Monitor Microsoft Outlook

Using Application Response to Monitor Microsoft Outlook Using Application Response to Monitor Microsoft Outlook Microsoft Outlook is one of the primary e-mail applications in use today. If your business depends on reliable and prompt e-mail service, you need

More information

Reward Backpropagation Prioritized Experience Replay

Reward Backpropagation Prioritized Experience Replay Yangxin Zhong 1 Borui Wang 1 Yuanfang Wang 1 Abstract Sample efficiency is an important topic in reinforcement learning. With limited data and experience, how can we converge to a good policy more quickly?

More information

Introduction to Artificial Intelligence. Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST

Introduction to Artificial Intelligence. Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST Introduction to Artificial Intelligence Prof. Inkyu Moon Dept. of Robotics Engineering, DGIST Chapter 9 Evolutionary Computation Introduction Intelligence can be defined as the capability of a system to

More information

Optimization of Click-Through Rate Prediction in the Yandex Search Engine

Optimization of Click-Through Rate Prediction in the Yandex Search Engine ISSN 5-155, Automatic Documentation and Mathematical Linguistics, 213, Vol. 47, No. 2, pp. 52 58. Allerton Press, Inc., 213. Original Russian Text K.E. Bauman, A.N. Kornetova, V.A. Topinskii, D.A. Khakimova,

More information

2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media,

2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising

More information

Contact: Version: 2.0 Date: March 2018

Contact: Version: 2.0 Date: March 2018 Survey Sampling Contact: andrew.ballingall@fife.gov.uk Version: 2.0 Date: March 2018 Sampling allows you to draw conclusions about a particular population by examining a part of it. When carrying out a

More information

Weka Evaluation: Assessing the performance

Weka Evaluation: Assessing the performance Weka Evaluation: Assessing the performance Lab3 (in- class): 21 NOV 2016, 13:00-15:00, CHOMSKY ACKNOWLEDGEMENTS: INFORMATION, EXAMPLES AND TASKS IN THIS LAB COME FROM SEVERAL WEB SOURCES. Learning objectives

More information

Product Design and Development Dr. Inderdeep Singh Department of Mechanical and Industrial Engineering Indian Institute of Technology, Roorkee

Product Design and Development Dr. Inderdeep Singh Department of Mechanical and Industrial Engineering Indian Institute of Technology, Roorkee Product Design and Development Dr. Inderdeep Singh Department of Mechanical and Industrial Engineering Indian Institute of Technology, Roorkee Lecture - 06 Value Engineering Concepts [FL] welcome to the

More information

Networked Life (CSE 112)

Networked Life (CSE 112) Networked Life (CSE 112) Prof. Michael Kearns Final Examination May 3, 2006 The final exam is closed-book; you should have no materials present other than the exam and a pen or pencil. NAME: PENN ID: Exam

More information

FAQ: Ethics, Data, and Strategic Planning

FAQ: Ethics, Data, and Strategic Planning Question 1: What are ethics, morals, and values, and how do they affect business decision making? Answer 1: Ethics refer to a code of conduct for an individual or a group. Morals are habits that people

More information

Integrated Mine Planner

Integrated Mine Planner Integrated Mine Planner Yuki Osada Lyndon While School of Computer Science and Software Engineering Luigi Barone CEED Client: SolveIT Software Abstract Planning is an important process in mining extraction

More information