Position-based Trust Update in Delegation Chains


Chris Burnett and Nir Oren
University of Aberdeen

Abstract. Trust mechanisms allow a trustor to identify the most trustworthy trustee with which to interact. Such interactions can take the form of task assignments, in which case the trustor can be seen to delegate the task to the trustee. This process of delegation can operate recursively, with a task being repeatedly sub-delegated until some agent acts upon it. Such delegation chains present a problem for current trust evaluation mechanisms, which reward or penalise a single agent, because responsibility for the outcome of a task must be shared among all members of the delegation chain. As a result, current approaches lead to agents being unfairly penalised (or rewarded) during trust update, adversely affecting the quality of the model's evaluations and, through this, future agent interactions. In this paper we investigate the effects of sub-delegation on a probabilistic trust model and propose a model of weighting trust updates based on shared responsibility. We evaluate this model in the context of a simulated multi-agent system and describe how different weighting strategies can affect probabilistic trust updates when sub-delegation is possible.

1 Introduction

Marsh's seminal thesis [8] identified the existence of an implicit trust relationship in multi-agent systems (MASs), and since then researchers have investigated mechanisms for computing, and acting upon, different trust levels between agents [10-12]. Such systems have consistently been shown to improve the overall utility of the MAS, reducing the potential harm untrustworthy agents can cause to the system.

Trust is central to delegation, whereby one agent (the trustor) requests that some other agent (the trustee) execute a task on the first agent's behalf. Most existing trust mechanisms assume that any interaction is inherently binary, involving only the trustor and trustee, and ignore the possibility of a delegation chain. Such delegation chains form when a task is delegated from one agent to another, who in turn delegates the task to a third agent, and so on. Delegation chains are ubiquitous in real-world complex systems, such as supply chains and organisational hierarchies, and can also form in a variety of MAS environments. For example, work such as [4, 9] described how virtual organisations (VOs) can form in response to a delegation request which an individual agent could not fulfil. Such a VO might in turn outsource the task to another agent, who could form another VO, and this process could continue until some agent attempts to fulfil the original request. It may not be possible for the originating agent to know, with certainty, whether sub-delegation occurred, or to prevent it from occurring. It is therefore important to consider the possibility, and implications, of sub-delegation on trust

evaluation mechanisms. For example, if the agent at the end of a delegation chain fails to achieve the delegated task, all agents in the chain must share some of the blame. However, several intuitively obvious ways of apportioning this blame exist, and we seek to investigate the effects of each of these approaches on the system as a whole.

As a concrete example, consider the situation where Alice asks Bob to book a hotel at a conference. Bob, not being familiar with hotels, asks Charlie to perform the booking. Charlie delegates this request to Debbie, who books a bad hotel, upsetting Alice. Should Alice ever ask Bob to book a hotel for her again? Intuitively, Bob has done nothing wrong, as he did not know that Charlie would delegate the request to Debbie. The delegation therefore means that Alice's trust in Bob should be affected to a lesser degree than Alice's and Bob's trust in Charlie, which in turn should be affected to a lesser degree than Alice's, Bob's and Charlie's trust in Debbie.

The core question we seek to address in this paper is how the process of delegation (and sub-delegation) should affect trust measures. In seeking to answer this question, our main contribution is to describe and evaluate a model for updating trust in the face of such delegation. We explore weighting schemes which discount the decrease (or increase) in the trust placed in an agent, based on the outcome of a delegated task and the agent's position in the delegation chain. In this way, we seek to capture the notion that, when sub-delegation is possible, the various agents involved in a delegation chain are responsible for the final task outcome to varying extents.

In the next section we formalise our model of the environment, describe our trust model, and detail some potential weighting functions. Section 3 evaluates our trust update functions, and Section 4 discusses our results in the light of existing work and proposes additional avenues for research. We conclude in Section 5.

2 Approach

In this section, we describe our delegation model and the trust model we employ to evaluate our approach.

2.1 Agents and Capabilities

We consider a system containing a set $A$ of $n$ agents $A_1, \ldots, A_n$. Each agent communicates with others through a set of reflexive and symmetric communication links $S \subseteq A \times A$. Individual agents are associated with a set of capabilities. Each task is associated with some capability, and in order to attempt task execution, an agent must have a matching capability within its capability set. We write $C$ to represent all the capabilities found within the system. Some agents can be more competent than others with regard to their capabilities. An agent therefore consists of a set of capability-competence pairs over all capabilities in the system: $A_i = \{(C_{i1}, P_{i1}), \ldots, (C_{in}, P_{in})\}$. Here $P_{ij} \in [0,1]$ represents the agent's competence, i.e. its probability of success in achieving tasks requiring capability $C_{ij}$. We write $cap(A_i) = \{C_{ij} \mid P_{ij} > 0\}$ to represent $A_i$'s capabilities.
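To make this concrete, the following Python sketch shows one possible encoding of agents and capability-based task attempts. The names (Agent, cap, attempt) and the example competence values are our own illustrative assumptions, not part of the paper's formal model.

```python
import random

# A hypothetical encoding of the model above: an agent is a mapping from
# capability identifiers to competence values P_ij in [0, 1].
Agent = dict[str, float]

def cap(agent: Agent) -> set[str]:
    """cap(A_i): the capabilities the agent has non-zero competence for."""
    return {c for c, p in agent.items() if p > 0}

def attempt(agent: Agent, capability: str) -> bool:
    """Attempt a task: succeeds with probability P_ij for that capability."""
    if capability not in cap(agent):
        raise ValueError("agent lacks the required capability")
    return random.random() < agent[capability]

a1: Agent = {"book_hotel": 0.8, "book_flight": 0.3}
print(cap(a1))                    # {'book_hotel', 'book_flight'}
print(attempt(a1, "book_hotel"))  # True roughly 80% of the time
```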

2.2 Trust Model

In this paper, we adapt Jøsang's widely used Subjective Logic based trust model [5, 6]. While many more complex trust models exist, the use of a relatively straightforward trust model simplifies our experiments and allows us to highlight our approach. Note that in this paper we do not include a reputational dimension, i.e. we do not allow agents to obtain third-party opinions through communication, leaving this for future work.

Opinion Representation. An opinion held by an agent $x$ about agent $y$ performing a task requiring competence $c$ is encoded in Subjective Logic through the following tuple:

$\omega^x_{y:c} = \langle b^x_{y:c}, d^x_{y:c}, u^x_{y:c}, a^x_{y:c} \rangle$ where $b^x_{y:c} + d^x_{y:c} + u^x_{y:c} = 1$, and $a^x_{y:c} \in [0,1]$.   (1)

Here, $b^x_{y:c}$, $d^x_{y:c}$ and $u^x_{y:c}$ respectively represent the degrees of belief, disbelief and uncertainty regarding the proposition that $y$ will successfully perform the task. In each case, the superscript identifies the opinion owner, and the subscript identifies the opinion target. The base rate parameter $a^x_{y:c}$ represents the a priori degree of confidence $x$ has in $y$ performing the task, before any evidence has been received.

Trust Update. Opinions are formed and updated based on observations of past performance, using two parameters $r^x_{y:c}$ and $s^x_{y:c}$, representing the number of positive and negative experiences, respectively, observed by $x$ about $y$ in tasks requiring competence $c$. With these two parameters, $x$'s opinion about $y$ regarding its competence with respect to $c$ is computed as follows:

$b^x_{y:c} = \frac{r^x_{y:c}}{r^x_{y:c} + s^x_{y:c} + 2}, \quad d^x_{y:c} = \frac{s^x_{y:c}}{r^x_{y:c} + s^x_{y:c} + 2}, \quad u^x_{y:c} = \frac{2}{r^x_{y:c} + s^x_{y:c} + 2}$   (2)

Trust Ratings. Given an opinion computed through Equation 2, a single-valued trust rating which can be used to rank and compare agents is obtained as follows:

$P(\omega^x_{y:c}) = b^x_{y:c} + a^x_{y:c} \, u^x_{y:c}$   (3)

We write $P(\omega^x_{y:c} \mid r^x_{y:c}, s^x_{y:c}, a^x_{y:c})$ to denote a rating that has been derived from the $r$ and $s$ evidence parameters and an a priori rating $a^x_{y:c}$ using Equations 2 and 3. When deciding whether to interact, agents select the candidate with the highest trust rating, provided that it is above some delegation threshold common to all agents. With such a decision model, agents with no experiences are initially indifferent between candidates, and will select one at random.
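Equations 1-3 are straightforward to implement. The following sketch (our own naming, assuming a default base rate of 0.5) derives an opinion and its single-valued rating from the evidence counts:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective Logic opinion derived from evidence counts (Eqs. 1-3)."""
    r: float = 0.0   # positive experiences
    s: float = 0.0   # negative experiences
    a: float = 0.5   # base rate (a priori confidence), assumed default

    @property
    def belief(self) -> float:
        return self.r / (self.r + self.s + 2)

    @property
    def disbelief(self) -> float:
        return self.s / (self.r + self.s + 2)

    @property
    def uncertainty(self) -> float:
        return 2 / (self.r + self.s + 2)

    def rating(self) -> float:
        """Expected trust rating P(w) = b + a*u (Eq. 3)."""
        return self.belief + self.a * self.uncertainty

op = Opinion(r=8, s=2, a=0.5)
print(op.rating())   # 8/12 + 0.5 * 2/12 = 0.75
```

Note that with no evidence (r = s = 0) the rating collapses to the base rate a, matching the a priori behaviour described above.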

2.3 Sub-delegation

An agent (the trustor) can request that another agent $A_i$ (the trustee) perform a specific task requiring some capability $c$ on its behalf. In such a situation, the trustee can act in one of three ways:

1. It can refuse to execute the request.
2. It can sub-delegate the request to another agent.
3. If $c \in cap(A_i)$, it can attempt to fulfil the request.

If the trustee sub-delegates the request (choice 2), this process repeats itself, with the trustee taking on the role of the trustor in the context of the sub-delegation, and the new agent becoming the trustee. The sub-delegation process can potentially recurse through all agents in the system. We trace the path of recursive sub-delegations using a data structure which we call a delegation chain. A delegation chain $D = (L, O)$ is defined as an ordered list of agents $L$ and an outcome $O$, where $O \in \{fail, success\}$ (e.g. $(\langle A_2, A_7, A_4 \rangle, fail)$). We refer to the $n$th element of $L$ as $L[n]$. A specific task cannot revisit an agent through the process of sub-delegation, and therefore a delegation chain cannot contain any cycles, i.e. if $D = (L, O)$ then there exist no $L[i], L[j]$ such that $i \neq j$ and $L[i] = L[j]$. Given a delegation chain $L = \langle A_1, \ldots, A_n \rangle$, we write $\langle L, A_i \rangle$ as an abbreviation for the longer chain $\langle A_1, \ldots, A_n, A_i \rangle$.

We associate each agent $A_i$ with a trust model $t_i : A \times C \to \mathbb{R}$ which implements the model described in Section 2.2. Given an agent and a capability, this trust model returns a real-valued trust rating.

Algorithm 1 describes the process by which an agent $A_i$ handles a request for task execution. The agent begins by associating a trust rating with every agent it can communicate with, placing these associations into a sorted list (lines 2-6). The agent then iterates over this list, attempting to delegate the task (the main for loop, lines 7-19). This process repeats until either no agent with a trust rating above the threshold can execute the task (lines 8-10), the task is executed by the agent itself (lines 11-14), or the task is successfully delegated (lines 15-18). Note that since our communication links are reflexive, an agent is able to delegate the task to itself, in which case it will execute the task.

Algorithm 1 Sub-delegation process.
Require: $A_i$, the current agent attempting to delegate the task
Require: $S$, the set of communication links
Require: $c$, the capability of the task to be delegated
Require: $L$, a delegation chain
1: function handleRequest($A_i$, $S$, $c$, $L$)
2:   Ags = []
3:   for all $A_j$ such that $(A_i, A_j) \in S$ do
4:     add $(t_i(A_j, c), A_j)$ to Ags
5:   end for
6:   sort Ags by trust
7:   for all $(t, A_j) \in$ Ags where $A_j \notin L$ do
8:     if $t <$ threshold then
9:       return (reject, $L$)
10:    end if
11:    if $A_j == A_i$ then
12:      $L' = \langle L, A_j \rangle$
13:      return (perform($A_j$), $L'$)
14:    end if
15:    (outcome, $L'$) = handleRequest($A_j$, $S$, $c$, $\langle L, A_i \rangle$)
16:    if outcome $\neq$ reject then
17:      return (outcome, $L'$)
18:    end if
19:  end for
20: end function
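For readers who prefer running code, the following is a minimal Python rendering of Algorithm 1. The REJECT sentinel, the fixed THRESHOLD value, and the function signatures are our own assumptions; the trust model and task execution are passed in as callables.

```python
from typing import Callable

REJECT = object()   # sentinel: no acceptable candidate was found (assumed)
THRESHOLD = 0.5     # common delegation threshold (assumed value)

def handle_request(a_i: str,
                   links: dict[str, set[str]],               # S, incl. self-links
                   c: str,                                   # required capability
                   chain: list[str],                         # L so far
                   trust: Callable[[str, str, str], float],  # t_i(A_j, c)
                   perform: Callable[[str, str], bool]):
    """Algorithm 1: try candidates in decreasing order of trust."""
    # Lines 2-6: rate all neighbours (links are reflexive, so a_i rates itself).
    candidates = sorted(((trust(a_i, a_j, c), a_j) for a_j in links[a_i]),
                        reverse=True)
    for t, a_j in candidates:          # main loop (lines 7-19)
        if a_j in chain:               # a task may not revisit an agent
            continue
        if t < THRESHOLD:              # best remaining candidate unacceptable
            return REJECT, chain       # (lines 8-10)
        if a_j == a_i:                 # perform the task itself (lines 11-14)
            return perform(a_i, c), chain + [a_i]
        outcome, chain2 = handle_request(a_j, links, c,      # recurse (line 15)
                                         chain + [a_i], trust, perform)
        if outcome is not REJECT:      # successfully delegated (lines 16-18)
            return outcome, chain2
    return REJECT, chain
```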

Agents are allowed to refuse to perform delegated tasks if they believe that they are not the most suitable agent. If agents were instead forced to either sub-delegate or perform, they could not be held entirely responsible for the eventual task outcome. For example, an agent Alice may receive a request and, being incapable of fulfilling the request directly, choose instead to sub-delegate the task to another agent. However, Alice may not know of any suitable candidates, and may then be forced to delegate the task to a randomly chosen agent Bob. That Bob subsequently fails the task cannot be considered wholly the responsibility of Alice; part of it lies with the preceding agent in the chain who chose Alice. By requiring that agents explicitly accept a (sub-)delegation, we ensure that all agents in the chain can be considered, to some degree, responsible for the final task outcome.

Example. Consider the example presented in Figure 1. Here, an agent A wants to delegate some task c to another agent. Using its trust model, A first tries agent F. However, F does not know of any potential candidates (including itself) that are trusted beyond F's delegation threshold, causing F to reject the task. Next, A tries agent B, who believes that C is best suited for the task, with a rating of 0.7, and so B accepts the task and sub-delegates it to C. In turn, C believes that D is best suited, with a trust rating of 0.74, and therefore takes c on, in turn sub-delegating it to D. Finally, D believes that it is itself the best agent, with a self-trust rating of 0.8, and so performs the task, yielding the delegation chain ⟨A, B, C, D⟩.

[Fig. 1: Example delegation chain. Agents A, B, C, D and F, each annotated with its best candidate and the corresponding trust rating.]
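Building on the Algorithm 1 sketch above, the Figure 1 walkthrough can be reproduced with a hand-built trust table. Ratings not given in the example (e.g. A's ratings of F and B) are assumed values chosen only to drive the narrated order of attempts.

```python
# Trust ratings from the example; every other pair defaults to 0.0.
ratings = {("A", "F"): 0.6, ("A", "B"): 0.55,
           ("B", "C"): 0.7, ("C", "D"): 0.74, ("D", "D"): 0.8}

links = {"A": {"A", "F", "B"}, "F": {"F", "A"}, "B": {"B", "A", "C"},
         "C": {"C", "B", "D"}, "D": {"D", "C"}}

trust = lambda owner, target, cap: ratings.get((owner, target), 0.0)
perform = lambda agent, cap: True   # assume D succeeds at the task

outcome, chain = handle_request("A", links, "book_hotel", [], trust, perform)
print(chain)   # ['A', 'B', 'C', 'D'] (F is tried first, but rejects)
```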

2.4 Weighting Functions

When updating probabilistic trust models after observing an outcome, existing trust models implicitly assume that responsibility for the outcome lies with a single individual. However, when sub-delegation occurs, this assumption can no longer be made. In such cases it is appropriate to update our trust in the various agents in a delegation chain to different degrees, to reflect the fact that a particular outcome should not reflect equally on all of the agents' responsibilities for this outcome, and therefore on their trustworthiness.

Using the model described in Section 2.2, we can achieve this distribution of responsibility by weighting the amount added to the $r$ and $s$ evidence parameters for a particular trustee, given the trustee's position in the delegation chain. Similar weightings are employed in existing trust models to discount trust based on evidence from possibly unreliable or deceptive sources [5, 13]. We introduce a weighting function $W : L \to 2^{\mathbb{R}}$, so that an agent $a_n \in L$ has an associated weight $w_n \in W(L)$. We require that $\sum_{i=1}^{|L|} w_i = 1$, so that the full trust update value of 1 is distributed among all agents in $L$ and applied to their evidence parameters. Given this weighting, we update $r^x_{y:c}$ and $s^x_{y:c}$ as follows:

$(r'^x_{y:c}, s'^x_{y:c}) = \begin{cases} (r^x_{y:c} + w_y, \; s^x_{y:c}) & \text{if } O = success \\ (r^x_{y:c}, \; s^x_{y:c} + w_y) & \text{otherwise} \end{cases}$   (4)

In the remainder of this section, we present a number of weighting functions that could be used to distribute responsibility in a delegation chain.

Uniform Weighting. This model divides the weight evenly among the agents in the chain, holding them all equally responsible for the task outcome, i.e. $w_i = 1/|L|$.

All-First Weighting. This function applies all weight to the first (i.e. the immediately subsequent) agent in the delegation chain; all other agents receive zero weight. This function models the case where sub-delegation is hidden: each trustor is not aware of any delegations beyond its own, nor of which agent actually performed the task, so trust update proceeds as if the subsequent agent performed the task. The agent who performs the task updates its belief in its own competence with a weight of 1.

All-Last Weighting. This function applies all weight to the agent that actually performed the task. Here, trustors do not consider intermediate trustees responsible for the behaviour of the last agent in the chain.

Increasing/Decreasing Weighting. These functions apply an increasing (or decreasing) proportion of weight along the delegation chain. For example, a decreasing function gives most weight to the first agent in the chain (i.e. the initial trustee) and decreasing weight thereafter, giving least weight to the agent who actually performed the task. For an agent $a_n \in L$, we compute the weight as $2n/(|L|^2 + |L|)$ for an increasing function, or $2(|L| - n + 1)/(|L|^2 + |L|)$ for a decreasing function, so that in both cases the weights sum to 1.

Full Weighting. This model simply applies a weight of 1 to every agent in the chain, regardless of position. It is the only weighting function which does not satisfy the requirement that all weights sum to 1, and therefore does not share responsibility among agents.
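The weighting schemes and the Equation 4 update are small enough to state directly. This sketch reuses the Opinion class from the earlier Section 2.2 sketch; the function names are ours, and positions are 1-indexed so that weights line up with chain order.

```python
def uniform(k: int) -> list[float]:
    """Uniform: w_i = 1/|L|."""
    return [1.0 / k] * k

def all_first(k: int) -> list[float]:
    return [1.0] + [0.0] * (k - 1)

def all_last(k: int) -> list[float]:
    return [0.0] * (k - 1) + [1.0]

def increasing(k: int) -> list[float]:
    """w_n = 2n / (|L|^2 + |L|), for n = 1..|L|."""
    return [2 * n / (k * k + k) for n in range(1, k + 1)]

def decreasing(k: int) -> list[float]:
    return list(reversed(increasing(k)))

def full(k: int) -> list[float]:
    return [1.0] * k

def update(op: Opinion, w: float, success: bool) -> None:
    """Equation 4: add weight w to r on success, to s on failure."""
    if success:
        op.r += w
    else:
        op.s += w

print(increasing(4))        # [0.1, 0.2, 0.3, 0.4]
print(sum(decreasing(4)))   # 1.0
```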

3 Evaluation

3.1 Experimental Framework

We evaluated this model within a simulated multi-agent system. Each experiment ran for 600 rounds and contained 70 performer agents, as well as 50 task owner agents, who initiate delegation. In each round, each task owner delegates some task to another performer agent in the society, following the decision process described in Section 2. Note that we do not allow task owners to perform tasks themselves unless no candidate can be found to accept the delegation; in this way, we ensure that task owners always take some action during a time step. Delegated-to agents subsequently follow the process described in Algorithm 1 until an agent opts to perform the task. At this point, the task is immediately performed and the outcome returned, along the chain, to the originating agent. If a task owner ultimately executes the task itself (because no other suitable agent was found), only its trust rating in itself is updated, minimally perturbing the system. Task owners receive a utility increase of 1 if the task is successful, and 0 otherwise.

Agents are generated in several broad groups, which we refer to as profiles. These profiles specify the agents' competencies. Competency in a particular task is represented as a pair of Gaussian parameters (i.e. the mean and variance of a normal distribution). In addition, agents possess an initial a priori belief about their own competence, $P(selfprior)$, which acts as an initial trust rating for the agent in itself with regard to all capabilities. The profiles used to evaluate the model are given in Table 1.

[Table 1: Agent Profiles. Each profile (Good, Under-estimator, Over-estimator, Bad, Random) is specified by a competence mean and standard deviation, a SelfPrior, and an agent count.]

It is not realistic to assume that agents within the society will be fully connected and visible to each other. Therefore, each agent is randomly assigned 5 communication links with other agents, as described in Section 2.1, creating a social graph structure. Sub-delegation is only allowed to take place along these links. This means that some agents will be more capable sub-delegators than others, thanks to being connected to more competent agents. Furthermore, this motivates sub-delegation, as a number of delegations may be required for a task to reach a suitable (i.e. capable) candidate. In order to simulate dynamism within the society, each agent is replaced at any given time point with some small, fixed probability.
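A simulation round can be sketched by combining the earlier pieces. This is a deliberate simplification of the framework described above, and all names are ours: here the task owner applies the weighted update to one opinion per in-chain agent, whereas in the paper each trustor along the chain maintains and updates its own model.

```python
from collections import defaultdict

def run_round(owners, links, trust_fn, perform, opinions, weight_fn, cap):
    """One round: every task owner delegates one task of type `cap`."""
    utility = 0
    for owner in owners:
        outcome, chain = handle_request(owner, links, cap, [],
                                        trust_fn, perform)
        if outcome is REJECT:
            continue                      # no candidate above threshold
        weights = weight_fn(len(chain))
        # Simplification: the owner updates one opinion per in-chain agent.
        for agent, w in zip(chain, weights):
            update(opinions[(owner, agent, cap)], w, success=bool(outcome))
        utility += 1 if outcome else 0
    return utility

opinions = defaultdict(Opinion)   # (owner, agent, capability) -> Opinion
trust_fn = lambda x, y, c: opinions[(x, y, c)].rating()   # plays the role of t_i
```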

In evaluating our approach, we are concerned with how weighting functions affect the performance of a MAS, and how different weighting functions influence the emergence of delegation chains. Intuitively, we would expect that weighting functions which apply more weight to agents earlier in the chain will result in shorter delegation chains over time, as task owners come to prefer delegating directly to task performers rather than to those who have previously sub-delegated. Conversely, we expect that models which apply more weight to agents later in the chain will yield longer delegation chains, as task owners come to prefer interactions with sub-delegators over task performers.

Hypothesis 1. Trust models employing weighted update functions based on shared responsibility (i.e. Uniform, Increasing, Decreasing) will perform better than those which do not involve a division of responsibility.

Hypothesis 2. Weighting models that apply greater weight to agents towards the end of the chain (i.e. AllLast, Increasing) will yield longer chains than those that apply greater weight to agents earlier in the chain (i.e. AllFirst, Decreasing).

3.2 Results

The results presented in this section are based on the average data from 20 identical runs, to avoid artefacts resulting from the stochastic nature of the simulation environment. In addition, we include a control condition which always assigns a weight of zero to all trust updates, effectively disabling the trust model. This allows us to observe the performance that agents would achieve without using a trust model.

[Fig. 2: Average task owner utility gain (average global utility gain against interaction step, one curve per weighting condition).]
[Fig. 3: Average chain length (average delegation chain length against interaction step, one curve per weighting condition).]

Trust Model Performance. Figure 2 shows the performance of the task owners when using different weighting functions. Performance is measured in terms of the average utility gain of all interacting task owners per time step. Two distinct classes of performance are immediately clear.

The AllLast, AllFirst and Increasing models all converge to around 0.76, while the Decreasing and AllFull models attain a higher performance level. This appears to support our first hypothesis, while suggesting that the Increasing function is a poor method of distributing responsibility. Interestingly, the performance of the Uniform model lies between these two groups. These results suggest a spectrum of performance levels, with the extreme approaches performing poorly. As expected, the control condition performs no better than chance, remaining at 0.45 throughout the experiment; to allow for clearer presentation, it has been omitted from Figure 2. In all cases, models begin to converge at around the hundredth interaction, which suggests that the choice of weighting model has little or no effect on the rate at which trust models improve.

We conducted a one-way ANOVA to compare the effect of the different weighting models on the global average task owner performance. The choice of weighting model was found to have a significant effect on performance. Post hoc comparisons using the Tukey HSD test indicated that the mean score for the Increasing condition was not significantly different from that of AllLast (p = 0.99). The AllFull and Decreasing conditions were also not significantly different from one another (p = 0.81).

Delegation Chain Length. Figure 3 shows how the choice of weighting function affects the average length of delegation chains. As before, three levels of performance are apparent. The poorest of these contains the AllFirst and Uniform conditions, which converge very slowly (relative to the other conditions) to a comparatively long average chain length. This is to be expected, as the AllFirst model will always motivate agents to (sub-)delegate to others who will themselves likely sub-delegate. The Decreasing, AllFull and AllLast models performed better, due to their degree of bias towards task performers, achieving a shorter average chain length. Finally, the Increasing model attained a very low average chain length. Interestingly, the Increasing model yields shorter chains than the AllLast model, even though the latter places all weight on the task performer and should therefore encourage task owners to bypass intermediaries altogether. In a social network environment, the AllLast model prevents task owners from learning which agents should be asked to ensure that tasks are effectively sub-delegated to known performers, and so still relies heavily on exploration. The Increasing model, on the other hand, applies most weight to the task performer, but also updates trust in the intermediaries. The control condition produces consistently long chains of around size 22. This supports our hypothesis that models which place more weight on later agents in the chain will outperform models which do not.

Again, a one-way ANOVA was conducted to compare the effect of the weighting models on chain length. Weighting model choice was found to have a significant effect on the average chain length. Post hoc comparisons using the Tukey HSD test showed that only the AllLast and AllFull conditions were not significantly different (p = 0.027).

Deceptive Agents. We may assume that task owners can only learn about the path of a delegated task through a network of sub-delegators via self-reporting. In the case of task success, in-chain agents will prefer to report that no sub-delegation occurred, thereby taking the credit and obtaining a larger positive trust increase.
In the case of task failure, the opposite should be the case. If each in-chain agent behaves in this way,

the result is that all positive trust updates are applied to the first delegated agent (equivalent to AllFirst), while negative updates are distributed among all in-chain agents (equivalent to Decreasing). We evaluated the performance of agents who behave in this way (deceptive) against those who report honestly (honest) under the Decreasing condition. We found no significant difference between the performance of deceptive and honest agents when using this weighting function, which shows that the model is resilient to this kind of deceptive misreporting of chain information.

4 Discussion

In this paper, we have considered the case where the entire delegation chain is visible to the task owner. On receiving the result, the task owner can observe the path the delegation took, what decisions were taken by each agent, and who eventually carried out the task. More generally, we can consider three visibility conditions:

- Global knowledge: the topology of the delegation chain is visible to all agents. For example, all agents know who actually performed the task, where the task originated, and the chain of delegation that led to a particular agent performing the task.
- Local knowledge: each agent in the delegation chain knows which agent delegated the task to it, and knows whether the subsequent agent (if it did not perform the task itself) sub-delegated or performed the task.
- No knowledge: each agent knows nothing about whether the subsequent agent performed the task or sub-delegated. However, sub-delegated agents do know which agent delegated the task to them.

While our weighting functions go some way towards representing each of these cases, a more refined model of visibility is left as an avenue for future research.

While it appears attractive, the AllFull model is problematic as it is inherently unfair: intermediate agents are penalised (or rewarded) as if they had performed the task alone. While this leads to a rapid convergence of performance, strategically minded agents would be capable of colluding to abuse this approach. For example, agents could pass a task around unnecessarily within a group before finally delegating it to a highly trusted individual, so that each agent in the group receives a full positive trust update without having to perform the task. The Decreasing model, on the other hand, prevents this possibility, as each sub-delegation reduces the weight applied to each subsequent agent in the chain.

We may also consider sub-delegation structures that are more complex, involving task decomposition. For example, in [9] agents within a virtual organisation were able to delegate different portions of a composite task to multiple agents. These agents could sub-delegate further, resulting in a tree of sub-delegations, rather than a chain. This has implications for trust update as well, since it is not clear how individually responsible the group members are for the final outcome. We are currently investigating how trust in such complex delegation environments can be modelled.

We note that, while sharing some similarities, this problem differs from that of trust propagation (e.g. [2, 3]), which concerns the computation of a trust rating for a particular target agent using testimony about that agent obtained via a series of intermediaries. There, the trust in each of the intermediate agents as a recommender is used to compute a direct trust rating between the evaluating agent and the target, which is then used to decide whether to interact with the target directly. In contrast, we are interested in situations where control is transferred to an agent who may or may not sub-delegate. The chosen agent may therefore not actually perform the task, and the trustor may not have full knowledge about the intermediate agents or the identity of the final task performer. We are, however, investigating ways in which discounting approaches from the propagation literature may be used in the context of delegation chains.

The approach we have presented here is simple, in that it considers only the position of an agent in the delegation chain, and the overall outcome, when deciding how much weight to apply. However, more sophisticated approaches are possible. Task owners could evaluate how good an observed sub-delegation decision was, from their perspective, and use this evaluation to derive a weight. For example, if the task owner observes that an agent a sub-delegated a task to another agent b known to be untrustworthy, then a should bear more of the responsibility for the task outcome than the other agents in the chain. Such an approach becomes problematic in large and dynamic societies, as task owners may not have sufficient experience within the society to make such evaluations. For these purposes, trust propagation mechanisms [1, 2] may play a role in supporting more sophisticated weighting functions.

It can be argued that trustworthiness in sub-delegating and trustworthiness in actually performing a task are separate issues, and should thus be considered separately by a trust model. However, this distinction does not affect the trustor's delegation choice, since the trustor does not know in advance which action (i.e. perform or sub-delegate) the trustee will take. Therefore, in our present model, we treat agents as black boxes with regard to this distinction.

An important feature of our approach is that it places few constraints on the particular trust model in use, requiring only that the model permits discounted or weighted updates. This is already an important feature of many prominent trust models, which use discounting to reduce the impact of older experiences on trust assessments, allowing trust models to cope with dynamic behaviour.

5 Conclusions

This research addresses a new and exciting aspect of trust in multi-agent systems, namely how trust should be updated in the context of sub-delegation. Such an approach has many practical applications. For example, sub-delegation plays a critical role in virtual organisations, and a more refined model of trust could enhance their functionality. In human domains, one could ask how contractors should trust each other when tasks may be outsourced to other parties, and when trustors may be unable to control or observe this outsourcing process. In such situations, our model allows one to allocate responsibility among all parties, and thereby update trust in individuals.

We have shown that the choice of weighting function significantly affects both the performance of the system and the size of delegation chains. Our model forms only a first step in investigating this aspect of trust, and many exciting avenues of future research remain open.

6 Acknowledgements

We would like to thank the anonymous reviewers, and Mike Just (Glasgow Caledonian University), for their helpful and insightful comments.

References

1. R. Guha, R. Kumar, P. Raghavan, and A. Tomkins. Propagation of trust and distrust. In Proceedings of the 13th International Conference on World Wide Web (WWW '04), New York, NY, USA, 2004. ACM.
2. C.-W. Hang, Y. Wang, and M. P. Singh. Operators for propagating trust and their evaluation in social networks. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), volume 2, 2009.
3. C.-W. Hang, Z. Zhang, and M. P. Singh. Generalized trust propagation with limited evidence. IEEE Computer, 2013.
4. R. Hermoso, H. Billhardt, and S. Ossowski. Integrating trust in virtual organisations. In Coordination, Organizations, Institutions, and Norms in Agent Systems II, pages 19-31. Springer, 2007.
5. A. Jøsang and R. Ismail. The Beta reputation system. In Proceedings of the 15th Bled Electronic Commerce Conference, 2002.
6. A. Jøsang, S. Pope, and S. Marsh. Exploring different types of trust propagation. In Proceedings of the 4th International Conference on Trust Management (iTrust '06), 2006.
7. A. Jøsang, R. Ismail, and C. Boyd. A survey of trust and reputation systems for online service provision. Decision Support Systems, 43(2):618-644, 2007.
8. S. Marsh. Formalising Trust as a Computational Concept. PhD thesis, University of Stirling, 1994.
9. J. Patel, W. T. L. Teacy, N. R. Jennings, M. Luck, S. Chalmers, N. Oren, T. J. Norman, A. Preece, P. M. D. Gray, G. Shercliff, P. J. Stockreisser, J. Shao, W. A. Gray, N. J. Fiddian, and S. Thompson. Agent-based virtual organisations for the Grid. International Journal of Multi-Agent and Grid Systems, 1(4), 2005.
10. J. Sabater and C. Sierra. Review on computational trust and reputation models. Artificial Intelligence Review, 24(1):33-60, 2005.
11. W. T. L. Teacy, J. Patel, N. R. Jennings, and M. Luck. TRAVOS: Trust and reputation in the context of inaccurate information sources. Autonomous Agents and Multi-Agent Systems, 12(2):183-198, 2006.
12. Y. Wang and M. P. Singh. Formal trust model for multiagent systems. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), 2007.
13. A. Whitby, A. Jøsang, and J. Indulska. Filtering out unfair ratings in Bayesian reputation systems. The Icfain Journal of Management Research, 4(2):48-64, 2005.