Examining and Modeling Customer Service Centers with Impatient Customers


Examining and Modeling Customer Service Centers with Impatient Customers

Jonathan Lee

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF APPLIED SCIENCE

DEPARTMENT OF MECHANICAL AND INDUSTRIAL ENGINEERING
UNIVERSITY OF TORONTO

SUPERVISOR: PROFESSOR BARIŞ BALCIOĞLU

MARCH 2009

Abstract

In this report I propose and evaluate default static prioritization policies, compared against basic first come first serve (FCFS) policies, for M/M/N+M server systems with exponentially distributed arrival, abandonment, and service rates. I also compare how well the static heuristics developed by former Engineering Science thesis student Henry Lo choose the correct default policy, and suggest which heuristics are most consistently reliable. Lastly, I study the sensitivity of system performance to adjustments in customer parameters in order to assess the stability of the proposed default static prioritizations.

Acknowledgements

I would like to thank my supervisor, Professor Balcioğlu, for his guidance, support, patience, and availability in working with me on this thesis. I would also like to thank the Simul8 Corporation and Professor Frances for providing me with free access to the educational version of the Simul8 simulation software.

Table of Contents

Abstract
Acknowledgements
Table of Contents
List of Terminology
List of Figures
List of Tables
Introduction and Background
Case Development
Methodology
    Building and Running Simulation Models
    Assumptions in Simulation Assembly and Runtime
    Design of Experiments
    Metrics of Measurement
Results and Discussions
    General results
    Case Results
        Simple Cases: Equalizing on two paired parameter sets
        Moderate Cases: Equalizing on one paired parameter set
        Difficult Cases: Equalizing on zero paired parameter sets
    Further Discussion of Parameter Sets and Prioritization Choice
Comparison of outcome choice in implementing static heuristics
    Introduction to static heuristics
    Using heuristics to prioritize the correct Customer Class
Parameter Sensitivity Analysis
Challenges and Sources of Error
Suggested Future Work
Conclusion
Appendix A - Modified Single Server First Come First Serve Model
Appendix B - Single and 10 Server System Parameters
Appendix C1 - Single Server Paired T-Test Results
Appendix D1 - 10 Server Paired T-Test Results
Appendix E1 - Comparison of averages between single and 10 server results
Appendix F1 - Side by side single and 10 server comparison of paired t-limits

List of Terminology

Customer (C_i) - A simple shorthand used to reference a customer of class/type i.
Arrival Rate (λ_i) - The rate at which customers of type i enter the system.
Abandonment Rate (R_i) - The rate at which customers of type i leave the system queue without being served.
Service Rate (μ_i) - The rate at which a customer of type i is serviced by a server.
FCFS - The acronym used to represent the First Come First Serve policy.
M/M/N+M - A queuing system with exponentially distributed arrival, abandonment, and service rates and N servers in parallel.
Prioritization Policy - If available, one class of customers is always served before the other class.
Inconclusiveness - A term used to signify that no conclusion can be substantiated.
Completely Different - A term used for two customer classes that share no equal parameter rates.

List of Figures

Figure 1 - Simul8 Arrival Icon
Figure 2 - Simul8 Queue Icon
Figure 3 - Simul8 Work Center Icon
Figure 4 - Simul8 Work Exit Icon

List of Tables

Table 1 - Simple Case Results
Table 2 - Moderate Case Results
Table 3 - Difficult Case Results
Table 4 - Heuristic Results Comparison
Table 5 - Case 9 Sensitivity Analysis for R_A = 2.2
Table 6 - Case 8 Sensitivity Analysis for R_A = 1.6

Introduction and Background

In a post-industrial economy where the service sector is seeing massive growth, not only in first-world economies but in second- and, in some instances, third-world countries as well, the study of queuing and the ability to statistically model customer and consumer behaviour have been of growing interest to management scientists. Different organizations implement management policies with different goals in mind: some aim for cost reduction, others for increased customer serviceability. The methods for studying how queuing and servicing affect performance remain the same, with only the constraints (as defined by the organization) bounding the available choices.

Though this paper focuses specifically on call centers, its application is not limited to this industry; it can be applied to essentially any scenario with a queue. Queuing is extensively studied in health care, where system performance can determine literal life-or-death situations and the goal is reducing customer abandonment; in telecommunication networks, where the goal is reducing the usage of network bandwidth (and hence customer wait time in the queue); and in numerous other facets of the service industries.

This paper investigates and evaluates the performance of simplified single- and multi-server call centers with two customer classes, with the overall goal of reducing the percentage of total customer abandonment/failure. The call centers are operated with exponentially distributed arrival, abandonment, and service rates. In the cases studied, only static policies are implemented and compared with basic first come first serve policies. Static refers to the system's preset choice of servicing priority, which

remains unchanged for the length of the study: as long as work items (customers) of the higher-priority class remain in the queue they are served first; otherwise the servers work on the lower-priority customers. A set of 13 generalized cases, based on the three sets of possible paired parameter rates, is studied, and the performance and sensitivity of the system are evaluated and compared with the decisions made by the static heuristics that former Engineering Science student Henry Lo developed.

Case Development

In analyzing a multiple-server system with two customer types, there are 13 general cases in which the two customer classes can differ based on their arrival, abandonment, and service rates (a short enumeration sketch following the case list illustrates this count). For organization and simplicity, each case refers to the two customer types as Customer A (C_A) and Customer B (C_B). For some of these cases there are definite preferences that hold for nearly all instances of the customer classes, but this is not true of all cases; it depends on how different the arrival, abandonment, and service rates are. Generally speaking, unless the rates are equal across two of the three paired parameters (two out of abandonment, arrival, and service), one cannot conclude before examining numerical system parameters which customer class to favour, due to the complexity of the interactions between parameters.

Case 1: C_A has the same arrival rate (λ_A = λ_B) but a higher service rate (μ_A > μ_B) and a higher abandonment rate (R_A > R_B)

Case 2: C_A has the same arrival rate (λ_A = λ_B) and a higher service rate (μ_A > μ_B) but a lower abandonment rate (R_A < R_B)

Case 3: C_A has the same arrival rate (λ_A = λ_B) but a higher service rate (μ_A > μ_B) and an equal abandonment rate (R_A = R_B)

Case 4: C_A has a higher arrival rate (λ_A > λ_B), a higher service rate (μ_A > μ_B), and a higher abandonment rate (R_A > R_B)

Case 5: C_A has a higher arrival rate (λ_A > λ_B) and a higher service rate (μ_A > μ_B) but the same abandonment rate (R_A = R_B)

Case 6: C_A has a higher arrival rate (λ_A > λ_B) but a lower service rate (μ_A < μ_B) and the same abandonment rate (R_A = R_B)

Case 7: C_A has a higher arrival rate (λ_A > λ_B) but the same service rate (μ_A = μ_B) and the same abandonment rate (R_A = R_B)

Case 8: C_A has a higher arrival rate (λ_A > λ_B) but a lower service rate (μ_A < μ_B) and a higher abandonment rate (R_A > R_B)

Case 9: C_A has a higher arrival rate (λ_A > λ_B), the same service rate (μ_A = μ_B), and a higher abandonment rate (R_A > R_B)

Case 10: C_A has a higher arrival rate (λ_A > λ_B) but the same service rate (μ_A = μ_B) and a lower abandonment rate (R_A < R_B)

Case 11: C_A has a higher arrival rate (λ_A > λ_B) but a lower service rate (μ_A < μ_B) and a lower abandonment rate (R_A < R_B)

Case 12: C_A has a higher arrival rate (λ_A > λ_B) and a higher service rate (μ_A > μ_B) but a lower abandonment rate (R_A < R_B)

Case 13: C_A has the same arrival rate (λ_A = λ_B) and the same service rate (μ_A = μ_B) but a higher abandonment rate (R_A > R_B)
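As a quick sanity check on this enumeration, the following is a minimal sketch (a hypothetical helper, not part of the original study) that generates every ordering of the three paired parameters and counts the distinct configurations after removing the all-equal case and mirror-image duplicates obtained by swapping the class labels.

```python
from itertools import product

# Each paired parameter (arrival, service, abandonment) can compare three ways:
# C_A greater than C_B (+1), equal (0), or C_A less than C_B (-1).
SIGNS = (-1, 0, 1)

def canonical(pattern):
    """Treat a pattern and its label-swapped mirror as the same case."""
    mirrored = tuple(-s for s in pattern)
    return min(pattern, mirrored)

distinct = {canonical(p) for p in product(SIGNS, repeat=3)}
distinct.discard((0, 0, 0))  # two identical customer classes are not a case

print(len(distinct))  # 13 general cases, matching Cases 1-13 above
```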

Methodology

Building and Running Simulation Models

A simulation package called Simul8, by Simul8 Corp., was used to model the call centers in each case. Though not difficult to use, the simulation program has bugs that made development and consistency somewhat of a challenge, and inconsistent random number generation together with lengthy simulation runtimes prevented the results from being tighter. The simulation models and their replications were based on the original simulation models developed by Henry Lo. Though Lo's models were relatively basic and accurate, properly analyzing and segregating the results required expanding the model into two sets of work completions, Type 1 and Type 2, so the models were modified to simplify trial creation and results collection. A screenshot of the modified single-server version of the basic First Come First Serve model can be viewed in Appendix A.

Two types of models were used to investigate the cases: the first a single-queue first come first serve model with two customer types, and the second a prioritization model with a separate queue for each customer type. The simulation models were driven by an inputs spreadsheet from which all the specific parameter rates for both model types were read. The input spreadsheet for the prioritization models also contained a prioritization-type cell specifying which customer type to prioritize; when read in the server's Before Selecting visual logic, it would prioritize the correct type.

Figure 1 - Simul8 Arrival Icon

Within Simul8, as work items (customers) come in through the work entry point icon, they are stamped with four different labels, used for analysis and to ensure proper functionality of the simulation models:

- Customer Type Label: a simple static 1 or 2 stamp so the results can be segregated and analyzed by customer type.
- Abandonment Time Label: the specific abandonment time given to the work item, drawn from the exponentially distributed abandonment rate.
- Start Time Label: the simulation run time at which the work item (customer) is created; this stamp is used purely for analysis purposes so total service and abandonment times can be calculated.
- Service Time Label: the specific service time given to the work item, drawn from the exponentially distributed service rate.

Figure 2 - Simul8 Queue Icon

Once customers are created, they are passed into the queue icon, where they either wait until they are serviced or abandon the queue once they have exceeded the shelf life given by their abandonment time label.

Figure 3 - Simul8 Work Center Icon

If chosen as a work item, the customer is passed to a server work center, where it is held for the time specified by the work item's service time label. It should be noted that the modified simulation models use a number of dummy work centers to route both expired customers and completely served customers for results collection and segregation. None of these dummy work centers adds any processing time to the work items or to the simulation clock, so the results are not skewed.

Figure 4 - Simul8 Work Exit Icon

Once customers have expired, or their work has been completed by a server, they are routed and pooled at the work exit points.

Assumptions in Simulation Assembly and Runtime

A number of assumptions were made in modeling the simulated call centers. These were made partly to simplify simulation assembly and reduce total simulation runtime, but also to control the number of adjustable parameters so that stronger conclusions could be drawn for these simplified systems.

1. All simulation models were operated with a complete staff for the complete simulation period (41,700 minutes). In real life, most call centers will not operate with a complete staff for a whole 24-hour, 7-day-week period, due to the variability in incoming traffic and the lack of necessity or urgency of requiring assistance from a call center at certain hours.

2. All arrival, service, and abandonment rates stayed constant throughout the simulation run. In real business or operating centers in the service sector, expected arrival rates, and to a similar (but lesser) degree service and abandonment rates, do not stay uniform across the day; they instead follow rush-hour periods with high incoming traffic volume and downtime periods with lower traffic.

3. Once a customer begins service, they will not abandon the system and will remain until the completion of their service time. This was implemented to reduce system complexity, as abandonment during service is not in the scope of this study.

4. If a customer is chosen to be served, their issue will be resolved. Though unrealistic in the real world, since the resolution of issues varies significantly with the quality of the call center and the complexity of the industry it serves, this assumption was made to reduce the number of system parameters and the overall complexity.

Design of Experiments

Each case was run as a trial with n = 5 replications so that results could be generated and conclusions drawn at 95% confidence (α = 0.05). All results and analyses of these trial replications were built using t confidence intervals and paired t-tests, since the population variances of the results are unknown and the number of replications is small. All simulation models were run for a length of 695 hours, or 41,700 minutes, with all simulation parameters defined in minutes.

Results and comparisons from Lo's paper on patient triage indicated that the difference between running models with warm-up periods and running them without is less than 1 percent across averages and variability, so to reduce run times no warm-up periods were implemented.

From the simulation replications, figures were exported to an Excel file, where the performance measurements were calculated to determine overall system performance, with the goal of reducing the total system abandonment rate. Each case was examined closely, from simple cases where customer classes were equalized on two paired parameter sets (producing nearly identical customer classes), to moderate cases where customer classes were equalized on only one paired parameter set, to difficult cases where customer classes were completely different. Suggestions for prioritization were built up gradually, from the simple cases, to the moderate cases, and finally to the difficult cases.

The customer parameter rates were generated to approximate actual client service times, so the service and abandonment parameters were approximated at values below 10 minutes. Every parameter set generated for each case is related through the choice of only two specific rates for each paired parameter set, so that the relationships between the cases could be examined and conclusions substantiated across them. For the arrival rates (λ) the options were either 4.5 or 3. For the abandonment rates (R) the options were either 1.5 or 0.6. For the service rates (μ) the options were either 9 or 7. For cases where the rates for C_A and C_B are equal, the lower rate was chosen for both.

For a list of the single-server customer parameter rates for all 13 cases, and of the ten-server customer parameter rates for all 13 cases, refer to Appendix B. It should be noted that the single-server and 10-server parameter sets are closely related: the 10-server parameters are identical to the single-server parameters except that the service rates for the 10-server system were divided by n = 10, to approximate and generalize each set used for the cases generated. If a completely new parameter set had been used for each case, the results might have been inconclusive, which is why the single- and 10-server sets had to be related.

Metrics of Measurement

In measuring system performance, the goal was to reduce the percentage of total failure in the system, calculated simply as the proportion of the total number of abandonments over the total number of arrivals. This was the most significant and beneficial metric given our goal, but additional metrics were calculated to confirm that the simulation models were working properly and to examine specifically how effective prioritization was for system performance. Other metrics calculated and collected include the percentage of time the servers are working, since another common goal in the design and management of call centers is ensuring that employees are busy as close to 100% of the time as possible, and the average waiting times for both successfully served customers and abandoned customers.
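To make these mechanics concrete, the following is a minimal hand-rolled discrete-event sketch in Python of a two-class M/M/N+M system under either FCFS or a static priority policy. It is not the Simul8 model used in this study; the rate values in the example are hypothetical, reneged customers are removed lazily (which is sufficient for the counts used here), and the utilization figure is approximate, but it reproduces the % total failure and server-utilization metrics described above.

```python
import heapq
import random

def simulate(lam, mu, renege, n_servers, horizon, policy, seed=0):
    """Two-class M/M/N+M call center under 'FCFS', or static priority 'A' or 'B'.

    lam, mu, renege are dicts keyed by class ('A', 'B') holding arrival,
    service, and abandonment rates per minute. Returns (% total failure,
    approximate server utilization) over a run of `horizon` minutes.
    """
    rng = random.Random(seed)
    events = []                       # (time, kind, class)
    for c in ('A', 'B'):
        heapq.heappush(events, (rng.expovariate(lam[c]), 'arrival', c))
    queue = []                        # waiting customers: (arrival, deadline, class)
    free = n_servers
    arrivals = abandoned = 0
    busy = 0.0

    def purge(now):                   # drop customers whose patience has run out
        nonlocal abandoned
        alive = [cust for cust in queue if cust[1] > now]
        abandoned += len(queue) - len(alive)
        queue[:] = alive

    def pick(now):                    # choose the next customer per the policy
        purge(now)
        if not queue:
            return None
        if policy == 'FCFS':
            i = min(range(len(queue)), key=lambda j: queue[j][0])
        else:                         # priority class first, FCFS within a class
            i = min(range(len(queue)), key=lambda j: (queue[j][2] != policy, queue[j][0]))
        return queue.pop(i)

    while events:
        now, kind, c = heapq.heappop(events)
        if now > horizon:
            break
        if kind == 'arrival':
            arrivals += 1
            heapq.heappush(events, (now + rng.expovariate(lam[c]), 'arrival', c))
            queue.append((now, now + rng.expovariate(renege[c]), c))
        else:                         # 'done': a server finished its work item
            free += 1
        while free:                   # idle servers pull work from the queue
            cust = pick(now)
            if cust is None:
                break
            free -= 1
            s = rng.expovariate(mu[cust[2]])
            busy += s
            heapq.heappush(events, (now + s, 'done', cust[2]))

    purge(now)                        # count stragglers who reneged before the end
    return 100.0 * abandoned / arrivals, busy / (n_servers * horizon)

# Hypothetical single-server instance (rates per minute), one replication per policy
lam = {'A': 0.30, 'B': 0.20}
mu = {'A': 0.40, 'B': 0.40}
renege = {'A': 0.50, 'B': 0.20}
for pol in ('FCFS', 'A', 'B'):
    fail, util = simulate(lam, mu, renege, n_servers=1, horizon=41700, policy=pol, seed=1)
    print(f"{pol:4s}  % total failure = {fail:5.2f}   utilization = {util:.2f}")
```

Running it for the three policies on the same parameter instance gives a rough picture of how the failure metric shifts with the priority choice.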

Paired t-tests at α = 0.05 (95% confidence) with n = 5 replications are used to compare the performance of the different prioritization policies and of first come first serve for each specific case. Depending on how loose the paired t-test intervals are, and on whether the confidence interval contains 0, conclusions about the preference for a policy are substantiated.
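As a sketch of this comparison step (the replication values below are made up, not the study's data), the paired t confidence interval for the difference in % total failure between two policies can be computed directly; an interval entirely below 0 means the first policy fails less often.

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t

def paired_t_interval(x, y, alpha=0.05):
    """Paired t confidence interval for the mean of the differences x - y."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    half = t.ppf(1 - alpha / 2, df=n - 1) * stdev(diffs) / sqrt(n)
    return mean(diffs) - half, mean(diffs) + half

# Hypothetical % total failure from n = 5 replications under two policies
priority_cb = [10.1, 9.8, 10.4, 9.9, 10.2]
fcfs        = [10.9, 10.5, 11.0, 10.7, 10.8]
lo, hi = paired_t_interval(priority_cb, fcfs)
print(f"95% CI for (priority C_B - FCFS): [{lo:.3f}, {hi:.3f}]")  # excludes 0 here
```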

Results and Discussions

General results

Investigating the initial results generated from the simulation models for both the single- and 10-server systems, none of the related 10-server cases produced results that conflict with the choice of prioritization suggested by the single-server systems; however, the 10-server systems were not nearly as conclusive in the paired t-test results as the single-server systems. The 10-server paired t-tests frequently generated confidence intervals that were much tighter but that often contained 0. This might be attributed to the fact that the parameters for the related single- and 10-server cases were very closely related.

There is an average increase in overall server utilization but a decrease in average total percentage failure when moving from the single-server to the 10-server systems. This is likely attributable to the greater number of servers available at any moment in time in the 10-server system, as expected, and it is confirmed by the decrease in average queue waiting times for all successful customer types. Appendix E compares averages from each set of cases and illustrates that failure and waiting times are lower for nearly all cases when moving from single-server to 10-server systems.

Case Results

Results Notation

">" - Other than in the case descriptions, when comparing policies ">" means that one policy performs better than another, NOT that it produces higher failure.
"I" - Short for inconclusive; used in a policy comparison when no conclusion can be drawn.

Simple Cases: Equalizing on two paired parameter sets

These three cases are relatively easy to model and understand. Both customer types are identical except for one pair of parameters. To maximize performance, one would generally expect to prioritize the customer class whose parameter rate is higher than the other class's; however, results from the single- and 10-server models suggest otherwise.

| Case | 3 | 7 | 13 |
|---|---|---|---|
| Arrival | = | C_A > C_B | = |
| Service | C_B > C_A | = | = |
| Abandonment | = | = | C_A > C_B |
| Single Server Results | | | |
| C_B - FCFS | C_B > FCFS | C_B > FCFS | C_B I FCFS |
| C_A - FCFS | C_A > FCFS | C_A < FCFS | C_A > FCFS |
| C_B - C_A | C_B I C_A | C_B > C_A | C_B < C_A |
| Best Policy | I | C_B | C_A |
| 10 Server Results | | | |
| C_B - FCFS | C_B I FCFS | C_B I FCFS | C_B < FCFS |
| C_A - FCFS | C_A I FCFS | C_A I FCFS | C_A > FCFS |
| C_B - C_A | C_B I C_A | C_B I C_A | C_B < C_A |
| Best Policy | I | I | C_A |

Table 1 - Simple Case Results

Case 3: Neither the single- nor the 10-server system suggests an advantage in prioritizing one customer class over the other. The single-server results conclude that prioritization produces less failure than the first come first serve policy, but do not suggest a bias towards prioritizing either customer class. The 10-server results do not conclude an advantage for any policy.

Case 7: The single-server system suggests that prioritizing the customer class with the lower arrival rate performs best; however, the 10-server system remains inconclusive for all prioritization and FCFS policy comparisons.

Case 13: Both the single- and 10-server systems suggest an advantage in prioritizing the customer class with the higher abandonment rate, concluding that prioritizing that class will produce fewer overall system failures.

The arrangement and conclusiveness of these results suggest an ordering of the rates' usefulness in determining prioritization that persists across different numbers of servers. From strongest to weakest, the rates for determining prioritization are Abandonment, then Arrival, then Service.

Moderate Cases: Equalizing on one paired parameter set

Adding another pair of differing parameters creates more complex cases, as the way the parameters differ between the customer classes, coupled with the way they interact with the other

set of parameters, greatly affects each instantiation of a case. Still, there are certain generalizations that can be made under certain case conditions.

| Case | 1 | 2 | 5 | 6 | 9 | 10 |
|---|---|---|---|---|---|---|
| Arrival | = | = | C_A > C_B | C_A > C_B | C_A > C_B | C_A > C_B |
| Service | C_B > C_A | C_B < C_A | C_A > C_B | C_A < C_B | = | = |
| Abandonment | C_B > C_A | C_B > C_A | = | = | C_A > C_B | C_A < C_B |
| Single Server Results | | | | | | |
| C_B - FCFS | C_B > FCFS | C_B > FCFS | C_B > FCFS | C_B > FCFS | C_B > FCFS | C_B > FCFS |
| C_A - FCFS | C_A > FCFS | C_A > FCFS | C_A I FCFS | C_A < FCFS | C_A I FCFS | C_A < FCFS |
| C_B - C_A | C_B > C_A | C_B I C_A | C_B > C_A | C_B > C_A | C_B > C_A | C_B > C_A |
| Best Policy | C_B | I | C_B | C_B | C_B | C_B |
| 10 Server Results | | | | | | |
| C_B - FCFS | C_B > FCFS | C_B I FCFS | C_B > FCFS | C_B I FCFS | C_B > FCFS | C_B > FCFS |
| C_A - FCFS | C_A < FCFS | C_A > FCFS | C_A I FCFS | C_A < FCFS | C_A < FCFS | C_A < FCFS |
| C_B - C_A | C_B > C_A | C_B I C_A | C_B > C_A | C_B > C_A | C_B > C_A | C_B > C_A |
| Best Policy | C_B | I | C_B | I | C_B | C_B |

Table 2 - Moderate Case Results

Case 1: Both the single- and 10-server systems rank the customer class with the greater service and abandonment rates as the best performer. Together with the simple cases, this suggests that higher abandonment rates affect the decision to prioritize more than service rates do.

Case 2: Neither the single- nor the 10-server system concludes a best prioritization policy, but the results suggest that prioritizing either customer class performs better than the FCFS policy in the single-server model, and nearly so in the 10-server model. Comparing with case 1, we might infer that while abandonment affects the decision to prioritize more than service does, it does not completely overrule the other rates, as seen from the inconclusive outcomes. From Appendix F1 we can see that the comparison test limits are tight.

Case 5: Both the single- and 10-server systems rank the customer class with the lower arrival and service rates as the best performer. Together with the simple cases, this further suggests that, in general, customer classes with lower arrival and service rates should take priority.

Case 6: The single-server system suggests that prioritizing the customer class with the lower arrival and higher service rate performs best; however, the 10-server system remains inconclusive for all prioritization and FCFS policy comparisons.

Case 9: Both the single- and 10-server systems rank the customer class with the lower arrival and abandonment rates as the best performer. While a higher abandonment rate might suggest a preference for prioritization, it appears that when paired with the arrival rate it might not; examining the average differences in Appendix C3, they are quite small between prioritizing C_A and prioritizing C_B.

Case 10: Both the single- and 10-server systems rank the customer class with the lower arrival and higher abandonment rate as the best performer. Pairing both of these rates together, we would expect an outcome that prioritizes C_B, since both rates point towards this outcome based on the simple cases. From Appendix C3/C4 the average difference is larger than for case 9.

Difficult Cases: Equalizing on zero paired parameter sets

Leaving no similarities between the two classes of customers makes it very difficult to predict the outcome under priority. As with the moderate cases, each instantiation of a case can produce a different prioritization outcome, since the difference in each pair of parameters between the two classes, and even the rates within a single customer class, affect system performance.

| Case | 4 | 8 | 11 | 12 |
|---|---|---|---|---|
| Arrival | C_A > C_B | C_A > C_B | C_A > C_B | C_A > C_B |
| Service | C_A > C_B | C_A < C_B | C_A < C_B | C_A > C_B |
| Abandonment | C_A > C_B | C_A > C_B | C_A < C_B | C_A < C_B |
| Single Server Results | | | | |
| C_B - FCFS | C_B I FCFS | C_B > FCFS | C_B > FCFS | C_B > FCFS |
| C_A - FCFS | C_A > FCFS | C_A I FCFS | C_A < FCFS | C_A < FCFS |
| C_B - C_A | C_B < C_A | C_B > C_A | C_B > C_A | C_B > C_A |
| Best Policy | C_A | C_B | C_B | C_B |
| 10 Server Results | | | | |
| C_B - FCFS | C_B I FCFS | C_B > FCFS | C_B > FCFS | C_B > FCFS |
| C_A - FCFS | C_A > FCFS | C_A < FCFS | C_A < FCFS | C_A < FCFS |
| C_B - C_A | C_B < C_A | C_B > C_A | C_B > C_A | C_B > C_A |
| Best Policy | I | C_B | C_B | C_B |

Table 3 - Difficult Case Results

Case 4: If we compare cases 1 and 9 from the set of moderate cases, they appear to be divided on the decision to prioritize based on abandonment alone (one prioritizes the higher rate and one the lower), so it might be expected that the single- and 10-server systems would not deliver a consistent best-policy outcome. However, since the customer class with the higher abandonment rate is ranked as the best one to prioritize in the single-server system, this might suggest that abandonment should take precedence.

Case 8: Both the single- and 10-server systems suggest prioritizing C_B. Though similar to case 2 in the moderate cases, which remained inconclusive, this positive outcome for C_B further suggests that prioritizing for lower arrival rates produces fewer failures.

Case 11: From the simple and moderate cases we would expect both the single- and 10-server systems to suggest priority for C_B, since all of the parameter outcomes in the simple and moderate cases correlate positively towards prioritizing C_B. This is further supported by the high average difference seen in Appendix C4.

Case 12: Given the close similarity to case 10 from the moderate cases, we might expect this outcome; however, with the hypothesized opposing correlation from the service rates, this further suggests that the service rate is the weakest tool for determining prioritization.

Examining cases 8, 11 and 12 further, we can substantiate the relative strengths of the prioritization-choice tools from the average differences obtained when prioritizing C_B over C_A for these specific cases in Appendix C. Case 11 has the highest average difference, with all three paired parameters correlating positively towards C_B; then case 12, with only the service rate not correlating towards C_B (which, as suggested by the simple cases, is the weakest tool); then, with the lowest difference, case 8. Case 8 shows that while abandonment is the strongest contributor in suggesting prioritization, it can be overpowered by service and arrival rates with opposing correlation.

Further Discussion of Parameter Sets and Prioritization Choice

From the results and discussion of the cases generated above, I suggest that a higher abandonment rate determines prioritization more significantly than the arrival or service rates, though by how much is not clear. Most case results suggest that a lower arrival rate and a higher service rate are, respectively, the second and third most significant determinants of prioritization; however, some results suggest otherwise. Though simple cases 3 and 7 suggest that arrival rates are the better determinant, due to the complete inconclusiveness of case 3, moderate cases 2 and 9 suggest that service rates are a better determinant, due to the complete consistency in their outcomes. This might be due to variability in random number generation, to not having simulation warm-up periods, or perhaps to the small number of replications.

It is clear from the results of the difficult cases that however strong a determinant of prioritization the abandonment rate may appear to be, it can be overpowered by the opposing correlation of the arrival and service rates; therefore we can conclude that, outside of the simple cases, every instance of a case must be closely examined to determine priority.

To summarize the prioritization rules suggested by this discussion, to improve overall system performance C_A should be prioritized over C_B according to the following ranked rules (sketched as a decision procedure below):

1.) R_A > R_B
2.) λ_A < λ_B
3.) μ_A > μ_B
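A minimal sketch of that default decision procedure, as a hypothetical helper that applies the three ranked comparisons in order (and which, per the discussion above, should only be trusted outright in the simple cases):

```python
def default_priority(lam_a, mu_a, r_a, lam_b, mu_b, r_b):
    """Suggest which class to prioritize using the ranked rules above.

    Rules are applied in order: higher abandonment rate, then lower arrival
    rate, then higher service rate; returns None if all three rates tie.
    """
    for a_wins, b_wins in ((r_a > r_b, r_b > r_a),          # rule 1: abandonment
                           (lam_a < lam_b, lam_b < lam_a),  # rule 2: arrival
                           (mu_a > mu_b, mu_b > mu_a)):     # rule 3: service
        if a_wins:
            return "C_A"
        if b_wins:
            return "C_B"
    return None  # identical classes: FCFS is as good as any static priority

# Hypothetical rates: abandonment ties, C_B has the lower arrival rate -> "C_B"
print(default_priority(lam_a=0.3, mu_a=0.15, r_a=0.6,
                       lam_b=0.2, mu_b=0.15, r_b=0.6))
```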

Comparison of outcome choice in implementing static heuristics

Introduction to static heuristics

A heuristic is an algorithmic method for categorizing, problem solving, and decision making, frequently used in mathematical problems. Heuristics take many forms, from those that can be captured by a single equation to significantly more complicated rule-based algorithms. Static heuristic strategies make a decision at time zero, set priorities at that time, and stick to them for the length of operation. They serve the priority class as long as its customers are available in the queue; otherwise they serve lower-priority items until a higher-priority item becomes available.

Since static heuristics in this context only have two choices of which customer class to prioritize, there are essentially only two outcomes for all proposed heuristics: they prioritize either C_A or C_B. We can therefore analyze a heuristic's ability to prioritize correctly according to the results collected.

Using heuristics to prioritize the correct Customer Class

In this section I compare the static heuristics' ability to choose the right prioritization based on the outcomes of the case instances created. A number of different static heuristics are introduced and ranked according to the number of correct prioritization decisions made using each heuristic.

Since the 10-server system produced a number of ambiguous and inconclusive, yet non-conflicting, results, I compare the proposed heuristics only against the outcomes produced by the single-server system.

Lo investigated two static heuristics in his thesis on patient triage. The first is based solely on the abandonment rate, with priority given to the class with the higher abandonment rate:

If R_A > R_B
    serve C_A
Else
    serve C_B

The other static heuristic attempts to combine the abandonment and service rates to make a more informed decision, by multiplying the two rates for each customer class and prioritizing the class with the larger product:

If R_A μ_A > R_B μ_B
    serve C_A
Else
    serve C_B

I further propose two more static heuristics that attempt to make more informed decisions based on all three sets of parameter rates. The first attempts to account for a server's ability to keep up with the number of arrivals, and so the product of the abandonment and service rates is divided by the arrival rate:

If R_A μ_A / λ_A > R_B μ_B / λ_B
    serve C_A
Else
    serve C_B

The second attempts to accommodate the paired strength of the arrival and service rates, as examined in the results, over simply prioritizing the higher abandonment rate:

If μ_A / λ_A - R_A > μ_B / λ_B - R_B
    serve C_A
Else
    serve C_B
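As a compact restatement of the four heuristics (a hypothetical helper, with the last score written as μ/λ - R to match its row label in Table 4), the sketch below returns the class each heuristic would prioritize for a given, here made-up, set of rates; ties fall through to C_B, mirroring the Else branches above.

```python
def heuristic_choices(lam_a, mu_a, r_a, lam_b, mu_b, r_b):
    """Return the class ('C_A' or 'C_B') each static heuristic prioritizes."""
    scores = {
        "R":          (r_a,                  r_b),
        "R*mu":       (r_a * mu_a,           r_b * mu_b),
        "R*mu/lam":   (r_a * mu_a / lam_a,   r_b * mu_b / lam_b),
        "mu/lam - R": (mu_a / lam_a - r_a,   mu_b / lam_b - r_b),
    }
    return {name: ("C_A" if a > b else "C_B") for name, (a, b) in scores.items()}

# Hypothetical instance: C_A arrives faster, is served slower, abandons faster
print(heuristic_choices(lam_a=0.33, mu_a=0.11, r_a=0.7,
                        lam_b=0.22, mu_b=0.14, r_b=0.4))
```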

| Case Number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | Correct |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Solution | C2 | I | C2 | C1 | C2 | C2 | C2 | C2 | C2 | C2 | C2 | C2 | C1 | |
| R | C2 | C1 | C2 | C1 | C2 | C2 | C2 | C1 | C1 | C2 | C2 | C2 | C1 | 10 |
| Rμ | C2 | C1 | C2 | C1 | C1 | C1 | C1 | C1 | C1 | C2 | C2 | C2 | C1 | 7 |
| Rμ/λ | C2 | C1 | C1 | C1 | C1 | C1 | C1 | C1 | C1 | C2 | C2 | C2 | C1 | 6 |
| Rμλ | C2 | C1 | C2 | C1 | C1 | C1 | C1 | C1 | C1 | C2 | C2 | C2 | C1 | 7 |
| μ/λ - R | C1 | C2 | C2 | C2 | C2 | C2 | C2 | C2 | C2 | C1 | C2 | C1 | C2 | 7 |

Table 4 - Heuristic Results Comparison

From the number of correct decisions these heuristics make, it is clear that basing prioritization decisions solely on the abandonment rate (at least for the instances of these cases) most consistently makes the right decision. This strongly suggests the strength of a high abandonment rate in determining priority over all other parameter rates.

Parameter Sensitivity Analysis

In this section we examine the strength of selected cases in generating the correct prioritization by adjusting customer parameters to affect the prioritization rules suggested in the results discussion either more positively or more negatively. Only single-server systems are used in this analysis, since they were found to be more conclusive in generating results. We focus specifically on cases 8 and 9 since, in both of these cases, prioritizing the customer class with the higher abandonment rate does not produce the better results. We

attempt to see at what point increasing the abandonment rate will tip the result in favour of prioritizing the customer class with the higher abandonment rate.

| C2 - FCFS | Avg | L | U |
|---|---|---|---|
| % Type 1 Fail | 1.92% | 0.56% | 3.29% |
| % Type 2 Fail | -3.12% | -3.26% | -2.98% |
| % Total Failure | -0.60% | -1.22% | 0.02% |
| Working | | | |

| C1 - FCFS | Avg | L | U |
|---|---|---|---|
| % Type 1 Fail | | | -9.21% |
| % Type 2 Fail | 9.40% | 8.28% | 10.53% |
| % Total Failure | -0.85% | -1.23% | -0.46% |
| Working | | | |

| C2 - C1 | Avg | L | U |
|---|---|---|---|
| % Type 1 Fail | 13.02% | 12.49% | 13.56% |
| % Type 2 Fail | | | |
| % Total Failure | 0.25% | 0.01% | 0.50% |
| Working | | | |

Table 5 - Case 9 Sensitivity Analysis for R_A = 2.2

Examining the initial results generated for case 9, we see that the difference between prioritizing C_A and prioritizing C_B is less than 1 percent, suggesting that only a small increase in the abandonment rate for C_A should be necessary to favour the prioritization of C_A. Initially the abandonment rate was increased by only 0.2; however, this did not create a large enough difference to favour C_A over the already favoured C_B. R_A was increased until prioritizing C_A was finally favoured, at an abandonment rate of 2.2 (0.7 higher than what was originally tested).

| C2 - FCFS | Avg | L | U |
|---|---|---|---|
| % Type 1 Fail | 0.76% | -1.07% | 2.60% |
| % Type 2 Fail | -3.24% | -3.36% | -3.13% |
| % Total Failure | -1.24% | -2.11% | -0.37% |
| Working | | | |

| C1 - FCFS | Avg | L | U |
|---|---|---|---|
| % Type 1 Fail | -9.01% | | -6.62% |
| % Type 2 Fail | 9.07% | 7.63% | 10.50% |
| % Total Failure | 0.03% | -0.46% | 0.51% |
| Working | | | |

| C2 - C1 | Avg | L | U |
|---|---|---|---|
| % Type 1 Fail | 9.77% | 9.20% | 10.34% |
| % Type 2 Fail | | | |
| % Total Failure | -1.27% | -1.66% | -0.87% |
| Working | | | |

Table 6 - Case 8 Sensitivity Analysis for R_A = 1.6

Examining the initial results generated for case 8, we again consider the difference between prioritizing C_A and prioritizing C_B. Since the two customer classes are completely different (all three sets of parameters affect prioritization), and both the arrival and service rates oppose the abandonment correlation for prioritizing C_A, we would expect a higher increase in the abandonment rate for C_A to be needed to favour the prioritization of C_A. Initially only a 0.2 rate increase was tested, and on the first run it was found that this increase was enough to favour the prioritization of C_A. The rate was then decreased by 0.1 (to 1.6), and the test still favoured C_A, which perhaps suggests a higher complexity in the interaction of the three pairs of parameters.
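The sweep behind these two tables is mechanical; assuming the hypothetical simulate() and paired_t_interval() sketches from the Methodology section, and made-up base parameters, it amounts to raising R_A in small steps and re-running the paired comparison until prioritizing C_A significantly beats prioritizing C_B:

```python
def sweep_abandonment(r_a_start, r_b, lam, mu, n_servers, horizon,
                      step=0.1, r_a_max=3.0, reps=5):
    """Raise C_A's abandonment rate until prioritizing C_A beats prioritizing C_B.

    Builds on the simulate() and paired_t_interval() sketches given earlier;
    all parameter values passed in are assumed to be hypothetical examples.
    """
    r_a = r_a_start
    while r_a <= r_a_max:
        renege = {'A': r_a, 'B': r_b}
        fail_a = [simulate(lam, mu, renege, n_servers, horizon, policy='A', seed=s)[0]
                  for s in range(reps)]
        fail_b = [simulate(lam, mu, renege, n_servers, horizon, policy='B', seed=s)[0]
                  for s in range(reps)]
        lo, hi = paired_t_interval(fail_a, fail_b)
        if hi < 0:           # failure under priority-A is significantly lower
            return r_a
        r_a = round(r_a + step, 3)
    return None              # never tipped within the tested range
```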

Challenges and Sources of Error

The most significant challenge was the lack of a standardized method of organization in my results collection. While the results collected were accurate, generating multiple simulation models under different parameter instances made consistency in results collection difficult. To make things more consistent, better organization of the Excel documents, and versioning of each simulation model and Excel file, should have been used, and standard methods should have been established early on.

The inability to generate results on the same sets of random numbers for all replications also made substantiating conclusions more difficult, since the confidence limits were widened. Generating trials based on the same sets of simulation-generated random numbers for different policies, with the same servers and the same system parameters, would decrease the interval gap in a paired t-test analysis between two policies and would generate less ambiguous results.
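A minimal sketch of that common-random-numbers idea, reusing the hypothetical simulate() helper from the Methodology section: pairing the policies on the same seed within each replication is the simplest version, while a fuller implementation would dedicate a separate random stream to each input process (arrivals, patience, service) so the draws are literally identical across policies.

```python
def paired_replications(policies, n_reps, lam, mu, renege, n_servers, horizon):
    """Run each policy on the same seed per replication (common random numbers)."""
    results = {p: [] for p in policies}
    for rep in range(n_reps):
        for p in policies:
            # Same seed for every policy in this replication, so the policies
            # are compared under (approximately) the same random conditions.
            fail, _ = simulate(lam, mu, renege, n_servers, horizon, policy=p, seed=rep)
            results[p].append(fail)
    return results
```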

Suggested Future Work

While the results substantiated in this report could be taken in a number of directions, if the next thesis student were to take over this project, he or she should first expand the models to include more servers. Single- and 10-server models are not very realistic depictions of how a true service center operates, and a large multi-server system might behave significantly differently than the results here suggest. Furthermore, I would suggest generating more instances of the generalized cases in which the customer classes are more significantly different, to see at what point the rules and results substantiated here fall apart.

From the results generated in the heuristic comparison, it also appears that there is room to further develop the μ/λ - R static heuristic to include the difference in the rates of the customer classes, since this heuristic was able to capture the correct prioritization under a number of conditions.

Something that was planned but never implemented was adding generalist and specialist servers to the system, to see how well the substantiated prioritization rules would perform and stay consistent under those settings.

Lastly, it would be interesting to implement a financial model in the analysis of the system, as two customer classes will usually not be valued identically from an organizational perspective. This begs the question, "At what point is a greater level of total C_A failure acceptable when C_B is prioritized?", which would have to be investigated through a cost/benefit analysis incorporating a financial model.

Conclusion

While the simulation results from the case instances generated suggest that some rates are stronger than others in determining which prioritization should be used to decrease total system failure, the rates should not be considered only individually. As the parameter sensitivity analysis suggested, the way each pair of parameters interacts when the customer class types are completely different is highly complex and can overrule all of the suggestions. However, as examined in the static heuristic comparison, as a rule of thumb it appears best to prioritize the customer class with the higher reneging rate in order to reduce overall system failure.

Appendix A - Modified Single Server First Come First Serve Model

[Screenshot of the modified single-server First Come First Serve Simul8 model]

Appendix B - Single and 10 Server System Parameters

1 Server Parameters: for each of the 13 cases, the Customer A and Customer B arrival rates, abandonment rates, and service rates.

10 Server Parameters: for each of the 13 cases, the Customer A and Customer B arrival rates, abandonment rates, and service rates.

Appendix C1 - Single Server Paired T-Test Results

| C2 - FCFS | Case 1 Avg | Case 1 L | Case 1 U | Case 2 Avg | Case 2 L | Case 2 U | Case 3 Avg | Case 3 L | Case 3 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | 1.55% | 0.41% | 2.69% | 1.55% | 0.33% | 2.77% | 1.60% | 0.35% | 2.86% |
| % Type 2 Fail | -2.35% | -3.65% | -1.04% | -1.02% | -1.52% | -0.53% | -1.62% | -2.45% | -0.78% |
| % Total Failure | -0.40% | -0.48% | -0.31% | 0.26% | -0.10% | 0.63% | -0.01% | -0.23% | 0.22% |
| Working | | | | | | | | | |

| C1 - FCFS | Case 1 Avg | Case 1 L | Case 1 U | Case 2 Avg | Case 2 L | Case 2 U | Case 3 Avg | Case 3 L | Case 3 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | -1.02% | -1.76% | -0.28% | -1.92% | -2.79% | -1.06% | -1.56% | -2.41% | -0.70% |
| % Type 2 Fail | 1.56% | 1.06% | 2.06% | 1.52% | 1.00% | 2.05% | 1.70% | 1.13% | 2.27% |
| % Total Failure | 0.27% | 0.14% | 0.40% | -0.20% | -0.37% | -0.03% | 0.07% | -0.08% | 0.22% |
| Working | | | | | | | | | |

| C2 - C1 | Case 1 Avg | Case 1 L | Case 1 U | Case 2 Avg | Case 2 L | Case 2 U | Case 3 Avg | Case 3 L | Case 3 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | 2.57% | 0.70% | 4.45% | 3.48% | 1.39% | 5.56% | 3.16% | 1.06% | 5.27% |
| % Type 2 Fail | -3.90% | -5.70% | -2.10% | -2.55% | -3.56% | -1.53% | -3.32% | -4.72% | -1.92% |
| % Total Failure | -0.67% | -0.72% | -0.61% | 0.46% | -0.08% | 1.00% | -0.08% | -0.43% | 0.28% |
| Working | | | | | | | | | |

Appendix C2 - Single Server Paired T-Test Results Continued

| C2 - FCFS | Case 4 Avg | Case 4 L | Case 4 U | Case 5 Avg | Case 5 L | Case 5 U | Case 6 Avg | Case 6 L | Case 6 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | 2.43% | 2.25% | 2.62% | 3.20% | 3.02% | 3.38% | 2.95% | 2.79% | 3.10% |
| % Type 2 Fail | -2.30% | -2.61% | -1.99% | -3.93% | -4.31% | -3.56% | -5.36% | -5.91% | -4.81% |
| % Total Failure | 0.06% | -0.03% | 0.16% | -0.37% | -0.50% | -0.23% | -1.21% | -1.41% | -1.00% |
| Working | | | | | | | | | |

| C1 - FCFS | Case 4 Avg | Case 4 L | Case 4 U | Case 5 Avg | Case 5 L | Case 5 U | Case 6 Avg | Case 6 L | Case 6 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | -3.78% | -4.21% | -3.35% | -3.30% | -3.58% | -3.01% | -3.62% | -3.89% | -3.36% |
| % Type 2 Fail | 3.58% | 3.28% | 3.87% | 4.13% | 3.85% | 4.42% | 6.51% | 5.98% | 7.03% |
| % Total Failure | -0.10% | -0.21% | 0.00% | 0.42% | 0.38% | 0.46% | 1.44% | 1.30% | 1.58% |
| Working | | | | | | | | | |

| C2 - C1 | Case 4 Avg | Case 4 L | Case 4 U | Case 5 Avg | Case 5 L | Case 5 U | Case 6 Avg | Case 6 L | Case 6 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | 6.22% | 5.65% | 6.78% | 6.50% | 6.09% | 6.91% | 6.57% | 6.16% | 6.97% |
| % Type 2 Fail | -5.88% | -6.45% | -5.30% | -8.07% | -8.69% | -7.44% | | | |
| % Total Failure | 0.17% | 0.08% | 0.26% | -0.78% | -0.90% | -0.66% | -2.65% | -2.98% | -2.32% |
| Working | | | | | | | | | |

Appendix C3 - Single Server Paired T-Test Results Continued

| C2 - FCFS | Case 7 Avg | Case 7 L | Case 7 U | Case 8 Avg | Case 8 L | Case 8 U | Case 9 Avg | Case 9 L | Case 9 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | 5.08% | 4.73% | 5.44% | 2.14% | 1.89% | 2.39% | 3.17% | 2.80% | 3.55% |
| % Type 2 Fail | -7.45% | -8.16% | -6.74% | -2.77% | -3.20% | -2.34% | -3.96% | -4.46% | -3.46% |
| % Total Failure | -1.18% | -1.38% | -0.99% | -0.31% | -0.43% | -0.20% | -0.39% | -0.48% | -0.31% |
| Working | | | | | | | | | |

| C1 - FCFS | Case 7 Avg | Case 7 L | Case 7 U | Case 8 Avg | Case 8 L | Case 8 U | Case 9 Avg | Case 9 L | Case 9 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | -5.61% | -6.06% | -5.15% | -3.64% | -3.94% | -3.35% | -5.78% | -6.13% | -5.42% |
| % Type 2 Fail | 8.49% | 7.90% | 9.08% | 5.27% | 4.76% | 5.78% | 7.19% | 6.69% | 7.68% |
| % Total Failure | 1.44% | 1.32% | 1.57% | 0.81% | 0.70% | 0.93% | 0.71% | 0.63% | 0.78% |
| Working | | | | | | | | | |

| C2 - C1 | Case 7 Avg | Case 7 L | Case 7 U | Case 8 Avg | Case 8 L | Case 8 U | Case 9 Avg | Case 9 L | Case 9 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | 10.69% | 9.89% | 11.49% | 5.78% | 5.27% | 6.30% | 8.95% | 8.23% | 9.67% |
| % Type 2 Fail | | | | -8.04% | -8.94% | -7.14% | | | |
| % Total Failure | -2.63% | -2.88% | -2.37% | -1.13% | -1.33% | -0.93% | -1.10% | -1.24% | -0.96% |
| Working | | | | | | | | | |

Appendix C4 - Single Server Paired T-Test Results Continued

| C2 - FCFS | Case 10 Avg | Case 10 L | Case 10 U | Case 11 Avg | Case 11 L | Case 11 U | Case 12 Avg | Case 12 L | Case 12 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | 5.24% | 4.87% | 5.61% | 3.15% | 2.87% | 3.43% | 3.18% | 2.95% | 3.41% |
| % Type 2 Fail | -9.13% | -9.95% | -8.31% | -7.12% | -7.74% | -6.49% | -4.99% | -5.48% | -4.49% |
| % Total Failure | -1.95% | -2.17% | -1.72% | -1.98% | -2.16% | -1.80% | -0.90% | -1.06% | -0.75% |
| Working | | | | | | | | | |

| C1 - FCFS | Case 10 Avg | Case 10 L | Case 10 U | Case 11 Avg | Case 11 L | Case 11 U | Case 12 Avg | Case 12 L | Case 12 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | -3.37% | -3.62% | -3.12% | -2.32% | -2.46% | -2.17% | -2.04% | -2.12% | -1.97% |
| % Type 2 Fail | 6.04% | 5.71% | 6.38% | 5.25% | 4.91% | 5.60% | 3.44% | 3.28% | 3.60% |
| % Total Failure | 1.34% | 1.29% | 1.39% | 1.47% | 1.34% | 1.59% | 0.70% | 0.64% | 0.75% |
| Working | | | | | | | | | |

| C2 - C1 | Case 10 Avg | Case 10 L | Case 10 U | Case 11 Avg | Case 11 L | Case 11 U | Case 12 Avg | Case 12 L | Case 12 U |
|---|---|---|---|---|---|---|---|---|---|
| % Type 1 Fail | 8.60% | 7.99% | 9.22% | 5.47% | 5.07% | 5.87% | 5.23% | 4.92% | 5.53% |
| % Type 2 Fail | | | | | | | -8.42% | -9.05% | -7.79% |
| % Total Failure | -3.28% | -3.55% | -3.02% | -3.45% | -3.74% | -3.16% | -1.60% | -1.79% | -1.41% |
| Working | | | | | | | | | |

Appendix C5 - Single Server Paired T-Test Results Continued

| C2 - FCFS | Case 13 Avg | Case 13 L | Case 13 U |
|---|---|---|---|
| % Type 1 Fail | 2.70% | 1.23% | 4.16% |
| % Type 2 Fail | -1.69% | -2.33% | -1.06% |
| % Total Failure | 0.50% | 0.09% | 0.92% |
| Working | | | |

| C1 - FCFS | Case 13 Avg | Case 13 L | Case 13 U |
|---|---|---|---|
| % Type 1 Fail | -3.38% | -4.42% | -2.33% |
| % Type 2 Fail | 2.50% | 1.91% | 3.09% |
| % Total Failure | -0.44% | -0.67% | -0.21% |
| Working | | | |

| C2 - C1 | Case 13 Avg | Case 13 L | Case 13 U |
|---|---|---|---|
| % Type 1 Fail | 6.07% | 3.56% | 8.58% |
| % Type 2 Fail | -4.19% | -5.42% | -2.97% |
| % Total Failure | 0.94% | 0.29% | 1.59% |
| Working | | | |

Appendix D1 - 10 Server Paired T-Test Results

For cases 1-3: for each policy (FCFS, Priority C_A, Priority C_B), the rows % Type A Fail, % Type B Fail, % Total Failure, and Working are reported in columns 1 Server, 10 Server, and Difference.

Appendix D2 - 10 Server Paired T-Test Results Continued

For cases 4-6: same layout as Appendix D1.

Appendix D3 - 10 Server Paired T-Test Results Continued

For cases 7-9: same layout as Appendix D1.

Appendix D4 - 10 Server Paired T-Test Results Continued

For cases 10-12: same layout as Appendix D1.