NEW DISPATCHING RULES FOR SCHEDULING IN A JOB SHOP - AN EXPERIMENTAL STUDY


Oliver Holthaus
Faculty of Business Administration and Economics, Department of Production Management, University of Passau, Dr.-Hans-Kapfinger Strasse 3, 94032 Passau, Germany

and

Chandrasekharan Rajendran
Industrial Engineering and Management Division, Department of Humanities and Social Sciences, Indian Institute of Technology, Madras, India

ABSTRACT

We present two new dispatching rules for scheduling in a job shop. These rules combine the process-time and the work-content in the queue for the next operation on a job, making use of additive and alternative approaches. An extensive and rigorous simulation study has been carried out to evaluate the performance of the proposed dispatching rules against the SPT rule, the WINQ rule, a random rule based on the SPT and WINQ rules, and the best existing rule. The important aspects of the results of the experimental investigation are discussed in detail.

1. INTRODUCTION

Scheduling in a job shop is an important aspect of a shopfloor management system and can have a significant impact on the performance of the shopfloor. The job shop scheduling problem can be stated as follows: N jobs are to be processed by M machines or work stations within a given time period in such a way that given objectives are optimized. Each job consists of a specific set of operations which have to be processed according to a given technical precedence order (routing). If all jobs to be scheduled are available at the beginning of the scheduling process, the problem is called static; if the set of jobs to be processed is continuously changing over time, the problem

is called dynamic. In a deterministic problem all parameters are known with certainty. If at least one parameter is probabilistic (for instance, the release times of the jobs), the problem is called stochastic. The aim of the planning process is to find a schedule for processing all jobs that optimizes one or more goals (for instance, minimizing mean flowtime or minimizing mean tardiness). A variety of analytical models and optimization algorithms have been developed for solving static and dynamic deterministic problems (see Baker (1974), Lawler et al. (1982), Lawler (1983), Blazewicz et al. (1994), and Brucker (1995)). In theory it is possible to determine optimal schedules for static or dynamic deterministic scheduling problems. In practice the computation of optimal solutions is intractable, since these problems belong to the class of NP-hard problems (Rinnooy Kan (1976) and Brucker (1995)); therefore the existence of algorithms which are polynomially bounded in the problem size is very unlikely. Only for small problems (small values of M and N) is it possible to generate optimal schedules, using for example a branch-and-bound approach (Brucker et al. (1994)). The time required to calculate optimal processing orders for a job shop scheduling problem occurring in practice, however, would be prohibitive. Over the last decades, therefore, many heuristic methods based upon the shifting bottleneck procedure (Adams et al. (1988) and Balas et al. (1995)) or special solution strategies, such as simulated annealing (Van Laarhoven et al. (1992)), tabu search (Dell'Amico and Trubian (1993)), and genetic algorithms (Bierwirth (1995) and Dorndorf and Pesch (1995)), have been developed to solve larger problems within execution times of a few minutes. For a survey and comparison of these methods for deterministic scheduling problems, see Aarts et al. (1994), Blazewicz et al. (1994), and Brucker (1995). In dynamic stochastic job shops the jobs arrive continuously in time.
The release times, routings and processing times of the jobs are stochastic parameters and not known in advance. For the dynamic stochastic scheduling problems considered in this paper, it is neither theoretically nor practically possible to compute optimal schedules in advance; processing sequences on the various machines can be determined only for those jobs currently in the shop. The decision as to which job is to be loaded on a machine, when the machine becomes free, is normally made with the help of dispatching rules. Over the

years, many dispatching rules have been proposed (see Blackstone et al. (1982), Haupt (1989), and Ramasesh (1990)). No rule has been found to perform well for all important criteria, e.g. mean flowtime and mean tardiness; the choice of a dispatching rule depends on which criterion is to be improved. In general, it has been observed that process-time based rules fare better under tight load conditions, while due-date based rules perform better under light load conditions (Conway (1965), Rochette and Sadowski (1976), and Blackstone et al. (1982)). A job shop can be classified as an open shop or a closed shop, depending upon the way in which jobs are routed through the shop. In a closed shop, the number of routings available to a job is fixed and an arriving job follows one of the available routings. In an open shop, there is no limitation on the routing of a job and each job can have a different routing. In this paper we consider an open shop, and present two new dispatching rules that make use of the process-time and the work-content of jobs in the queue for the next operation. These rules are based on additive and alternative strategies.

2. LITERATURE REVIEW

It is common practice to incorporate some standard assumptions in the job shop model in simulation studies (see Baker (1974), Blackstone et al. (1982), and Haupt (1989)). The assumptions made in our simulation study are listed below:

1. Each machine can perform only one operation at a time on any job.
2. A job, once taken up for processing, must be completed before another job can be taken up, i.e. job preemption is not allowed.
3. No two successive operations on a job can be performed on the same machine.
4. An operation on a job cannot be performed until all previous operations on the job are completed.
5. There are no limiting resources other than the machines.
6. There are no alternate routings.
7.
There are no interruptions on the shopfloor, e.g. no machine breakdowns.
8. The jobs are independent of each other; no assembly is involved.
9. There are no parallel machines.
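A job satisfying these assumptions can be represented very compactly: a fixed routing and one process-time per operation. The sketch below (Python; the field names are hypothetical, not from the paper) encodes such a job and checks assumption 3, that no two successive operations visit the same machine:

```python
from dataclasses import dataclass

@dataclass
class Job:
    """One job-order: a fixed routing and one process-time per operation."""
    routing: list        # machine indices, in technological (precedence) order
    proc_times: list     # process time of each operation, same length as routing
    arrival_time: float = 0.0
    next_op: int = 0     # index of the next unprocessed operation

    def satisfies_assumption_3(self) -> bool:
        # No two successive operations on the same machine.
        return all(a != b for a, b in zip(self.routing, self.routing[1:]))
```

Because preemption is not allowed and there are no alternate routings (assumptions 2 and 6), this static record is all the state a dispatching rule ever needs about a job, apart from its current queue position.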

Dispatching rules can be classified in a number of ways (Haupt (1989)). One such classification is as follows: 1. process-time based rules, 2. due-date based rules, 3. combination rules, and 4. rules that are neither process-time based nor due-date based. The shortest processing time (SPT) rule is an example of a process-time based rule. Process-time based rules ignore the due-date information on jobs. The SPT rule has been found to minimize the mean flowtime, and good performance with respect to the mean tardiness objective has also been observed under highly loaded shop conditions (Conway (1965), Rochette and Sadowski (1976), Blackstone et al. (1982), and Haupt (1989)). Due-date based rules schedule the jobs based on their due-date information; an example is the earliest due-date (EDD) rule. In general, due-date based rules give good results under light load conditions, but their performance deteriorates under high load levels (Ramasesh (1990)). Combination rules make use of both process-time and due-date information, e.g. the Least Slack rule, the Critical Ratio rule, etc. (Blackstone et al. (1982)). The rules which do not fall into any of these categories load the jobs depending on shopfloor conditions rather than on the characteristics of the jobs; an example is the WINQ rule (total work-content of jobs in the queue for the next operation on a job) (Haupt (1989)). Researchers have also propounded the idea of combining simple priority rules to form more complex rules. In such a case, two procedures are standard: the additive approach and the alternative approach. The additive (or weighted combination) approach determines the priority by computing the expression

Z_i = Σ_{f=1}^{g} α_f (Q_f)_i,

where (Q_f)_i denotes the priority value of a simple priority rule f for job i, f = 1, 2, ..., g, α_f denotes the weight (or coefficient) of rule f, with α_f > 0, and Z_i is the resultant priority index for job i.
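The additive combination above can be sketched in a few lines of code. The sketch below is illustrative only: the two component rules (process time of the imminent operation, and work content in the next queue) and their accessors are hypothetical stand-ins for whatever (Q_f)_i a shop actually uses.

```python
def additive_priority(job, rules, weights):
    """Z_i = sum over f of alpha_f * (Q_f)_i; load the job with the smallest Z_i."""
    return sum(alpha * rule(job) for rule, alpha in zip(rules, weights))

# Two hypothetical simple rules as accessors on a job record:
spt  = lambda job: job["proc_time"]        # process time on the current machine
winq = lambda job: job["next_queue_work"]  # work content in the next queue

queue = [{"proc_time": 4, "next_queue_work": 10},
         {"proc_time": 2, "next_queue_work": 30}]
best = min(queue, key=lambda j: additive_priority(j, [spt, winq], [1.0, 1.0]))
```

With equal weights the first job wins here (4 + 10 = 14 versus 2 + 30 = 32), which already illustrates the sensitivity to the α_f values discussed next: different weights can reverse the choice.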
The disadvantage of this approach is that the priority index Z_i is sensitive to the values of α_f, and that one has to resort to search algorithms to determine the best values of α_f (O'Grady and Harrison (1985)). Moreover, in using past data and search algorithms to set the values of α_f, we assume that the shop will have similar characteristics in the future, e.g. arrival pattern,

service rate, etc. The alternative (or hierarchical) approach is based on a conditional procedure; typically, only two simple rules are combined in such an expression. A detailed discussion of various dispatching procedures and rules can be found in Blackstone et al. (1982), Haupt (1989), and Ramasesh (1990). Some of the noteworthy dispatching rules that have recently been proposed and extensively evaluated are due to Baker and Kanet (1983), Vepsalainen and Morton (1987), Anderson and Nyirenda (1990), and Raghu and Rajendran (1993). Among these rules, the rule due to Raghu and Rajendran (1993) is the best for minimizing mean flowtime and mean tardiness, while the SPT rule is still the most effective with respect to minimizing the number of tardy jobs.

3. DEVELOPMENT OF THE PROPOSED RULES

The motivation for the present study has been the results of past simulation studies. It is generally observed that the SPT rule performs very well under tight load conditions, and that rules such as the WINQ rule tend to give preference to jobs that move on to queues with the least backlog, rather than speeding up a job now that could later be held up in a congested queue (Haupt (1989)). The SPT rule is found to be quite effective with respect to the measures of mean flowtime and mean tardiness. We also observe that rules such as WINQ can help reduce the waiting time of jobs by making use of shopfloor information about other machines as well. These observations have been the guiding principles for the development of our new dispatching rules.

3.1 Dispatching rule based on additive strategy

A simple rule that we formulate on the basis of the additive strategy is the sum of the process-time and the total work-content of jobs in the queue for the next operation on a job. To explain, suppose that there are three jobs, job 1, job 3 and job 4, waiting in the queue at machine 1.
Assume that jobs 1 and 4 go to machine 3 for their subsequent operation, and that job 3 goes next to machine 2. Let the total work-content of jobs (for their operation on machine 2) currently in the queue at machine 2 be TWKQ_2, and the total work-content of jobs (for their operation on machine 3) currently in the queue at machine 3 be TWKQ_3. Suppose the process-times of jobs 1, 3 and 4 on machine 1 are

t_11, t_31, and t_41 respectively. Suppose machine 1 becomes free and we need to choose a job for loading on to it. Rule 1, proposed in this paper on the basis of the additive strategy, assigns the priority index Z_i for job i as follows:

Z_1 = t_11 + TWKQ_3 for job 1, because job 1 goes to machine 3 for its next operation,
Z_3 = t_31 + TWKQ_2 for job 3, because job 3 goes to machine 2 for its next operation, and
Z_4 = t_41 + TWKQ_3 for job 4, because job 4 goes to machine 3 for its next operation.

According to the proposed rule, we choose the job with the least Z_i value.

3.2 Dispatching rule based on alternative approach

The second rule proposed in this paper, called Rule 2, seeks to use the process-time and the total work-content of jobs in the queue for the next operation on a job on the basis of the alternative (or hierarchical) strategy. For clarity, we present this rule with the aid of the numerical illustration of Section 3.1. In addition to the information furnished there, we assume that R_2 and R_3 denote the times at which machines 2 and 3, respectively, become free after processing the jobs currently loaded on them. Suppose machine 1 becomes free at the present instant, denoted by T. Rule 2 works as follows:

Step 1: /* Jobs 1, 3 and 4 are in the queue at machine 1 */
Step 2: Check if
T + t_11 ≥ R_3 + TWKQ_3 for job 1, because job 1 goes to machine 3 for its next operation,
T + t_31 ≥ R_2 + TWKQ_2 for job 3, because job 3 joins the queue at machine 2 for its next operation, and
T + t_41 ≥ R_3 + TWKQ_3 for job 4, because job 4 goes to machine 3 for its next operation. (1)
/* We check if a job, after getting processed on the current machine, will not wait in the queue for its next operation. */
Form a set ψ with the jobs that satisfy Exp. (1).

If ψ is not an empty set, then choose the job from set ψ that has the shortest process-time on machine 1 /* in this case, the SPT rule is operational for the jobs in the set ψ */; else choose the job, from all jobs in the queue at machine 1, that will join the queue at the machine with the least work-content, i.e. the job that will join the queue corresponding to min{TWKQ_2, TWKQ_3} /* in this case, the WINQ rule is operational for all jobs in the queue at machine 1 */.

4. EXPERIMENTAL EVALUATION OF RULES BY SIMULATION

Since the proposed dispatching rules are based on additive and alternative strategies combining the SPT and WINQ rules, and the rule by Raghu and Rajendran (1993) is the best rule reported so far, we consider the SPT, WINQ and RR (Raghu and Rajendran) rules, in addition to the two proposed rules, for experimental evaluation. We have also introduced another rule, viz. that of randomly choosing between the SPT and WINQ rules (RAN) for making a scheduling decision; we believe that such a choice combines the advantages of both the SPT and WINQ rules, albeit in a random fashion. The measures of performance are mean flowtime, mean tardiness, and the percentage of tardy jobs.

4.1 Experimental conditions

The simulation experiment has been conducted in an open shop configuration consisting of 10 machines. The routing for each order is different and generated randomly, with every machine having an equal probability of being chosen. The number of operations for each job is uniformly distributed between 4 and 10. The process-times are drawn from rectangular (uniform) distributions; three process-time distributions are used: [1, 5], [1, 10], and [1, 50].
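As a concrete companion to the verbal description in Section 3, the dispatching logic of the two proposed rules can be sketched as follows. This is a minimal sketch with hypothetical data structures, not the authors' code: `t[i]` is the process time of job i on the current machine, `nxt[i]` its next machine, `TWKQ[m]` the work content queued at machine m, `R[m]` the time machine m becomes free, and `T` the present instant.

```python
def rule1(queue, t, nxt, TWKQ):
    """Rule 1 (additive): pick the job minimizing t_i + work content at its next machine."""
    return min(queue, key=lambda i: t[i] + TWKQ[nxt[i]])

def rule2(queue, t, nxt, TWKQ, R, T):
    """Rule 2 (alternative): SPT among the jobs that would not wait at their
    next machine; if no such job exists, fall back to WINQ over the whole queue."""
    # Exp. (1): job i will not wait if it arrives at its next machine no earlier
    # than the time that machine has cleared its current load and queued work.
    psi = [i for i in queue if T + t[i] >= R[nxt[i]] + TWKQ[nxt[i]]]
    if psi:
        return min(psi, key=lambda i: t[i])        # SPT within the set psi
    return min(queue, key=lambda i: TWKQ[nxt[i]])  # WINQ over all queued jobs
```

With the numbers of the illustration (say t_11 = 5, t_31 = 2, t_41 = 3, TWKQ_2 = 10, TWKQ_3 = 4), Rule 1 computes Z_1 = 9, Z_3 = 12, Z_4 = 7 and loads job 4; Rule 2's choice additionally depends on R_2, R_3 and T.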

The total work-content (TWK) method of due-date setting (Blackstone et al. (1982)) is used in all the experiments, with allowance factors c of 3, 4 and 5. The job arrivals are generated using an exponential distribution. Six machine-utilization levels U_g are tested in the experiments, viz. 60%, 70%, 80%, 85%, 90%, and 95%. Thus, in all, there are three types of process-time distributions, three different due-date settings and six different utilization levels, making a total of 54 simulation experiment sets for every dispatching rule. Each simulation experiment consists of twenty different runs (or replications). In each run, the shop is continuously loaded with job-orders that are numbered on arrival.

4.2 Steady-state condition of the shop

In order to ascertain when the system reaches the steady state, we have continuously observed shop parameters such as the utilization level of machines, the mean flowtime of jobs, etc. It has been found that the shop reaches a steady state by the time 500 job-orders are completed.

4.3 Run length and number of replications

Typically, the total sample size in simulation studies of job shop scheduling is of the order of thousands of job completions (Conway et al. (1960), and Blackstone et al. (1982)). For a given total sample size, it is preferable to have a smaller number of replications and a larger run length, and the recommended number of replications is about 10 (Law and Kelton (1984)). The method suggested by Fishman (1971) has been taken as a guideline in the present study to fix the total sample size. Following these guidelines, we have fixed the number of replications at 20, with the run length for every replication being 2000 completed job-orders.
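The TWK due-date setting and the warm-up truncation described in this section can be sketched as follows (hypothetical function and field names; `c` is the allowance factor, and the order-number bounds assume the truncation window described in Section 4.3):

```python
def twk_due_date(arrival_time, proc_times, c):
    """TWK method: due date = arrival time + c * total work content of the job."""
    return arrival_time + c * sum(proc_times)

def collect_sample(completed_orders, first=501, last=2500):
    """Keep only the orders inside the truncation window, discarding the
    warm-up completions so that statistics are collected in steady state."""
    return [o for o in completed_orders if first <= o["number"] <= last]
```

For example, a job arriving at time 10 with process times 2, 3 and 5 and allowance factor c = 3 receives the due date 10 + 3 * 10 = 40.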
The statistical analysis of the experimental data, using single-factor ANOVA with a block design (common random numbers for one block) and Duncan's multiple range test (Montgomery (1991) and Lorenzen and Anderson (1993)), has shown that this sample size yields a variance which results in a Type I error of at most 1%. When collecting data from a given replication, we have considered the orders numbered from 501 to 2500 for statistical computation purposes,

and the shop is loaded with jobs until these 2000 numbered job-orders are completed. This helps in overcoming the problem of 'censored data' (Conway (1965)).

5. RESULTS AND DISCUSSION

The results of the simulation study are presented in Figures 1 to 5. The results presented are obtained by taking the mean of the mean values of the twenty replications. Figure 1 shows the mean percentage increase in mean flowtime for all rules relative to the best performing rule, as a function of the utilization level and the process-time distribution. To explain further, we compute the relative percentage increase in mean flowtime as follows. Let F_k denote the mean flowtime of jobs due to the application of rule k, where k = 1 indicates the SPT rule, k = 2 the WINQ rule, k = 3 the RAN rule (random choice between the SPT and WINQ rules), k = 4 the RR rule, k = 5 Rule 1 proposed in this paper based on the additive strategy, and k = 6 Rule 2 proposed in this paper based on the alternative approach. The relative percentage increase in mean flowtime for rule k with respect to the best performing rule is given by

((F_k − min{F_j : 1 ≤ j ≤ 6}) / min{F_j : 1 ≤ j ≤ 6}) × 100.

This computation has been done with respect to three different values of the allowance factor, viz. c = 3, 4 and 5. The results are presented in Figure 1. It is to be noted that no rule except the RR rule makes use of information on the due-dates of the jobs, and hence the mean flowtime of every rule except the RR rule remains the same for all values of the allowance factor. The results are quite interesting. For each combination of process-time distribution and utilization level, Rule 1, proposed in this paper, emerges as the most effective in minimizing mean flowtime. We have conducted statistical tests (with a Type I error of at

most 1%) to determine if the performance of a rule is significantly better than that of the others. We present only the important findings. For minimizing mean flowtime, the performance of Rule 1 is significantly better than that of all other rules, except in the case of U_g = 60%, where the mean flowtime of Rule 1 is smaller than those of the other rules, though not statistically significantly so. In addition, the relative performance of the proposed rule improves as the shop-load level increases. These findings are quite interesting and significant because the existing literature states that SPT is an efficient rule for minimizing mean flowtime. The proposed Rule 1 exploits the goodness of both the SPT and WINQ rules in that we not only seek to maximize the throughput at the current machine, but also seek to load a job which will not wait for a long time at subsequent operations. Considering Rule 2 and the process-time distribution [1, 5], this rule fares better than the SPT and WINQ rules. For the process-time distributions [1, 5] and [1, 10], the performance of Rule 2 is significantly better than that of the WINQ rule at all levels of utilization. In addition, at the high shop-load level of 95%, the mean flowtime of Rule 2 is significantly smaller than that of the SPT rule. The performance of the RAN rule is also quite interesting: as the shop-load level is increased to 90% or 95%, the random choice between the SPT and WINQ rules is significantly better than the sole use of SPT or WINQ. Figures 2 and 3 show the mean tardiness (in time units, tu) for all rules with respect to different utilization levels (Figure 2: 60%, 70%, and 80%; Figure 3: 85%, 90%, and 95%), process-time distributions ([1, 5], [1, 10], and [1, 50]), and values of the allowance factor c (3, 4, and 5). While the performance of Rule 1 is significantly better than that of the SPT and WINQ rules, Rule 1 fares only sometimes better than the RR rule in minimizing mean tardiness.
Most of the time, the RR rule is significantly better than all other rules for minimizing mean tardiness. Since the RR rule incorporates information about slack (and hence the due-date), it performs better at minimizing mean tardiness. Only at high utilization levels (U_g = 90% or U_g = 95%) is Rule 1 quite good at minimizing mean tardiness. These aspects are typical of process-time based rules. Future research could be directed towards the development of dispatching rules that include information on the process-time, the due-date, and the work-content of jobs in the queue for the next operation on a job. Such an attempt at the development of a rule could result in superior performance in minimizing both mean

flowtime and mean tardiness. It is also quite interesting to observe that the rule based on the random choice between the SPT rule and the WINQ rule is better than the rules considered individually, for minimizing mean flowtime and mean tardiness; such a rule seeks to combine the advantages of both the SPT and WINQ rules, albeit randomly. Figures 4 and 5 show the percentage of tardy jobs for all rules for different utilization levels (Figure 4: 60%, 70%, and 80%; Figure 5: 85%, 90%, and 95%), process-time distributions ([1, 5], [1, 10], and [1, 50]), and values of the allowance factor c (3, 4, and 5). For lower shop-load levels (and/or larger allowance factors) the performance of the RR rule is significantly better than that of all other rules. As the utilization level increases, the performance of the process-time based rules (SPT, RAN, Rule 1, and Rule 2) improves, and these rules are mostly superior to the RR rule in minimizing the percentage of tardy jobs. In addition, the SPT rule is the most effective for this objective.

6. CONCLUSION

In this paper, we have proposed two new dispatching rules for scheduling in a job shop. These rules are based on additive and alternative strategies, and make use of information about the process-time and the total work-content of jobs in the queue for the next operation on a job. An extensive simulation experiment has been carried out to evaluate the performance of various dispatching rules. It has been found that no single rule is effective in minimizing all measures of performance. One of the proposed rules, viz. Rule 1, which combines the process-time and WINQ on the basis of the additive strategy, is found to be quite effective in minimizing the mean flowtime.
The results also indicate that future research could be directed towards the development of rules that include information about the process-time, the total work-content of jobs in the queue for the next operation on a job, and the due-date, so as to minimize simultaneously as many measures of performance as possible.

ACKNOWLEDGEMENT

This research was carried out when the second author was at the University of Passau on a Short-term Invitation Program, and was supported by the Deutscher Akademischer Austauschdienst e.V., Bonn.

REFERENCES

Adams, J., Balas, E., and Zawack, D., 1988, The shifting bottleneck procedure for job shop scheduling, Management Science, 34.
Anderson, E.J., and Nyirenda, J.C., 1990, Two new rules to minimize tardiness in a job shop, International Journal of Production Research, 28.
Aarts, E.H.L., Van Laarhoven, P.J.M., Lenstra, J.K., and Ulder, N.L.J., 1994, A computational study of local search algorithms for job shop scheduling, ORSA Journal on Computing, 6.
Baker, K.R., 1974, Introduction to Sequencing and Scheduling, Wiley, New York.
Baker, K.R., and Kanet, J.J., 1983, Job shop scheduling with modified due dates, Journal of Operations Management, 4.
Balas, E., Lenstra, J.K., and Vazacopoulos, A., 1995, The one-machine problem with delayed precedence constraints and its use in job shop scheduling, Management Science, 41.
Bierwirth, C., 1995, A generalized permutation approach to job shop scheduling with genetic algorithms, OR Spektrum, 17.
Blackstone, J.H., Phillips, D.T., and Hogg, G.L., 1982, A state-of-the-art survey of dispatching rules for manufacturing job shop operations, International Journal of Production Research, 20.
Blazewicz, J., Ecker, K., Schmidt, G., and Weglarz, J., 1994, Scheduling in Computer and Manufacturing Systems, 2nd edition, Springer, Berlin.
Brucker, P., 1995, Scheduling Algorithms, Springer, Berlin.
Brucker, P., Jurisch, B., and Sievers, B., 1994, A branch & bound algorithm for the job-shop problem, Discrete Applied Mathematics, 49.
Conway, R.W., Johnson, B.M., and Maxwell, W.L., 1960, An experimental investigation of priority dispatching, Journal of Industrial Engineering, 11.
Conway, R.W., 1965, Priority dispatching and job lateness in a job shop, Journal of Industrial Engineering, 16.
Dell'Amico, M., and Trubian, M., 1993, Applying tabu search to the job-shop scheduling problem, Annals of Operations Research, 41.
Dorndorf, U., and Pesch, E., 1995, Evolution based learning in a job shop scheduling environment, Computers and Operations Research, 22.

Fishman, G.S., 1971, Estimating sample size in computing simulation experiments, Management Science, 18.
O'Grady, P.J., and Harrison, C., 1985, A general search sequencing rule for job shop sequencing, International Journal of Production Research, 23.
Haupt, R., 1989, A survey of priority rule-based scheduling, OR Spektrum, 11.
Law, A.M., and Kelton, W.D., 1984, Confidence intervals for steady state simulation: I. A survey of fixed sample size procedures, Operations Research, 32.
Lawler, E.L., 1983, Recent results in the theory of machine scheduling, in: Bachem, A., Grötschel, M., Korte, B. (eds), Mathematical Programming: The State of the Art, Springer, Berlin.
Lawler, E.L., Lenstra, J.K., and Rinnooy Kan, A.H.G., 1982, Recent developments in deterministic sequencing and scheduling: a survey, in: Dempster, M.A.H., Lenstra, J.K., Rinnooy Kan, A.H.G. (eds), Deterministic and Stochastic Scheduling, Reidel, Dordrecht.
Lorenzen, T.J., and Anderson, V.L., 1993, Design of Experiments: A No-Name Approach, Marcel Dekker, New York.
Montgomery, D.C., 1991, Design and Analysis of Experiments, John Wiley & Sons, New York.
Raghu, T.S., and Rajendran, C., 1993, An efficient dynamic dispatching rule for scheduling in a job shop, International Journal of Production Economics, 32.
Ramasesh, R., 1990, Dynamic job shop scheduling: A survey of simulation research, OMEGA, 18.
Rinnooy Kan, A.H.G., 1976, Machine Scheduling Problems, Martinus Nijhoff, The Hague.
Rochette, R., and Sadowski, R.P., 1976, A statistical comparison of the performance of simple dispatching rules for a particular set of job shops, International Journal of Production Research, 14.
Van Laarhoven, P.J.M., Aarts, E.H.L., and Lenstra, J.K., 1992, Job shop scheduling by simulated annealing, Operations Research, 40.
Vepsalainen, A.P.J., and Morton, T.E., 1987, Priority rules for job shops with weighted tardiness costs, Management Science, 33.

Figure 1: Relative percentage increase in mean flowtime of rules for different utilization levels and process-time distributions. [Chart not reproduced; y-axis: relative percentage increase in mean flowtime [%]; x-axis: utilization level [%]; one panel per range of processing times; series: SPT, WINQ, RAN, RR (c = 3, 4, 5), Rule 1, Rule 2.]

Figure 2: Mean tardiness of rules for different process-time distributions, utilization levels, and values of the allowance factor c. [Chart not reproduced; y-axis: mean tardiness [tu]; panels: utilization levels 60%, 70%, and 80%; x-axis groups: distributions [1, 5], [1, 10], [1, 50] for c = 3, 4, 5; series: SPT, WINQ, RAN, RR, Rule 1, Rule 2.]

Figure 3: Mean tardiness of rules for different process-time distributions, utilization levels, and values of the allowance factor c. [Chart not reproduced; y-axis: mean tardiness [tu]; panels: utilization levels 85%, 90%, and 95%; x-axis groups: distributions [1, 5], [1, 10], [1, 50] for c = 3, 4, 5; series: SPT, WINQ, RAN, RR, Rule 1, Rule 2.]

Figure 4: Percentage of tardy jobs of rules for different process-time distributions, utilization levels, and values of the allowance factor c. [Chart not reproduced; y-axis: percentage of tardy jobs [%]; panels: utilization levels 60%, 70%, and 80%; x-axis groups: distributions [1, 5], [1, 10], [1, 50] for c = 3, 4, 5; series: SPT, WINQ, RAN, RR, Rule 1, Rule 2.]

Figure 5: Percentage of tardy jobs of rules for different process-time distributions, utilization levels, and values of the allowance factor c. [Chart not reproduced; y-axis: percentage of tardy jobs [%]; panels: utilization levels 85%, 90%, and 95%; x-axis groups: distributions [1, 5], [1, 10], [1, 50] for c = 3, 4, 5; series: SPT, WINQ, RAN, RR, Rule 1, Rule 2.]