
Chapter 6: CPU Scheduling

Basic Concepts
  o CPU-I/O Burst Cycle
  o CPU Scheduler
  o Preemptive Scheduling
  o Dispatcher
Scheduling Criteria
Scheduling Algorithms
  o First-Come, First-Served Scheduling
  o Shortest-Job-First Scheduling
  o Priority Scheduling
  o Round-Robin Scheduling
  o Multilevel Queue Scheduling
  o Multilevel Feedback Queue Scheduling
Multiple-Processor Scheduling
Real-Time Scheduling
Algorithm Evaluation
  o Deterministic Modeling
  o Queuing Models
  o Simulations
  o Implementation
Process Scheduling Models
  o An Example: Solaris 2
  o An Example: Windows 2000
  o An Example: Linux

6.1 Basic Concepts

The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. In a uniprocessor system, only one process may be running at a time. The idea of multiprogramming:
  o Several processes are kept in memory at one time.
  o A process executes until it must wait.
  o The operating system then gives the CPU to another process.

6.1.1 CPU-I/O Burst Cycle

Figure 6.1: Alternating sequence of CPU and I/O bursts.

Process execution consists of a cycle of CPU execution and I/O wait (Figure 6.1). Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, and so on. The last CPU burst ends with a system request to terminate execution.

Figure 6.2: Histogram of CPU-burst times.

The durations of CPU bursts have been measured (Figure 6.2). An I/O-bound program typically has many very short CPU bursts, while a CPU-bound program might have a few very long CPU bursts. This distribution can help us select an appropriate CPU-scheduling algorithm.

6.1.2 CPU Scheduler

Whenever the CPU becomes idle, the short-term scheduler (or CPU scheduler) must select one of the processes in the ready queue to be executed. The ready queue is not necessarily a first-in, first-out (FIFO) queue; it can be implemented as a FIFO queue, a priority queue, a tree, or an unordered linked list. The ready queue holds the process control blocks (PCBs) of the processes.

6.1.3 Preemptive Scheduling

CPU-scheduling decisions may take place when a process:
  o Switches from the running to the waiting state.
  o Switches from the running to the ready state.
  o Switches from the waiting to the ready state.
  o Terminates.

Nonpreemptive scheduling:
  o CPU scheduling takes place only under the first and last circumstances.
  o Once the CPU has been allocated to a process, the process keeps the CPU until it terminates or switches to the waiting state.

Preemptive scheduling:
  o CPU scheduling takes place under any of the four circumstances.

6.1.4 Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. Its function involves:
  o Switching context.
  o Switching to user mode.
  o Jumping to the proper location in the user program to restart that program.

The dispatcher should be fast, as it is invoked during every process switch. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.

6.2 Scheduling Criteria

Criteria for comparing CPU-scheduling algorithms include:
  o CPU utilization: keep the CPU as busy as possible.
  o Throughput: the number of processes completed per time unit.
  o Turnaround time: the interval from the time of submission of a process to the time of its completion.
  o Waiting time: the amount of time a process spends waiting in the ready queue.
  o Response time: the amount of time it takes to start responding after a request is submitted, not the time it takes to output the response.

CPU-scheduling algorithms should maximize CPU utilization and throughput, and minimize turnaround time, waiting time, and response time.

6.3 Scheduling Algorithms

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

6.3.1 First-Come, First-Served (FCFS) Scheduling

The process that requests the CPU first is allocated the CPU first. The ready queue is implemented as a FIFO queue: when a process enters the ready queue, its PCB is linked onto the tail of the queue; when the CPU is free, it is allocated to the process at the head of the queue, and the running process is removed from the ready queue.

Example:

  Process   Burst Time
  P1        24
  P2        3
  P3        3

  o Suppose that the processes arrive in the order P1, P2, P3.
  o The Gantt chart for the schedule is:

    | P1 | P2 | P3 |
    0    24   27   30

  o Waiting time for P1 = 0, P2 = 24, P3 = 27.
  o Average waiting time: (0 + 24 + 27) / 3 = 17 milliseconds.

FCFS scheduling is nonpreemptive.

Advantage:
  o The code is simple to write and understand.

Disadvantages:
  o The average waiting time is often quite long.
  o Troublesome for timesharing systems.
  o Convoy effect: occurs when all other processes wait for one big process to get off the CPU, resulting in lower CPU and device utilization.
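The FCFS arithmetic above can be sketched in a few lines. This is a minimal illustration, not part of the original notes; it assumes all processes arrive at time 0 in the order given, and the function name is made up for this example.

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, assuming all arrive at time 0."""
    waits = []
    clock = 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier ones finish
        clock += burst
    return waits

bursts = [24, 3, 3]                    # P1, P2, P3 from the example
waits = fcfs_waiting_times(bursts)
print(waits)                           # [0, 24, 27]
print(sum(waits) / len(waits))         # 17.0
```

Running the same function with the arrival order P2, P3, P1 (`[3, 3, 24]`) gives waits of 0, 3, and 6, illustrating how sensitive FCFS is to the order in which long bursts arrive.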

6.3.2 Shortest-Job-First (SJF) Scheduling

This algorithm associates with each process the length of its next CPU burst. When the CPU is available, it is assigned to the process with the smallest next CPU burst. If two processes have next CPU bursts of the same length, FCFS scheduling is used to break the tie. A more appropriate term is shortest-next-CPU-burst, because scheduling depends on the length of the next CPU burst of a process, rather than its total length.

Two schemes:
  o Nonpreemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
  o Preemptive: if a new process arrives at the ready queue with a CPU-burst length less than the remaining time of the currently executing process, the executing process is preempted. Preemptive SJF is known as Shortest-Remaining-Time-First (SRTF).

Example:

  Process   Arrival Time   Burst Time
  P1        0.0            7
  P2        2.0            4
  P3        4.0            1
  P4        5.0            4

  o Nonpreemptive SJF:

    | P1 | P3 | P2 | P4 |
    0    7    8    12   16

    Average waiting time = (0 + 6 + 3 + 7) / 4 = 4 milliseconds.

  o Preemptive SJF:

    | P1 | P2 | P3 | P2 | P4 | P1 |
    0    2    4    5    7    11   16

    Average waiting time = (9 + 1 + 0 + 2) / 4 = 3 milliseconds.
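The nonpreemptive SJF schedule above can be reproduced with a short simulation. This is an illustrative sketch (the function name is hypothetical); ties in burst length fall back to list order, which here matches FCFS order.

```python
def sjf_waiting_times(procs):
    """Nonpreemptive SJF. procs: list of (name, arrival, burst).
    Returns {name: waiting_time}."""
    procs = list(procs)
    clock = 0
    waits = {}
    while procs:
        ready = [p for p in procs if p[1] <= clock]
        if not ready:                          # CPU idles until the next arrival
            clock = min(p[1] for p in procs)
            continue
        # pick the shortest next CPU burst; min() keeps list order on ties
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waits[name] = clock - arrival
        clock += burst
        procs.remove((name, arrival, burst))
    return waits

procs = [("P1", 0.0, 7), ("P2", 2.0, 4), ("P3", 4.0, 1), ("P4", 5.0, 4)]
waits = sjf_waiting_times(procs)
print(sum(waits.values()) / len(waits))        # 4.0
```

Tracing it by hand: P1 runs 0-7 (it is the only arrival at time 0), then P3 (burst 1) at 7-8, P2 at 8-12, and P4 at 12-16, matching the Gantt chart.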

The SJF scheduling algorithm can be used in long-term scheduling in a batch system, using the process time limits specified by the user when submitting the job.

Advantage:
  o SJF scheduling is optimal: it gives the minimum average waiting time.

Disadvantage:
  o It cannot be implemented at the level of short-term scheduling, because there is no way to know the length of the next CPU burst.

The length of the next CPU burst can, however, be predicted as an exponential average of the measured lengths of previous CPU bursts:

  τ_{n+1} = α t_n + (1 − α) τ_n

where:
  o 0 ≤ α ≤ 1
  o t_n is the length of the nth CPU burst.
  o τ_n represents the past history (the previous prediction).
  o τ_{n+1} is the predicted value of the next CPU burst.

Figure 6.3: Prediction of the length of the next CPU burst (for α = 1/2, τ_0 = 10).
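The exponential-average recurrence is easy to compute. The sketch below uses α = 1/2 and τ_0 = 10 as in Figure 6.3; the burst sequence is a plausible illustration, and the function name is made up for this example.

```python
def predict(bursts, alpha=0.5, tau0=10.0):
    """Exponential-average predictor: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n.
    Returns the prediction before each burst, starting with tau0."""
    tau = tau0
    preds = [tau]
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
        preds.append(tau)
    return preds

# Measured bursts 6, 4, 6, 4, 13, 13, 13 give predictions
# 10 -> 8 -> 6 -> 6 -> 5 -> 9 -> 11 -> 12
print(predict([6, 4, 6, 4, 13, 13, 13]))
```

With α = 1/2 the recent burst and the accumulated history carry equal weight; α = 0 ignores measurements entirely, while α = 1 predicts the next burst to equal the last one.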

6.3.3 Priority Scheduling

A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. The SJF algorithm is a special case of the general priority-scheduling algorithm, in which the larger the next CPU burst, the lower the priority.

Priorities are represented by numbers; here, lower numbers represent higher priority. Priorities can be defined either:
  o Internally: using measurable quantities to compute the priority of a process, for example time limits, memory requirements, the number of open files, or the ratio of average I/O burst to average CPU burst.
  o Externally: set by criteria external to the operating system, such as the importance of a process.

Priority scheduling can be either preemptive or nonpreemptive.

A problem with priority scheduling is starvation; the solution is aging.
  o Indefinite blocking (starvation): low-priority processes may never execute, because higher-priority processes can prevent them from ever getting the CPU.
  o Aging: a technique of gradually increasing the priority of processes that wait in the system for a long time.

Example:

  Process   Burst Time   Priority
  P1        10           3
  P2        1            1
  P3        2            4
  P4        1            5
  P5        5            2

  | P2 | P5 | P1 | P3 | P4 |
  0    1    6    16   18   19

  o Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 milliseconds.
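The priority example above reduces to sorting by priority when every process arrives at time 0. A minimal sketch under that assumption (the function name is illustrative):

```python
def priority_waiting_times(procs):
    """Nonpreemptive priority scheduling; all processes arrive at time 0.
    procs: list of (name, burst, priority), lower number = higher priority.
    Returns {name: waiting_time}."""
    order = sorted(procs, key=lambda p: p[2])  # highest priority first;
    waits, clock = {}, 0                       # sorted() is stable, so ties keep FCFS order
    for name, burst, _prio in order:
        waits[name] = clock
        clock += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
waits = priority_waiting_times(procs)
print(sum(waits.values()) / len(waits))        # 8.2
```

Note that this sketch has no aging: a stream of high-priority arrivals would starve P4 indefinitely, which is exactly the starvation problem described above.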

6.3.4 Round-Robin (RR) Scheduling

Designed especially for timesharing systems. Similar to FCFS scheduling, but preemption is added to switch between processes. A time quantum (or time slice) is a small unit of time. The ready queue is treated as a circular queue: the CPU scheduler goes around it, allocating the CPU to each process for an interval of up to one time quantum. The ready queue is implemented as a FIFO queue, and new processes are added to the tail.

How it works:
  o The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.
  o Then, either:
    - The process has a CPU burst of less than one time quantum, so the process itself releases the CPU after finishing its burst.
    - The process has a CPU burst longer than one time quantum, so the timer goes off and causes an interrupt; a context switch is executed and the process is put at the tail of the ready queue.
  o The scheduler then selects the next process in the ready queue.

Example:

  Process   Burst Time
  P1        53
  P2        17
  P3        68
  P4        24

  o Use a time quantum of 20 milliseconds.

  | P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
  0    20   37   57   77   97   117  121  134  154  162

  Average waiting time = (81 + 20 + 94 + 97) / 4 = 73 milliseconds.
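The RR example above can be checked with a small queue simulation. This is a sketch assuming all four processes arrive at time 0; the function name is invented for illustration.

```python
from collections import deque

def rr_waiting_times(procs, quantum):
    """Round-robin scheduling; all processes arrive at time 0.
    procs: list of (name, burst). Returns {name: waiting_time}."""
    remaining = {name: burst for name, burst in procs}
    last_ready = {name: 0 for name, _ in procs}  # when each process last became ready
    waits = {name: 0 for name, _ in procs}
    queue = deque(name for name, _ in procs)
    clock = 0
    while queue:
        name = queue.popleft()
        waits[name] += clock - last_ready[name]  # time spent in the ready queue
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:                  # quantum expired: back to the tail
            last_ready[name] = clock
            queue.append(name)
    return waits

waits = rr_waiting_times([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
print(waits)                               # {'P1': 81, 'P2': 20, 'P3': 94, 'P4': 97}
print(sum(waits.values()) / len(waits))    # 73.0
```

Re-running with a quantum of 200 reproduces the FCFS schedule for these bursts, matching the observation below that a very large quantum degenerates RR into FCFS.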

The RR scheduling algorithm is preemptive: a process is preempted and put back in the ready queue if its CPU burst exceeds one time quantum. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units, and no process waits longer than (n − 1) × q time units until its next time quantum.

The performance of the RR algorithm depends heavily on the size of the time quantum:
  o If the time quantum is very large, RR scheduling degenerates to the FCFS policy.
  o If the time quantum is very small, the RR approach is called processor sharing.

Figure 6.4: How a smaller time quantum increases context switches.

For example, if the context-switch time is approximately 10 percent of the time quantum, then about 10 percent of the CPU time will be spent on context switching.

Figure 6.5: How turnaround time varies with the time quantum.

Turnaround time also depends on the size of the time quantum (Figure 6.5). The average turnaround time of a set of processes does not necessarily improve as the time-quantum size increases; it can be improved if most processes finish their next CPU burst within a single time quantum.

Advantages:
  o Response time is short: at most (n − 1) × q.
  o Suitable for timesharing systems.

Disadvantage:
  o The average waiting time is often quite long.

6.3.5 Multilevel Queue Scheduling

Figure 6.6: Multilevel queue scheduling.

The ready queue is partitioned into several separate queues, for example a queue for foreground (interactive) processes and another queue for background (batch) processes. A process is permanently assigned to one queue, based on some property of the process. Each queue has its own scheduling algorithm, for example RR for the foreground queue and FCFS for the background queue. In addition, scheduling must be done between the queues themselves:

  o Fixed-priority preemptive scheduling: each queue has absolute priority over lower-priority queues; no process in a given queue may run unless all higher-priority queues are empty.
  o Time-slicing between queues: each queue gets a certain amount of CPU time, which it can schedule among its own processes, for example 80% for the foreground queue and 20% for the background queue.

6.3.6 Multilevel Feedback Queue Scheduling

Allows a process to move between queues. If a process uses too much CPU time, it is moved to a lower-priority queue; a process that waits too long in a lower-priority queue may be moved to a higher-priority queue (aging).

Figure 6.7: Multilevel feedback queues.

A process entering the ready queue is put in queue 0, where it is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. When queue 0 is empty, the process at the head of queue 1 is given a time quantum of 16 milliseconds; if it does not complete, it is preempted and put into queue 2. Processes in queue 2 run FCFS, and only when queues 0 and 1 are empty.

A multilevel feedback queue scheduler is defined by the following parameters:

  o The number of queues.
  o The scheduling algorithm for each queue.
  o The method used to determine when to upgrade a process.
  o The method used to determine when to demote a process.
  o The method used to determine which queue a process will enter when it needs service.

Multilevel versus Multilevel Feedback Queue Scheduling
  o In multilevel queue scheduling, processes do not move between queues (inflexible); in multilevel feedback queue scheduling, processes can move between queues (flexible).
  o Multilevel queue scheduling has lower scheduling overhead; multilevel feedback queue scheduling is more complex.
  o In multilevel queue scheduling, processes may suffer from starvation; in multilevel feedback queue scheduling, aging can be used to prevent starvation.
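The three-queue scheme described above (quantum 8 in queue 0, quantum 16 in queue 1, FCFS in queue 2) can be sketched as follows. This simplified model assumes all processes arrive at time 0 and omits new arrivals and aging; names and structure are illustrative.

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Three-level feedback queue, all processes arriving at time 0.
    bursts: {name: cpu_burst}. Returns a list of (name, start, end) slices."""
    remaining = dict(bursts)
    queues = [deque(bursts), deque(), deque()]   # every process starts in queue 0
    clock, timeline = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty queue
        name = queues[level].popleft()
        # queue 2 runs to completion (FCFS); queues 0 and 1 are time-sliced
        slice_ = remaining[name] if level == 2 else min(quanta[level], remaining[name])
        timeline.append((name, clock, clock + slice_))
        clock += slice_
        remaining[name] -= slice_
        if remaining[name] > 0:
            queues[level + 1].append(name)       # quantum expired: demote
    return timeline

# A 30 ms job is demoted twice; a 6 ms job finishes within its first quantum.
print(mlfq({"A": 30, "B": 6}))
```

In the printed timeline, A runs 0-8 in queue 0, B runs 8-14 and finishes, A runs 14-30 in queue 1, and A's final 6 ms run in queue 2, illustrating how long jobs sink to lower-priority queues while short jobs complete quickly.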