Courses/Computer Science/CPSC 457.F2014/Lecture Notes/Scheduling

Process Scheduling

This session starts our discussion of concurrency in earnest, with particular attention to how the OS provides the illusion of concurrency (i.e., simultaneous execution) by multiplexing the computer's CPU(s).

Background / Link to Prior Topic

Kernel Control Flow: the kernel as an asynchronous event handler

Understanding how the kernel supports concurrency depends on understanding the nature of the kernel's control flow --- especially in contrast to the control flow of most programs you've written up until this point in your education. Most basic CS programs are sequential, single-threaded pieces of code that progress through some number of repetition control structures (i.e., loops), decision control structures (e.g., switch..case, if..else), and function invocations.

In contrast, the kernel has little of its own control flow, and taken as a whole, this control flow is not sequential. Instead, the kernel is largely an event-driven service request mechanism. It is largely quiescent until userspace asks it to do something (via a syscall), the CPU asks it to do something (via a hardware or software interrupt or exception), or a piece of hardware asks it to do something (via the CPU and its interrupt mechanism). Although it is true that some kernels have internal threads that are periodically scheduled, these threads are simply "very privileged" processes. Part of what makes the kernel so interesting is its ability to preempt (i.e., interrupt) not only user-level processes, but also parts of its own internal control flow, in order to service other important asynchronous events.

It is in this context of many concurrent control paths that we must understand how real scheduling is accomplished (the policy and mechanism of the scheduler component of the kernel) and the support necessary for that to happen (e.g., reliable clock interrupt, interrupt service routines).
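
To make the role of the clock interrupt in preemption concrete, here is a minimal user-space simulation (a sketch, not kernel code; the task names, amounts of work, and four-tick quantum are invented for illustration). Each loop iteration stands in for one timer tick: the tick charges the running task's quantum, and when the quantum is exhausted, or the task finishes, the simulated scheduler preempts it and picks the next runnable task round-robin.

#include <stdio.h>

/*
 * User-space sketch of preemptive, quantum-based scheduling.  Each loop
 * iteration models one clock tick; a task runs until its work is done or
 * its time slice (quantum) is used up, at which point the "scheduler"
 * switches to the next runnable task, round-robin.  Task names, work
 * amounts, and the quantum length are made up for this example.
 */

#define NTASKS  3
#define QUANTUM 4                        /* ticks per time slice (assumed) */

struct task {
    const char *name;
    int remaining;                       /* ticks of work left */
    int quantum;                         /* ticks left in the current slice */
};

static struct task tasks[NTASKS] = {
    { "A", 7, QUANTUM }, { "B", 3, QUANTUM }, { "C", 5, QUANTUM },
};

/* Round-robin: next runnable task after 'cur', or -1 if none remain. */
static int pick_next(int cur)
{
    for (int i = 1; i <= NTASKS; i++) {
        int cand = (cur + i) % NTASKS;
        if (tasks[cand].remaining > 0)
            return cand;
    }
    return -1;
}

int main(void)
{
    int cur = 0;
    for (int tick = 0; ; tick++) {                 /* one iteration = one tick */
        if (tasks[cur].remaining == 0 || tasks[cur].quantum == 0) {
            int next = pick_next(cur);
            if (next < 0)
                break;                             /* all work is finished */
            printf("tick %2d: preempt/switch %s -> %s\n",
                   tick, tasks[cur].name, tasks[next].name);
            cur = next;
            tasks[cur].quantum = QUANTUM;          /* recharge the time slice */
        }
        tasks[cur].remaining--;
        tasks[cur].quantum--;
    }
    return 0;
}

A real kernel does the analogous bookkeeping in its timer interrupt handler and then lets the scheduler perform the actual switch; the point of the sketch is simply that a reliable, periodic clock interrupt is what makes preemption possible at all.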

Sequential Execution to Concurrent Execution

We've also seen how the OS can transition from sequential code execution to concurrent execution; it:

  • initializes the scheduler;
  • creates two processes (via do_fork() and sys_execve());
  • enables interrupts;
  • and then allows the newly initialized scheduler to begin choosing between these two processes.
  • Meanwhile, one of those processes (the init process) begins creating children of its own via do_fork() and sys_execve().

This procedure naturally raises the question: as more processes are created and hundreds or thousands of processes come to exist, how does the OS choose which process should be given the CPU?

note: when I say "given the CPU", I mean "take the saved CPU state (i.e., register values, including the instruction pointer) and write it into the hardware registers, thus enabling the CPU to pick up executing that particular process." In this way, we can also develop another perspective on processes: we can see them as virtual CPUs --- not in the sense that they execute code on their own, but in the sense that each one is a saved CPU state that, when loaded onto a real CPU, executes the code of the program loaded into that process's address space.
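
As a user-space illustration of "a process is a saved CPU state," the sketch below uses the POSIX ucontext API (getcontext / makecontext / swapcontext), which saves and restores register state, including the instruction and stack pointers, in much the same spirit as a kernel context switch. The function names, the 64 KiB stack size, and the two-context structure are choices made only for this example.

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

/*
 * User-space sketch of "giving a virtual CPU the real CPU": each ucontext_t
 * holds saved register state (instruction pointer, stack pointer, ...);
 * swapcontext() loads one saved state into the hardware registers while
 * saving the current one -- the same idea as a kernel context switch.
 * Names and the 64 KiB stack size are arbitrary choices for this sketch.
 */

static ucontext_t main_ctx, task_ctx;

static void task_body(void)
{
    printf("task: running on its own saved register state and stack\n");
    /* "Yield": save this task's state, restore main's state. */
    swapcontext(&task_ctx, &main_ctx);
    printf("task: resumed exactly where it left off\n");
    /* Falling off the end returns to uc_link (main_ctx). */
}

int main(void)
{
    char *stack = malloc(64 * 1024);
    if (!stack)
        return 1;

    getcontext(&task_ctx);                 /* start from the current state */
    task_ctx.uc_stack.ss_sp = stack;
    task_ctx.uc_stack.ss_size = 64 * 1024;
    task_ctx.uc_link = &main_ctx;          /* where to go when the task returns */
    makecontext(&task_ctx, task_body, 0);  /* point its "instruction pointer" at task_body */

    printf("main: dispatching the task (context switch #1)\n");
    swapcontext(&main_ctx, &task_ctx);     /* save main, load task */

    printf("main: back in main (context switch #2)\n");
    swapcontext(&main_ctx, &task_ctx);     /* resume the task where it yielded */

    printf("main: task finished; done\n");
    free(stack);
    return 0;
}

Each swapcontext() call is, in miniature, what the scheduler's context-switch code does when it takes the CPU away from one process and hands it to another.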

The Topic

In this session, we will begin an examination of one of the central purposes of an "operating system" (i.e., an execution environment --- be it a desktop OS, a web browser, or a distributed billing and customer service system): scheduling jobs / tasks / processes. After all, the number of virtual tasks that can exist on a system is typically much greater than the number of physical resources (i.e., CPUs) available. How does the OS make one or a few CPUs look like dozens, hundreds, or thousands?

Hence, the hardware (e.g., CPU, memory, and devices) must be multiplexed or shared across all virtual tasks present on the system. Tasks come and go mostly unpredictably. Scheduling deals with sharing these resources in a coherent fashion in the face of this dynamic load.

Focus Question

What support is necessary for deciding which program / process should run next (i.e., be given access to and control over the CPU and other system devices)? Once we pick a strategy or approach for choosing the next process, is there any way to show that we are doing a good job of scheduling?

Agenda

  • overview of scheduling considerations: fairness, efficiency, progress; scheduling metrics
  • the key concept of preemption vs. cooperative multitasking
  • time quanta and the clock (basic hardware and low-level OS support for reliable timing)
  • process states (support needed for distinguishing between some characteristics of processes; metadata that affects which processes are candidates to be run)
  • types of processes (interactive vs. batch)
  • approaches to scheduling: random, FCFS, LJF, SJN, RR, UWRR, ... priority
  • priority inversion (a small simulation of this appears after this list)
  • context switches (tied back to time quanta)
  • comparing the 2.4, 2.6 O(1), and CFS scheduler code
  • "real-time" scheduling

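The priority inversion item above can be illustrated with a tiny simulation. The sketch below is user-space C with invented task names, priorities, and work amounts (it is not kernel code), and it uses a strict highest-priority-runnable-task policy: the low-priority task L holds a lock that the high-priority task H needs, so H is blocked, and the medium-priority task M ends up running ahead of L, delaying the lock release and therefore delaying H even though H outranks M.

#include <stdio.h>

/*
 * Toy, single-CPU, strict-priority simulation of priority inversion.
 * L (low) holds a lock that H (high) needs, so H is blocked; M (medium)
 * keeps winning the CPU over L, which delays the lock release and thus
 * delays H.  All names, priorities, and work amounts are invented.
 */

enum state { RUNNABLE, BLOCKED, DONE };

struct task {
    const char *name;
    int prio;                  /* larger number = higher priority */
    enum state st;
    int work_left;             /* ticks of CPU time still needed */
};

static struct task tasks[] = {
    { "H", 3, BLOCKED,  2 },   /* blocked: the lock is already held by L */
    { "M", 2, RUNNABLE, 4 },
    { "L", 1, RUNNABLE, 3 },   /* currently holds the lock H wants */
};
static int lock_holder = 2;    /* index of L */

/* Strict priority policy: always run the highest-priority runnable task. */
static int pick_next(void)
{
    int best = -1;
    for (int i = 0; i < 3; i++)
        if (tasks[i].st == RUNNABLE &&
            (best < 0 || tasks[i].prio > tasks[best].prio))
            best = i;
    return best;
}

int main(void)
{
    for (int tick = 0; ; tick++) {
        int cur = pick_next();
        if (cur < 0)
            break;                                   /* nothing left to run */
        printf("tick %2d: running %s (prio %d)\n",
               tick, tasks[cur].name, tasks[cur].prio);

        if (--tasks[cur].work_left == 0) {
            tasks[cur].st = DONE;
            if (cur == lock_holder) {                /* finished task drops the lock */
                lock_holder = -1;
                if (tasks[0].st == BLOCKED) {        /* H was waiting for that lock */
                    tasks[0].st = RUNNABLE;
                    lock_holder = 0;                 /* H now holds it */
                    printf("         %s released the lock; H unblocked\n",
                           tasks[cur].name);
                }
            }
        }
    }
    return 0;
}

Running it shows M consuming the CPU for four ticks before L can finish and release the lock: the highest-priority task ends up waiting on the lowest. Priority inheritance (temporarily boosting L to H's priority while it holds the lock) is the usual remedy.
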
Notes

The slides for today:

Linux defines the symbol "current" to be a macro that resolves to the currently executing process (for example, the process that invoked a system call). Note, of course, that this symbol is only meaningful in process context, not interrupt context. See the current.h file, which derives the value of current from the CPU state (specifically, the value of the kernel stack pointer).
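
As a small illustration of using current in process context, here is a sketch of a loadable module (the module and function names are invented for this example; current, the pid and comm fields of task_struct, and the module macros are standard kernel interfaces). A module's init and exit functions run in the process context of the process that loads or unloads it, so current names that process.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>

/*
 * Sketch of a module (names invented) that reports which process is
 * loading/unloading it.  In process context, "current" points at the
 * task_struct of that process, so its pid and command name are available.
 */
static int __init current_demo_init(void)
{
        pr_info("current_demo: loaded by pid %d (%s)\n",
                current->pid, current->comm);
        return 0;
}

static void __exit current_demo_exit(void)
{
        pr_info("current_demo: unloaded by pid %d (%s)\n",
                current->pid, current->comm);
}

module_init(current_demo_init);
module_exit(current_demo_exit);
MODULE_LICENSE("GPL");

In interrupt context, by contrast, current still points at whichever task happened to be interrupted, which is why it is only meaningful in process context.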

There were many terms and key concepts presented in this session, among them:

  • preemption
  • time quantum
  • context switch
  • fairness, responsiveness
  • types of processes
  • example scheduling algorithms (know how to draw scheduling timelines)
  • metrics for comparing scheduling algorithms (know these; a small worked example of timelines and metrics follows this list)
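
To make the timelines and metrics concrete, here is a small worked example with invented numbers: three jobs A, B, and C all arrive at time 0 with CPU bursts of 6, 3, and 1 time units, and each job runs to completion once started (no preemption).

  FCFS (run in arrival order A, B, C):      | A: 0-6 | B: 6-9 | C: 9-10 |
      turnaround times: A=6, B=9, C=10  ->  average = (6+9+10)/3 ≈ 8.33
      waiting times:    A=0, B=6, C=9   ->  average = (0+6+9)/3  = 5.00

  SJN (run shortest burst first: C, B, A):  | C: 0-1 | B: 1-4 | A: 4-10 |
      turnaround times: C=1, B=4, A=10  ->  average = (1+4+10)/3 = 5.00
      waiting times:    C=0, B=1, A=4   ->  average = (0+1+4)/3  ≈ 1.67

The same three jobs, scheduled in a different order, cut the average turnaround time from about 8.3 down to 5 time units; comparisons like this are exactly what the metrics are for.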

The code for the CFS is at:

The code for part of the 2.6 O(1) scheduler (multi-level feedback queue) is at:

The relevant lines:

5451
5452        put_prev_task(rq, prev);
5453        next = pick_next_task(rq);
5454

Of course, pick_next_task runs in constant time, but there is a lot more work hidden in there.
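
For a sense of where the constant time comes from: the O(1) scheduler keeps one run list per priority level plus a bitmap of the non-empty levels, so picking the next task is just "find the first set bit, then take the head of that list." The sketch below is a user-space illustration of that idea only; NPRIO, struct task, and the list layout are invented here and are far simpler than the kernel's actual prio_array (which, among other things, maintains separate active and expired arrays).

#include <stdio.h>
#include <strings.h>                         /* ffs() */

/*
 * User-space sketch (not the kernel's actual data structures) of the O(1)
 * scheduler's core trick: one run list per priority level plus a bitmap of
 * non-empty levels.  Picking the next task is "find first set bit, take the
 * head of that list" -- constant time, independent of how many tasks are
 * runnable.  NPRIO, struct task, and the list layout are invented here.
 */

#define NPRIO 32                             /* priority 0 (highest) .. 31 (lowest) */

struct task {
    const char *name;
    int prio;
    struct task *next;                       /* singly linked run list per priority */
};

struct runqueue {
    unsigned int bitmap;                     /* bit p set => queue[p] is non-empty */
    struct task *queue[NPRIO];               /* head of each priority list */
};

static void enqueue(struct runqueue *rq, struct task *t)
{
    t->next = rq->queue[t->prio];
    rq->queue[t->prio] = t;
    rq->bitmap |= 1u << t->prio;
}

/* O(1): find the highest-priority non-empty list and dequeue its head. */
static struct task *pick_next(struct runqueue *rq)
{
    if (rq->bitmap == 0)
        return NULL;                         /* nothing runnable */
    int prio = ffs(rq->bitmap) - 1;          /* lowest set bit = best priority */
    struct task *t = rq->queue[prio];
    rq->queue[prio] = t->next;
    if (!rq->queue[prio])
        rq->bitmap &= ~(1u << prio);         /* list emptied: clear its bit */
    return t;
}

int main(void)
{
    struct runqueue rq = { 0 };
    struct task a = { "editor", 5 }, b = { "compiler", 10 }, c = { "daemon", 20 };

    enqueue(&rq, &c);
    enqueue(&rq, &b);
    enqueue(&rq, &a);

    for (struct task *t; (t = pick_next(&rq)) != NULL; )
        printf("picked %s (prio %d)\n", t->name, t->prio);
    return 0;
}

The bookkeeping that the sketch leaves out (recomputing time slices, moving tasks between the active and expired arrays, balancing load across CPUs) accounts for much of the "work hidden in there."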

The code for part of the 2.4 scheduler (from kernel 2.4.24) is here:

and the O(n) snippet is listed below; it walks the entire run queue on every invocation of schedule(), computing a goodness() value for each runnable process:

/*
 * Default process to select...
 */
       next = idle_task(this_cpu);              /* fall back to the idle task */
       c = -1000;                               /* best goodness() value seen so far */
       /* O(n): examine every process on the run queue. */
       list_for_each(tmp, &runqueue_head) {
               p = list_entry(tmp, struct task_struct, run_list);
               if (can_schedule(p, this_cpu)) {
                       /* goodness() weighs the remaining time slice, the nice
                        * value, and a bonus for sharing prev's address space. */
                       int weight = goodness(p, this_cpu, prev->active_mm);
                       if (weight > c)
                               c = weight, next = p;   /* best candidate so far */
               }
       }

Scribe Notes

  • s1
  • s2
  • s3

Readings

  • MOS: 2.4.1 "Introduction to Scheduling" (plus 2 intro paragraphs in S2.4)
  • MOS: 2.4.5 "Policy vs. Mechanism"
  • MOS: 10.3.4 "Scheduling in Linux" or LKD: Chapter 4 "Process Scheduling"
  • MOS: 2.7 "Summary" (this talks about some things we'll consider next)