Courses/Computer Science/CPSC 457.S2016/Midterm


Question 1

  • Prehistory: human power or mechanical power
  • Generation 1: vacuum tubes
  • Generation 2: transistors
  • Generation 3: integrated circuits
  • Generation 4: Personal Computing
  • Generation 5: Mobile Computing
  • Generation 6: Virtual/Cloud Computing

Question 2

  • To load programs and manage their execution
  • To mediate hardware access for applications
  • To manage time (computation) and space (memory) resources

Question 3

Instructions that access I/O devices are typically privileged so that the operating system can keep control of the devices and ensure that one process does not interfere with another process's use of a device.

Question 4

  • Mainframe - Batch Processing - Payroll Accounting
  • Server - Interactive Processing for Requests - Web Server
  • Personal Computer - Interactive Processing for Active User - Windows
  • Mobile - Interactive Processing with Limited Resources - iOS
  • Hard Real Time - Meet hard deadlines - Welding Robot
  • Soft Real Time - Do things as timely as possible - Media Player (If you put Car here, I hope you were talking about the entertainment system and not self-driving).
  • Embedded - Limited processing with extremely limited resources - Thermostat (Fridge, Toaster...)

Question 5

Monolithic operating systems trade organization and maintainability for speed. A micro-kernel architecture structures the operating system to be as maintainable and organized as possible, at the expense of speed. A hybrid design accepts that speed is the primary goal, but keeps enough structure that the system can still be maintained and extended with new functionality where possible.

Question 6

A type 1 hypervisor runs directly on the hardware and translates requests from the guest OSs. A type 2 hypervisor runs on top of a host OS and relies on it to manage the hardware.

Question 7

  • Push arguments onto the stack
  • Call the read function
  • Library loads the system call # into a register and traps into the kernel
  • Kernel starts executing at a fixed point
  • Kernel looks up system call in syscall handler table & runs the handler
  • Returns to the library call (setting the mode back to user)
  • Library call returns to original program
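
A minimal sketch of this path, assuming a Linux/glibc system: the read() wrapper does the argument set-up and trap for us, while syscall(2) with SYS_read makes the same request "by hand" using the system call number that the kernel's handler table is indexed by.

  /* Sketch only: assumes Linux/glibc.  Both calls trap into the kernel,
   * which looks up the handler for SYS_read and runs it. */
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/syscall.h>

  int main(void) {
      char buf[16];

      /* Library wrapper: loads the arguments and the call number, traps,
       * and returns to us in user mode with the result. */
      ssize_t n1 = read(0, buf, sizeof(buf));

      /* The same system call made directly via its number. */
      ssize_t n2 = syscall(SYS_read, 0, buf, sizeof(buf));

      printf("read() returned %zd, syscall(SYS_read) returned %zd\n", n1, n2);
      return 0;
  }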

Question 8

  • Process Identifier
  • Related Processes
  • Process state
  • Program counter
  • CPU registers
  • CPU Scheduling information
  • Security Information
  • Memory management information (not the actual address space)
  • Accounting Information
  • I/O status information
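
A rough C sketch of how these fields might be grouped. The struct and field names are illustrative only and are not taken from any real kernel (Linux's task_struct, for example, is organized quite differently).

  /* Hypothetical process control block -- field names are illustrative. */
  #include <stdint.h>

  enum proc_state { READY, RUNNING, BLOCKED };

  struct pcb {
      int             pid;             /* process identifier            */
      int             parent_pid;      /* related processes             */
      enum proc_state state;           /* process state                 */
      uint64_t        program_counter; /* where to resume execution     */
      uint64_t        registers[16];   /* saved CPU registers           */
      int             priority;       /* CPU scheduling information    */
      int             uid, gid;        /* security information          */
      void           *page_table;      /* memory management information */
      uint64_t        cpu_time_used;   /* accounting information        */
      int             open_files[16];  /* I/O status information        */
  };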

Question 9

  • Interrupt Execution of Running Process
  • Jump to Kernel
  • Save the CPU state
  • Select new process
  • Clear Cache
  • Restore CPU state for new process
  • Set Program Counter
  • Return to user space and execute first instruction
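
A user-space analogy of the save/restore steps, assuming POSIX ucontext is available: swapcontext() saves the current registers and program counter into one context and restores another, which is the core of the save-state, restore-state, and set-program-counter steps above. A real kernel does this in privileged mode and also worries about caches, the TLB, and the run queue.

  /* Sketch only: POSIX ucontext assumed (getcontext/makecontext/swapcontext). */
  #include <stdio.h>
  #include <ucontext.h>

  static ucontext_t main_ctx, task_ctx;

  static void task(void) {
      printf("running the newly selected 'process'\n");
      /* returning resumes main_ctx because of uc_link below */
  }

  int main(void) {
      static char stack[64 * 1024];

      getcontext(&task_ctx);                 /* initialize the new context   */
      task_ctx.uc_stack.ss_sp   = stack;     /* give it its own stack        */
      task_ctx.uc_stack.ss_size = sizeof(stack);
      task_ctx.uc_link = &main_ctx;          /* where to go when it returns  */
      makecontext(&task_ctx, task, 0);       /* set its program counter      */

      printf("saving current state, restoring the new context\n");
      swapcontext(&main_ctx, &task_ctx);     /* save here, jump to task()    */
      printf("back in the original context\n");
      return 0;
  }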

Question 10

From \ To   Running                                    Ready                       Blocked
Running     X                                          Process is Preempted        Process requests I/O
Ready       Process is scheduled to run (dispatched)   X                           Not Possible
Blocked     Not Possible                               Process returns from I/O    X

Question 11

Threads require their own stacks, program counters and registers, but they can share the process's other resources, including the program text and data. This is useful because it allows for a light-weight switch between threads: only the registers and the program counter have to be swapped to move from one thread to another.
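
A minimal POSIX threads sketch of this (compile with -pthread): both threads update the same global counter, which lives in the shared data segment, while each thread's local variable lives on its own private stack.

  /* Sketch only: POSIX threads assumed. */
  #include <stdio.h>
  #include <pthread.h>

  int shared_counter = 0;                      /* shared by every thread      */
  pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void *worker(void *arg) {
      (void)arg;
      int local = 0;                           /* on this thread's own stack  */
      for (int i = 0; i < 1000; i++) {
          local++;
          pthread_mutex_lock(&lock);
          shared_counter++;                    /* same address in both threads */
          pthread_mutex_unlock(&lock);
      }
      printf("local copy ended at %d\n", local);
      return NULL;
  }

  int main(void) {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, worker, NULL);
      pthread_create(&t2, NULL, worker, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("shared_counter = %d (updated by both threads)\n", shared_counter);
      return 0;
  }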

Question 12

Threads implemented in user space do not need a thread-aware OS to manage them: they have their own run-time system, and switching between threads does not require a system call. However, if one thread blocks for I/O, the whole process becomes blocked. Threads implemented in kernel space have the support of the OS for scheduling, and one thread can block while others from the same process continue; the cost is that a system call is now required to switch between threads.

Question 13

One example: response time vs. throughput. Either we give every available job a quick slice of service so it responds promptly, or we maximize the total work completed by running as many jobs to completion as we can. (Personal favourite: proportionality vs. fairness.)

Question 14

You're on your own on this one.

Question 15

  • Assumption: We will schedule P1 before P4 because it arrived first.

Gantt Chart

| P2 -> 4ms | P3 -> 12ms | P1 -> 22ms | P4 -> 32ms |

Wait Time

P1 12ms
P2 0ms
P3 4ms
P4 22ms

Average

38ms/4 = 9.5ms
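
A small sketch that reproduces this calculation. The bursts are read off the Gantt chart (P1 = 10ms, P2 = 4ms, P3 = 8ms, P4 = 10ms), all four processes are assumed to arrive at time 0, and P1 is kept ahead of P4 on the tie, as stated above.

  /* Shortest-job-first waiting times for the assumed bursts above. */
  #include <stdio.h>

  int main(void) {
      /* already in SJF order: P2, P3, P1, P4 */
      const char *name[]  = { "P2", "P3", "P1", "P4" };
      int         burst[] = {    4,    8,   10,   10 };
      int time = 0, total_wait = 0;

      for (int i = 0; i < 4; i++) {
          printf("%s waits %d ms\n", name[i], time); /* waits until it starts */
          total_wait += time;
          time += burst[i];                          /* then runs to the end  */
      }
      printf("average wait = %dms/4 = %.1fms\n", total_wait, total_wait / 4.0);
      return 0;
  }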

Question 16

  • Assumption: We will schedule P2 before P4 because it arrived first.

Gantt Chart

| P3 -> 8ms | P2 -> 12ms | P4 -> 22ms | P1 -> 32ms |

Wait Time

P1 22ms
P2 8ms
P3 0ms
P4 12ms

Average

42ms/4 = 10.5ms

Question 17

Gantt Chart

| R1 -> 3ms | R1 -> 5ms | R3 -> 7ms | R1 -> 9ms | R1 -> 11ms | R4 -> 12ms | R4 -> 14ms | R2 -> 26ms | R5 -> 38ms |

The Queue

0ms - R1 Arrives

Running: Nothing - Queue: R1(9ms) - Choose R1

3ms - R2 Arrives

Running: R1(6ms) - Queue: R2(12ms) - Choose R1

5ms - R3 Arrives

Running: R1(4ms) - Queue: R2(12ms) - R3(2ms) - Choose R3

7ms - R3 Finishes

Running: Nothing - Queue: R2(12ms) - R1(4ms) - Choose R1

9ms - R4 Arrives

Running: R1 (2ms) - Queue: R2(12ms) - R4(3ms) - Choose R1

11ms - R1 Finishes

Running: Nothing - Queue: R2(12ms) - R4(3ms) - Choose R4

12 ms - R5 Arrives

Running: R4(2ms) - Queue: R2(12ms) - R5(12ms) - Choose R4

14 ms - R4 Finishes

Running: Nothing - Queue: R2(12ms) - R5(12ms) - Choose R2 - (It arrived first)

26 ms - R2 Finishes

Running: Nothing - Queue: R5(12ms) - Choose R5

38 ms - R5 Finishes

Wait Time

R1 2ms Started running as it arrived, only waited for R3 at 5ms to 7ms
R2 11ms Started Running at 14ms, arrived at 3ms.
R3 0ms Started running as it arrived, never waited
R4 2ms Started running at 11ms and arrived at 9ms.
R5 14ms Started running at 26ms and arrived at 12ms.

Average

29ms/5 = 5.8ms
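
A sketch of the same shortest-remaining-time-first run, stepping one millisecond at a time. The arrivals and bursts are taken from the walkthrough above (R1 0ms/9ms, R2 3ms/12ms, R3 5ms/2ms, R4 9ms/3ms, R5 12ms/12ms), and ties are broken in favour of the earlier arrival, as at the 14ms mark.

  /* Preemptive SJF (shortest remaining time first) on the data above. */
  #include <stdio.h>

  int main(void) {
      const char *name[] = { "R1", "R2", "R3", "R4", "R5" };
      int arrival[]      = {  0,  3,  5,  9, 12 };
      int remaining[]    = {  9, 12,  2,  3, 12 };
      int wait[5] = { 0 }, done = 0;

      for (int t = 0; done < 5; t++) {
          int pick = -1;
          for (int i = 0; i < 5; i++)      /* shortest remaining; earlier
                                              arrival wins ties            */
              if (arrival[i] <= t && remaining[i] > 0 &&
                  (pick < 0 || remaining[i] < remaining[pick]))
                  pick = i;
          for (int i = 0; i < 5; i++)      /* every other ready job waits  */
              if (i != pick && arrival[i] <= t && remaining[i] > 0)
                  wait[i]++;
          if (pick >= 0 && --remaining[pick] == 0)
              done++;
      }

      int total = 0;
      for (int i = 0; i < 5; i++) {
          printf("%s waited %d ms\n", name[i], wait[i]);
          total += wait[i];
      }
      printf("average wait = %dms/5 = %.1fms\n", total, total / 5.0);
      return 0;
  }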

Question 18

Gantt Chart

| R1 -> 4ms | R2 -> 8ms | R1 -> 12ms | R3 -> 14ms | R2 -> 18ms | R4 -> 21ms | R1 -> 22ms | R5 -> 26ms | R2 -> 30ms | R5 -> 34ms | R5 -> 38ms |

The Queue

0ms - R1 Arrives

Running: Nothing - Queue: R1(9ms) : Choose R1

3ms - R2 Arrives

Running: R1 (6ms) - Queue: R2(12ms) : Not Scheduling

4ms - Quantum for R1 ends - it goes on the tail of the queue

Running: Nothing - Queue: R2(12ms) - R1(5ms) : Choose R2

5ms - R3 Arrives

Running: R2 (11ms) - Queue: R1(5ms) - R3(2ms) : Not Scheduling

8ms - Quantum for R2 ends - it goes on the tail of the queue

Running: Nothing - Queue: R1(5ms) - R3(2ms) - R2(8ms) : Choose R1

9ms - R4 Arrives

Running: R1(4ms) - Queue: R3(2ms) - R2(8ms) - R4(3ms) : Not Scheduling

12ms - Quantum for R1 ends - it goes on the tail of the queue - R5 Arrives (can assume opposite order)

Running: Nothing - Queue: R3(2ms) - R2(8ms) - R4(3ms) - R1(1ms) - R5 (12ms) : Choose R3

14ms - Burst for R3 ends

Running: Nothing - Queue: R2(8ms) - R4(3ms) - R1(1ms) - R5 (12ms) : Choose R2

18ms - Quantum for R2 ends - it goes on the tail of the queue

Running: Nothing - Queue: R4(3ms) - R1(1ms) - R5 (12ms) - R2(4ms) - : Choose R4

21ms - Burst for R4 ends

Running: Nothing - Queue: R1(1ms) - R5 (12ms) - R2(4ms) - : Choose R1

22ms - Burst for R1 ends

Running: Nothing - Queue: R5 (12ms) - R2(4ms) - : Choose R5

26ms - Quantum for R5 ends - it goes on the tail of the queue

Running: Nothing - Queue: R2(4ms) - R5 (8ms) - : Choose R2

30ms - Burst for R2 ends

Running: Nothing - Queue: R5 (8ms) - : Choose R5

34ms - Quantum for R5 ends - it goes back on the (otherwise empty) queue

Running: Nothing - Queue: R5 (4ms) - : Choose R5

38ms - Burst for R5 ends

Running: Nothing - Queue:


Wait Time

R1 13ms Started running as it arrived, Waited from 4ms to 8ms, then from 12ms to 21ms.
R2 15ms Started running at 4ms and arrived at 3ms, Waited from 8ms to 14ms, then from 18ms to 26ms.
R3 7ms Started running at 12ms and arrived at 5ms.
R4 9ms Started running at 18ms and arrived at 9ms.
R5 14ms Started running at 22ms and arrived at 12ms. Waited from 26ms to 30ms.

Average

58ms/5 = 11.6ms
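
A round-robin sketch with the 4ms quantum, using the same arrivals and bursts (R1 0ms/9ms, R2 3ms/12ms, R3 5ms/2ms, R4 9ms/3ms, R5 12ms/12ms). As assumed at the 12ms mark above, a job arriving exactly when a quantum expires is queued after the preempted job.

  /* Round robin, quantum = 4ms, on the data above. */
  #include <stdio.h>

  #define N 5
  #define QUANTUM 4

  int main(void) {
      const char *name[N] = { "R1", "R2", "R3", "R4", "R5" };
      int arrival[N]   = {  0,  3,  5,  9, 12 };
      int burst[N]     = {  9, 12,  2,  3, 12 };
      int remaining[N], finish[N];
      int queue[64], head = 0, tail = 0, arrived = 0, time = 0;

      for (int i = 0; i < N; i++) remaining[i] = burst[i];

      while (head < tail || arrived < N) {
          while (arrived < N && arrival[arrived] <= time)    /* admit new jobs  */
              queue[tail++] = arrived++;
          if (head == tail) { time = arrival[arrived]; continue; }   /* idle    */

          int p = queue[head++];
          int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
          time += run;
          remaining[p] -= run;

          while (arrived < N && arrival[arrived] < time)     /* arrived during  */
              queue[tail++] = arrived++;                     /* the slice       */
          if (remaining[p] > 0) queue[tail++] = p;           /* preempted job   */
          else finish[p] = time;
          while (arrived < N && arrival[arrived] <= time)    /* arrived just as */
              queue[tail++] = arrived++;                     /* the quantum ended */
      }

      int total = 0;
      for (int i = 0; i < N; i++) {
          int wait = finish[i] - arrival[i] - burst[i];
          printf("%s waited %d ms\n", name[i], wait);
          total += wait;
      }
      printf("average wait = %dms/5 = %.1fms\n", total, total / 5.0);
      return 0;
  }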

Question 19

Estimated Burst:  4   7  4  4  5
Real Burst:       8  10  4  6  6

Burst 1 T_1 = (.75)8 + (.25)4 = 6 + 1 = 7

Burst 2 T_2 = (.75)10 + (.25)7 = 7.5 + 1.75 = 9.25

Burst 3 T_3 = (.75)4 + (.25)4 = 3 + 1 = 4

Burst 4 T_4 = (.75)6 + (.25)4 = 4.5 + 1 = 5.5
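
A quick check of these figures, assuming the same weighting as the Burst 1 line (0.75 on the measured burst, 0.25 on the previous estimate), applied to each of the four bursts worked above.

  /* Aging estimate: next = 0.75 * measured + 0.25 * previous estimate. */
  #include <stdio.h>

  int main(void) {
      double estimate[] = { 4,  7, 4, 4 };   /* previous estimates (table) */
      double actual[]   = { 8, 10, 4, 6 };   /* measured bursts (table)    */

      for (int i = 0; i < 4; i++)
          printf("T_%d = 0.75*%.0f + 0.25*%.0f = %.2f\n",
                 i + 1, actual[i], estimate[i],
                 0.75 * actual[i] + 0.25 * estimate[i]);
      return 0;
  }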

Question 20

  • Private
  • Non-Volatile
  • Infinitely Fast
  • Infinitely Large

Question 21

  • With static relocation, the addresses in all memory-referencing instructions are rewritten as the program is loaded.
  • With dynamic relocation, every address a memory-referencing instruction issues is checked against the limit register and offset by the base register at run time (see the sketch below).
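
A small sketch of the dynamic case with hypothetical base and limit values: every address the program issues is checked against the limit register and, if legal, offset by the base register at run time.

  /* Dynamic relocation sketch -- base/limit values are made up. */
  #include <stdio.h>
  #include <stdlib.h>

  static unsigned long base_reg  = 0x40000;  /* where the process was loaded */
  static unsigned long limit_reg = 0x10000;  /* size of its address space    */

  static unsigned long translate(unsigned long vaddr) {
      if (vaddr >= limit_reg) {              /* out of bounds: fault         */
          fprintf(stderr, "fault: address 0x%lx exceeds limit\n", vaddr);
          exit(1);
      }
      return base_reg + vaddr;               /* relocated physical address   */
  }

  int main(void) {
      printf("virtual 0x1234  -> physical 0x%lx\n", translate(0x1234));
      printf("virtual 0x20000 -> ");
      translate(0x20000);                    /* exceeds the limit register   */
      return 0;
  }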