Jun 13, 2012

Real Time Operating Systems: Subject Notes

RTOS (Real Time Operating Systems) is a subject that was offered by the University of Technology, Sydney (UTS). These are some of my notes from that subject.

Pre-emption (or context switch)

  • Pre-emption is the ability of the Operating System to stop the currently scheduled task in favour of a higher priority task. This is what enables pre-emptive multi-tasking.
  • An interrupt generally denotes something that needs to be handled straight away. Normal scheduling is avoided until the interrupt is handled; in other words, interrupts generally cannot be pre-empted.
  • By making the scheduler pre-emptive, we make it more responsive to events. The downside is that it becomes more susceptible to race conditions, where the running program modifies or uses data that a pre-empted process has not finished using (see the sketch below).
  • Pre-emptive multi-tasking can be compared to co-operative multi-tasking, where each process must voluntarily give time to other processes. This requires processes to be co-operative and not hog all the resources.
  • A scheduler that can pre-empt a process during a system call is a pre-emptive kernel.
[Figure: a simple flow of how pre-emption works. The red box is a low-priority thread that is pre-empted, the yellow and blue boxes are interrupts, and the green box is a higher-priority thread.]
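To make the race-condition risk concrete, here is a minimal sketch in C using POSIX threads (my own example, not from the course notes): two threads increment a shared counter without any locking, so an increment can be lost whenever one thread is pre-empted between reading and writing the counter.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                /* shared data, no lock */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                      /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* We would like 2000000, but pre-emption between the read and
           the write inside counter++ means updates can be lost. */
        printf("counter = %ld\n", counter);
        return 0;
    }

Compile with gcc -pthread; the printed total usually falls well short of 2000000, which is exactly the kind of data inconsistency pre-emption makes possible.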

Process

  • A process is an instance of a program in execution. The execution happens in a sequential fashion.
  • A process has at a minimum a copy of the program code (the text section), a Process Control Block (discussed later), a Stack (to keep track of active subroutines and events) and a Data Section and Heap. The data section holds process-specific data such as global variables (inputs and outputs), while the heap provides storage for dynamically allocated intermediate data.
  • A process will start in the New state, then go to Ready (waiting to be assigned a CPU). From there it moves between Running (executing), Ready (when pre-empted) and Waiting (blocked on I/O operations) until it reaches the Terminated state.
    [Figure: a visualisation of the process states]
  • The Process Control Block (PCB) contains the information associated with a particular process and its context. This includes the Process state (see above diagram), a Program Counter (holds the memory address of the next program instruction to execute), CPU Registers (for storage of process-specific data), CPU Scheduling Information (so the CPU can make scheduling decisions), Memory-management information, Accounting information (how long the last run was, how much time accumulated, etc), and I/O status information. A sketch of a PCB as a C struct follows this list.
  • When we wish to switch the executing process, the CPU will store the current state of the running process into a PCB and load up the state of the next process from their PCB. When the process has completed executing, the CPU will then store that process' PCB and reload the previous PCB. This is called Context Switching.
  • Context switching is considered overhead. It is time in which the CPU does nothing.
  • The queues used to store the details of processes are the Job Queue (all processes in the system), the Ready Queue (processes in main memory waiting to execute), and the Device Queue (processes waiting for an I/O device to respond).
  • A Scheduler can be either Long-term (selects processes to be brought into the ready queue; it does not need to be fast because it matches the speed of I/O devices) or Short-term (selects a process from the ready queue to be executed; it needs to match the CPU speed)
  • Since the Long-term scheduler controls how many processes are admitted into memory, it determines the degree of multi-programming a computer can handle.
  • A process can either be considered an I/O bound process (does more I/O tasks) or a CPU bound process (does more computations)
  • Processes are generally created from a parent process, which in turn creates its own child processes. This creates a tree hierarchy of processes.
  • Aspects of the relationship between a child process to its parent are:
    • Resource sharing: The parent and child can share all resources, a sub-set of resources, or no resources at all
    • Execution: The parent and child run concurrently, or the parent waits until the child terminates (see the fork()/wait() sketch after this list)
    • Address space: The child is an exact duplicate of the parent, or the child has another program image loaded into it
    • Termination: The process can make a call to exit(), which makes the system deallocate its resources, or the parent can abort the child. Some operating systems do not allow the child to continue if the parent is terminated
  • Processes can communicate by either Message passing (which passes through the kernel) or through Shared Memory (much quicker but can lead to race conditions). Message passing requires that a communication link is established that is either physical or logical.
  • Message passing can be Blocking/Synchronous (the sender must wait until the message is received, or the receiver must wait until a message is sent) or Non-blocking/Asynchronous (neither entity has to wait)
  • The Client-Server communication model uses Sockets (end-points for communication), Remote Procedure Call or RPC (abstracts procedure calls across networked systems; uses a Stub to locate the procedure and marshal/pack the parameters into a message) and Remote Method Invocation or RMI (similar to RPC but for Object-Oriented applications).
  • A Light Weight Process (LWP) is different to normal processes in that it shares some or all of its logical address space and system resources with other processes. It differs from threads (discussed later) in that it has its own Process Identifier and is fully controlled by the kernel (threads are controlled by the application). This set-up means that a LWP has less processing overhead.
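The PCB described above can be pictured as a plain C struct. This is only an illustrative sketch; the field names and sizes are my own invention, not those of any real kernel.

    /* Illustrative Process Control Block; fields mirror the list above. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;              /* process identifier             */
        enum proc_state state;            /* see the state diagram above    */
        void           *program_counter;  /* next instruction to execute    */
        unsigned long   registers[16];    /* saved CPU registers            */
        int             priority;         /* CPU scheduling information     */
        void           *page_table;       /* memory-management information  */
        unsigned long   cpu_time_used;    /* accounting information         */
        int             open_files[16];   /* I/O status information         */
        struct pcb     *next;             /* link for the ready/device queues */
    };

On a context switch the kernel saves the running process's registers and program counter into its pcb and restores them from the next process's pcb.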
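The parent/child execution and termination options map directly onto the POSIX fork()/wait()/exit() calls. A minimal sketch (standard POSIX, nothing course-specific):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();          /* child starts as a duplicate of the parent */
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            printf("child: pid %d\n", (int)getpid());
            exit(0);                 /* exit() lets the OS deallocate resources */
        } else {
            wait(NULL);              /* parent waits until the child terminates */
            printf("parent: child %d finished\n", (int)pid);
        }
        return 0;
    }

Replacing the child's body with an exec() call is how the child gets "another program image loaded into it".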

Threads

  • Threads are by-products of processes in that they share everything the parent process has. Each thread has its own stack and registers to store information, but shares code and open files with other threads and the parent process. This lessens the strain on resources (see the sketch after this list).
  • Threads can be implemented through the User-level (via libraries) or the Kernel-level (direct kernel support)
  • There are three mapping schemes used to map user (process) threads to kernel threads.
    1. One-to-One scheme has one user thread to every kernel thread
    2. Many-to-One scheme has many user threads to every kernel thread
    3. Many-to-many scheme has many user threads running off many kernel threads
  • Java threads are handled by the Java Virtual Machine (JVM). To use Java threads you either extend the Thread class or implement the Runnable interface.
  • Terminating a thread can happen in one of two ways: Asynchronous cancellation (terminate the target thread immediately) or Deferred cancellation (the target thread periodically checks whether it should terminate)
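To show what threads share and what they own, here is a minimal sketch using C and POSIX threads rather than Java (the pthread calls are standard, but the example itself is mine): the global is visible to every thread, while each thread's locals live on its own stack.

    #include <pthread.h>
    #include <stdio.h>

    static int shared = 42;              /* shared by all threads in the process */

    static void *run(void *arg)
    {
        int local = *(int *)arg;         /* lives on this thread's own stack */
        printf("thread sees shared=%d, local=%d\n", shared, local);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        int arg = 7;
        pthread_create(&t, NULL, run, &arg);  /* user-level API over kernel threads */
        pthread_join(t, NULL);
        return 0;
    }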

CPU Scheduling

  • The Scheduler will select a process from the ready queue and allocate the CPU to that process.
  • The Dispatcher gives control of the CPU to the process selected by the short-term scheduler. It handles context switching and jumping to the correct location in the program code.
  • Dispatch Latency is the term used to describe the time it takes for the dispatcher to stop one process and start another.
  • The criteria that the scheduler uses to determine which process to select include:
    • CPU Utilization: process keeps the CPU busy
    • Throughput: number of processes that complete execution per time unit
    • Turnaround time: total time to execute a particular process
    • Waiting time: amount of time a process has been waiting in the queue
    • Response time: the time from when the request was made until the first response is produced
  • Scheduling schemes include:
    • First Come First Served (FCFS): Tasks are executed in the order that they arrive. This means that a high-priority process will have to wait until other tasks are executed first. In addition, a long process will significantly slow down all other processes.
    • Shortest Job First (SJF): Each process is associated with an estimate of its next CPU burst. The process with the shortest estimated burst runs before the others. This minimises the average waiting time.
    • Round Robin (RR): The CPU has a time quantum of q time units. Each process will be executed for at most that amount of time before being pre-empted. If there are n processes, each process gets 1/n of the CPU time in chunks of at most q time units, and no process waits more than (n-1)q time units. Note that q has to be large enough relative to the context-switch time, otherwise there is too much overhead (a small RR simulation follows this list).
  • Priority Scheduling means associating a number with a process to aid in scheduling decisions. A problem with such solutions is starvation (where a low priority process never gets the chance to run). The solution is aging (increase the priority of a process over time).
  • Multi-level queues means splitting the Ready Queue into two:
    • Foreground are the interactive user applications. They generally require some form of user input. They typically use Round Robin scheduling.
    • Background are the batch jobs and system services. They do not require any sort of user input. They typically use FIFO or FCFS scheduling.
  • CPU scheduling becomes more complex when multiple CPUs are added, because the scheduler must also share the load between processors.
  • Hard Real Time Scheduling means that critical processes are assured to complete within a set period of time. Soft Real Time Scheduling only requires that critical processes receive priority over other processes.
  • Thread scheduling can occur locally through libraries or globally through the kernel.
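A tiny Round Robin simulation makes the q and (n-1)q claims concrete. The burst times below are made up for illustration, all processes arrive at time 0, and q = 4:

    #include <stdio.h>

    #define N 3
    #define Q 4                          /* time quantum */

    int main(void)
    {
        int burst[N]  = {10, 5, 8};      /* made-up CPU burst times */
        int remain[N] = {10, 5, 8};
        int finish[N] = {0};
        int t = 0, done = 0;

        while (done < N) {               /* cycle through the ready queue */
            for (int i = 0; i < N; i++) {
                if (remain[i] == 0)
                    continue;
                int slice = remain[i] < Q ? remain[i] : Q;
                t += slice;              /* run process i for up to q units */
                remain[i] -= slice;
                if (remain[i] == 0) {
                    finish[i] = t;
                    done++;
                }
            }
        }
        for (int i = 0; i < N; i++)      /* all arrive at 0, so turnaround = finish */
            printf("P%d: turnaround %d, waiting %d\n",
                   i, finish[i], finish[i] - burst[i]);
        return 0;
    }

Shrinking Q spreads the CPU more fairly, but in a real system each slice costs a context switch, which is the overhead trade-off mentioned above.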

Process Synchronization

  • Unsynchronized concurrent access to shared data is not desirable, as it can result in race conditions and data inconsistency. Therefore we must implement mechanisms to ensure data remains consistent.
  • Terms used in process synchronization are:
    • Mutual Exclusion: When one process is operating in its critical section no other process can enter their critical section
    • Race condition: Where there is concurrent access to data and the result is dependent on the order of execution
    • Critical section: Section of code where the process accesses shared data
    • Entry section: Section of code where the process asks if it can move into the critical section
    • Exit section: Section of code where the process releases the shared data
    • Bounded waiting: A bound must exist on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section.
    • Locks: Flags used to determine whether the data is in use. The entry section will acquire the lock, while the exit section will release it
    • Semaphore: A synchronization tool that can simply be an integer value with two methods: acquire and release. It can be a Counting semaphore (no bounds) or a Binary semaphore (also known as a mutex lock); see the semaphore sketch after this list
    • Monitors are a high-level abstraction that provide a convenient mechanism for process synchronization. Only one process may be active in a monitor at a time.
  • For semaphores to work, we must guarantee that no two processes can execute the acquire or release functions on the same semaphore at the same time; the two operations must be atomic. To avoid busy waiting, a semaphore can also keep a waiting list: a process that cannot acquire the semaphore is added to a linked list of blocked processes, and a release removes and wakes one of them.
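Here is a minimal sketch of the entry/critical/exit-section pattern using POSIX unnamed semaphores (sem_wait/sem_post are standard calls; the example itself is mine). Initialising the semaphore to 1 makes it a binary semaphore, i.e. a mutex lock, and fixes the lost-update race from the pre-emption example above:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;                  /* binary semaphore guarding counter */
    static long counter = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            sem_wait(&mutex);            /* entry section: acquire */
            counter++;                   /* critical section       */
            sem_post(&mutex);            /* exit section: release  */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        sem_init(&mutex, 0, 1);          /* initial value 1 => mutual exclusion */
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter); /* now reliably 2000000 */
        sem_destroy(&mutex);
        return 0;
    }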

Main Memory

  • A program must be brought into memory and placed within a process to be run. Main Memory (generally RAM), Registers and Cache are the only storage elements that a CPU can directly access.
  • Memory address types are:
    • Logical Address: generated by the CPU and is an abstraction for user-level programs so that we do not have to specify the physical address at compile time
    • Physical Address: what is seen by the memory management unit. This is the address the RAM unit uses to refer to a physical element.
  • The Memory Management Unit (MMU) maps Logical to Physical addresses. It adds the value of a relocation register to the logical address sent by a user program to determine the real physical address.
  • Dynamic Loading involves loading a routine only if it is needed. This saves system resources and can be implemented by the program, so no special support from the OS is required.
  • Dynamic Linking (or shared libraries) is a method where linking does not occur until execution. A small piece of code (known as a stub) is inserted into the program; when run, it attempts to locate the required routine in memory and then replaces itself with the address of the loaded routine.
  • Swapping involves moving a process out of main memory and into secondary storage. It can later be moved back for continued execution. However, the system must maintain a ready queue of processes that are ready to run but currently swapped out.
  • Physical memory is divided into fixed-size blocks called frames, while Logical memory is divided into blocks of the same size called pages. The OS will keep track of all free frames so that when a process requires n pages it will find n free frames and load the program. Note that while logical memory may seem contiguous to the process, it may actually be non-contiguous in physical memory. To facilitate this we need a page table to translate Logical addresses to physical addresses (a translation sketch follows this list).
  • Segmentation: a memory management scheme that acknowledges a process is a collection of segments. Segments are distinct from other segments, but are bounded within the process. In physical memory they can occupy the same frame.
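Paging translation can be sketched in a few lines: split the logical address into a page number and an offset, look the page up in the page table, and glue the frame number back onto the offset. The page size and table contents below are made up for illustration.

    #include <stdio.h>

    #define PAGE_SIZE 4096u              /* assumed 4 KiB pages */

    /* Made-up page table: page_table[page] = frame holding that page.
       Note the frames are not contiguous even though the pages are. */
    static const unsigned page_table[] = {5, 9, 2, 7};

    static unsigned translate(unsigned logical)
    {
        unsigned page   = logical / PAGE_SIZE;  /* page number            */
        unsigned offset = logical % PAGE_SIZE;  /* offset within the page */
        return page_table[page] * PAGE_SIZE + offset;
    }

    int main(void)
    {
        unsigned logical = 1 * PAGE_SIZE + 123; /* page 1, offset 123 */
        printf("logical %u -> physical %u\n", logical, translate(logical));
        return 0;
    }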

Real Time Systems

  • A Real Time system requires that processes be finished before a certain deadline. Real-time does not mean Real-fast; it means that the system can respond to external environmental events almost instantly, seemingly running in "real-time".
  • An Embedded system is part of a larger system
  • A Safety critical system has catastrophic results in the case of failure
  • Hard real-time systems guarantee that processes will be completed before the deadline. Soft real-time systems only prioritize real-time tasks over other processes
  • Real-time systems are generally developed for a single purpose and have specific timing requirements. To achieve this, they are often developed using the System-on-Chip (SoC) strategy, which puts all the required hardware onto a single integrated circuit. This is in contrast to a bus-oriented system, which separates these components.
