Types of operating system - Explain the following types of system: batch, single-user (standalone), multi-user (multi-access), multi-tasking and multi-programming.
Interrupts - Describe a range of conditions or events which could generate interrupts.
Interrupts- Describe interrupt handling and the use of priorities.
Interrupts- Describe the factors involved in allocating differing priorities.
Memory management and buffering - Explain the reasons for, and possible consequences of, partitioning of main memory.
Memory management and buffering - Describe methods of data transfer including the use of buffers to allow for differences in speed of devices.
Memory management and buffering – Describe buffering and explain why double buffering is used.
Scheduling - Describe the principles of high level scheduling: processor allocation, allocation of devices and the significance of job priorities.
Scheduling - Explain the three basic states of a process: running, ready and blocked.
Scheduling - Explain the role of time-slicing, polling and threading.
This video aims to take students through a recap of operating systems in Computer Science. Topics covered here are batch processing, single-user and multi-user operating systems. Also included are multi-tasking and multi-programming operating systems.
A multiprogramming computer system is one where more than one job is held in the computer's main memory at the same time and can be processed by the computer's central processing unit (CPU) at (apparently) the same time.
Multiprogramming is used to ensure the most efficient use of the CPU and to prevent the CPU being idle while waiting for a slower peripheral. (Scheduling is an important part of this process: it decides which job is to be processed next, i.e. jobs may be prioritised as time slices are allocated.)
The operating system may move jobs in and out of memory and allows each job a pre-determined time-slice to access the CPU. The real-time clock causes regular interrupts to create time-slices.
A time-slice is the amount of time allocated to each job by the operating system, i.e. each task gets a chance of a time slice; slices are allocated quickly and priority can be altered (interrupts). The scheduler is responsible for allocating time slices. (See below - remember a process can be running, ready or blocked.)
Polling is the sequential checking of jobs so that each gets its appropriate share of time.
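The time-slicing described above can be sketched as a simple round-robin scheduler. This is a minimal illustration, not a real OS implementation; the job names and CPU times are invented for demonstration:

```python
from collections import deque

def round_robin(jobs, slice_ms):
    """Illustrative round-robin scheduling: each job in the ready queue
    gets one time slice in turn until its remaining work is done."""
    queue = deque(jobs.items())              # (name, remaining_ms) pairs
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                   # job is 'running' for one slice
        remaining -= slice_ms
        if remaining > 0:
            queue.append((name, remaining))  # unfinished: back to the ready queue
    return order

# Three jobs with different CPU requirements, 10 ms time slices
print(round_robin({"A": 30, "B": 10, "C": 20}, 10))
# ['A', 'B', 'C', 'A', 'C', 'A']
```

Note how job B, needing only one slice, finishes on the first pass, while A and C keep rejoining the queue - each job gets its share of time without the CPU ever sitting idle on one job.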
To allow more than one job to be resident in the main memory at any one time, the memory needs to be separated into separate parts; this is known as partitioning. Partitioning is the division of computer memory between different jobs, where memory is divided up into fixed or variable partitions or chunks. This is necessary to accommodate more than one task and to ensure each task doesn't corrupt another. Each task has a chunk to reside in, and when the task is finished its memory is released. If an operating system fails to manage memory partitioning correctly, the programs/data could corrupt each other.
Jobs are paged in and out to make the most efficient use of memory. This is all achieved by the use of interrupts.
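As a rough illustration of fixed partitioning, here is a first-fit allocation sketch. The partition sizes and occupants are invented, and a real memory manager is far more involved:

```python
def allocate(partitions, job_size):
    """First-fit allocation into fixed partitions.
    partitions is a list of (size_kb, occupant_or_None) pairs."""
    for i, (size, occupant) in enumerate(partitions):
        if occupant is None and size >= job_size:
            return i       # job can be loaded into partition i
    return None            # no free partition large enough: job must wait

mem = [(100, None), (200, "OS"), (300, None)]
print(allocate(mem, 250))  # 2 - first free partition big enough
print(allocate(mem, 50))   # 0
```

Each job lives entirely inside its own partition, which is how the OS ensures one task cannot overwrite another's memory.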
Multi-tasking occurs when more than one task or application is available to the user at the same time / can run at the same time. (The operating system/user can switch quickly from one task to another.) e.g. copying and pasting an image from the internet onto a Word document
** A large mainframe computer will have multiple processors and may run a number of process threads simultaneously. **
** A multi-user, multi-programming system will do the above, but with multiple users running separate processes from different terminals, such as in a call centre. **
Batch processing uses a transaction file to record events, then the master file is updated at the end of the period (day/week/month)
It is a simpler and more efficient system to operate, as all jobs are similar tasks and transactions.
It can be carried out automatically at times when the computer system is not otherwise in use (e.g. at night) and therefore requires limited resources, both human and computer.
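The transaction-file/master-file pattern above can be sketched in a few lines. This is only an in-memory illustration with invented account names and amounts; a real batch run would read and write files on disk:

```python
def batch_update(master, transactions):
    """Apply a period's transaction file to the master file in one pass,
    as a batch system would at the end of the day/week/month."""
    for account, amount in transactions:
        master[account] = master.get(account, 0) + amount
    return master

master = {"alice": 100, "bob": 50}
day_log = [("alice", -30), ("bob", 20), ("alice", 5)]   # recorded during the day
print(batch_update(master, day_log))  # {'alice': 75, 'bob': 70}
```

Because every job in the run is the same kind of transaction, the whole update can run unattended overnight.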
This video aims to take students through the operating systems chapter in Computer Science. Topics covered here are Interrupts, buffers and priority queues.
Interrupts are signals generated by a hardware device or by a software application to tell the CPU that some CPU time is needed; they may cause a break in the execution of the current routine.
What generates an interrupt?
• A hardware device has signalled that it has data to process / completed a task.
• A software process needs a service to be provided / OS function to be performed.
• An allotted amount of time has expired, and an action needs to be performed.
• A hardware failure has occurred and needs to be addressed.
Examples of different kinds of interrupts
Purpose of the highest priority interrupts:
• Impending data loss
• Impending hardware/software failure
• Detection of imminent power failure
Examples of hardware interrupts:
• Mouse click or mouse move
• Keyboard press
• Printer ready
• Printer out of paper
• Power failure

Examples of software interrupts:
• Mathematical error in program
• File handling error
• Execution of program
• CTRL/ALT/DEL
What happens with higher priority interrupts
A CPU can only ever do one job at a time. If it is working on one job and another job needs some CPU processing time, then that other job sends a signal, an interrupt, to the processor. When the processor receives the signal, it carries out the following steps.
• The operating system suspends the currently running routine.
• The operating system stores the address of the current instruction so it can return to it later.
• A priority level is assigned and stored in the processor's status register.
• The processor runs the new, higher-priority interrupt service routine.
• When the processor starts the execution of a program, its priority level is re-set to equal the priority of the program in execution.
• While executing the current program, the processor only accepts interrupts that have a higher priority level.
• While executing the interrupt service routine, the processor's priority level is again re-set, this time to the priority of the interrupt, so the processor will only accept interrupts with a higher priority and ignore interrupts with the same or lower priority.
• The operating system then returns to the original routine and continues.
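The priority rule in the steps above can be modelled as a toy sketch: the processor's priority level follows whatever it is currently executing, and only a strictly higher-priority interrupt pre-empts it. The priorities and device names here are invented, and completed routines are not modelled:

```python
def handle(interrupts):
    """Toy model of interrupt priorities: each arriving interrupt either
    pre-empts the current routine (higher priority) or is ignored."""
    stack = [("main program", 0)]   # (routine, priority); main runs at level 0
    log = []
    for name, prio in interrupts:
        current_name, current_prio = stack[-1]
        if prio > current_prio:
            log.append(f"suspend {current_name}, run {name}")
            stack.append((name, prio))     # processor priority rises with the ISR
        else:
            log.append(f"ignore {name}")   # same or lower priority is ignored
    return log

for line in handle([("disk", 2), ("clock", 1), ("power fail", 5)]):
    print(line)
```

The clock interrupt (priority 1) is ignored while the disk routine (priority 2) runs, but the power-failure interrupt (priority 5) pre-empts it immediately, matching the high-priority examples listed earlier.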
How does an interrupt affect the execution of the fetch-decode-execute cycle?
(Here you need to apply your knowledge to Topic 1)
1. The processor receives the interrupt and completes the fetch-decode-execute cycle of the instruction that it was running when it received the interrupt.
2. The current contents of the processor registers (including the program counter) are saved to memory.
3. The origin of the interrupt is identified so that the appropriate ISR (Interrupt Service Routine) is called.
4. All other lower-priority interrupts are suspended to allow the ISR to finish running.
5. The program counter is updated with the address of the first instruction of the ISR
6. The ISR / interrupt completes its execution
7. The processor registers are reloaded with the values that were saved to memory
8. The lower-priority interrupts that were suspended are re-established.
9. The program counter is set to point to the address of the next instruction that needs to be executed in the program that the processor was running when it received the interrupt.
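The save-and-restore at the heart of steps 2-9 can be sketched as a toy model. Register names, the ISR address and the "memory" dictionary are invented for illustration; real hardware does this with dedicated registers and a stack:

```python
def service_interrupt(registers, isr_address, memory):
    """Toy model of servicing an interrupt: save the registers
    (including the PC), jump to the ISR, run it, then restore."""
    memory["saved"] = dict(registers)      # step 2: save register contents
    registers["PC"] = isr_address          # step 5: PC points at the ISR
    registers["ACC"] = 0                   # step 6: the ISR clobbers registers
    registers.update(memory["saved"])      # steps 7 and 9: reload saved values
    return registers                       # execution resumes where it left off

regs = {"PC": 104, "ACC": 42}
print(service_interrupt(regs, 9000, {}))   # {'PC': 104, 'ACC': 42}
```

The key point is that the interrupted program cannot tell anything happened: once the saved registers are reloaded, the PC again points at its next instruction.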
This video aims to take students through the operating systems chapter in Computer Science. Topics covered here are memory management (including paging and segmentation) plus virtual memory.
Scheduling and processing states are also covered.
What happens during SCHEDULING
Processing of jobs is controlled by the scheduler.
The currently active programs will be held in a job queue (runnable/ready state) i.e. READY: A process that is queuing and prepared to execute when given the opportunity.
Each program will receive a slice of processing time when it reaches the front of the queue (running state) i.e. RUNNING: The process that is currently being executed.
If a program requires slow input or output, it will temporarily leave the job queue (blocked state) i.e. BLOCKED: A process that cannot execute until some event occurs, such as the completion of an I/O operation.
Input/output will be handled by the spooling system while the processor continues to process other jobs.
The scheduler will poll the blocked jobs, to check when input/output is completed and the job can re-join the job queue. There are two different designs of scheduler - a single queue with equal job priority and multiple job queues e.g. with small fast jobs given priority over large slow jobs.
Polling is used to check the state of input/output devices and to check when suspended jobs are ready to re-join the job queue.
Efficiency can be improved by providing multiple job queues, so that users experience minimum delays.
** This overlaps with data structures and the idea of a queue, i.e. smaller jobs enter a fast queue. Larger jobs are held in a main queue and only receive processor time when the fast queue is empty - compare the express checkout at a supermarket. **
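The two-queue design above can be sketched directly with queues from the standard library. The job names are invented; the rule shown is simply "serve the fast queue first":

```python
from collections import deque

def next_job(fast, main):
    """Two-queue high-level scheduling sketch: small jobs sit in a fast
    queue; the main queue is only served when the fast queue is empty."""
    if fast:
        return fast.popleft()
    if main:
        return main.popleft()
    return None            # nothing ready to run

fast = deque(["print letter", "small query"])   # small, quick jobs
main = deque(["payroll run"])                   # large, slow jobs
print(next_job(fast, main))  # print letter
print(next_job(fast, main))  # small query
print(next_job(fast, main))  # payroll run
```

Like the supermarket express lane, small jobs get through quickly while the large job still runs once the fast queue empties, so users experience minimum delays.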
Threading
Scheduling is where the OS manages multiple processes. However, a process can split itself up into mini-processes known as threads. Each thread is managed by the scheduler separately, meaning they run in no fixed order. It can therefore take advantage of multiple cores and perform background tasks such as spell checking in a word processor whilst freeing the main application to perform other tasks.
The downside is synchronisation: you have no control over the order in which the threads are executed, which can cause problems if one thread is reliant on another for completion.
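Both points can be seen with Python's threading module. In this small demo the thread names are invented; the scheduler decides the order in which the threads append to the list, so only by joining every thread (and sorting the result) do we get a predictable answer:

```python
import threading

results = []
lock = threading.Lock()          # protects the shared list from concurrent appends

def worker(name):
    # Each thread records its name; the scheduler runs threads in no
    # fixed order, so the order of arrival in `results` is not guaranteed.
    with lock:
        results.append(name)

threads = [threading.Thread(target=worker, args=(f"thread-{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # synchronise: wait for every thread to finish

print(sorted(results))           # all four names, though arrival order may vary
```

This is the synchronisation problem in miniature: without the join (and the lock), a thread that relied on another's result might run before that result exists.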