📚 Chapters

Operating System

Unit 1 - Introduction & Process Management

📖 Full Notes
⚡ Last Min Notes
💻 Chapter 1 — Introduction to Operating System
📊 OPERATING SYSTEM - MIND MAP
OS → Software that manages hardware and provides services to applications
Structure → Monolithic, Layered, Microkernel, Modular
Functions → Process, Memory, File, I/O, Security Management
Types → Batch, Time-sharing, Real-time, Distributed, Mobile
System Calls → Interface between user programs and OS kernel
Process → Program in execution with PCB and states
1. Introduction to Operating Systems
Operating System (OS):
An Operating System is system software that acts as an intermediary between computer hardware and user applications. It manages hardware resources and provides services to application programs.
Real-life Example - Restaurant Manager:
Think of the OS as a restaurant manager:
• Customers (Users/Applications) want food
• Kitchen (Hardware) cooks the food
• Manager (OS) takes orders, assigns tasks to chefs, manages kitchen equipment, and ensures everything runs smoothly

Just as the manager coordinates between customers and the kitchen, the OS coordinates between applications and hardware!
Goals of Operating System:
1. Convenience: Make computer easy to use (GUI, commands)
2. Efficiency: Use hardware resources efficiently
3. Resource Management: Manage CPU, memory, storage, I/O devices
4. Security: Protect data and prevent unauthorized access
5. Reliability: Ensure system works correctly without failures
Components of Computer System:
4 Components (bottom to top):
1. Hardware (Bottom Layer): CPU, Memory, Disk, I/O devices
2. Operating System (Middle Layer): Controls hardware, provides services
3. Application Programs (Upper Layer): Word processors, browsers, games
4. Users (Top Layer): People, other computers, machines
Key Point: OS is the only program that runs at all times on the computer (called the kernel). Everything else is either a system program or application program.
2. Operating System Structure
OS Structure:
The way an operating system organizes its components and how they interact with each other.
Types of OS Structures:
1. Monolithic Structure:
All OS services run in kernel mode as a single large program.

Characteristics:
• No internal structure - everything in one big program
• All functions have access to all data
• Fast (no overhead of communication)

Example: MS-DOS, Early UNIX

Advantages: ✓ Fast, ✓ Simple
Disadvantages: ✗ Hard to maintain, ✗ One bug crashes entire system
2. Layered Structure:
OS divided into layers, each layer uses services of layer below it.

Layers (Bottom to Top):
Layer 0: Hardware
Layer 1: CPU Scheduling
Layer 2: Memory Management
Layer 3: I/O Management
Layer 4: User Programs

Example: THE OS, Windows NT

Advantages: ✓ Easy to debug (layer by layer), ✓ Modular
Disadvantages: ✗ Slow (many layers), ✗ Hard to define layers
3. Microkernel Structure:
Only essential services in kernel, rest run as user processes.

Kernel Contains:
• Minimal process management
• Memory management
• Inter-process communication (IPC)

User Space Contains:
• File system
• Device drivers
• Network protocols

Example: Mach, QNX, Minix

Advantages: ✓ Reliable (less code in kernel), ✓ Easy to extend
Disadvantages: ✗ Slower (frequent message passing)
4. Modular Structure (Loadable Kernel Modules):
Modern approach - kernel has core components, others loaded as modules.

How it works:
• Core kernel is small
• Modules loaded dynamically when needed
• Each module talks to the core via defined interfaces

Example: Linux, Solaris

Advantages: ✓ Flexible, ✓ Efficient, ✓ Easy to add features
Best of both worlds: the speed of a monolithic kernel + the modularity of a layered design
3. Main Functions of Operating System
1. Process Management:
What is Process Management?
Managing programs that are currently running (processes).

OS Responsibilities:
• Create and delete processes
• Suspend and resume processes
• Provide mechanisms for process synchronization
• Provide mechanisms for process communication
• Handle deadlocks

Example: When you open Chrome, Word, and Spotify together, OS manages all three processes, decides which gets CPU time, and ensures they don't interfere with each other.
2. Memory Management:
What is Memory Management?
Managing computer's RAM (main memory).

OS Responsibilities:
• Keep track of which memory is used and by whom
• Decide which processes to load into memory
• Allocate and deallocate memory
• Prevent processes from accessing each other's memory

Example: If you have 8GB RAM and run programs needing 10GB, OS uses virtual memory (hard disk space) to manage the extra 2GB.
3. File Management:
What is File Management?
Managing files and directories on storage devices.

OS Responsibilities:
• Create and delete files and directories
• Provide operations on files (read, write, append)
• Map files to disk storage
• Back up files
• Provide file permissions and security

Example: When you save a document, OS decides where on the hard disk to store it, updates directory structure, and manages file permissions.
4. I/O Device Management:
What is I/O Management?
Managing input/output devices (keyboard, mouse, printer, etc.)

OS Responsibilities:
• Provide device drivers
• Buffer and cache data
• Handle I/O operations
• Manage device queues

Example: When you print a document, OS uses printer driver to communicate with printer, manages print queue if multiple documents are waiting.
5. Security and Protection:
What is Security?
Protecting system from unauthorized access and threats.

OS Responsibilities:
• User authentication (passwords, biometrics)
• Access control (file permissions)
• Protect processes from each other
• Defend against viruses and malware

Example: Login password, file permissions (who can read/write/execute), preventing one user from accessing another user's files.
4. Characteristics of Operating System
1. Resource Manager:
OS manages all hardware resources (CPU, memory, I/O devices) efficiently.

2. Interface Provider:
Provides user-friendly interface (GUI or Command Line) to interact with computer.

3. Multitasking:
Run multiple programs simultaneously.
Example: Listen to music while browsing and typing document.

4. Multiprogramming:
Keep multiple programs in memory and execute them by switching.

5. Time Sharing:
Multiple users can use the system simultaneously.
Example: Multiple users logged into a server.

6. Error Handling:
Detect and handle errors (hardware failure, division by zero, etc.)

7. Memory Protection:
Prevent processes from accessing memory they shouldn't.

8. Job Scheduling:
Decide which process runs when and for how long.
5. Types of Operating Systems
1. Batch Operating System:
Batch OS: Jobs grouped and executed without user interaction

Real-life Example: Bank processes all checks in one batch at night

Advantages: ✓ High throughput ✓ Less CPU idle time
Disadvantages: ✗ No interaction ✗ Long waiting
2. Time-Sharing OS (Multitasking):
Time-Sharing: Multiple users share CPU by time slicing

How: The CPU gives a small time slice to each process and switches rapidly

Examples: Windows, Linux, macOS

Advantages: ✓ Multiple users ✓ Quick response
Disadvantages: ✗ Context switching overhead
3. Real-Time OS (RTOS):
Hard Real-Time: Deadline must be met (Airbag, Pacemaker)
Soft Real-Time: Deadline can be missed occasionally (Video streaming)

Advantages: ✓ Guaranteed response ✓ High reliability
Disadvantages: ✗ Complex ✗ Expensive
4. Distributed OS:
Distributed: Multiple computers appear as one system

Example: Google servers working together

Advantages: ✓ Reliability ✓ Scalability
Disadvantages: ✗ Complex ✗ Security
5. Mobile OS:
Mobile: OS for smartphones/tablets
Examples: Android, iOS
Features: Touch interface, Battery optimization
6. System Calls
System Call: Interface between user program and OS kernel. Programs request OS services through system calls.
How a System Call Works:
1. User Program → Calls library function
2. Mode Switch → User mode to Kernel mode
3. OS Kernel → Executes service
4. Return Result → Back to user program
5. Mode Switch → Kernel mode to User mode

User Mode: Limited privileges, cannot access hardware directly
Kernel Mode: Full privileges, can access everything
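The round trip above can be seen from Python, whose os module wraps the underlying system calls (sketch assumes a POSIX system, since fork() is unavailable on Windows):

```python
import os

# getpid() wraps the getpid system call: the program traps into the
# kernel (user mode -> kernel mode), the kernel returns the PID, and
# control returns to user mode with the result.
print("My PID:", os.getpid())

# os.fork() wraps the fork system call (POSIX only): it creates a new
# process and returns 0 in the child, the child's PID in the parent.
child = os.fork()
if child == 0:
    os._exit(0)                  # child terminates via the exit system call
else:
    reaped, status = os.wait()   # parent blocks until the child exits
    print("Reaped child", reaped)
```

Each call here crosses the user/kernel boundary exactly as in the five steps listed above.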
7. Types of System Calls
📊 Types of System Calls (image)
8. System Programs
System Programs: Utilities provided with OS

Types:
1. File Management (File explorer)
2. Status Information (Task Manager)
3. File Modification (Text editors)
4. Programming Support (Compilers)
5. Program Loading (Loaders)
6. Communications (Email, FTP)
7. Background Services (Antivirus)
โš™๏ธ Chapter 2 โ€” Process Management
1. Process Concept
Process:
A process is a program in execution. When a program (stored on disk) is loaded into memory and executed, it becomes a process.
Program vs Process:

Program:
• Passive entity (just code stored on disk)
• Example: Chrome.exe file on hard disk
• Static (doesn't change)

Process:
• Active entity (program running in memory)
• Example: Chrome browser running on your screen
• Dynamic (state changes during execution)

Analogy:
Program = Recipe book (inactive)
Process = Cooking using recipe (active)
Components of a Process:
📊 Process Memory Layout (image)
1. Text Section (Code):
• Contains executable code (instructions)
• Read-only (cannot be modified)

2. Data Section:
• Global variables
• Static variables

3. Heap:
• Dynamically allocated memory
• Grows upward during execution
• Managed by the programmer (malloc/free)

4. Stack:
• Function calls
• Local variables
• Return addresses
• Grows downward during execution
2. Process Control Block (PCB)
Process Control Block (PCB):
A data structure maintained by OS for every process. Contains all information about the process. Also called Task Control Block (TCB).
Think of PCB as:
Student's report card containing all information: name, roll number, marks, attendance, behavior notes.

Similarly, PCB contains everything about a process!
Information in PCB:
📊 PCB Structure (image)
1. Process ID (PID):
Unique number identifying the process
Example: Chrome PID = 1234

2. Process State:
Current state (New, Ready, Running, Waiting, Terminated)

3. Program Counter:
Address of next instruction to execute

4. CPU Registers:
Values of all CPU registers (saved during context switch)

5. CPU Scheduling Information:
• Priority
• Scheduling queue pointers

6. Memory Management Information:
• Base and limit registers
• Page tables

7. Accounting Information:
• CPU time used
• Time limits
• Process start time

8. I/O Status Information:
• List of open files
• I/O devices allocated
Why PCB is Important:
When OS switches from one process to another (context switch), it saves current process state in PCB and loads next process state from its PCB. Without PCB, OS wouldn't know where to resume execution!
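The PCB fields listed above can be sketched as a simple data structure (a hypothetical, simplified PCB — the field names mirror the list in these notes, not any particular kernel's struct):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified Process Control Block, one per process."""
    pid: int                       # unique process ID
    state: str = "New"             # New/Ready/Running/Waiting/Terminated
    program_counter: int = 0       # address of next instruction
    registers: dict = field(default_factory=dict)   # saved on context switch
    priority: int = 0              # scheduling information
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0     # accounting information

# On a context switch, the OS saves the outgoing process's program
# counter and registers into its PCB, then restores the incoming
# process's values from its own PCB.
chrome = PCB(pid=1234, state="Running", program_counter=0x4F2A)
print(chrome.pid, chrome.state)
```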
3. Process States
Process States:
A process changes state during its execution. There are 5 main states a process can be in.
📊 Process State Diagram (image)
1. New State:
New: Process is being created
• OS is allocating resources
• PCB is being initialized
• Not yet admitted to the ready queue

Example: You double-click the Chrome icon → Process in New state
2. Ready State:
Ready: Process is ready to execute, waiting for CPU
• All resources allocated except CPU
• In the ready queue
• Waiting for the scheduler to assign the CPU

Example: Multiple students ready to ask teacher a question, waiting for their turn
3. Running State:
Running: Process is currently executing on the CPU
• Instructions being executed
• Only ONE process can run on one CPU at a time

Example: Student currently asking the teacher a question
4. Waiting State (Blocked):
Waiting: Process waiting for some event (e.g., I/O completion)
• Cannot execute even if the CPU is free
• Waiting for an I/O operation to complete
• Or waiting for a signal/event

Example: Process waiting for the user to click a button, or waiting for a file to load from disk
5. Terminated State:
Terminated: Process has finished execution
• All resources released
• PCB being deleted
• Process removed from the system

Example: You close Chrome → Process terminates
4. Process State Transitions
State Transitions Explained:
1. New → Ready (Admitted):
OS admits the process to the ready queue
Example: OS decides there's enough memory for the new process

2. Ready → Running (Scheduler Dispatch):
CPU scheduler selects the process from the ready queue
Example: Your process gets CPU time

3. Running → Ready (Interrupt):
Process loses the CPU (time slice expired or a higher priority process arrives)
Example: Timer interrupt - your time is up!

4. Running → Waiting (I/O or Event Wait):
Process needs an I/O operation
Example: Process reading from disk - must wait for the disk

5. Waiting → Ready (I/O Completion):
I/O operation completed, process can run again
Example: File loaded from disk, process ready to continue

6. Running → Terminated (Exit):
Process completes execution or is killed
Example: Program finishes or crashes
Real-life Example - Downloading a File:

1. New: You click the download button → Process created
2. Ready: Process ready to start the download
3. Running: Process starts downloading
4. Waiting: Waiting for network packets to arrive
5. Ready: Data received, ready to save to disk
6. Running: Writing data to disk
7. Terminated: Download complete!
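The legal transitions above can be sketched as a small state machine (event names like "admitted" and "io_wait" are illustrative labels for the transitions, not standard API names):

```python
# Legal transitions of the 5-state model, keyed by (current_state, event).
TRANSITIONS = {
    ("New", "admitted"): "Ready",
    ("Ready", "dispatch"): "Running",
    ("Running", "interrupt"): "Ready",
    ("Running", "io_wait"): "Waiting",
    ("Waiting", "io_done"): "Ready",
    ("Running", "exit"): "Terminated",
}

def step(state, event):
    """Return the next state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} on {event}")

# The download example above, compressed into one run:
s = "New"
for e in ["admitted", "dispatch", "io_wait", "io_done", "dispatch", "exit"]:
    s = step(s, e)
print(s)  # Terminated
```

Note that there is no (Waiting, dispatch) entry: a waiting process cannot go straight to Running, it must pass through Ready first.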
5. Process Scheduling
Process Scheduling:
The method by which OS decides which process runs on CPU and for how long. Goal is to maximize CPU utilization and provide fair execution time to all processes.
Scheduling Queues:
Types of Queues:
1. Job Queue: All processes in the system
2. Ready Queue: Processes in main memory, ready to execute, waiting for the CPU
3. Device Queue: Processes waiting for an I/O device (each device has its own queue)

Example:
Ready Queue: [P1, P3, P5] ← Waiting for CPU
Disk Queue: [P2, P4] ← Waiting for disk
Printer Queue: [P6] ← Waiting for printer
Types of Schedulers:
1. Long-Term Scheduler (Job Scheduler):
• Selects which processes to bring into the ready queue
• Controls the degree of multiprogramming
• Executes less frequently (minutes)
• Decides: New → Ready

Example: Decides whether to load new program into memory or not
2. Short-Term Scheduler (CPU Scheduler):
• Selects which process from the ready queue gets the CPU
• Executes very frequently (milliseconds)
• Must be very fast!
• Decides: Ready → Running

Example: Every 10ms, decides which process runs next
3. Medium-Term Scheduler:
• Swaps processes in/out of memory
• Reduces the degree of multiprogramming
• Used in swapping

Example: Moves inactive process from RAM to disk to free memory
Context Switch:
Context Switch:
The process of saving state of current process and loading state of next process.
Context Switch Steps:
1. Save the state of Process A in its PCB (program counter, CPU registers, memory maps)
2. Load the state of Process B from its PCB (restore program counter, CPU registers, memory maps)
3. Process B starts running

Time: A context switch is pure overhead - no useful work is done during the switch. It typically takes 1-10 microseconds.
Important: Context switching is expensive! That's why OS tries to minimize it. Too many context switches = system becomes slow.
6. Threads
Thread:
A thread is a lightweight process - the smallest unit of execution within a process. Multiple threads can exist within a single process, sharing the same resources but executing independently.
Real-life Analogy - Microsoft Word:

Single Process (Word):
• Main process = Word application

Multiple Threads within Word:
• Thread 1: Accept your typing
• Thread 2: Check spelling (red underlines)
• Thread 3: Auto-save the document
• Thread 4: Display the UI

All happening simultaneously within one Word process!
Process vs Thread:
📊 Process vs Thread (image)
Benefits of Threads:
1. Responsiveness:
Application remains responsive even if part is blocked
Example: Web browser can download file while you browse other tabs

2. Resource Sharing:
Threads share memory and resources of process
No need for IPC mechanisms
Easier communication between threads

3. Economy:
Creating thread is cheaper than creating process
Context switch between threads is faster
Example: Thread creation 10-100x faster than process

4. Scalability (Multicore):
Threads can run on different CPU cores simultaneously
True parallelism on multicore systems
Example: Video encoding - each thread processes different frame
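The resource-sharing benefit can be seen directly with Python's threading module: two threads in one process write to the same list, with no IPC needed (thread names are illustrative, echoing the Word example above):

```python
import threading

shared = []                      # one list, visible to every thread
lock = threading.Lock()          # threads share memory, so guard writes

def worker(name, n):
    for i in range(n):
        with lock:
            shared.append((name, i))

t1 = threading.Thread(target=worker, args=("spellcheck", 3))
t2 = threading.Thread(target=worker, args=("autosave", 3))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(shared))   # 6 - both threads wrote into the same list
```

Two separate processes could not share `shared` this way; they would need shared memory or message passing (see IPC later in this unit).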
Types of Threads:
1. User-Level Threads (ULT):
Managed by user-level thread library, OS doesn't know about them

How it works:
• Thread library in user space manages threads
• OS sees only one process
• Thread switching done by the library (no kernel involvement)

Advantages:
✓ Fast thread creation and switching
✓ Can run on any OS
✓ Flexible scheduling

Disadvantages:
✗ If one thread blocks, the entire process blocks
✗ Cannot use multiple CPUs (OS thinks it's one process)

Example: Java Green Threads (old)
2. Kernel-Level Threads (KLT):
Managed directly by OS kernel

How it works:
• OS knows about each thread
• Kernel schedules threads
• OS maintains a thread table

Advantages:
✓ If one thread blocks, others can continue
✓ Can use multiple CPUs (true parallelism)
✓ Better for multicore systems

Disadvantages:
✗ Slower (kernel mode switches required)
✗ More overhead

Example: Windows threads, Linux threads
Multithreading Models:
1. Many-to-One Model:
Many user threads → mapped to → One kernel thread

Diagram:
User Level: [T1] [T2] [T3] [T4]
                  ↓
Kernel Level:     [K1]

Characteristics:
• If one thread blocks, all block
• Cannot run on multiple CPUs
• Fast thread management

Example: Green Threads
2. One-to-One Model:
One user thread → mapped to → One kernel thread

Diagram:
User Level: [T1] [T2] [T3] [T4]
              ↓    ↓    ↓    ↓
Kernel Level: [K1] [K2] [K3] [K4]

Characteristics:
• If one thread blocks, others continue
• Can use multiple CPUs
• Creating a thread = creating a kernel thread (overhead)
• Most systems limit the number of threads

Example: Windows, Linux
3. Many-to-Many Model:
Many user threads → mapped to → Many (fewer) kernel threads

Diagram:
User Level: [T1] [T2] [T3] [T4] [T5] [T6]
                  ↓    ↓    ↓
Kernel Level:     [K1] [K2] [K3]

Characteristics:
• OS creates a sufficient number of kernel threads
• Best of both worlds
• Can run on multiple CPUs
• If one blocks, others continue

Example: Solaris, older UNIX versions
7. CPU Scheduling
CPU Scheduling:
The process of deciding which process in the ready queue gets the CPU next. Goal is to keep CPU busy and provide fair execution time to all processes.
Why Scheduling Needed?
In multiprogramming, when one process waits (for I/O), CPU switches to another process. Scheduling decides which process runs next to maximize CPU utilization.
Preemptive vs Non-Preemptive Scheduling:
Non-Preemptive Scheduling:
Once CPU is allocated to a process, process keeps it until it terminates or switches to waiting state. CPU cannot be taken away forcefully.

Characteristics:
• Process runs until completion or blocks
• No interruption by the scheduler
• Simple to implement

Real-life Example:
Doctor's appointment - once the doctor starts with a patient, they continue until done. The next patient must wait.

Advantages:
✓ Simple
✓ Low overhead (fewer context switches)
✓ Predictable

Disadvantages:
✗ A long process blocks the CPU for a long time
✗ Poor response time
✗ Not suitable for time-sharing systems

Examples: FCFS, SJF (non-preemptive), Priority (non-preemptive)
Preemptive Scheduling:
CPU can be taken away from a running process before it completes. Scheduler can interrupt and switch to another process.

Characteristics:
• Process can be interrupted
• A higher priority process can take the CPU
• Timer interrupts are used

Real-life Example:
Emergency room - if a critical patient arrives, the doctor pauses the current patient and attends to the emergency first.

Advantages:
✓ Better response time
✓ Fair to all processes
✓ Suitable for time-sharing
✓ High priority tasks handled quickly

Disadvantages:
✗ Complex
✗ Context switch overhead
✗ Risk of race conditions

Examples: Round Robin, SRTF, Priority (preemptive)
📊 Preemptive vs Non-Preemptive Scheduling (image)
8. Scheduling Criteria
Scheduling Criteria:
Metrics used to evaluate and compare different CPU scheduling algorithms. Help determine which algorithm is best for a given situation.
Key Criteria:
1. CPU Utilization:
Percentage of time CPU is busy (not idle)

Goal: Maximize (keep CPU as busy as possible)
Range: 0% to 100%
Good: 40% (lightly loaded) to 90% (heavily loaded)

Formula:
CPU Utilization = (Total Busy Time / Total Time) × 100%

Example:
If the CPU is busy for 80 seconds out of 100 seconds → 80% utilization
2. Throughput:
Number of processes completed per unit time

Goal: Maximize (complete more processes)
Unit: processes/hour or processes/second

Formula:
Throughput = Number of processes completed / Total time

Example:
10 processes completed in 100 seconds → Throughput = 0.1 processes/second
3. Turnaround Time:
Total time from process submission to completion
Time interval from arrival to termination

Goal: Minimize

Formula:
Turnaround Time = Completion Time - Arrival Time
Or
Turnaround Time = Waiting Time + Burst Time

Example:
Process arrives at time 0, completes at time 10
→ Turnaround Time = 10 - 0 = 10 seconds
4. Waiting Time:
Total time process spends in ready queue waiting for CPU
Does NOT include execution time

Goal: Minimize

Formula:
Waiting Time = Turnaround Time - Burst Time

Example:
Turnaround Time = 10s, Burst Time = 3s
→ Waiting Time = 10 - 3 = 7 seconds
5. Response Time:
Time from process submission to first response (first time it gets CPU)
Important for interactive systems

Goal: Minimize

Formula:
Response Time = Time of first response - Arrival Time

Example:
Process arrives at 0, first gets CPU at time 5
→ Response Time = 5 - 0 = 5 seconds

Difference from Waiting Time:
Response time measures only until the FIRST response; waiting time is the TOTAL time spent waiting in the ready queue
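The three formulas above can be packed into one helper and checked against the worked numbers from this section (the function name `metrics` is illustrative):

```python
def metrics(arrival, burst, completion, first_run):
    """Per-process scheduling criteria, using the formulas above."""
    turnaround = completion - arrival   # TAT = CT - AT
    waiting = turnaround - burst        # WT  = TAT - BT
    response = first_run - arrival      # RT  = time of first CPU - AT
    return turnaround, waiting, response

# The examples above: arrives at 0, first gets the CPU at 5,
# burst time 3, completes at 10.
tat, wt, rt = metrics(arrival=0, burst=3, completion=10, first_run=5)
print(tat, wt, rt)   # 10 7 5
```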
📊 Summary of Scheduling Criteria (image)
Important:
No single algorithm is best for all criteria. Trade-offs exist:
• An algorithm good for throughput may have poor response time
• An algorithm minimizing waiting time may reduce CPU utilization
• The choice depends on system requirements (batch vs interactive)
9. CPU Scheduling Algorithms
Scheduling Algorithms:
Rules/methods used by CPU scheduler to decide which process from the ready queue gets the CPU next.
1. First Come First Serve (FCFS):
FCFS: Simplest algorithm. Process that arrives first gets CPU first. Non-preemptive.
Real-life Example: Queue at ticket counter - First person in line gets served first!

Implementation: Using FIFO queue
• New process added to the tail of the queue
• Scheduler picks from the head of the queue
📊 Example of FCFS Scheduling (image)
Advantages:
✓ Simple and easy to implement
✓ Fair (first come, first served)
✓ No starvation
Disadvantages:
✗ Convoy effect (short processes wait for a long process)
✗ Poor average waiting time
✗ Not suitable for time-sharing systems
✗ Non-preemptive (cannot interrupt)
Convoy Effect: When short processes wait for a long process to complete. Like being stuck behind a slow truck on highway - all fast cars must wait!
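A minimal FCFS simulation makes the convoy effect concrete (the function `fcfs` and the P1-P3 workload are illustrative, not taken from the image above):

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), in any order.
    Returns {name: (completion, turnaround, waiting)}."""
    time, result = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)   # CPU may sit idle until the job arrives
        time += burst               # non-preemptive: run to completion
        tat = time - arrival        # TAT = CT - AT
        result[name] = (time, tat, tat - burst)   # WT = TAT - BT
    return result

# Convoy effect: long P1 arrives first, so short P2 and P3 wait behind it.
print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
```

Here P2 and P3 each wait over 20 time units for 3-unit jobs; reordering the short jobs first (SJF, next) would slash the average waiting time.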
2. Shortest Job First (SJF):
SJF: Process with smallest burst time gets CPU first. Can be preemptive or non-preemptive. Optimal algorithm (gives minimum average waiting time).
📊 Example of Non-Preemptive SJF Scheduling (image)
Advantages:
✓ Minimum average waiting time
✓ Optimal algorithm
✓ Better throughput than FCFS
Disadvantages:
✗ Starvation (long processes may never execute)
✗ Exact burst time cannot be known in advance
✗ Requires prediction of burst time
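Non-preemptive SJF can be sketched in a few lines: at each decision point, pick the shortest job among those that have already arrived (function name and workload are illustrative):

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting)}."""
    pending = sorted(processes, key=lambda p: p[1])
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                        # CPU idle: jump to next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2]) # shortest burst among arrived
        pending.remove(job)
        name, arrival, burst = job
        time += burst                        # run to completion
        tat = time - arrival
        done[name] = (time, tat, tat - burst)
    return done

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

In this run P1 must finish first (it is alone at t=0), then the 1-unit P3 jumps ahead of the two 4-unit jobs: the scheduling decision is made only when the CPU becomes free, which is exactly what "non-preemptive" means.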
3. Shortest Remaining Time First (SRTF):
SRTF: Preemptive version of SJF. If new process arrives with shorter remaining time than current process, CPU switches to new process.
📊 Example of Preemptive SJF (SRTF) Scheduling (image)
4. Priority Scheduling:
Priority Scheduling: Each process has a priority. CPU allocated to process with highest priority. Can be preemptive or non-preemptive.
Priority Values:
• Lower number = Higher priority (usually)
• Example: Priority 1 > Priority 5

Real-life Example: Emergency room - critical patients (high priority) are treated before minor injuries (low priority)
📊 Example of Non-Preemptive Priority Scheduling (image)
Major Problem - Starvation:
Low priority processes may never execute if high priority processes keep arriving.

Solution - Aging:
Gradually increase priority of waiting processes over time.
Example: Every 10 minutes of waiting → increase priority by 1
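Aging can be sketched as a periodic boost applied to every waiting process (the `age` function and the 10-tick interval are illustrative; here, as above, a lower number means higher priority):

```python
def age(queue, ticks, boost_every=10):
    """queue: {name: priority_number}. Apply `ticks` time units of aging:
    every `boost_every` ticks spent waiting lowers the priority number
    by 1 (i.e. raises the priority), floored at 1."""
    boosts = ticks // boost_every
    return {name: max(1, prio - boosts) for name, prio in queue.items()}

waiting = {"backup": 9, "report": 7}
print(age(waiting, ticks=30))   # {'backup': 6, 'report': 4}
```

Eventually even a priority-9 background job climbs high enough to be scheduled, which is what prevents starvation.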
5. Round Robin (RR):
Round Robin: Each process gets small unit of CPU time (time quantum), then moved to end of queue. Preemptive. Designed for time-sharing systems.
Real-life Example: Teacher giving each student 5 minutes to ask questions. After 5 minutes, next student gets turn. First student goes to back of line if more questions.

Time Quantum (q): Fixed time slice (typically 10-100 milliseconds)
📊 Example of Round Robin Scheduling (image)
Advantages:
✓ Fair to all processes
✓ No starvation
✓ Good response time
✓ Suitable for time-sharing systems
Disadvantages:
✗ Context switch overhead
✗ Performance depends on the time quantum
✗ Average waiting time often high
Choosing the Time Quantum:
• Too large: Becomes like FCFS (poor response)
• Too small: Too many context switches (overhead)
• Rule of thumb: 80% of processes should complete within one quantum
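The circulating-queue behaviour can be sketched with a deque (the workload and quantum are illustrative; all jobs are assumed to arrive at t=0 to keep the sketch short):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst), all arriving at t=0.
    Returns {name: completion_time}."""
    queue = deque(processes)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)    # run for one quantum at most
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the queue
        else:
            completion[name] = time
    return completion

# With q=2: P3 finishes at 5, P2 at 8, P1 at 9.
print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
```

Re-running with `quantum=100` gives pure FCFS order, illustrating the "too large" caveat above.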
6. Multilevel Queue Scheduling:
Multilevel Queue: Ready queue divided into multiple queues based on process characteristics. Each queue has its own scheduling algorithm.
📊 Multilevel Queue Scheduling (image)
Example Categories:
• Foreground (Interactive): User applications (high priority, RR)
• Background (Batch): Compilations, backups (low priority, FCFS)
Advantages:
✓ Different algorithms for different process types
✓ Low overhead (no queue switching)
✓ Priority-based execution
Disadvantages:
✗ Starvation (low priority queues may starve)
✗ Inflexible (a process cannot move between queues)
10. Scheduling Algorithms Comparison
📊 Complete Comparison of Scheduling Algorithms (image)
Key Takeaways:
• FCFS: Simplest, but convoy effect
• SJF: Optimal, but starvation possible
• SRTF: Better than SJF, but more overhead
• Priority: Flexible, but needs aging to avoid starvation
• Round Robin: Fair, but context switch overhead
• Multilevel: Realistic, but complex

No algorithm is perfect! Choice depends on system requirements.
11. Inter-process Communication (IPC)
Inter-process Communication (IPC):
Mechanism that allows processes to communicate and synchronize their actions. Processes need to exchange data and coordinate execution.
Why IPC Needed?
• Share information between processes
• Speed up computation (divide work among processes)
• Modularity (separate concerns)
• Convenience (a user may run multiple tasks)

Example: Web browser - one process downloads, another displays, another handles user input
Two Models of IPC:
1. Shared Memory:
Processes share a common memory region for communication

How it works:
• OS creates a shared memory segment
• Processes read/write to this shared region
• Fast (no kernel involvement after setup)

Advantages:
✓ Fast (direct memory access)
✓ Efficient for large data transfers

Disadvantages:
✗ Needs synchronization (to avoid conflicts)
✗ Complex to implement
2. Message Passing:
Processes communicate by sending/receiving messages

Operations:
• send(message)
• receive(message)

Advantages:
✓ Easier to implement
✓ Works for distributed systems
✓ No conflicts (OS manages the channel)

Disadvantages:
✗ Slower (kernel calls needed)
✗ Overhead for small messages
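The message-passing model can be sketched with Python's multiprocessing module, where an OS-managed queue plays the role of the channel: put() is send() and get() is receive() (assumes a POSIX system where child processes are started by fork):

```python
from multiprocessing import Process, Queue

def producer(q):
    # A separate process: it cannot touch the parent's variables,
    # so it communicates by sending a message through the queue.
    q.put("hello from the producer process")

if __name__ == "__main__":
    q = Queue()                      # the kernel-mediated channel
    p = Process(target=producer, args=(q,))
    p.start()
    msg = q.get()                    # receive(): blocks until a message arrives
    p.join()
    print(msg)
```

Each put/get crosses into the kernel, which is exactly the "slower but conflict-free" trade-off listed above.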
📊 Comparison (image)
12. Remote Procedure Calls (RPC)
Remote Procedure Call (RPC):
Allows a program to execute a procedure (function) on another machine as if it were a local call. Makes distributed computing transparent.
Real-life Analogy:
Ordering food by phone:
• You call the restaurant (remote machine)
• Ask for a pizza (call the procedure)
• Wait for delivery (wait for the result)
• Get the pizza (receive the result)

You don't go to restaurant yourself - phone call handles everything!
📊 How RPC Works (image)
Key Point: Client doesn't know the function is executing remotely. RPC makes remote calls look like local calls!
13. Process Synchronization
Process Synchronization:
Coordination of execution of multiple processes to ensure data consistency when processes access shared resources.
Why Synchronization Needed?

Example - Bank Account (Race Condition):
Initial Balance = ₹1000

Process A: Withdraw ₹500
1. Read balance (₹1000)
2. Calculate: 1000 - 500 = ₹500
3. Write back ₹500

Process B: Deposit ₹300 (runs simultaneously)
1. Read balance (₹1000) ← Wrong! Should be ₹500
2. Calculate: 1000 + 300 = ₹1300
3. Write back ₹1300

Final Balance: ₹1300 (Wrong! Should be ₹800)
The ₹500 withdrawal was lost! This is a race condition.
Critical Section Problem:
Critical Section: Part of code where shared resources are accessed. Only one process should execute in critical section at a time.
Critical Section Structure:
do {
    // Entry Section (request permission)

    // CRITICAL SECTION
    // (access shared resource)

    // Exit Section (release permission)

    // Remainder Section (other code)
} while (true);
Requirements for Solution:
1. Mutual Exclusion:
Only one process in critical section at a time

2. Progress:
If no process in critical section, selection of next process cannot be postponed indefinitely

3. Bounded Waiting:
Limit on number of times other processes can enter CS before a waiting process gets turn (no starvation)
Synchronization Mechanisms:
• Semaphores
• Monitors
• Mutex locks
(Covered in detail in Unit 2)
⚡ Last Minute Notes - Quick Revision (5-10 mins)
📌 How to Use: Read this 5-10 minutes before the exam for a quick revision of all key concepts!
💻 Introduction to OS
Operating System = Interface between hardware and applications

Goals: Convenience, Efficiency, Resource Management, Security

OS Structures:
• Monolithic: All in kernel (fast but risky)
• Layered: Layer by layer (modular)
• Microkernel: Minimal kernel (reliable)
• Modular: Loadable modules (flexible)

Main Functions:
1. Process Management
2. Memory Management
3. File Management
4. I/O Management
5. Security
🔄 Types of OS
Batch: Jobs in batches, no interaction
Time-Sharing: Multiple users, time slicing
Real-Time: Hard (strict deadline), Soft (flexible)
Distributed: Multiple computers as one system
Mobile: Android, iOS (touch, battery optimized)
📞 System Calls
System Call: Interface between user program and OS kernel

User Mode: Limited privileges
Kernel Mode: Full access to hardware

5 Types:
1. Process Control: fork(), exit(), wait()
2. File Management: open(), read(), write()
3. Device Management: ioctl()
4. Information: getpid(), time()
5. Communication: send(), receive()
⚙️ Process Management
Process: Program in execution

Process vs Program:
• Program = Passive (code on disk)
• Process = Active (running in memory)

PCB (Process Control Block):
Contains: PID, State, PC, Registers, Priority, Memory info

5 Process States:
New → Ready → Running → Waiting → Terminated

Context Switch: Save current process, load next process
(Pure overhead, takes 1-10 μs)

3 Schedulers:
• Long-term: New → Ready (minutes)
• Short-term: Ready → Running (milliseconds)
• Medium-term: Swapping (memory management)
🧵 Threads
Thread: Lightweight process

Benefits:
• Responsiveness
• Resource sharing
• Economy (cheaper than a process)
• Scalability (uses multiple cores)

Types:
• User-Level: Fast, but a block stalls the entire process
• Kernel-Level: Slower, but true parallelism

Models:
• Many-to-One: All threads → 1 kernel thread
• One-to-One: Each thread → 1 kernel thread
• Many-to-Many: Multiple → Multiple (flexible)
📊 CPU Scheduling
Preemptive: Can interrupt (RR, SRTF, Priority)
Non-Preemptive: Cannot interrupt (FCFS, SJF)

5 Criteria:
1. CPU Utilization (Maximize)
2. Throughput (Maximize)
3. Turnaround Time (Minimize)
4. Waiting Time (Minimize)
5. Response Time (Minimize)

Formulas:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
Response Time = First Response - Arrival Time
โฑ๏ธ Scheduling Algorithms
FCFS: First Come First Serve
โ€ข Simple, no starvation
โ€ข Convoy effect problem

SJF: Shortest Job First
โ€ข Optimal (minimum avg WT)
โ€ข Starvation possible

SRTF: Shortest Remaining Time First
โ€ข Preemptive SJF
โ€ข Better than SJF

Priority: Based on priority number
โ€ข Starvation (solve with Aging)

Round Robin: Time quantum (q)
โ€ข Fair, no starvation
โ€ข Context switch overhead
โ€ข q too large โ†’ FCFS
โ€ข q too small โ†’ too much overhead

Multilevel Queue: Multiple queues
โ€ข Different algorithms per queue
โ€ข Can cause starvation
💬 IPC & Synchronization
IPC (Inter-process Communication):

Shared Memory: Fast, needs synchronization
Message Passing: Slower, easier to implement

RPC (Remote Procedure Call):
Execute a function on a remote machine as if it were local

Process Synchronization:
Coordinate processes accessing shared resources

Race Condition: Multiple processes access shared data
simultaneously → incorrect result

Critical Section: Code accessing shared resource
Only one process at a time

Requirements:
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
⚠️ Important Exam Points
Must Remember:
• Process vs Thread differences
• 5 Process states and transitions
• PCB components
• Preemptive vs Non-preemptive
• All 5 scheduling criteria formulas
• Scheduling algorithm examples (Gantt charts)
• Convoy effect (FCFS)
• Starvation vs Deadlock
• User mode vs Kernel mode
• Context switch overhead
💡 Exam Strategy
For Scheduling Problems:
1. Draw Gantt chart first
2. Calculate completion time for each process
3. Use formulas: TAT = CT - AT, WT = TAT - BT
4. Take average of all processes
5. Show all steps!

For Theory Questions:
• Define clearly
• Give a real-life example
• List advantages/disadvantages
• Compare if asked

Common Mistakes to Avoid:
✗ Confusing Turnaround Time and Waiting Time
✗ Forgetting arrival time in calculations
✗ Wrong Gantt chart in SRTF (must re-check at each arrival)
✗ Not specifying the time quantum in Round Robin
🌟 Final Tips
Remember:
• OS manages hardware and provides services
• Process = Active, Program = Passive
• PCB stores all process information
• Context switch is overhead
• No perfect scheduling algorithm - each has trade-offs
• Synchronization prevents race conditions

🎯 All the Best!
Practice Gantt charts for scheduling problems. Understand concepts with real-life examples. You've got this! 💪🚀