1. Asymptotic Notations
Asymptotic notation is used to describe the running time/performance of an algorithm when the input size becomes very large (n → ∞).
📌 Big-O Notation → O(f(n))
Represents the upper bound on running time. Worst-case performance: the algorithm will not take more than O(f(n)) time.
Example: If Time = 3n + 5 ∈ O(n)
📌 Omega Notation → Ω(f(n))
Represents the lower bound. Best-case performance: the algorithm will take at least Ω(f(n)) time.
Example: 3n + 5 ∈ Ω(n)
📌 Theta Notation → Θ(f(n))
Represents the tight bound: the running time is both O(f(n)) and Ω(f(n)).
Example: 3n + 5 ∈ Θ(n)
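The Θ claim above can be checked with concrete witness constants. A minimal sketch, assuming the choice c1 = 3, c2 = 4, n0 = 5 (one valid choice, not the only one):

```python
# Verifying 3n + 5 ∈ Θ(n) directly from the definition:
# c1*n <= f(n) <= c2*n must hold for all n >= n0.
def f(n):
    return 3 * n + 5

# Witnesses: c1 = 3, c2 = 4, n0 = 5 (3n + 5 <= 4n exactly when n >= 5)
assert all(3 * n <= f(n) <= 4 * n for n in range(5, 10_000))
print("3n + 5 is Theta(n) with c1=3, c2=4, n0=5")
```

Note that n0 matters: at n = 4, f(4) = 17 > 4·4, so the upper bound only kicks in for large enough n.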
3. Solving Recurrence Relations
📌 (A) Recursion Tree Method
Write the recurrence as a tree. Calculate work at each level. Sum all levels.
Example: T(n) = 2T(n/2) + n → tree gives total = Θ(n log n)
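The recursion-tree total can be checked numerically for powers of two; this sketch assumes the base case T(1) = 0 (a different base case only shifts lower-order terms):

```python
# T(n) = 2T(n/2) + n with T(1) = 0: log2(n) levels, each doing n work.
def T(n):
    return 0 if n <= 1 else 2 * T(n // 2) + n

# For n = 2^k the tree result predicts T(n) = n * log2(n) exactly.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k
print("T(n) = n log2 n for powers of two")
```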
📌 (B) Back Substitution / Iteration Method
Expand the recurrence step-by-step. Find pattern. Apply general formula. Put k = log n.
Example:
T(n) = T(n/2) + 1 = T(n/4) + 2 = T(n/8) + 3 ...
When n/2^k = 1 → k = log n
Final T(n) = log n + 1
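The expansion above can be verified by running the recurrence itself; this sketch assumes the base case T(1) = 1, matching the final formula log n + 1:

```python
# T(n) = T(n/2) + 1 with T(1) = 1: one unit of work per halving step.
def T(n):
    return 1 if n == 1 else T(n // 2) + 1

# For n = 2^k: T(n) = log2(n) + 1
assert T(16) == 5      # log2(16) + 1
assert T(1024) == 11   # log2(1024) + 1
```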
📌 (C) Master Method (for divide & conquer)
Recurrence of form: T(n) = aT(n/b) + f(n)
Compare f(n) with n^(log_b a):
Case 1: If f(n) = O(n^(log_b a − ε)) → T(n) = Θ(n^(log_b a))
Case 2: If f(n) = Θ(n^(log_b a)) → T(n) = Θ(n^(log_b a) · log n)
Case 3: If f(n) = Ω(n^(log_b a + ε)) and a·f(n/b) ≤ c·f(n) for some c < 1 (regularity condition) → T(n) = Θ(f(n))
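The three cases can be mechanized for the common situation f(n) = n^c. The helper below is illustrative (not a standard library function) and is restricted to polynomial driving functions, for which Case 3's regularity condition holds automatically:

```python
import math

# Classify T(n) = a*T(n/b) + n^c by the master theorem (a >= 1, b > 1).
# Comparison uses b**c vs a, which is exact for integer a, b, c; math.log
# is used only to display the critical exponent log_b a.
def master_theorem(a, b, c):
    crit = math.log(a, b)                      # critical exponent log_b a
    if b ** c < a:                             # c < log_b a
        return f"Theta(n^{crit:g})"            # Case 1: f polynomially smaller
    if b ** c == a:                            # c == log_b a
        return f"Theta(n^{crit:g} log n)"      # Case 2: f matches n^(log_b a)
    return f"Theta(n^{c:g})"                   # Case 3: f dominates

print(master_theorem(2, 2, 1))   # merge sort T(n) = 2T(n/2) + n
print(master_theorem(8, 2, 2))   # T(n) = 8T(n/2) + n^2
```

Running it classifies merge sort as Case 2 (Θ(n log n)) and T(n) = 8T(n/2) + n² as Case 1 (Θ(n³)).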
4. Sorting Techniques
1️⃣ Quick Sort (Fastest in practice)
Concept: Quick sort is a divide-and-conquer algorithm. We choose a pivot element. Array is divided into two parts: Left side contains elements smaller than pivot, Right side contains elements larger than pivot. Then both parts are recursively sorted.
Why is it fast? Because it usually splits the array into roughly equal halves → levels ≈ log n, work at each level ≈ n steps.
Steps to remember: Pick → Partition → Recur
1. Pick pivot
2. Partition (smaller left, larger right)
3. Recursive call on both sides
Simple Example:
Array: 5 2 9 1 6
Pivot = 5
Left = 2 1, Right = 9 6
Apply same process recursively → sorted
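The three steps above can be sketched in a few lines. This version picks the first element as pivot to match the worked example (pivot choice and names are illustrative); note it builds new lists for clarity, while an in-place partition is what gives the O(log n) space listed below:

```python
# Minimal quicksort sketch: Pick -> Partition -> Recur.
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]                                  # 1. Pick pivot
    left = [x for x in arr[1:] if x <= pivot]       # 2. Partition: smaller left
    right = [x for x in arr[1:] if x > pivot]       #    larger right
    return quick_sort(left) + [pivot] + quick_sort(right)  # 3. Recur on both sides

print(quick_sort([5, 2, 9, 1, 6]))   # [1, 2, 5, 6, 9]
```

On the example array, pivot = 5 gives left = [2, 1] and right = [9, 6], exactly as above.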
| Time Complexity | Value |
|---|---|
| Best Case | O(n log n) |
| Average Case | O(n log n) |
| Worst Case | O(n²) (worst pivot choice, e.g. an already-sorted array) |
| Space | O(log n) |
Where used? Competitive programming; fast on real-life data; used in many real-world libraries (e.g. C++'s std::sort is introsort, a quicksort-based hybrid).
✅ Quick Sort One-Liner: "Pivot based divide & conquer sorting; Avg O(n log n), Worst O(n²). Fastest in practice."
2️⃣ Merge Sort (Guaranteed O(n log n))
Concept: Merge sort is also a divide-and-conquer algorithm, but it always performs perfect division.
Steps:
1. Divide array into 2 equal halves
2. Sort both halves recursively
3. Merge the two sorted halves
Key Line: "Merge sort's divide is trivial; the real work happens in the merge" (quick sort is the opposite: the work is in the partition, the combine is trivial).
Why always O(n log n)? Divide = log n levels. Merge at each level = n. Total = n log n (guaranteed)
Example:
8 3 1 7 → 8 3 | 1 7 → 8 | 3 and 1 | 7
Merge → 3 8 and 1 7
Final merge → 1 3 7 8
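The three steps can be sketched directly (names are illustrative); using `<=` in the merge is what makes the sort stable:

```python
# Merge sort sketch: divide into perfect halves, then linear merge.
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])        # 1-2. Divide and sort both halves
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0             # 3. Merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:         # <= keeps equal keys in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([8, 3, 1, 7]))   # [1, 3, 7, 8]
```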
| Time Complexity | Value |
|---|---|
| Best Case | O(n log n) |
| Average Case | O(n log n) |
| Worst Case | O(n log n) |
| Space | O(n) extra array (disadvantage) |
Where used? Stable sorting, Linked list sorting, External sorting (very big files)
✅ Merge Sort One-Liner: "Always O(n log n); stable; uses extra space; divides then merges."
3️⃣ Radix Sort (Digit-based sorting, very fast for numbers)
Concept: Radix sort is not a comparison sort. It sorts digit by digit: units place → tens → hundreds. Internally, counting sort is usually used as the stable per-digit sort.
Example: Numbers: 170, 45, 75, 90
Step 1 → sort by unit digit
Step 2 → sort by tens digit
Step 3 → sort by hundreds digit
Final → sorted list
Important Line (for exams): "Radix sort uses digit-place sorting and does not compare numbers."
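The digit-by-digit passes above can be sketched as an LSD (least significant digit first) radix sort; this sketch assumes a non-empty list of non-negative integers and uses stable bucketing per digit (playing the counting-sort role, base 10 so k = 10):

```python
# LSD radix sort sketch: one stable pass per digit, low digit first.
def radix_sort(nums):
    exp = 1
    while max(nums) // exp > 0:                  # one pass per digit (d passes)
        buckets = [[] for _ in range(10)]
        for x in nums:
            buckets[(x // exp) % 10].append(x)   # stable placement by current digit
        nums = [x for b in buckets for x in b]
        exp *= 10
    return nums

print(radix_sort([170, 45, 75, 90]))   # [45, 75, 90, 170]
```

Stability of each pass is essential: it preserves the ordering produced by earlier (lower) digits.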
| Time Complexity | Value |
|---|---|
| Complexity | O(d × (n + k)) |

where d = number of digits and k = range of each digit (0-9).
Best for: Aadhaar numbers, Phone numbers, Large numeric data
✅ Radix Sort One-Liner: "Digit-by-digit sorting using counting sort; no comparisons; very fast for integers."
4️⃣ Bucket Sort (When data is uniformly distributed)
Concept: Elements are distributed into multiple buckets (containers). Each bucket is sorted individually. Then all buckets are concatenated in order.
Simple Example: 0 to 1 range values: 0.12, 0.34, 0.09, 0.56
Bucket 0 (0-0.19): 0.12, 0.09
Bucket 1 (0.20-0.39): 0.34
Bucket 2 (0.40-0.59): 0.56
Sort each bucket → combine → done
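The example can be sketched with 5 buckets of width 0.2 (matching the ranges above); this assumes values uniformly distributed in [0, 1), and the bucket count is an illustrative choice:

```python
# Bucket sort sketch for values in [0, 1): distribute, sort, concatenate.
def bucket_sort(vals, n_buckets=5):
    buckets = [[] for _ in range(n_buckets)]
    for v in vals:
        buckets[int(v * n_buckets)].append(v)   # index by value range
    out = []
    for b in buckets:
        out.extend(sorted(b))                   # sort each bucket individually
    return out

print(bucket_sort([0.12, 0.34, 0.09, 0.56]))   # [0.09, 0.12, 0.34, 0.56]
```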
| Time Complexity | Value |
|---|---|
| Average Case | O(n) |
| Worst Case | O(n²) (if all elements fall in one bucket) |
Where used? Fractional data (0-1); requires roughly uniform distribution
✅ Bucket Sort One-Liner: "Values distributed into buckets, each bucket sorted separately; Avg O(n)."
⭐ FINAL EXAM-WORTHY ONE-LINERS (Must Write)
✅ Quick Sort: "Pivot based divide & conquer sorting; Avg O(n log n), Worst O(n²). Fastest in practice."
✅ Merge Sort: "Always O(n log n); stable; uses extra space; divides then merges."
✅ Radix Sort: "Digit-by-digit sorting using counting sort; no comparisons; very fast for integers."
✅ Bucket Sort: "Values distributed into buckets, each bucket sorted separately; Avg O(n)."