šŸ“š DSA Study Notes

Asymptotic Notations, Recursion & Sorting Techniques

Compiled & Formatted by Ankush Raj

1. Asymptotic Notations

Asymptotic notation describes the running time (performance) of an algorithm as the input size becomes very large (n → āˆž).

šŸ‘‰ Big-O Notation — O(f(n))

Represents the upper bound on running time (worst-case performance): the algorithm will not take more than f(n) time, up to a constant factor.

Example: If Time = 3n + 5 → O(n)
šŸ‘‰ Omega Notation — Ī©(f(n))

Represents the lower bound. Best-case performance. Algorithm will take at least Ī©(f(n)) time.

Example: 3n + 5 → Ī©(n)
šŸ‘‰ Theta Notation — Θ(f(n))

Represents the tight bound. When an algorithm is both O(f(n)) and Ī©(f(n)), its running time is Θ(f(n)).

Example: 3n + 5 → Θ(n)
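These bounds are easiest to see on a concrete algorithm. A minimal sketch (linear search, a hypothetical example not from the notes): its best case is Ī©(1) when the target is at index 0, and its worst case is O(n) when the target is last or absent.

```python
# Linear search: best case Ω(1) (target at index 0),
# worst case O(n) (target at the end, or not present at all).
def linear_search(arr, target):
    for i, value in enumerate(arr):
        if value == target:
            return i          # found after i + 1 comparisons
    return -1                 # not found: all n elements were checked

print(linear_search([7, 3, 9, 1], 7))   # best case: 1 comparison
print(linear_search([7, 3, 9, 1], 1))   # worst case: n comparisons
```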
2. Recursion

When a function calls itself until a base condition is reached.

def fact(n):
    if n == 0:                  # base condition stops the recursion
        return 1
    return n * fact(n - 1)      # recursive call on a smaller input
3. Solving Recurrence Relations
šŸ‘‰ (A) Recursion Tree Method

Write the recurrence as a tree. Calculate work at each level. Sum all levels.

Example: T(n) = 2T(n/2) + n → the tree has log n levels with ā‰ˆ n work per level → total = Θ(n log n)
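The recursion-tree total can be checked numerically. A small sketch (assuming T(1) = 0 for simplicity): evaluate the recurrence directly and compare it with n log₂ n.

```python
# Evaluate T(n) = 2T(n/2) + n directly and compare with n * log2(n).
import math

def T(n):
    if n <= 1:
        return 0              # assumed base case: T(1) = 0
    return 2 * T(n // 2) + n  # two subproblems + n work at this level

for n in [8, 64, 1024]:
    print(n, T(n), n * math.log2(n))   # for powers of 2 they match exactly
```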
šŸ‘‰ (B) Back Substitution / Iteration Method

Expand the recurrence step-by-step. Find pattern. Apply general formula. Put k = log n.

Example:
T(n) = T(n/2) + 1 = T(n/4) + 2 = T(n/8) + 3 = ... = T(n/2^k) + k
When n/2^k = 1 → k = log₂ n
Final: T(n) = T(1) + log n = log n + 1
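The unrolling above can be sketched as a loop (assuming T(1) = 1): count one unit of work per halving until n reaches 1.

```python
# Unroll T(n) = T(n/2) + 1 iteratively; the halving count is floor(log2 n),
# so T(n) = log2(n) + 1 (assuming T(1) = 1).
import math

def T(n):
    steps = 0
    while n > 1:
        n //= 2
        steps += 1            # one unit of work per halving
    return steps + 1          # + T(1) = 1

for n in [2, 16, 1024]:
    print(n, T(n), int(math.log2(n)) + 1)   # identical for powers of 2
```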
šŸ‘‰ (C) Master Method (For divide & conquer)

Recurrence of form: T(n) = aT(n/b) + f(n)

Compare f(n) with n^(log_b a)

Case 1: If f(n) = O(n^(log_b a āˆ’ ε)) for some ε > 0 → T(n) = Θ(n^(log_b a))
Case 2: If f(n) = Θ(n^(log_b a)) → T(n) = Θ(n^(log_b a) Ā· log n)
Case 3: If f(n) = Ī©(n^(log_b a + ε)) for some ε > 0 (and the regularity condition holds) → T(n) = Θ(f(n)) ⭐
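The case test can be sketched in code for the common situation where f(n) is a plain polynomial n^c. A hypothetical helper (`master_method` is my name, not a standard library function): compare c with the critical exponent log_b a.

```python
# Classify T(n) = a*T(n/b) + n^c by the Master Method (f(n) = n^c assumed).
import math

def master_method(a, b, c):
    crit = math.log(a) / math.log(b)   # critical exponent log_b(a)
    if math.isclose(c, crit):
        return "Case 2: Theta(n^(log_b a) * log n)"
    if c < crit:
        return "Case 1: Theta(n^(log_b a))"
    return "Case 3: Theta(f(n)), if the regularity condition holds"

print(master_method(2, 2, 1))   # merge sort: T(n) = 2T(n/2) + n -> Case 2
print(master_method(8, 2, 2))   # T(n) = 8T(n/2) + n^2 -> Case 1
print(master_method(1, 2, 1))   # T(n) = T(n/2) + n -> Case 3
```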
4. Sorting Techniques
1ļøāƒ£ Quick Sort (Fastest in practice)

Concept: Quick sort is a divide-and-conquer algorithm. We choose a pivot element. Array is divided into two parts: Left side contains elements smaller than pivot, Right side contains elements larger than pivot. Then both parts are recursively sorted.

Why is it fast? Because it usually splits the array roughly in half → about log n levels, with ā‰ˆ n work per level.

Steps to remember: Pick → Partition → Recur
1. Pick pivot
2. Partition (smaller left, larger right)
3. Recursive call on both sides
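The three steps above can be sketched as follows (a minimal out-of-place version using the first element as pivot; real implementations partition in place with Lomuto or Hoare schemes):

```python
# Pick -> Partition -> Recur, kept simple with list comprehensions.
def quick_sort(arr):
    if len(arr) <= 1:
        return arr                                  # base case
    pivot = arr[0]                                  # 1. pick pivot
    left = [x for x in arr[1:] if x < pivot]        # 2. smaller to the left
    right = [x for x in arr[1:] if x >= pivot]      #    larger to the right
    return quick_sort(left) + [pivot] + quick_sort(right)  # 3. recur

print(quick_sort([5, 2, 9, 1, 6]))  # [1, 2, 5, 6, 9]
```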
Simple Example:
Array: 5 2 9 1 6
Pivot = 5
Left = 2 1, Right = 9 6
Apply same process recursively → sorted
Time Complexity:
Best Case: O(n log n)
Average Case: O(n log n)
Worst Case: O(n²) - when the pivot choice is worst (e.g. an already sorted array with a first-element pivot)
Space: O(log n) recursion stack

Where used? Competitive programming, fast on real-world data, used in many standard libraries (e.g. C++ std::sort is based on introsort, a quicksort variant)

āœ” Quick Sort One-Liner: "Pivot based divide & conquer sorting; Avg O(n log n), Worst O(n²). Fastest in practice."
2ļøāƒ£ Merge Sort (Guaranteed O(n log n))

Concept: Merge sort is also a divide-and-conquer algorithm, but it always performs perfect division.

Steps:
1. Divide array into 2 equal halves
2. Sort both halves recursively
3. Merge the two sorted halves
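The three steps above can be sketched as:

```python
# Merge sort: divide, sort halves recursively, then merge.
def merge_sort(arr):
    if len(arr) <= 1:
        return arr                    # base case
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # 1-2. divide and sort each half
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0           # 3. merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # <= keeps equal elements stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([8, 3, 1, 7]))  # [1, 3, 7, 8]
```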

Key Line: "Merge sort's divide step is trivial; the real work happens in the merge" (the opposite of quick sort, where partitioning does the work and joining is trivial).

Why always O(n log n)? Divide = log n levels. Merge at each level = n. Total = n log n (guaranteed)

Example:
8 3 1 7 → 8 3 | 1 7 → 8 | 3 and 1 | 7
Merge → 3 8 and 1 7
Final merge → 1 3 7 8
Time Complexity:
Best Case: O(n log n)
Average Case: O(n log n)
Worst Case: O(n log n)
Space: O(n) extra array (disadvantage)

Where used? Stable sorting, Linked list sorting, External sorting (very big files)

āœ” Merge Sort One-Liner: "Always O(n log n); stable; uses extra space; divides then merges."
3ļøāƒ£ Radix Sort (Digit-based sorting, very fast for numbers)

Concept: Radix sort is not a comparison sort. It sorts digit by digit (unit place → tens → hundreds), typically using counting sort as the stable subroutine for each digit.

Example: Numbers: 170, 45, 75, 90
Step 1 → sort by unit digit
Step 2 → sort by tens digit
Step 3 → sort by hundreds digit
Final → sorted list
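The digit-by-digit steps above can be sketched as an LSD radix sort for non-negative integers, with a stable counting-style pass per decimal digit:

```python
# One stable pass per digit: scatter into buckets 0-9 by the digit at
# position `exp`, then read the buckets back in order.
def counting_sort_by_digit(arr, exp):
    buckets = [[] for _ in range(10)]
    for num in arr:
        buckets[(num // exp) % 10].append(num)   # stable: keeps prior order
    return [num for bucket in buckets for num in bucket]

def radix_sort(arr):
    exp = 1
    while any(num >= exp for num in arr):        # more digits remain
        arr = counting_sort_by_digit(arr, exp)
        exp *= 10                                # unit -> tens -> hundreds
    return arr

print(radix_sort([170, 45, 75, 90]))  # [45, 75, 90, 170]
```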
Important Line (for exams): "Radix sort uses digit-place sorting and does not compare numbers."
Time Complexity:
Complexity: O(d Ɨ (n + k))
d = number of digits, k = range of digit values (0–9, so k = 10 for decimal)

Best for: Aadhaar numbers, Phone numbers, Large numeric data

āœ” Radix Sort One-Liner: "Digit-by-digit sorting using counting sort; no comparisons; very fast for integers."
4ļøāƒ£ Bucket Sort (When data is uniformly distributed)

Concept: Array elements are distributed into multiple buckets (containers). Each bucket is sorted individually, then all buckets are concatenated in order.

Simple Example: 0 to 1 range values: 0.12, 0.34, 0.09, 0.56
Bucket 0 (0–0.19): 0.12, 0.09
Bucket 1 (0.20–0.39): 0.34
Bucket 2 (0.40–0.59): 0.56
Sort each bucket → combine → done
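The steps above can be sketched for values in [0, 1) (bucket count of 5 chosen to match the example ranges; any count works if the data is uniform):

```python
# Bucket sort for floats in [0, 1): scatter, sort each bucket, concatenate.
def bucket_sort(arr, num_buckets=5):
    buckets = [[] for _ in range(num_buckets)]
    for x in arr:
        buckets[int(x * num_buckets)].append(x)  # scatter by value range
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))            # sort each bucket alone
    return result

print(bucket_sort([0.12, 0.34, 0.09, 0.56]))  # [0.09, 0.12, 0.34, 0.56]
```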
Time Complexity:
Average Case: O(n)
Worst Case: O(n²) - if all elements fall into one bucket

Where used? Fractional data (0–1), Uniform distribution required

āœ” Bucket Sort One-Liner: "Values distributed into buckets, each bucket sorted separately; Avg O(n)."
⭐ FINAL EXAM-WORTHY ONE-LINERS (Must Write)
āœ” Quick Sort: "Pivot based divide & conquer sorting; Avg O(n log n), Worst O(n²). Fastest in practice."
āœ” Merge Sort: "Always O(n log n); stable; uses extra space; divides then merges."
āœ” Radix Sort: "Digit-by-digit sorting using counting sort; no comparisons; very fast for integers."
āœ” Bucket Sort: "Values distributed into buckets, each bucket sorted separately; Avg O(n)."