Design & Analysis of Algorithms

Unit 2 β€” D&C, Greedy, DP, Backtracking & Branch-Bound

πŸ“š Chapters

βš”οΈ Chapter 1 β€” Divide & Conquer + Greedy Algorithms
πŸ“Š DIVIDE & CONQUER + GREEDY β€” MIND MAP
Divide & Conquer β†’ Break problem into subproblems, solve recursively, combine results
D&C Examples β†’ Binary Search, Merge Sort, Quick Sort, Strassen's Matrix Multiplication
Greedy β†’ Make locally optimal choice at each step, hope for global optimum
Greedy Examples β†’ Fractional Knapsack, Prim's, Kruskal's, Huffman Coding
Key Difference β†’ D&C explores all subproblems; Greedy picks best option immediately
1. Divide & Conquer β€” General Method
Definition:
Divide and Conquer is an algorithm design paradigm that works by recursively breaking down a problem into two or more sub-problems of the same type, solving them independently, and then combining their solutions to solve the original problem.
Three Steps of D&C:
Step 1 β€” DIVIDE: Break the problem into smaller sub-problems of the same type.

Step 2 β€” CONQUER: Solve each sub-problem recursively. If sub-problem is small enough, solve it directly (base case).

Step 3 β€” COMBINE: Merge/combine the solutions of sub-problems to get the solution of the original problem.
C++

// General D&C Pattern (example: sum of array)
#include <iostream>
using namespace std;

int solve(int a[], int low, int high) {
    if (low == high)                       // Base case — small problem
        return a[low];
    int mid = (low + high) / 2;            // DIVIDE
    int left  = solve(a, low, mid);        // CONQUER left
    int right = solve(a, mid + 1, high);   // CONQUER right
    return left + right;                   // COMBINE
}
General Recurrence:
T(n) = aT(n/b) + f(n)

a = number of subproblems | n/b = size of each subproblem | f(n) = cost of divide + combine
Real-life Analogy:
Sorting 1000 exam papers β†’ divide into 2 piles of 500 β†’ give each pile to a helper to sort β†’ merge the two sorted piles. Each helper further divides their pile β€” this is D&C!
2. Binary Search
Definition:
Binary Search finds the position of a target value in a sorted array by repeatedly dividing the search interval in half.
How it works:
1. Compare target with middle element
2. If target = middle β†’ Found!
3. If target < middle β†’ Search LEFT half
4. If target > middle β†’ Search RIGHT half
5. Repeat until found or subarray is empty
C++

int binarySearch(int a[], int low, int high, int key) {
    if (low > high)
        return -1;                         // Not found
    int mid = (low + high) / 2;
    if (key == a[mid])
        return mid;                        // Found
    else if (key < a[mid])
        return binarySearch(a, low, mid - 1, key);
    else
        return binarySearch(a, mid + 1, high, key);
}
Time Complexity:
T(n) = T(n/2) + O(1)
Best: O(1) | Worst/Avg: O(log n) | Space: O(log n) recursive, O(1) iterative
Solved Numerical β€” Binary Search
Q: Search for key = 14 in array: [8, 14, 22, 25, 33, 51, 67, 78, 89]
Solution (Step-by-step):

Array indices: 0   1    2    3    4    5    6    7    8
Array values:  [8, 14, 22, 25, 33, 51, 67, 78, 89]   key = 14

Pass 1: low=0, high=8, mid=(0+8)/2 = 4 β†’ a[4]=33 > 14 β†’ Search LEFT
Pass 2: low=0, high=3, mid=(0+3)/2 = 1 β†’ a[1]=14 = key βœ“

Element found at index 1 in 2 comparisons!
Key Points:
β€’ Array MUST be sorted
β€’ Every comparison eliminates HALF of remaining elements
β€’ For n = 1,000,000 β†’ only ~20 comparisons needed (logβ‚‚ 10⁢ β‰ˆ 20)
3. Finding Maximum and Minimum
Problem:
Given an array of n elements, find both the maximum and minimum values using minimum number of comparisons.
Naive Approach:
Simple method: Compare each element with current max and min separately.
Comparisons = 2(n-1) β†’ Too many!
D&C Approach (Tournament Method):
Step 1: Divide array into two halves
Step 2: Recursively find max and min in each half
Step 3: Compare the two maxes β†’ overall max
Step 4: Compare the two mins β†’ overall min
C++

#include <algorithm>   // for max, min
using namespace std;

void maxMin(int a[], int low, int high, int &mx, int &mn) {
    if (low == high) {                     // Only 1 element
        mx = mn = a[low];
    } else if (high == low + 1) {          // Only 2 elements
        if (a[low] < a[high]) { mx = a[high]; mn = a[low]; }
        else                  { mx = a[low]; mn = a[high]; }
    } else {
        int mid = (low + high) / 2;
        int max1, min1, max2, min2;
        maxMin(a, low, mid, max1, min1);
        maxMin(a, mid + 1, high, max2, min2);
        mx = max(max1, max2);
        mn = min(min1, min2);
    }
}
Comparisons:
T(n) = 2T(n/2) + 2
Total = 3n/2 - 2 comparisons when n is a power of 2; ⌈3n/2⌉ - 2 in general (much better than 2n - 2!)
Solved Numerical
Q: Find Max and Min of array [22, 13, -5, -8, 15, 60, 17, 31, 47] using D&C
Solution:

Array: [22, 13, -5, -8, 15, 60, 17, 31, 47] β†’ n = 9

Divide: [22, 13, -5, -8, 15] | [60, 17, 31, 47]
Further: [22, 13, -5] | [-8, 15] | [60, 17] | [31, 47]
Further: [22, 13] | [-5] | [-8, 15] | [60, 17] | [31, 47]

Conquer (bottom-up):
[22, 13] β†’ max=22, min=13 (1 comparison)
[-5] β†’ max=-5, min=-5
Combine: max=Max(22,-5)=22, min=Min(13,-5)=-5 (2 comparisons)

[-8, 15] β†’ max=15, min=-8 (1 comparison)
Combine left half: max=Max(22,15)=22, min=Min(-5,-8)=-8 (2 comparisons)

[60, 17] β†’ max=60, min=17 (1 comparison)
[31, 47] β†’ max=47, min=31 (1 comparison)
Combine right half: max=Max(60,47)=60, min=Min(17,31)=17 (2 comparisons)

Final Combine: max=Max(22,60)=60, min=Min(-8,17)=-8 (2 comparisons)

Result: Max = 60, Min = -8
Total comparisons using D&C = ⌈3(9)/2⌉ - 2 = 12 (vs naive = 16; the formula 3n/2 - 2 is exact only when n is a power of 2)
4. 2-Way Merge Sort
Definition:
Merge Sort is a stable, comparison-based sorting algorithm that uses D&C. It divides the array into two halves, recursively sorts them, and then merges the two sorted halves.
Steps:
1. Divide: Split array into two halves at mid = (low+high)/2
2. Conquer: Recursively sort left half and right half
3. Combine: Merge the two sorted halves into one sorted array
C++

#include <vector>
using namespace std;

void merge(int a[], int low, int mid, int high) {
    vector<int> temp(high - low + 1);      // runtime-sized arrays aren't standard C++
    int i = low, j = mid + 1, k = 0;
    while (i <= mid && j <= high) {
        if (a[i] <= a[j]) temp[k++] = a[i++];
        else              temp[k++] = a[j++];
    }
    while (i <= mid)  temp[k++] = a[i++];
    while (j <= high) temp[k++] = a[j++];
    for (int idx = low; idx <= high; idx++)
        a[idx] = temp[idx - low];          // Copy back
}

void mergeSort(int a[], int low, int high) {
    if (low < high) {
        int mid = (low + high) / 2;
        mergeSort(a, low, mid);            // Sort left half
        mergeSort(a, mid + 1, high);       // Sort right half
        merge(a, low, mid, high);          // Merge both halves
    }
}
Time Complexity:
T(n) = 2T(n/2) + O(n)
Best = Worst = Average = O(n log n)
Space: O(n) β€” needs extra array for merging
Solved Numerical β€” Merge Sort
Q: Sort [38, 27, 43, 3, 9, 82, 10] using Merge Sort
Solution β€” Divide Phase:

Level 0: [38, 27, 43, 3, 9, 82, 10]
Level 1: [38, 27, 43, 3]   |   [9, 82, 10]
Level 2: [38, 27] | [43, 3]   |   [9, 82] | [10]
Level 3: [38] | [27] | [43] | [3] | [9] | [82] | [10]

Merge Phase (bottom-up):
Merge [38],[27] β†’ [27, 38]
Merge [43],[3] β†’ [3, 43]
Merge [9],[82] β†’ [9, 82]
[10] stays as [10]

Merge [27,38],[3,43] β†’ [3, 27, 38, 43]
Merge [9,82],[10] β†’ [9, 10, 82]

Merge [3,27,38,43],[9,10,82] β†’ [3, 9, 10, 27, 38, 43, 82] βœ“
Advantages:
β€’ Stable sort (maintains relative order of equal elements)
β€’ Guaranteed O(n log n) β€” no worst case degradation
β€’ Good for linked lists (no extra space needed)
Disadvantages:
β€’ Requires O(n) extra space for arrays
β€’ Slower than Quick Sort in practice (due to overhead of merging)
5. Quick Sort
Definition:
Quick Sort is a D&C sorting algorithm that works by selecting a pivot element and partitioning the array around it β€” elements smaller than pivot go left, larger go right. Then recursively sort both partitions.
Steps:
1. Choose Pivot: Usually first, last, or random element
2. Partition: Rearrange so elements < pivot are left, > pivot are right
3. Recurse: Apply Quick Sort to left and right partitions
C++

#include <utility>     // for swap
using namespace std;

int partition(int a[], int low, int high) {
    int pivot = a[low];
    int i = low + 1, j = high;
    while (i <= j) {
        while (i <= high && a[i] <= pivot) i++;
        while (a[j] > pivot) j--;
        if (i < j) swap(a[i], a[j]);
    }
    swap(a[low], a[j]);
    return j;                              // Pivot's final position
}

void quickSort(int a[], int low, int high) {
    if (low < high) {
        int j = partition(a, low, high);
        quickSort(a, low, j - 1);          // Sort left part
        quickSort(a, j + 1, high);         // Sort right part
    }
}
Time Complexity:
Best/Average Case: O(n log n)
Worst Case: O(nΒ²) β€” when array already sorted & pivot is always smallest/largest
Space: O(log n) β€” recursive stack
Solved Numerical β€” Quick Sort
Q: Sort [50, 70, 60, 90, 40, 80, 10, 20, 30] using Quick Sort (pivot = first element)
Solution β€” Pass 1:

Array: [50, 70, 60, 90, 40, 80, 10, 20, 30]   pivot = 50

i scans right for the first element > 50: i stops at index 1 (70)
j scans left for element ≀ 50: j stops at index 8 (30)
Since i < j β†’ swap(70, 30): [50, 30, 60, 90, 40, 80, 10, 20, 70]

i moves right, stops at index 2 (60)
j moves left, stops at index 7 (20)
Since i < j β†’ swap(60, 20): [50, 30, 20, 90, 40, 80, 10, 60, 70]

i moves right, stops at index 3 (90)
j moves left, stops at index 6 (10)
Since i < j β†’ swap(90, 10): [50, 30, 20, 10, 40, 80, 90, 60, 70]

i moves right, stops at index 5 (80)
j moves left, stops at index 4 (40)
Since i > j β†’ STOP! Swap pivot with a[j]: swap(50, 40)

Result: [40, 30, 20, 10, [50], 80, 90, 60, 70]
50 is now at its correct position (index 4)!

Now recursively sort:
Left: [40, 30, 20, 10] β€” apply Quick Sort
Right: [80, 90, 60, 70] β€” apply Quick Sort

Final sorted: [10, 20, 30, 40, 50, 60, 70, 80, 90] βœ“
Property      Merge Sort    Quick Sort
Best Case     O(n log n)    O(n log n)
Worst Case    O(n log n)    O(n²)
Average       O(n log n)    O(n log n)
Space         O(n)          O(log n)
Stable?       Yes           No
In-place?     No            Yes
6. Selection Sort (D&C Perspective)
Definition:
Selection Sort repeatedly selects the minimum element from the unsorted portion and places it at the beginning. While not traditionally D&C, it can be viewed as: find min (conquer), place it (combine), recurse on remaining.
C++

#include <utility>     // for swap
using namespace std;

void selectionSort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIdx = i;
        for (int j = i + 1; j < n; j++) {
            if (a[j] < a[minIdx])
                minIdx = j;
        }
        swap(a[i], a[minIdx]);
    }
}
Time Complexity: O(nΒ²) in all cases
Space: O(1) β€” In-place
Comparisons: n(n-1)/2 always
Solved Numerical
Q: Sort [64, 25, 12, 22, 11] using Selection Sort
Solution:

Pass 1: Find min in [64,25,12,22,11] β†’ min=11 at idx 4 β†’ swap(64,11) β†’ [11, 25, 12, 22, 64]
Pass 2: Find min in [25,12,22,64] β†’ min=12 at idx 2 β†’ swap(25,12) β†’ [11, 12, 25, 22, 64]
Pass 3: Find min in [25,22,64] β†’ min=22 at idx 3 β†’ swap(25,22) β†’ [11, 12, 22, 25, 64]
Pass 4: Find min in [25,64] β†’ min=25, already in place β†’ [11, 12, 22, 25, 64]

Sorted: [11, 12, 22, 25, 64] βœ“
7. Strassen's Matrix Multiplication
Problem:
Multiply two nΓ—n matrices. Standard method takes O(nΒ³). Strassen's algorithm uses D&C to achieve O(n^2.81) by reducing 8 multiplications to 7.
Standard vs Strassen:
Standard Method:
For 2Γ—2 matrices: C = A Γ— B needs 8 multiplications + 4 additions

Strassen's Key Idea:
Reduce to 7 multiplications + 18 additions/subtractions
Multiplications are costlier than additions, so this is faster for large matrices!
Strassen's Formulas:
Given: A = [[a, b], [c, d]]   B = [[e, f], [g, h]]

7 Products:
P1 = a Γ— (f - h)
P2 = (a + b) Γ— h
P3 = (c + d) Γ— e
P4 = d Γ— (g - e)
P5 = (a + d) Γ— (e + h)
P6 = (b - d) Γ— (g + h)
P7 = (a - c) Γ— (e + f)

Result Matrix C:
C[0][0] = P5 + P4 - P2 + P6
C[0][1] = P1 + P2
C[1][0] = P3 + P4
C[1][1] = P5 + P1 - P3 - P7
Solved Numerical
Q: Multiply A = [[1, 3], [7, 5]] and B = [[6, 8], [4, 2]] using Strassen's method
Solution:
a=1, b=3, c=7, d=5, e=6, f=8, g=4, h=2

P1 = 1 Γ— (8-2) = 1 Γ— 6 = 6
P2 = (1+3) Γ— 2 = 4 Γ— 2 = 8
P3 = (7+5) Γ— 6 = 12 Γ— 6 = 72
P4 = 5 Γ— (4-6) = 5 Γ— (-2) = -10
P5 = (1+5) Γ— (6+2) = 6 Γ— 8 = 48
P6 = (3-5) Γ— (4+2) = (-2) Γ— 6 = -12
P7 = (1-7) Γ— (6+8) = (-6) Γ— 14 = -84

C[0][0] = P5 + P4 - P2 + P6 = 48 + (-10) - 8 + (-12) = 18
C[0][1] = P1 + P2 = 6 + 8 = 14
C[1][0] = P3 + P4 = 72 + (-10) = 62
C[1][1] = P5 + P1 - P3 - P7 = 48 + 6 - 72 - (-84) = 66

Result: C = [[18, 14], [62, 66]] βœ“

Verification: Standard multiplication gives same result!
Time Complexity:
T(n) = 7T(n/2) + O(nΒ²)
= O(n^logβ‚‚7) = O(n^2.81)

Standard: O(nΒ³) vs Strassen: O(n^2.81)
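The seven products above can be checked directly in code. A sketch of one Strassen level for 2×2 integer matrices (strassen2x2 is an illustrative name; a full implementation would recurse on matrix quadrants):

```cpp
// One Strassen level for 2x2 matrices, matching the 7-product formulas above.
void strassen2x2(const int A[2][2], const int B[2][2], int C[2][2]) {
    int a = A[0][0], b = A[0][1], c = A[1][0], d = A[1][1];
    int e = B[0][0], f = B[0][1], g = B[1][0], h = B[1][1];
    int P1 = a * (f - h),       P2 = (a + b) * h;
    int P3 = (c + d) * e,       P4 = d * (g - e);
    int P5 = (a + d) * (e + h), P6 = (b - d) * (g + h);
    int P7 = (a - c) * (e + f);
    C[0][0] = P5 + P4 - P2 + P6;  C[0][1] = P1 + P2;  // 7 multiplications total
    C[1][0] = P3 + P4;            C[1][1] = P5 + P1 - P3 - P7;
}
```

Running it on the worked example (A = [[1,3],[7,5]], B = [[6,8],[4,2]]) reproduces C = [[18,14],[62,66]].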
8. Greedy Algorithms β€” General Method
Definition:
A Greedy algorithm builds up a solution piece by piece, always choosing the next piece that offers the most immediate benefit (locally optimal choice). It never reconsiders past choices.
General Greedy Structure:
1. Selection: Choose the best available candidate
2. Feasibility: Check if candidate can be added to solution
3. Solution Check: Is the current set a complete solution?
C++

// General Greedy Pattern (example: Activity Selection)
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

void greedySolve(int a[], int n) {
    sort(a, a + n);                        // Step 1: Sort by criteria
    vector<int> result;
    for (int i = 0; i < n; i++) {          // Step 2: Select + check feasibility
        if (result.empty() || a[i] >= result.back())
            result.push_back(a[i]);        // Step 3: Add to solution
    }
    for (int v : result) cout << v << " ";
}
When does Greedy work?
β€’ Problem has Greedy Choice Property β€” local optimal leads to global optimal
β€’ Problem has Optimal Substructure β€” optimal solution contains optimal solutions to subproblems
When does Greedy FAIL?
β€’ 0/1 Knapsack (greedy gives wrong answer β€” need DP!)
β€’ Travelling Salesman Problem
β€’ When local best β‰  global best
9. Fractional Knapsack Problem
Problem:
Given n items with weights and profits, and a knapsack of capacity W. You can take fractions of items. Maximize total profit without exceeding capacity W.
Greedy Strategy:
1. Calculate profit/weight ratio (value density) for each item
2. Sort items by ratio in decreasing order
3. Take items greedily β€” full items first, then fraction of last item
Solved Numerical
Q: Items: Profits = [25, 24, 15], Weights = [18, 15, 10], Capacity W = 20. Find max profit.
Solution:

Step 1: Calculate Profit/Weight ratio:
Item 1: 25/18 = 1.39
Item 2: 24/15 = 1.60 ← highest
Item 3: 15/10 = 1.50

Step 2: Sort by ratio (descending): Item 2, Item 3, Item 1

Step 3: Fill Knapsack (W = 20):
Take Item 2 fully: weight=15, profit=24 β†’ Remaining capacity = 20-15 = 5
Take Item 3 partially: need 5 out of 10 β†’ fraction = 5/10 = 0.5
Profit from fraction = 0.5 Γ— 15 = 7.5

Total Profit = 24 + 7.5 = 31.5
Items taken: xβ‚‚ = 1, x₃ = 0.5, x₁ = 0
Time Complexity: O(n log n) β€” dominated by sorting
Space: O(1)
Important:
β€’ Fractional Knapsack β†’ Greedy works (can take fractions)
β€’ 0/1 Knapsack β†’ Greedy FAILS (must take whole item or nothing β€” use DP!)
10. Minimum Cost Spanning Tree (MST)
Definition:
A Minimum Spanning Tree of a connected, weighted, undirected graph is a subset of edges that connects all vertices with the minimum total edge weight, without any cycles. It has exactly V-1 edges.
10a. Prim's Algorithm
Strategy: Start from any vertex, greedily add the cheapest edge that connects a visited vertex to an unvisited vertex.

Steps:
1. Start with any vertex (mark it visited)
2. From all edges connecting visited to unvisited vertices, pick the minimum weight edge
3. Add that edge to MST, mark the new vertex as visited
4. Repeat until all vertices are visited
Q: Find MST using Prim's for graph with edges:
(A-B,4), (A-C,3), (B-C,1), (B-D,2), (C-D,4), (D-E,2), (C-E,6)
Solution (Starting from vertex A):

Step 1: Start at A. Visited = {A}
Edges from A: A-B(4), A-C(3) β†’ Pick A-C(3)
MST edges: {A-C(3)}, Visited = {A, C}

Step 2: Edges from {A,C}: A-B(4), B-C(1), C-D(4), C-E(6) β†’ Pick B-C(1)
MST edges: {A-C(3), B-C(1)}, Visited = {A, C, B}

Step 3: Edges from {A,C,B}: B-D(2), C-D(4), C-E(6) β†’ Pick B-D(2)
MST edges: {A-C(3), B-C(1), B-D(2)}, Visited = {A, C, B, D}

Step 4: Edges from {A,C,B,D}: C-E(6), D-E(2) β†’ Pick D-E(2)
MST edges: {A-C(3), B-C(1), B-D(2), D-E(2)}, Visited = {A, C, B, D, E}

MST Total Cost = 3 + 1 + 2 + 2 = 8
MST Edges: A-C, B-C, B-D, D-E
Prim's Time Complexity:
Adjacency Matrix: O(VΒ²)
Min-Heap + Adj List: O(E log V)
Fibonacci Heap: O(E + V log V)
10b. Kruskal's Algorithm
Strategy: Sort ALL edges by weight, then greedily pick smallest edge that doesn't form a cycle (using Union-Find/Disjoint Set).

Steps:
1. Sort all edges in ascending order of weight
2. Pick the smallest edge. If it doesn't form a cycle, add to MST
3. Repeat until MST has V-1 edges
Q: Find MST using Kruskal's for the same graph:
(A-B,4), (A-C,3), (B-C,1), (B-D,2), (C-D,4), (D-E,2), (C-E,6)
Solution:

Step 1: Sort edges by weight:
B-C(1), B-D(2), D-E(2), A-C(3), A-B(4), C-D(4), C-E(6)

Step 2: Pick edges greedily:
Edge B-C(1) β†’ No cycle β†’ Add βœ“ MST = {B-C}
Edge B-D(2) β†’ No cycle β†’ Add βœ“ MST = {B-C, B-D}
Edge D-E(2) β†’ No cycle β†’ Add βœ“ MST = {B-C, B-D, D-E}
Edge A-C(3) β†’ No cycle β†’ Add βœ“ MST = {B-C, B-D, D-E, A-C}
(V-1 = 4 edges selected β†’ STOP)

MST Total Cost = 1 + 2 + 2 + 3 = 8 βœ“ (Same as Prim's!)
Kruskal's Time Complexity:
O(E log E) or O(E log V) β€” dominated by sorting edges
Property           Prim's                       Kruskal's
Approach           Vertex-based (grows MST)     Edge-based (picks edges)
Data Structure     Priority Queue / Min-Heap    Union-Find (Disjoint Sets)
Better for         Dense graphs                 Sparse graphs
Time (Adj Matrix)  O(VΒ²)                        O(E log E)
Starts from        Any single vertex            Sorted edge list
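Kruskal's cycle check is done with Union-Find, as noted above. A minimal sketch (vertices numbered 0..V-1, each edge stored as {weight, u, v}; DSU here uses path compression only, without union by rank):

```cpp
#include <vector>
#include <array>
#include <algorithm>
#include <numeric>
using namespace std;

struct DSU {                                   // Union-Find with path compression
    vector<int> parent;
    DSU(int n) : parent(n) { iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;              // same tree — edge would form a cycle
        parent[a] = b;
        return true;
    }
};

int kruskal(int V, vector<array<int, 3>> edges) {
    sort(edges.begin(), edges.end());          // ascending by weight
    DSU dsu(V);
    int cost = 0, used = 0;
    for (auto [w, u, v] : edges)
        if (dsu.unite(u, v)) {
            cost += w;
            if (++used == V - 1) break;        // MST complete at V-1 edges
        }
    return cost;
}
```

With the worked graph (A=0 … E=4) this returns the same MST cost of 8.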
11. Huffman Coding
Definition:
Huffman Coding is a greedy algorithm for lossless data compression. It assigns variable-length binary codes to characters β€” frequent characters get shorter codes, rare characters get longer codes.
Steps to build Huffman Tree:
1. Create a leaf node for each character with its frequency
2. Put all nodes in a min-priority queue (sorted by frequency)
3. While queue has more than 1 node:
    a. Extract two nodes with minimum frequency
    b. Create new internal node with frequency = sum of both
    c. Make the two nodes its left and right children
    d. Insert new node back into queue
4. Assign 0 to left branches, 1 to right branches
Solved Numerical
Q: Build Huffman Tree for characters: A(5), B(9), C(12), D(13), E(16), F(45)
Solution:

Step 1: Queue: [A(5), B(9), C(12), D(13), E(16), F(45)]

Step 2: Extract A(5) & B(9) β†’ Create node AB(14)
Queue: [C(12), D(13), AB(14), E(16), F(45)]

Step 3: Extract C(12) & D(13) β†’ Create node CD(25)
Queue: [AB(14), E(16), CD(25), F(45)]

Step 4: Extract AB(14) & E(16) β†’ Create node ABE(30)
Queue: [CD(25), ABE(30), F(45)]

Step 5: Extract CD(25) & ABE(30) β†’ Create node CDABE(55)
Queue: [F(45), CDABE(55)]

Step 6: Extract F(45) & CDABE(55) β†’ Create root(100)

Huffman Codes:
F = 0 (1 bit) β€” most frequent, shortest code!
C = 100 (3 bits)
D = 101 (3 bits)
A = 1100 (4 bits)
B = 1101 (4 bits)
E = 111 (3 bits)

Weighted Path Length = 5Γ—4 + 9Γ—4 + 12Γ—3 + 13Γ—3 + 16Γ—3 + 45Γ—1 = 224 bits
Fixed-length would need 3 bits Γ— 100 = 300 bits. Huffman saves 25%!
Time Complexity: O(n log n)
Application: ZIP files, JPEG, MP3 compression
Properties of Huffman Code:
β€’ It is a prefix-free code β€” no code is prefix of another (unambiguous decoding)
β€’ It gives optimal prefix code for a given frequency distribution
β€’ Greedy choice: always merge two least frequent nodes
12. Optimal Merge Patterns
Problem:
Given n sorted files of different sizes, merge them into a single sorted file with minimum total cost. The cost of merging two files = sum of their sizes.
Greedy Strategy (like Huffman):
1. Always merge the two smallest files first
2. The merged file size = sum of both
3. Repeat until only one file remains
4. Total cost = sum of all intermediate merge costs
Solved Numerical
Q: Merge files with sizes: [5, 3, 2, 7, 9]. Find minimum merge cost.
Solution:

Step 1: Sort: [2, 3, 5, 7, 9]

Step 2: Merge 2 + 3 = 5 β†’ Cost = 5
Files: [5, 5, 7, 9]

Step 3: Merge 5 + 5 = 10 β†’ Cost = 10
Files: [7, 9, 10]

Step 4: Merge 7 + 9 = 16 β†’ Cost = 16
Files: [10, 16]

Step 5: Merge 10 + 16 = 26 β†’ Cost = 26
Files: [26]

Total Minimum Merge Cost = 5 + 10 + 16 + 26 = 57
Wrong approach (largest first):
Merge 9+7=16 (cost 16), then 16+5=21 (cost 21), then 21+3=24 (cost 24), then 24+2=26 (cost 26)
Total = 16+21+24+26 = 87 β†’ Much worse!
Time Complexity: O(n log n) β€” using min-heap
Key Insight: Same structure as Huffman tree! Files = leaves, merge costs = internal nodes
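Because only the pairwise merge costs matter, the minimum total cost can be computed with a min-heap alone, mirroring the Huffman merge loop (mergeCost is our own name):

```cpp
#include <queue>
#include <vector>
#include <functional>
using namespace std;

// Minimum total merge cost: repeatedly merge the two smallest files.
long long mergeCost(const vector<long long>& sizes) {
    priority_queue<long long, vector<long long>, greater<long long>>
        pq(sizes.begin(), sizes.end());
    long long cost = 0;
    while (pq.size() > 1) {
        long long a = pq.top(); pq.pop();
        long long b = pq.top(); pq.pop();
        cost += a + b;                         // cost of this merge
        pq.push(a + b);                        // merged file goes back in
    }
    return cost;
}
```

On the worked example [5, 3, 2, 7, 9] this returns 57.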
πŸ“Š Chapter 2 β€” Dynamic Programming
πŸ“Š DYNAMIC PROGRAMMING β€” MIND MAP
Core Idea β†’ Solve overlapping subproblems once, store results, reuse them
Two Approaches β†’ Top-Down (Memoization) & Bottom-Up (Tabulation)
Key Problems β†’ 0/1 Knapsack, LCS, Matrix Chain, TSP, Subset Sum
Requirements β†’ Optimal Substructure + Overlapping Subproblems
vs Greedy β†’ DP explores ALL options; Greedy picks one immediately
1. Dynamic Programming β€” General Method
Definition:
Dynamic Programming (DP) is an algorithm design technique that solves complex problems by breaking them into simpler overlapping subproblems, solving each subproblem only once, and storing the results in a table to avoid redundant computation.
Two Properties Required:
1. Optimal Substructure:
An optimal solution to the problem contains optimal solutions to its subproblems.

2. Overlapping Subproblems:
The same subproblems are solved multiple times. DP stores their solutions to avoid recomputation.
Two Approaches:
Property           Top-Down (Memoization)       Bottom-Up (Tabulation)
Direction          Starts from main problem     Starts from smallest subproblem
Technique          Recursion + Cache            Iterative + Table
Solves             Only needed subproblems      All subproblems
Stack Overflow?    Possible (deep recursion)    No
Generally          Easier to code               More efficient
Classic Example β€” Fibonacci:
Without DP: fib(5) recalculates fib(3) twice, fib(2) three times β†’ O(2ⁿ)
With DP: Store fib(2), fib(3)... in table β†’ O(n)
This is the power of DP β€” eliminate redundant work!
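The Fibonacci speedup described above, as a top-down memoized sketch:

```cpp
#include <vector>
using namespace std;

// Memoized Fibonacci: each fib(i) is computed once,
// turning the O(2^n) recursion into O(n).
long long fibMemo(int n, vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];         // reuse stored result
    return memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
}

long long fib(int n) {
    vector<long long> memo(n + 1, -1);
    return fibMemo(n, memo);
}
```

fib(50) returns instantly, whereas the naive recursion would make billions of calls.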
2. 0/1 Knapsack Problem
Problem:
Given n items with weights w[] and profits p[], and knapsack capacity W. Each item is either taken completely (1) or not taken (0). Maximize total profit without exceeding W.
Recurrence Relation:

K[i][w] = max( K[i-1][w] , p[i] + K[i-1][w - w[i]] )   if w[i] ≀ w
K[i][w] = K[i-1][w]   if w[i] > w

Either skip item i, or include item i (if it fits)
Solved Numerical
Q: Items: Profits = [1, 2, 5, 6], Weights = [2, 3, 4, 5], Capacity W = 8. Find max profit using DP.
Solution β€” Build DP Table K[i][w]:

Items: i=1(p=1,w=2), i=2(p=2,w=3), i=3(p=5,w=4), i=4(p=6,w=5)

DP Table (rows = items 0-4, cols = capacity 0-8):

w→    0  1  2  3  4  5  6  7  8
i=0   0  0  0  0  0  0  0  0  0
i=1   0  0  1  1  1  1  1  1  1
i=2   0  0  1  2  2  3  3  3  3
i=3   0  0  1  2  5  5  6  7  7
i=4   0  0  1  2  5  6  6  7  8
K[4][8] = 8 β†’ Maximum Profit

How K[4][8] = 8 was computed:
K[4][8] = max(K[3][8], 6 + K[3][8-5]) = max(7, 6 + K[3][3]) = max(7, 6+2) = max(7, 8) = 8

Traceback β€” Which items?
K[4][8]=8 β‰  K[3][8]=7 β†’ Item 4 included βœ“ (remaining capacity = 8-5=3)
K[3][3]=2 = K[2][3]=2 β†’ Item 3 NOT included
K[2][3]=2 β‰  K[1][3]=1 β†’ Item 2 included βœ“ (remaining = 3-3=0)
K[1][0]=0 β†’ Done

Result: Items 2 & 4 selected. Max Profit = 2 + 6 = 8
Time Complexity: O(nW)
Space: O(nW) β€” can be optimized to O(W)
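The recurrence above as a bottom-up table sketch (knapsack01 is our own name; items are 0-indexed in p and w):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Bottom-up 0/1 knapsack: K[i][cap] = best profit using first i items.
int knapsack01(const vector<int>& p, const vector<int>& w, int W) {
    int n = (int)p.size();
    vector<vector<int>> K(n + 1, vector<int>(W + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int cap = 0; cap <= W; cap++) {
            K[i][cap] = K[i-1][cap];           // skip item i
            if (w[i-1] <= cap)                 // or include it, if it fits
                K[i][cap] = max(K[i][cap], p[i-1] + K[i-1][cap - w[i-1]]);
        }
    return K[n][W];
}
```

On the worked example (profits 1/2/5/6, weights 2/3/4/5, W = 8) this returns 8.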
3. Subset Sum Problem
Problem:
Given a set of non-negative integers and a target sum S, determine if there is a subset that adds up to exactly S.
Recurrence:
dp[i][j] = dp[i-1][j] OR dp[i-1][j - set[i]]

Either exclude element i, or include it (if j β‰₯ set[i])
Solved Numerical
Q: Set = {3, 7, 1, 8, 4}, Target Sum S = 11. Is subset possible?
Solution:

Build boolean DP table dp[i][j] = "Can we make sum j using first i elements?"

j→            0  1  2  3  4  5  6  7  8  9 10 11
{}            T  F  F  F  F  F  F  F  F  F  F  F
{3}           T  F  F  T  F  F  F  F  F  F  F  F
{3,7}         T  F  F  T  F  F  F  T  F  F  T  F
{3,7,1}       T  T  F  T  T  F  F  T  T  F  T  T
{3,7,1,8}     T  T  F  T  T  F  F  T  T  T  T  T
{3,7,1,8,4}   T  T  F  T  T  T  F  T  T  T  T  T

dp[5][11] = TRUE β†’ Yes, subset exists!

Traceback: 11 = 3 + 7 + 1 = {3, 7, 1} or 11 = 3 + 8 = {3, 8} or 11 = 7 + 4 = {7, 4}
Time: O(n Γ— S) | Space: O(n Γ— S) β€” can optimize to O(S)
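A sketch of the O(S)-space optimization mentioned above (subsetSum is our own name; the right-to-left inner loop ensures each element is used at most once):

```cpp
#include <vector>
using namespace std;

// Subset sum over a 1-D boolean array: dp[j] = "can we make sum j?"
bool subsetSum(const vector<int>& s, int S) {
    vector<bool> dp(S + 1, false);
    dp[0] = true;                              // empty subset makes sum 0
    for (int x : s)
        for (int j = S; j >= x; j--)           // right-to-left: element used once
            if (dp[j - x]) dp[j] = true;
    return dp[S];
}
```

On the worked example, sum 11 is reachable from {3, 7, 1, 8, 4} but sum 6 is not (matching the F in column 6 of the table).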
4. Change Making Problem (Coin Change)
Problem:
Given coin denominations and an amount, find the minimum number of coins needed to make that amount. You have unlimited supply of each denomination.
Recurrence:
dp[i] = min(dp[i - coin[j]] + 1) for all valid coins
dp[0] = 0 (base case: 0 coins for amount 0)
Solved Numerical
Q: Coins = {1, 5, 6, 9}. Find minimum coins for amount = 11.
Solution β€” Build DP array:

dp[0] = 0
dp[1] = dp[1-1] + 1 = dp[0] + 1 = 1 (use coin 1)
dp[2] = dp[2-1] + 1 = 2 (1+1)
dp[3] = 3 (1+1+1)
dp[4] = 4 (1+1+1+1)
dp[5] = min(dp[5-1]+1, dp[5-5]+1) = min(5, 1) = 1 (use coin 5)
dp[6] = min(dp[5]+1, dp[1]+1, dp[0]+1) = min(2, 2, 1) = 1 (use coin 6)
dp[7] = min(dp[6]+1, dp[2]+1, dp[1]+1) = min(2, 3, 2) = 2 (6+1)
dp[8] = min(dp[7]+1, dp[3]+1, dp[2]+1) = min(3, 4, 3) = 3
dp[9] = min(dp[8]+1, dp[4]+1, dp[3]+1, dp[0]+1) = min(4, 5, 4, 1) = 1 (use coin 9)
dp[10] = min(dp[9]+1, dp[5]+1, dp[4]+1, dp[1]+1) = min(2, 2, 5, 2) = 2 (9+1 or 5+5)
dp[11] = min(dp[10]+1, dp[6]+1, dp[5]+1, dp[2]+1) = min(3, 2, 2, 3) = 2

Minimum coins = 2 β†’ coins used: 5 + 6 = 11

Note: Greedy would pick 9+1+1 = 3 coins. DP gives optimal answer = 2 coins!
Time: O(amount Γ— n) | Space: O(amount)
where n = number of coin denominations
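The dp array above can be built as follows (minCoins is our own name; it returns -1 when the amount is unreachable):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Bottom-up coin change: dp[a] = min coins to make amount a.
int minCoins(const vector<int>& coins, int amount) {
    const int INF = 1e9;
    vector<int> dp(amount + 1, INF);
    dp[0] = 0;                                 // base case: 0 coins for amount 0
    for (int a = 1; a <= amount; a++)
        for (int c : coins)
            if (c <= a && dp[a - c] != INF)
                dp[a] = min(dp[a], dp[a - c] + 1);
    return dp[amount] == INF ? -1 : dp[amount];
}
```

For coins {1, 5, 6, 9} and amount 11 this returns 2, beating greedy's 3.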
5. Optimal Binary Search Tree (OBST)
Problem:
Given n keys with their search probabilities (frequencies), construct a BST that minimizes the total search cost (expected number of comparisons).
Recurrence:
C[i][j] = min over i≀r≀j { C[i][r-1] + C[r+1][j] + W[i][j] }

W[i][j] = sum of probabilities from key i to key j
C[i][i] = p[i] (single key)
C[i][i-1] = 0 (empty tree)
Solved Numerical
Q: Keys = {10, 20, 30} with frequencies p = {3, 3, 1}. Find optimal BST cost.
Solution:

Keys: k1=10(p=3), k2=20(p=3), k3=30(p=1)

Single keys (diagonal):
C[1][1] = 3, C[2][2] = 3, C[3][3] = 1

Two keys:
C[1][2]: W[1][2] = 3+3 = 6
  r=1: C[1][0] + C[2][2] + 6 = 0 + 3 + 6 = 9
  r=2: C[1][1] + C[3][2] + 6 = 3 + 0 + 6 = 9
  C[1][2] = min(9, 9) = 9, root = k1 or k2

C[2][3]: W[2][3] = 3+1 = 4
  r=2: C[2][1] + C[3][3] + 4 = 0 + 1 + 4 = 5
  r=3: C[2][2] + C[4][3] + 4 = 3 + 0 + 4 = 7
  C[2][3] = min(5, 7) = 5, root = k2

All three keys:
C[1][3]: W[1][3] = 3+3+1 = 7
  r=1: C[1][0] + C[2][3] + 7 = 0 + 5 + 7 = 12
  r=2: C[1][1] + C[3][3] + 7 = 3 + 1 + 7 = 11
  r=3: C[1][2] + C[4][3] + 7 = 9 + 0 + 7 = 16
  C[1][3] = min(12, 11, 16) = 11, root = k2 (20)

Optimal BST Cost = 11, Root = 20
Tree: 20 is root, 10 is left child, 30 is right child
Time: O(nΒ³) | Space: O(nΒ²)
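A sketch of the O(nΒ³) cost computation (obstCost is our own name; it returns only the optimal cost, not the tree, and uses prefix sums for W[i][j]):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// OBST cost: p[i] = frequency of the i-th smallest key (0-indexed input).
int obstCost(const vector<int>& p) {
    int n = (int)p.size();
    vector<int> prefix(n + 1, 0);              // prefix sums give W[i][j] in O(1)
    for (int i = 1; i <= n; i++) prefix[i] = prefix[i-1] + p[i-1];
    // C[i][j] = cost over keys i..j (1-indexed); C[i][i-1] = 0 (empty tree)
    vector<vector<int>> C(n + 2, vector<int>(n + 1, 0));
    for (int len = 1; len <= n; len++)
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            int W = prefix[j] - prefix[i-1];
            int best = 1e9;
            for (int r = i; r <= j; r++)       // try each key as root
                best = min(best, C[i][r-1] + C[r+1][j] + W);
            C[i][j] = best;
        }
    return C[1][n];
}
```

On the worked frequencies {3, 3, 1} this returns 11, matching the table above.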
6. Matrix-Chain Multiplication
Problem:
Given a chain of matrices A₁×Aβ‚‚Γ—...Γ—Aβ‚™, find the optimal parenthesization that minimizes the total number of scalar multiplications. (Matrix multiplication is associative β€” order matters for cost!)
Recurrence:
M[i][j] = min over i≀k<j { M[i][k] + M[k+1][j] + p[i-1]Γ—p[k]Γ—p[j] }

Dimensions: A_i has dimensions p[i-1] Γ— p[i]
M[i][i] = 0 (single matrix β€” no multiplication needed)
Solved Numerical
Q: Matrices A₁(10Γ—30), Aβ‚‚(30Γ—5), A₃(5Γ—60). Find minimum multiplications.
Solution:
Dimensions array p = [10, 30, 5, 60]

Chain length = 2:
M[1][2] = p[0]Γ—p[1]Γ—p[2] = 10Γ—30Γ—5 = 1500
M[2][3] = p[1]Γ—p[2]Γ—p[3] = 30Γ—5Γ—60 = 9000

Chain length = 3 (M[1][3]):
k=1: M[1][1] + M[2][3] + p[0]Γ—p[1]Γ—p[3] = 0 + 9000 + 10Γ—30Γ—60 = 9000 + 18000 = 27000
k=2: M[1][2] + M[3][3] + p[0]Γ—p[2]Γ—p[3] = 1500 + 0 + 10Γ—5Γ—60 = 1500 + 3000 = 4500

M[1][3] = min(27000, 4500) = 4500
Optimal split at k=2: (A₁ Γ— Aβ‚‚) Γ— A₃

First multiply A₁(10Γ—30) Γ— Aβ‚‚(30Γ—5) β†’ result(10Γ—5) β†’ 1500 multiplications
Then result(10Γ—5) Γ— A₃(5Γ—60) β†’ final(10Γ—60) β†’ 3000 multiplications
Total = 1500 + 3000 = 4500

vs (A₁ Γ— (Aβ‚‚ Γ— A₃)) = 9000 + 18000 = 27000 β€” much worse!
Time: O(nΒ³) | Space: O(nΒ²)
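The recurrence as a bottom-up sketch (matrixChain is our own name; p is the dimension array, so A_i is p[i-1] Γ— p[i]):

```cpp
#include <vector>
#include <climits>
#include <algorithm>
using namespace std;

// Bottom-up matrix-chain order: returns min scalar multiplications.
long long matrixChain(const vector<int>& p) {
    int n = (int)p.size() - 1;                 // number of matrices
    vector<vector<long long>> M(n + 1, vector<long long>(n + 1, 0));
    for (int len = 2; len <= n; len++)         // chain length
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            M[i][j] = LLONG_MAX;
            for (int k = i; k < j; k++)        // split point
                M[i][j] = min(M[i][j],
                              M[i][k] + M[k+1][j] + 1LL * p[i-1] * p[k] * p[j]);
        }
    return M[1][n];
}
```

With p = [10, 30, 5, 60] this returns 4500, the optimal cost computed above.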
7. Longest Common Subsequence (LCS)
Problem:
Given two strings/sequences X and Y, find the longest subsequence common to both. A subsequence is a sequence that can be derived by deleting elements without changing order.
Recurrence:
If X[i] == Y[j]:   L[i][j] = L[i-1][j-1] + 1
If X[i] β‰  Y[j]:   L[i][j] = max(L[i-1][j], L[i][j-1])

Base: L[0][j] = L[i][0] = 0
Solved Numerical
Q: Find LCS of X = "ABCBDAB" and Y = "BDCAB"
Solution β€” DP Table:

"" B D C A B "" 0 0 0 0 0 0 A 0 0 0 0 1 1 B 0 1 1 1 1 2 C 0 1 1 2 2 2 B 0 1 1 2 2 3 D 0 1 2 2 2 3 A 0 1 2 2 3 3 B 0 1 2 2 3 4
L[7][5] = 4 β†’ LCS length = 4

Traceback (diagonal = match):
L[7][5]=4: X[7]=B = Y[5]=B β†’ match! β†’ go diagonal to L[6][4]
L[6][4]=3: X[6]=A = Y[4]=A β†’ match! β†’ go diagonal to L[5][3]
L[5][3]=2: X[5]=D β‰  Y[3]=C β†’ go to max(L[4][3], L[5][2]) = L[4][3]
L[4][3]=2: X[4]=B β‰  Y[3]=C β†’ go to max(L[3][3], L[4][2]) = L[3][3]
L[3][3]=2: X[3]=C = Y[3]=C β†’ match! β†’ go diagonal to L[2][2]
L[2][2]=1: X[2]=B β‰  Y[2]=D β†’ go to max(L[1][2], L[2][1]) = L[2][1]
L[2][1]=1: X[2]=B = Y[1]=B β†’ match! β†’ go diagonal to L[1][0]=0

LCS = "BCAB" (length 4)
Time: O(m Γ— n) | Space: O(m Γ— n)
where m = |X|, n = |Y|
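A sketch of the table plus traceback (lcs is our own name; ties in the traceback prefer moving up, which may produce a different but equally long LCS in general):

```cpp
#include <string>
#include <vector>
#include <algorithm>
using namespace std;

// LCS table + traceback; returns one longest common subsequence.
string lcs(const string& X, const string& Y) {
    int m = (int)X.size(), n = (int)Y.size();
    vector<vector<int>> L(m + 1, vector<int>(n + 1, 0));
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            L[i][j] = (X[i-1] == Y[j-1]) ? L[i-1][j-1] + 1
                                         : max(L[i-1][j], L[i][j-1]);
    string out;
    for (int i = m, j = n; i > 0 && j > 0; ) {       // walk back through the table
        if (X[i-1] == Y[j-1]) { out += X[i-1]; i--; j--; }   // diagonal = match
        else if (L[i-1][j] >= L[i][j-1]) i--;
        else j--;
    }
    reverse(out.begin(), out.end());
    return out;
}
```

On the worked pair ("ABCBDAB", "BDCAB") this returns "BCAB", length 4.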
8. Travelling Salesman Problem (TSP)
Problem:
Given n cities and distances between every pair, find the shortest route that visits every city exactly once and returns to the starting city (Hamiltonian cycle with minimum cost).
DP Recurrence (Held-Karp):
g(i, S) = min over j∈S { c(i, j) + g(j, S - {j}) }

g(i, S) = minimum cost to go from city i, visiting all cities in set S, and returning to start
Base: g(i, βˆ…) = c(i, 1) β€” direct return to starting city
Solved Numerical
Q: 4 cities with distance matrix:
d = [[0, 10, 15, 20], [10, 0, 35, 25], [15, 35, 0, 30], [20, 25, 30, 0]]
Find minimum cost tour starting from city 1.
Solution:

Starting city = 1. Need to visit {2, 3, 4} and return.

Base cases g(i, βˆ…):
g(2, βˆ…) = c(2,1) = 10
g(3, βˆ…) = c(3,1) = 15
g(4, βˆ…) = c(4,1) = 20

Sets of size 1:
g(2, {3}) = c(2,3) + g(3,βˆ…) = 35 + 15 = 50
g(2, {4}) = c(2,4) + g(4,βˆ…) = 25 + 20 = 45
g(3, {2}) = c(3,2) + g(2,βˆ…) = 35 + 10 = 45
g(3, {4}) = c(3,4) + g(4,βˆ…) = 30 + 20 = 50
g(4, {2}) = c(4,2) + g(2,βˆ…) = 25 + 10 = 35
g(4, {3}) = c(4,3) + g(3,βˆ…) = 30 + 15 = 45

Sets of size 2:
g(2, {3,4}) = min{ c(2,3)+g(3,{4}), c(2,4)+g(4,{3}) } = min{35+50, 25+45} = min{85, 70} = 70
g(3, {2,4}) = min{ c(3,2)+g(2,{4}), c(3,4)+g(4,{2}) } = min{35+45, 30+35} = min{80, 65} = 65
g(4, {2,3}) = min{ c(4,2)+g(2,{3}), c(4,3)+g(3,{2}) } = min{25+50, 30+45} = min{75, 75} = 75

Final (set of size 3):
g(1, {2,3,4}) = min{
  c(1,2) + g(2,{3,4}) = 10 + 70 = 80
  c(1,3) + g(3,{2,4}) = 15 + 65 = 80
  c(1,4) + g(4,{2,3}) = 20 + 75 = 95
} = 80

Minimum Tour Cost = 80
Tour: 1 β†’ 2 β†’ 4 β†’ 3 β†’ 1 (cost = 10+25+30+15 = 80)
Time: O(nΒ² Γ— 2ⁿ) | Space: O(n Γ— 2ⁿ)
Much better than brute force O(n!) but still exponential
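The Held-Karp recurrence is usually implemented with bitmask DP. A sketch with 0-indexed cities, using the equivalent forward formulation g[mask][i] = cheapest way to start at city 0, visit exactly the cities in mask, and end at i (tsp is our own name):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Held-Karp TSP with bitmasks: O(n^2 * 2^n) time, O(n * 2^n) space.
int tsp(const vector<vector<int>>& d) {
    int n = (int)d.size(), FULL = 1 << n;
    const int INF = 1e9;
    vector<vector<int>> g(FULL, vector<int>(n, INF));
    g[1][0] = 0;                               // tour starts at city 0
    for (int mask = 1; mask < FULL; mask++)
        for (int i = 0; i < n; i++) {
            if (!(mask & (1 << i)) || g[mask][i] == INF) continue;
            for (int j = 0; j < n; j++)        // extend tour to an unvisited city j
                if (!(mask & (1 << j)))
                    g[mask | (1 << j)][j] =
                        min(g[mask | (1 << j)][j], g[mask][i] + d[i][j]);
        }
    int best = INF;
    for (int i = 1; i < n; i++)                // close the cycle back to city 0
        best = min(best, g[FULL - 1][i] + d[i][0]);
    return best;
}
```

On the worked 4-city matrix this returns 80, the tour cost found above.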
9. Divide & Conquer vs Dynamic Programming
Property       Divide & Conquer                        Dynamic Programming
Subproblems    Independent (no overlap)                Overlapping (solved many times)
Approach       Top-Down (recursive)                    Usually Bottom-Up (tabular)
Storage        No memoization needed                   Stores results in table
Redundancy     May recompute same subproblems          Never recomputes (looks up table)
Examples       Merge Sort, Quick Sort, Binary Search   0/1 Knapsack, LCS, Matrix Chain, TSP
Best for       Problems with distinct subproblems      Optimization problems with overlapping subproblems
Easy way to remember:
β€’ D&C β†’ Divide, Solve, Combine (subproblems are independent)
β€’ DP β†’ Store and Reuse (subproblems overlap, so save answers)
πŸ”™ Chapter 3 β€” Backtracking & Branch and Bound
πŸ“Š BACKTRACKING & BRANCH-BOUND β€” MIND MAP
Backtracking β†’ Build solution step-by-step, undo (backtrack) when constraint is violated
State Space Tree β†’ Tree of all possible solutions; prune invalid branches
Backtracking Problems β†’ N-Queens, Sum of Subsets, Hamiltonian Cycle
Branch & Bound β†’ Like backtracking but uses bounds to prune more aggressively
B&B Problems β†’ 0/1 Knapsack, TSP (optimization problems)
1. Backtracking β€” General Method
Definition:
Backtracking is a systematic way of trying out different solutions. If the current solution doesn't satisfy constraints, it backtracks (undoes the last step) and tries a different path in the state-space tree.
Key Concepts:
β€’ State Space Tree: Tree representing all possible solutions
β€’ Solution Vector: (x₁, xβ‚‚, ..., xβ‚™) β€” choices made at each step
β€’ Bounding Function: Kills paths that can't lead to valid solution (pruning)
β€’ Promising Node: Node that could still lead to a valid solution
β€’ Non-promising: Node that violates constraints β†’ backtrack!
C++
// General Backtracking Pattern (Permutation example)
#include <iostream>
using namespace std;

int x[20], n;

void backtrack(int k) {
    for (int v = 1; v <= n; v++) {
        x[k] = v;
        if (k == n) {                            // Complete solution
            for (int i = 1; i <= n; i++) cout << x[i] << " ";
            cout << endl;
        }
        else if (/* constraints satisfied */ true)
            backtrack(k + 1);                    // Go deeper
        // else: prune - don't recurse (backtrack)
    }
}
2. N-Queen's Problem
Problem:
Place N queens on an NΓ—N chessboard such that no two queens attack each other. Queens can attack along rows, columns, and diagonals.
Constraints (for queen at row i, col j):
1. No two queens in the same column: x[i] β‰  x[j]
2. No two queens on the same diagonal: |x[i] - x[j]| β‰  |i - j|

Approach: Place one queen per row. For each row, try each column. If safe, move to next row. If not safe in any column, backtrack.
C++
#include <iostream>
#include <cstdlib>   // abs
using namespace std;

int x[20];  // x[i] = column of queen in row i

bool canPlace(int k, int col) {
    for (int i = 1; i < k; i++) {
        if (x[i] == col || abs(x[i] - col) == abs(i - k))
            return false;                        // Same column or diagonal
    }
    return true;
}

void nQueens(int k, int n) {
    for (int col = 1; col <= n; col++) {
        if (canPlace(k, col)) {
            x[k] = col;
            if (k == n) {                        // All queens placed - print solution
                for (int i = 1; i <= n; i++) cout << x[i] << " ";
                cout << endl;
            }
            else nQueens(k + 1, n);              // Place next queen
        }
    }
}
Solved Numerical β€” 4-Queens
Q: Find all solutions for 4-Queens problem
Solution:

Solution 1: x = (2, 4, 1, 3)        Solution 2: x = (3, 1, 4, 2)

     1  2  3  4                          1  2  3  4
  1  .  Q  .  .                       1  .  .  Q  .
  2  .  .  .  Q                       2  Q  .  .  .
  3  Q  .  .  .                       3  .  .  .  Q
  4  .  .  Q  .                       4  .  Q  .  .
4-Queens has exactly 2 solutions

Trace for Solution 1:
Row 1: Try col 1 β†’ place Q at (1,1)
Row 2: Try col 1 β†’ conflict! col 2 β†’ diagonal conflict! col 3 β†’ place Q at (2,3)
Row 3: All columns conflict β†’ Backtrack!
Row 2: Try col 4 β†’ place Q at (2,4); Row 3: col 2 β†’ place Q at (3,2); Row 4: no safe column β†’ Backtrack all the way to Row 1!
Row 1: Try col 2 β†’ place Q at (1,2)
Row 2: Try col 1 β†’ conflict! col 4 β†’ place Q at (2,4) βœ“
Row 3: col 1 β†’ place Q at (3,1) βœ“
Row 4: col 3 β†’ place Q at (4,3) βœ“
Solution found: (2, 4, 1, 3)
Important facts:
β€’ 4-Queens β†’ 2 solutions
β€’ 8-Queens β†’ 92 solutions
β€’ N-Queens for N β‰₯ 4 always has solutions
β€’ Time Complexity: O(N!) in worst case (but pruning reduces it significantly)
3. Sum of Subsets
Problem:
Given a set of n positive integers and a target sum M, find all subsets whose elements add up to M using backtracking.
Approach:
β€’ Sort the set in ascending order
β€’ At each step, decide to include or exclude the current element
β€’ Prune if: current sum + remaining elements < M (can't reach target)
β€’ Prune if: current sum + next element > M (will exceed target)
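The include/exclude search with both pruning tests can be sketched as follows (a sketch with my own function and parameter names; rem is the total of the elements not yet considered, which supports the "can't reach target" test):

```cpp
#include <vector>
using namespace std;

// Backtracking for Sum of Subsets. k = index of current element,
// sum = sum of included elements, rem = sum of elements w[k..end].
void sumOfSubsets(const vector<int>& w, int k, int sum, int rem, int M,
                  vector<int>& current, vector<vector<int>>& solutions) {
    if (sum == M) { solutions.push_back(current); return; }
    if (k == (int)w.size()) return;
    if (sum + rem < M) return;                  // prune: can't reach target
    if (sum + w[k] <= M) {                      // include w[k] if no overshoot
        current.push_back(w[k]);
        sumOfSubsets(w, k + 1, sum + w[k], rem - w[k], M, current, solutions);
        current.pop_back();                     // backtrack
    }
    sumOfSubsets(w, k + 1, sum, rem - w[k], M, current, solutions);  // exclude
}
```

On W = {1, 2, 5, 6, 8} with M = 9 (rem starts at 22, the total) this finds exactly {1, 2, 6} and {1, 8}, matching the numerical below.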
Solved Numerical
Q: Set W = {1, 2, 5, 6, 8}, M = 9. Find all subsets with sum = 9.
Solution (State Space Tree exploration):

Sort: W = {1, 2, 5, 6, 8} β†’ already sorted

Include 1 (sum=1) β†’ Include 2 (sum=3) β†’ Include 5 (sum=8) β†’ Include 6 (sum=14 > 9) βœ—
   Backtrack β†’ Exclude 6 β†’ Include 8 (sum=16 > 9) βœ— β†’ Backtrack
   Sum = 8, no more elements that fit β†’ Backtrack
Include 1 β†’ Include 2 β†’ Exclude 5 β†’ Include 6 (sum=9) β†’ Solution: {1, 2, 6} βœ“
Include 1 β†’ Exclude 2 β†’ Include 5 (sum=6) β†’ Exclude 6 β†’ Include 8 (sum=14 > 9) βœ—
Include 1 β†’ Exclude 2 β†’ Exclude 5 β†’ Include 6 (sum=7) β†’ Include 8 (sum=15 > 9) βœ—
Include 1 β†’ Exclude 2 β†’ Exclude 5 β†’ Exclude 6 β†’ Include 8 (sum=9) β†’ Solution: {1, 8} βœ“
Exclude 1 β†’ Include 2 β†’ Include 5 (sum=7) β†’ Exclude 6 β†’ Include 8 (sum=15 > 9) βœ—
Exclude 1 β†’ Exclude 2 β†’ Include 5 (sum=5) β†’ Exclude 6 β†’ Include 8 (sum=13 > 9) βœ—
Exclude 1 β†’ Include 2 β†’ Exclude 5 β†’ Include 6 (sum=8) β†’ Exclude 8 β†’ sum=8 β‰  9 βœ—
(... more branches explored and pruned ...)

All subsets with sum = 9: {1, 2, 6} and {1, 8}
4. Hamiltonian Cycles
Definition:
A Hamiltonian cycle is a path in a graph that visits every vertex exactly once and returns to the starting vertex. The problem is to find if such a cycle exists.
Backtracking Approach:
1. Start from vertex 1 (or any vertex)
2. Try adding adjacent vertices one by one
3. A vertex can be added if: it's adjacent to the last vertex AND not already visited
4. If all vertices are included and last vertex connects to first β†’ Hamiltonian cycle found!
5. If stuck, backtrack and try next vertex
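Steps 1-5 above can be sketched over an adjacency matrix (a sketch; the names hamHelper/hasHamiltonianCycle are mine, and vertices are 0-indexed here instead of the 1-indexed labels used in the notes):

```cpp
#include <vector>
using namespace std;

// path[0..pos-1] holds the partial cycle; returns true if it can be completed.
bool hamHelper(const vector<vector<int>>& adj, vector<int>& path,
               vector<bool>& visited, int pos) {
    int n = adj.size();
    if (pos == n)                                // all vertices placed:
        return adj[path[n - 1]][path[0]] == 1;   // last must connect to first
    for (int v = 0; v < n; v++) {
        if (adj[path[pos - 1]][v] && !visited[v]) {  // adjacent and unused
            path[pos] = v; visited[v] = true;
            if (hamHelper(adj, path, visited, pos + 1)) return true;
            visited[v] = false;                  // backtrack
        }
    }
    return false;
}

bool hasHamiltonianCycle(const vector<vector<int>>& adj, vector<int>& path) {
    int n = adj.size();
    path.assign(n, -1);
    vector<bool> visited(n, false);
    path[0] = 0; visited[0] = true;              // fixing the start is safe:
    return hamHelper(adj, path, visited, 1);     // a cycle visits every vertex
}
```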
Solved Numerical
Q: Find Hamiltonian cycle for graph: 1-2, 1-3, 2-3, 2-4, 3-4, 3-5, 4-5
Solution:

Start: vertex 1
Path: [1] β†’ try 2: [1,2] β†’ try 3: [1,2,3] β†’ try 4: [1,2,3,4] β†’ try 5: [1,2,3,4,5]
Check: Is 5 adjacent to 1? Look at edges... 5 is connected to 3 and 4, NOT to 1.
Backtrack!

Path: [1,2,3,5] β†’ try 4: [1,2,3,5,4]
Check: Is 4 adjacent to 1? Look at edges... No edge 4-1.
Backtrack!

Path: [1,2,4] β†’ try 3: [1,2,4,3] β†’ try 5: [1,2,4,3,5]
Check: Is 5 adjacent to 1? No. Backtrack!

Path: [1,2,4,5] β†’ try 3: [1,2,4,5,3]
Check: Is 3 adjacent to 1? Yes! Edge 1-3 exists.
Hamiltonian Cycle: 1 β†’ 2 β†’ 4 β†’ 5 β†’ 3 β†’ 1 βœ“
Key Points:
β€’ Hamiltonian Cycle problem is NP-Complete
β€’ No known polynomial-time algorithm
β€’ Backtracking prunes invalid paths but worst case is still O(n!)
5. Branch and Bound β€” General Method
Definition:
Branch and Bound (B&B) is like backtracking but for optimization problems. It uses a bound (upper/lower) to prune branches that cannot lead to a better solution than the current best.
Property     | Backtracking                    | Branch & Bound
Problem Type | Decision problems (feasibility) | Optimization problems (min/max)
Traversal    | DFS (Depth-First)               | BFS, DFS, or Best-First (LC search)
Pruning      | Bounding function (constraints) | Bound function (cost estimate)
Goal         | Find any/all valid solutions    | Find optimal solution
Live Node    | Explored via DFS                | Explored based on bound (best-first)
Key Terms:
β€’ Live Node: Node whose children haven't been fully explored
β€’ E-Node: Node currently being expanded
β€’ Dead Node: Node that is pruned (bound worse than current best)
β€’ Upper Bound (UB): Best possible value from this node
β€’ Lower Bound (LB): Minimum achievable from this node
6. 0/1 Knapsack using Branch & Bound
Approach:
Use upper bound = current profit + fractional knapsack on remaining items. If UB ≀ current best known profit, prune the node.
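That upper-bound rule can be written as a standalone helper (a sketch assuming the items are already sorted by p/w ratio; the name upperBound is mine):

```cpp
// Upper bound for a 0/1 Knapsack B&B node: current profit plus the best
// fractional completion over the remaining items (items sorted by p/w).
double upperBound(const int p[], const int w[], int n, int W,
                  int level, int curProfit, int curWeight) {
    double ub = curProfit;
    int weight = curWeight;
    for (int i = level; i < n; i++) {
        if (weight + w[i] <= W) {               // item fits fully
            weight += w[i];
            ub += p[i];
        } else {                                // take a fraction, then stop
            ub += (double)p[i] * (W - weight) / w[i];
            break;
        }
    }
    return ub;
}
```

For the numerical below (p=[10,10,12,18], w=[2,4,6,9], W=15) the root call gives 10+10+12+(3/9)*18 = 38.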
Solved Numerical
Q: Items (sorted by p/w): p=[10,10,12,18], w=[2,4,6,9], W=15. Solve using B&B.
Solution:

Profit/Weight ratios: 10/2=5, 10/4=2.5, 12/6=2, 18/9=2 (already sorted by ratio)

Root: profit=0, weight=0
UB = 0 + fractional knapsack on all items with W=15
= 10 + 10 + 12 + (15-12)Γ—2 = 32+6 = 38 (take items 1,2,3 fully, 3/9 of item 4)

Level 1 β€” Item 1 (w=2, p=10):
Include: profit=10, weight=2, UB=10+10+12+(3/9)Γ—18=38 ← explore
Exclude: profit=0, weight=0, UB=0+10+12+(5/9)Γ—18=32 ← explore later

Level 2 β€” Item 2 (w=4, p=10): (from include item 1, p=10, w=2)
Include: profit=20, weight=6, UB=20+12+(3/9)Γ—18=38 ← explore
Exclude: profit=10, weight=2, UB=10+12+(7/9)Γ—18=36

Level 3 β€” Item 3 (w=6, p=12): (from p=20, w=6)
Include: profit=32, weight=12, UB=32+(3/9)Γ—18=38 ← explore
Exclude: profit=20, weight=6, UB=20+(9/9)Γ—18=38

Level 4 β€” Item 4 (w=9, p=18): (from p=32, w=12)
Include: weight=12+9=21 > 15 β†’ pruned!
Exclude: profit=32, weight=12 β†’ feasible solution = 32

Now check other branches... Exclude item 3, include item 4: p=20+18=38, w=6+9=15 ≀ 15 βœ“
Better solution found = 38!

Optimal: Items 1, 2, 4 β†’ Profit = 10+10+18 = 38, Weight = 2+4+9 = 15
7. Travelling Salesperson using Branch & Bound
Approach:
Use a lower bound on the cost of completing the tour. The lower bound for each node = cost so far + estimated minimum remaining cost. Prune if LB β‰₯ current best tour.
Lower Bound Calculation (Row Reduction method):
1. Reduce cost matrix: subtract row minimum from each row, then column minimum from each column
2. Sum of all reductions = Lower Bound for root
3. For each child node: LB = parent LB + cost(parent→child) + additional reduction
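Steps 1 and 2 (the reduction that produces the root lower bound) can be sketched as follows (a sketch; representing ∞ as INT_MAX and the name reduceMatrix are my choices):

```cpp
#include <vector>
#include <algorithm>
#include <climits>
using namespace std;

const int INF = INT_MAX;   // stands in for the ∞ entries of the cost matrix

// Reduce a TSP cost matrix in place: subtract each row's minimum from the row,
// then each column's minimum from the column. The total amount subtracted is
// the lower-bound contribution of this node.
int reduceMatrix(vector<vector<int>>& m) {
    int n = m.size(), total = 0;
    for (int i = 0; i < n; i++) {               // row reduction
        int mn = INF;
        for (int j = 0; j < n; j++) mn = min(mn, m[i][j]);
        if (mn != INF && mn > 0) {
            total += mn;
            for (int j = 0; j < n; j++)
                if (m[i][j] != INF) m[i][j] -= mn;
        }
    }
    for (int j = 0; j < n; j++) {               // column reduction
        int mn = INF;
        for (int i = 0; i < n; i++) mn = min(mn, m[i][j]);
        if (mn != INF && mn > 0) {
            total += mn;
            for (int i = 0; i < n; i++)
                if (m[i][j] != INF) m[i][j] -= mn;
        }
    }
    return total;
}
```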
Solved Numerical
Q: Find minimum cost tour for 4 cities: Cost matrix = [[∞,10,15,20],[5,∞,9,10],[6,13,∞,12],[8,8,9,∞]]
Solution β€” Row Reduction:

Original Matrix:
        1    2    3    4    Row Min
  1  [  ∞   10   15   20 ]    10
  2  [  5    ∞    9   10 ]     5
  3  [  6   13    ∞   12 ]     6
  4  [  8    8    9    ∞ ]     8
After Row Reduction (subtract row min):
        1    2    3    4
  1  [  ∞    0    5   10 ]
  2  [  0    ∞    4    5 ]
  3  [  0    7    ∞    6 ]
  4  [  0    0    1    ∞ ]
Col Min:  0    0    1    5

After Column Reduction (subtract col min from cols 3, 4):
        1    2    3    4
  1  [  ∞    0    4    5 ]
  2  [  0    ∞    3    0 ]
  3  [  0    7    ∞    1 ]
  4  [  0    0    0    ∞ ]
Row reduction sum = 10+5+6+8 = 29
Col reduction sum = 0+0+1+5 = 6
Root Lower Bound = 29 + 6 = 35

Now expand nodes (1β†’2, 1β†’3, 1β†’4), compute LB for each, explore minimum first...
Following the LC (Least Cost) strategy leads to:

Optimal Tour: 1 β†’ 2 β†’ 4 β†’ 3 β†’ 1, Cost = 10 + 10 + 12 + 6 = 35
B&B vs DP for TSP:
β€’ DP (Held-Karp): O(nΒ² Γ— 2ⁿ) β€” guarantees optimal
β€’ B&B: Often faster in practice due to pruning β€” but worst case same
β€’ Both are exponential β€” TSP is NP-Hard!
⚑ Ready for Exam? Finished reading everything? Now do a Quick Revision: formulas, key points and common mistakes in one place! Go to Quick Revision β†’
⚑ Quick Revision β€” Last Minute Exam Prep!
πŸ“Œ How to use: Read this 15 minutes before exam. Includes definitions, formulas, tricks, solved patterns β€” everything you need!

βš”οΈ Chapter 1 β€” Divide & Conquer

πŸ“– What is Divide & Conquer?
Break problem into subproblems β†’ Solve recursively β†’ Combine solutions. Subproblems are independent.

πŸ’‘ Trick: Problem β†’ Divide β†’ Conquer β†’ Combine β†’ Answer
Recurrence: T(n) = aT(n/b) + f(n)
πŸ” Binary Search β€” Quick Points:
β€’ Array MUST be sorted
β€’ Compare with mid β†’ go left or right
β€’ T(n) = T(n/2) + O(1) = O(log n)
β€’ For n = 10⁢ β†’ only ~20 comparisons!

Exam Tip: Show low, high, mid at each step. Mark "Found" or "Not Found".
βœ… Sorting Algorithms (Remember differences!):

Merge Sort: Always O(n log n), stable, O(n) extra space
β†’ Divide β†’ sort halves β†’ merge sorted halves

Quick Sort: O(n log n) avg, O(nΒ²) worst, in-place, unstable
β†’ Choose pivot β†’ partition β†’ sort left & right

Selection Sort: Always O(nΒ²), simple, in-place
β†’ Find min β†’ swap to front β†’ repeat

⚠️ Common Mistake: Saying Quick Sort is always O(n log n) β€” it's O(nΒ²) when already sorted!
Strassen's Matrix Multiplication:

Standard: 8 multiplications β†’ O(nΒ³)
Strassen: 7 multiplications β†’ O(n^2.81)

7 Products (MUST Memorize):
P1 = a(f-h), P2 = (a+b)h, P3 = (c+d)e
P4 = d(g-e), P5 = (a+d)(e+h)
P6 = (b-d)(g+h), P7 = (a-c)(e+f)

Result:
C[0][0] = P5+P4-P2+P6 | C[0][1] = P1+P2
C[1][0] = P3+P4 | C[1][1] = P5+P1-P3-P7
πŸ“Š Finding Max-Min (D&C):
Naive: 2(n-1) comparisons β†’ Too many!
D&C: 3n/2 - 2 comparisons β†’ Much better!

Example: n=9 β†’ Naive = 16 comparisons, D&C = 12 comparisons

πŸ’° Greedy Algorithms

πŸ“– What is Greedy?
Make locally optimal choice at each step β†’ hope for globally optimal. Never reconsiders past choices.

When it works: Greedy Choice Property + Optimal Substructure
When it FAILS: 0/1 Knapsack, TSP (use DP instead!)
πŸŽ’ Fractional Knapsack:
1. Calculate profit/weight ratio for each item
2. Sort by ratio (descending)
3. Take items greedily β†’ fraction of last item if needed
Time: O(n log n)

⚠️ IMPORTANT: Fractional β†’ Greedy works | 0/1 β†’ Greedy FAILS (use DP!)
βœ… MST β€” Prim's vs Kruskal's (Table Trick):
             | Prim's                     | Kruskal's
Approach     | Vertex-based (grow MST)    | Edge-based (pick edges)
Uses         | Min-Heap / Priority Queue  | Union-Find / Disjoint Set
Better for   | Dense graphs               | Sparse graphs
Time         | O(VΒ²) or O(E log V)        | O(E log E)
Start        | Any single vertex          | Sorted edge list
πŸ—œοΈ Huffman Coding β€” Quick Steps:

1. Merge two smallest frequency nodes β†’ repeat
2. Left branch = 0, Right branch = 1
3. Frequent chars β†’ short codes, Rare β†’ long codes
4. Prefix-free (no code is prefix of another)
Time: O(n log n)

Exam Tip: Draw tree step-by-step + write final codes table!
πŸ“‚ Optimal Merge Patterns:
β€’ Always merge two smallest files first
β€’ Same structure as Huffman tree!
β€’ Total cost = sum of all intermediate merge costs
β€’ Wrong approach: Merging largest first β†’ much higher cost!

πŸ“Š Chapter 2 β€” Dynamic Programming

πŸ“– What is DP?
Solve overlapping subproblems once β†’ store results β†’ reuse them. Never recompute!

Two Requirements:
1. Optimal Substructure: Optimal solution uses optimal sub-solutions
2. Overlapping Subproblems: Same subproblems solved multiple times

Remember: Top-Down = Recursion + Cache | Bottom-Up = Iterative + Table
πŸ”₯ ALL DP FORMULAS (Memorize These!):

0/1 Knapsack:
K[i][w] = max( K[i-1][w] , p[i] + K[i-1][w-w[i]] )

LCS:
Match β†’ L[i][j] = L[i-1][j-1] + 1
No match β†’ L[i][j] = max(L[i-1][j], L[i][j-1])

Matrix Chain:
M[i][j] = min{ M[i][k] + M[k+1][j] + p[i-1]Γ—p[k]Γ—p[j] }

TSP (Held-Karp):
g(i,S) = min{ c(i,j) + g(j, S-{j}) }

Coin Change:
dp[i] = min( dp[i - coin[j]] + 1 )

Subset Sum:
dp[i][j] = dp[i-1][j] OR dp[i-1][j - set[i]]
πŸ’‘ 0/1 Knapsack β€” Solved Pattern:
Q: p=[1,2,5,6], w=[2,3,4,5], W=8
Answer: Build (n+1)Γ—(W+1) table. Fill row by row.
Result: K[4][8] = 8. Traceback β†’ Items 2 & 4 selected (profit = 2+6 = 8)

Exam Tip: Always show FULL table + traceback to find selected items!
πŸ’‘ LCS β€” Solved Pattern:
Q: X="ABCBDAB", Y="BDCAB"
Answer: Build table. Match β†’ diagonal+1. No match β†’ max(up, left).
L[7][5] = 4. Traceback diagonally β†’ LCS = "BCAB"

Exam Tip: Traceback: Match = go diagonal, No match = go to larger of up/left!
⚠️ D&C vs DP (Never Confuse!):
             | Divide & Conquer        | Dynamic Programming
Subproblems  | Independent             | Overlapping
Approach     | Top-Down (recursive)    | Bottom-Up (tabular)
Storage      | None needed             | Table/Cache
Redundancy   | May recompute           | Never recomputes
Examples     | Merge Sort, Quick Sort  | Knapsack, LCS, TSP

πŸ”™ Chapter 3 β€” Backtracking & Branch-Bound

πŸ“– What is Backtracking?
Build solution step-by-step β†’ if constraint violated β†’ undo last step (backtrack) β†’ try next option.

Key Terms:
β€’ State Space Tree = all possible paths
β€’ Pruning = cutting invalid branches
β€’ Promising node = might lead to solution
β€’ Non-promising = violates constraint β†’ backtrack!
πŸ‘Έ N-Queens β€” Quick Points:
Constraints: x[i] β‰  x[j] (diff columns) AND |x[i]-x[j]| β‰  |i-j| (diff diagonals)

Important Facts:
β€’ 4-Queens β†’ 2 solutions: (2,4,1,3) & (3,1,4,2)
β€’ 8-Queens β†’ 92 solutions
β€’ Place 1 queen per row, try each column
β€’ Worst case: O(N!)

Exam Tip: Draw the 4Γ—4 board for both solutions!
βž• Sum of Subsets β€” Quick Points:
β€’ Include/Exclude decision at each element
β€’ Prune if: sum + remaining < target (can't reach)
β€’ Prune if: sum + next element > target (will exceed)
β€’ Sort array first for better pruning!

πŸ”„ Hamiltonian Cycle:
β€’ Visit every vertex exactly once & return to start
β€’ NP-Complete β€” no polynomial solution
β€’ Check: adjacent? not visited? last connects to first?
⚠️ Backtracking vs Branch & Bound (Never Confuse!):
             | Backtracking                   | Branch & Bound
Problem Type | Decision (feasibility)         | Optimization (min/max)
Traversal    | DFS only                       | BFS, DFS, or LC (Best-First)
Pruning By   | Constraints                    | Bounds (UB/LB)
Goal         | Find any/all valid solutions   | Find optimal solution
πŸ”€ B&B Key Terms:
β€’ Live Node: Children not fully explored yet
β€’ E-Node: Currently being expanded
β€’ Dead Node: Pruned (bound worse than current best)
β€’ 0/1 Knapsack B&B: UB = current profit + fractional remaining
β€’ TSP B&B: Row/Column reduction for lower bound

🎯 Last Minute Exam Strategy

❌ TOP 5 Mistakes Students Make:
1. Quick Sort: Saying it's always O(n log n) β€” it's O(nΒ²) worst case!
2. 0/1 Knapsack: Using Greedy β€” Greedy only works for Fractional!
3. D&C vs DP: Confusing which is which β€” D&C = independent, DP = overlapping
4. Huffman: Not showing step-by-step tree building
5. TSP/Knapsack: Not showing complete DP table in numericals
βœ… Numerical Questions Strategy:
β€’ D&C (Sort): Show divide tree + merge steps
β€’ Greedy (Knapsack): Show ratio β†’ sort β†’ selection table
β€’ DP (Knapsack/LCS): Show FULL table + traceback
β€’ Backtracking (N-Queens): Draw state space tree + board
β€’ Always mention Time & Space complexity at the end!
πŸ”₯ MUST REVISE (High Priority):

1. Quick Sort partition with numerical
2. Merge Sort divide + merge trace
3. Strassen's 7 formulas (P1-P7)
4. Fractional Knapsack numerical
5. Prim's & Kruskal's on same graph
6. Huffman tree construction
7. 0/1 Knapsack DP table + traceback
8. LCS DP table + traceback
9. Matrix Chain Multiplication
10. 4-Queens all solutions
11. D&C vs DP comparison table
12. Backtracking vs B&B comparison table

Practice NOW:
βœ“ Solve one Knapsack DP from memory
βœ“ Write Strassen's formulas without looking
βœ“ Draw 4-Queens both solutions
πŸ’‘ Which Technique? (Decision Guide):

Independent subproblems β†’ Divide & Conquer
Overlapping subproblems + optimization β†’ Dynamic Programming
Local best = Global best β†’ Greedy
Find valid solutions (constraints) β†’ Backtracking
Find optimal + can compute bounds β†’ Branch & Bound
📝 Important Questions β€” Exam-Focused
About This Section:
Contains 22 important questions (15 Γ— 2M + 7 Γ— 5M) covering all chapters. Click any question to see the complete answer. All code is in C++.

Section A β€” 2 Marks Questions (15)

2M PYQ 25-26
Q1. Explain Hamiltonian cycles with examples.
Answer:

Path visiting each vertex exactly once and returning to start. Example: Triangle A→B→C→A or Square 1→2→3→4→1. NP-Complete. Used in TSP.
2M PYQ 25-26
Q2. Identify the complexity of Binary Search algorithm in Best, Average and Worst Case.
Answer:

β€’ Best Case: O(1) β€” Element found at middle position
β€’ Average Case: O(log n) β€” Element found after several divisions
β€’ Worst Case: O(log n) β€” Element not found or at deepest level

Each comparison eliminates half the search space, so maximum comparisons = logβ‚‚(n)
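A minimal iterative sketch showing where each case comes from (the function name is mine; not required for a 2-mark answer, just for clarity):

```cpp
// Iterative binary search on a sorted array: returns the index of key,
// or -1 if absent. Each iteration halves the search space -> O(log n).
int binarySearch(const int a[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;       // avoids overflow of low+high
        if (a[mid] == key) return mid;          // best case: found at middle
        else if (a[mid] < key) low = mid + 1;   // search right half
        else high = mid - 1;                    // search left half
    }
    return -1;                                  // worst case: not present
}
```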
2M PYQ 25-26
Q3. Explain how data compression is achieved using Huffman codes.
Answer:

Assigns variable-length codes based on frequency: frequent chars β†’ shorter codes, rare β†’ longer codes. "MISSISSIPPI" takes about 21 bits with Huffman codes vs 88 bits fixed-length (β‰ˆ76% compression). Algorithm: merge minimum-frequency nodes, assign 0/1 to edges.
2M PYQ 25-26
Q4. Discuss the significance of pruning in the branch and bound method for TSP.
Answer:

Eliminates non-promising branches using bounds, reducing the search from O(n!) to practical levels. If a node's lower bound β‰₯ the current best solution, prune that branch; in practice this removes the large majority of branches. Example: if a partial tour's cost bound β‰₯ the best known tour, stop exploring that path.
2M PYQ 25-26
Q5. Identify the difference between "Divide & Conquer" and Dynamic Programming.
Answer:

D&C: Independent subproblems, top-down, no storage, may recompute. Example: Merge Sort, Quick Sort.
DP: Overlapping subproblems, bottom-up, stores results (memoization), never recomputes, faster. Example: Knapsack, Fibonacci.
2M
Q6. What is Divide and Conquer? Write its general recurrence relation.
Answer:

Divides problem into smaller subproblems, solves recursively, combines results. Recurrence: T(n) = aT(n/b) + f(n) where a=subproblems, n/b=size, f(n)=divide/combine cost. Examples: Merge Sort, Quick Sort, Binary Search.
2M
Q7. Differentiate between Divide & Conquer and Dynamic Programming.
Answer:

D&C: Independent subproblems, top-down, no caching. DP: Overlapping subproblems, bottom-up, stores results. D&C recomputes, DP avoids recomputation. D&C examples: Merge Sort; DP examples: Knapsack.
2M
Q8. What is the Greedy method? State its properties.
Answer:

Makes locally optimal choices at each step. Properties: (1) Greedy Choice Property - local optimal leads to global optimal, (2) Optimal Substructure - optimal solution contains optimal subproblems. Examples: Huffman, Prim's, Kruskal's, Fractional Knapsack.
2M
Q9. What is Strassen's Matrix Multiplication? How is it better than the traditional method?
Answer:

Traditional: 8 multiplications β†’ O(nΒ³). Strassen: 7 multiplications β†’ O(n²·⁸¹). Reduces multiplications from 8 to 7 using clever combinations. Better for large matrix sizes but higher constant factors.
2M
Q10. What is 0/1 Knapsack problem? Write its recurrence relation.
Answer:

Select items with max profit where total weight ≀ W (no fractions). Recurrence: K[i][w] = max(K[i-1][w], p[i] + K[i-1][w-w[i]]) if w[i]≀w, else K[i-1][w]. Time: O(nW), Space: O(nW).
2M
Q11. What is Backtracking? How does it differ from Branch & Bound?
Answer:

Backtracking: Solves feasibility (yes/no), uses DFS, explores one path, backtracks. Branch & Bound: Solves optimization (min/max), uses BFS/Best-First, explores all branches with bounds. Examples: Backtracking=N-Queens, B&B=TSP.
2M
Q12. What is Huffman Coding? What type of code does it produce?
Answer:

Greedy algorithm for lossless compression. Assigns variable-length codes based on frequency. Produces prefix-free code (no code is prefix of another) ensuring unique decodability. Algorithm: merge min frequency nodes, assign 0/1 to edges. Time: O(n log n).
2M
Q13. Write the recurrence for Matrix Chain Multiplication with meaning of each term.
Answer:

M[i][j] = min(M[i][k] + M[k+1][j] + p[i-1]*p[k]*p[j]) for i≀k<j. M[i][j]=min scalar multiplications for Ai×…×Aj. k=split point. p[i-1]*p[k]*p[j]=merge cost. Base: M[i][i]=0. Time: O(nΒ³).
2M
Q14. What is Optimal Merge Pattern? Which strategy does it use?
Answer:

Merges n sorted files with minimum cost. Greedy strategy: always merge two smallest files. Example: {2,3,4} β†’ merge 2+3=5 (cost 5) β†’ merge 5+4 (cost 9), total=14. Uses min-heap. Time: O(n log n).
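The greedy merge with a min-heap can be sketched as follows (the function name is mine):

```cpp
#include <queue>
#include <vector>
using namespace std;

// Optimal Merge Pattern: repeatedly merge the two smallest files;
// a merge of sizes a and b costs a + b. A min-heap keeps the smallest on top.
int optimalMergeCost(const vector<int>& sizes) {
    priority_queue<int, vector<int>, greater<int>> pq(sizes.begin(), sizes.end());
    int total = 0;
    while (pq.size() > 1) {
        int a = pq.top(); pq.pop();
        int b = pq.top(); pq.pop();
        total += a + b;                          // cost of this merge
        pq.push(a + b);                          // merged file goes back
    }
    return total;
}
```

For the example in the answer, {2, 3, 4} gives 5 + 9 = 14.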
2M
Q15. What is the Principle of Optimality in Dynamic Programming?
Answer:

Optimal solution contains optimal solutions to subproblems (Bellman). If sequence of decisions is optimal, every subsequence is also optimal. Necessary for DP. Combined with overlapping subproblems = DP basis. Example: Shortest path sub-paths are shortest.

Section B β€” 5 Marks Questions (7)

5M PYQ 25-26
Q1. Demonstrate Kruskal's algorithm and find the minimum cost spanning tree of following graph using Kruskal's algorithm.
Answer:

Kruskal's Algorithm finds Minimum Spanning Tree (MST) by sorting edges by weight and adding them if they don't form a cycle (using Union-Find).

Algorithm Steps:
1. Sort all edges in ascending order of weight
2. Initialize each vertex as separate component
3. For each edge (u,v) in sorted order:
  β€’ If u and v are in different components, add edge to MST and union components
  β€’ If u and v are in same component, skip (would form cycle)
4. Repeat until MST has (V-1) edges

Data Structure: Union-Find (Disjoint Set Union)
Time Complexity: O(E log E) where E = number of edges

Example Graph:
Edges (with weights): (A-B:2), (A-C:3), (B-C:1), (B-D:6), (C-D:4)

Sorted edges: B-C:1, A-B:2, A-C:3, C-D:4, B-D:6

MST Construction:
1. Add B-C:1 (MST={B-C}, weight=1)
2. Add A-B:2 (MST={B-C, A-B}, weight=3)
3. Skip A-C:3 (would create cycle with A-B-C)
4. Add C-D:4 (MST={B-C, A-B, C-D}, weight=7)
5. Skip B-D:6 (would create cycle)

Final MST weight = 7 with edges: {B-C, A-B, C-D}
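A compact sketch of the algorithm with Union-Find (the names DSU/kruskalMST are mine; edges are stored as (weight, u, v) tuples so plain sorting orders them by weight, and only path compression is shown):

```cpp
#include <vector>
#include <algorithm>
#include <tuple>
using namespace std;

// Union-Find (Disjoint Set Union) with path compression.
struct DSU {
    vector<int> parent;
    DSU(int n) : parent(n) { for (int i = 0; i < n; i++) parent[i] = i; }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;               // same component: would form a cycle
        parent[a] = b;
        return true;
    }
};

// Kruskal: sort edges ascending, add each edge that joins two components.
int kruskalMST(int V, vector<tuple<int,int,int>> edges) {  // (weight, u, v)
    sort(edges.begin(), edges.end());
    DSU dsu(V);
    int total = 0, used = 0;
    for (auto& [w, u, v] : edges) {
        if (dsu.unite(u, v)) { total += w; used++; }
        if (used == V - 1) break;               // MST complete
    }
    return total;
}
```

On the example graph (A..D as 0..3) this returns the same MST weight, 7.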
5M PYQ 25-26
Q2. Illustrate the Subset Sum problem by applying it to the array [3, 34, 4, 12, 5, 2] with target sum of 9.
Answer:

Subset Sum Problem: Find if any subset of given array elements sums to target value. This is a classic DP problem with YES/NO answer.

Given: Array = [3, 34, 4, 12, 5, 2], Target = 9

Solution using Dynamic Programming:
Create DP table where dp[i][j] = TRUE if sum j can be achieved using first i elements

DP Table Construction:
  i\j    0   3   4   5   7   9
  0      T   F   F   F   F   F
  3      T   T   F   F   F   F
  34     T   T   F   F   F   F
  4      T   T   T   F   T   F
  12     T   T   T   F   T   F
  5      T   T   T   T   T   T
  2      T   T   T   T   T   T

(Only the sums relevant here are shown as columns. After item 4, sum 7 = 3+4 becomes reachable; after item 5, sum 9 = 4+5 becomes reachable.)

Answer: YES, sum of 9 can be formed
Subset found: {3, 4, 2} or {4, 5}

Recurrence: dp[i][j] = dp[i-1][j] OR dp[i-1][j-arr[i]]
Time: O(n Γ— sum), Space: O(n Γ— sum)
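The same recurrence can be coded with a space-optimized 1-D table instead of the 2-D table shown above (a sketch; iterating j downward is what enforces 0/1 use of each element, and the name subsetSum is mine):

```cpp
#include <vector>
using namespace std;

// Bottom-up DP for Subset Sum: dp[j] = true if some subset of the elements
// processed so far sums to exactly j.
bool subsetSum(const vector<int>& arr, int target) {
    vector<bool> dp(target + 1, false);
    dp[0] = true;                                // empty subset sums to 0
    for (int x : arr)
        for (int j = target; j >= x; j--)        // downward: each element used once
            if (dp[j - x]) dp[j] = true;
    return dp[target];
}
```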
5M
Q3. Sort [50, 70, 60, 90, 40, 80, 10, 20, 30] using Quick Sort. Show partitioning at each step. Write C++ code.
Answer:

Array: [50, 70, 60, 90, 40, 80, 10, 20, 30]
Pivot = 50 (first element)

Partition Step:
i scans leftβ†’right for element > pivot, j scans rightβ†’left for element ≀ pivot

Initial: [50, 70, 60, 90, 40, 80, 10, 20, 30]   i=1, j=8
i stops at 70 (i=1), j stops at 30 (j=8) β†’ swap β†’ [50, 30, 60, 90, 40, 80, 10, 20, 70]
i stops at 60 (i=2), j stops at 20 (j=7) β†’ swap β†’ [50, 30, 20, 90, 40, 80, 10, 60, 70]
i stops at 90 (i=3), j stops at 10 (j=6) β†’ swap β†’ [50, 30, 20, 10, 40, 80, 90, 60, 70]
i stops at 80 (i=5), j stops at 40 (j=4) β†’ i>j, STOP
Swap pivot (50) with arr[j=4] β†’ [40, 30, 20, 10, 50, 80, 90, 60, 70]

Pivot 50 is at correct position (index 4)
Left subarray: [40, 30, 20, 10] β€” recursively Quick Sort
Right subarray: [80, 90, 60, 70] β€” recursively Quick Sort

Final sorted: [10, 20, 30, 40, 50, 60, 70, 80, 90]

C++ Code:
#include <iostream>
using namespace std;

int partition(int arr[], int low, int high) {
    int pivot = arr[low];
    int i = low + 1, j = high;
    while (i <= j) {
        while (i <= high && arr[i] <= pivot) i++;
        while (arr[j] > pivot) j--;
        if (i < j) swap(arr[i], arr[j]);
    }
    swap(arr[low], arr[j]);
    return j;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

int main() {
    int arr[] = {50, 70, 60, 90, 40, 80, 10, 20, 30};
    int n = 9;
    quickSort(arr, 0, n - 1);
    for (int i = 0; i < n; i++) cout << arr[i] << " ";
    // Output: 10 20 30 40 50 60 70 80 90
}
Time: Best/Avg = O(n log n), Worst = O(nΒ²)  |  Space: O(log n) recursive stack
5M
Q12. Solve 0/1 Knapsack using DP: Profits = [1, 2, 5, 6], Weights = [2, 3, 4, 5], W = 8. Show DP table and traceback. Write C++ code.
Answer:

Given: n=4, W=8, Profits = {1, 2, 5, 6}, Weights = {2, 3, 4, 5}

Recurrence:
K[i][w] = max(K[i-1][w], p[i] + K[i-1][w-w[i]]) if w[i] ≀ w
K[i][w] = K[i-1][w] if w[i] > w

DP Table:
  i\w            0  1  2  3  4  5  6  7  8
  0              0  0  0  0  0  0  0  0  0
  1 (w=2,p=1)    0  0  1  1  1  1  1  1  1
  2 (w=3,p=2)    0  0  1  2  2  3  3  3  3
  3 (w=4,p=5)    0  0  1  2  5  5  6  7  7
  4 (w=5,p=6)    0  0  1  2  5  6  6  7  8

Max Profit = K[4][8] = 8

Traceback (finding selected items):
K[4][8]=8 β‰  K[3][8]=7 β†’ Item 4 selected (w=5, p=6), remaining w = 8 - 5 = 3
K[3][3]=2 = K[2][3]=2 β†’ Item 3 NOT selected
K[2][3]=2 β‰  K[1][3]=1 β†’ Item 2 selected (w=3, p=2), remaining w = 3 - 3 = 0
Remaining capacity = 0 β†’ Item 1 NOT selected

Selected Items: {2, 4}, Total Profit = 2 + 6 = 8, Total Weight = 3 + 5 = 8

C++ Code:
#include <iostream>
#include <vector>
using namespace std;

int knapsack(int W, int wt[], int val[], int n) {
    // vector table instead of a variable-length array (which is non-standard C++)
    vector<vector<int>> K(n + 1, vector<int>(W + 1, 0));
    for (int i = 1; i <= n; i++) {
        for (int w = 0; w <= W; w++) {
            if (wt[i-1] <= w)
                K[i][w] = max(val[i-1] + K[i-1][w - wt[i-1]], K[i-1][w]);
            else
                K[i][w] = K[i-1][w];
        }
    }
    return K[n][W];
}

int main() {
    int val[] = {1, 2, 5, 6};
    int wt[] = {2, 3, 4, 5};
    int W = 8, n = 4;
    cout << "Max Profit: " << knapsack(W, wt, val, n);
    // Output: Max Profit: 8
}
Time: O(nW)  |  Space: O(nW)
5M
Q13. Find LCS of "ABCBDAB" and "BDCAB" using DP. Show complete DP table and traceback. Write C++ code.
Answer:

X = "ABCBDAB" (m=7), Y = "BDCAB" (n=5)

Recurrence:
If X[i] == Y[j]: L[i][j] = L[i-1][j-1] + 1
Else: L[i][j] = max(L[i-1][j], L[i][j-1])

DP Table:
       j=0  B  D  C  A  B
  i=0    0  0  0  0  0  0
  A      0  0  0  0  1  1
  B      0  1  1  1  1  2
  C      0  1  1  2  2  2
  B      0  1  1  2  2  3
  D      0  1  2  2  2  3
  A      0  1  2  2  3  3
  B      0  1  2  2  3  4

LCS Length = L[7][5] = 4

Traceback (bottom-right to top-left):
L[7][5]: X[7]=B, Y[5]=B β†’ Match! Add 'B', go diagonal β†’ L[6][4]
L[6][4]: X[6]=A, Y[4]=A β†’ Match! Add 'A', go diagonal β†’ L[5][3]
L[5][3]: X[5]=D, Y[3]=C β†’ No match. L[4][3]=2 β‰₯ L[5][2]=2 (tie β†’ prefer up) β†’ go up β†’ L[4][3]
L[4][3]: X[4]=B, Y[3]=C β†’ No match. L[3][3]=2 > L[4][2]=1 β†’ go up β†’ L[3][3]
L[3][3]: X[3]=C, Y[3]=C β†’ Match! Add 'C', go diagonal β†’ L[2][2]
L[2][2]: X[2]=B, Y[2]=D β†’ No match. L[1][2]=0, L[2][1]=1 β†’ go left β†’ L[2][1]
L[2][1]: X[2]=B, Y[1]=B β†’ Match! Add 'B', go diagonal β†’ L[1][0]
L[1][0] = 0, done.

LCS = "BCAB" (characters are collected back-to-front during traceback, then read forward)

C++ Code:
#include <iostream>
#include <cstring>
#include <string>
#include <vector>
using namespace std;

void lcs(const char X[], const char Y[], int m, int n) {
    vector<vector<int>> L(m + 1, vector<int>(n + 1, 0));
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (X[i-1] == Y[j-1]) L[i][j] = L[i-1][j-1] + 1;
            else L[i][j] = max(L[i-1][j], L[i][j-1]);
        }
    }
    cout << "LCS Length: " << L[m][n] << endl;
    // Traceback to find the actual LCS (ties prefer "up", matching the trace above)
    string result(L[m][n], ' ');
    int idx = L[m][n], i = m, j = n;
    while (i > 0 && j > 0) {
        if (X[i-1] == Y[j-1]) { result[--idx] = X[i-1]; i--; j--; }
        else if (L[i-1][j] >= L[i][j-1]) i--;
        else j--;
    }
    cout << "LCS: " << result;
}

int main() {
    char X[] = "ABCBDAB", Y[] = "BDCAB";
    lcs(X, Y, strlen(X), strlen(Y));
    // Output: LCS Length: 4, LCS: BCAB
}
Time: O(mn)  |  Space: O(mn)
5M
Q14. Solve 4-Queens problem using Backtracking. Show the state space tree and all solutions.
Answer:

Problem: Place 4 queens on a 4Γ—4 chessboard so that no two queens attack each other (same row, column, or diagonal).

Constraints:
β€’ x[i] β‰  x[j] (no two queens in same column)
β€’ |x[i] βˆ’ x[j]| β‰  |i βˆ’ j| (no two queens on same diagonal)

State Space Tree (DFS with Backtracking):
Row 1: Try Q1 at col 1 β†’ x[1]=1
  Row 2: col 1 βœ— (same col), col 2 βœ— (diagonal), col 3 βœ“ β†’ x[2]=3
    Row 3: all cols fail βœ— β†’ BACKTRACK to Row 2
  Row 2: col 4 βœ“ β†’ x[2]=4
    Row 3: col 2 βœ“ β†’ x[3]=2
      Row 4: all cols fail βœ— β†’ BACKTRACK
Row 1: Try Q1 at col 2 β†’ x[1]=2
  Row 2: col 4 βœ“ β†’ x[2]=4
    Row 3: col 1 βœ“ β†’ x[3]=1
      Row 4: col 3 βœ“ β†’ x[4]=3
      SOLUTION 1 found: (2, 4, 1, 3)

Similarly, SOLUTION 2: (3, 1, 4, 2)

Board Representations:
Solution 1: (2,4,1,3)        Solution 2: (3,1,4,2)

     1  2  3  4                  1  2  3  4
  1  .  Q  .  .               1  .  .  Q  .
  2  .  .  .  Q               2  Q  .  .  .
  3  Q  .  .  .               3  .  .  .  Q
  4  .  .  Q  .               4  .  Q  .  .
C++ Code:
#include <iostream>
#include <cmath>
using namespace std;

int x[20];
int n = 4, solutions = 0;   // renamed from 'count' to avoid clashing with std::count

bool canPlace(int row, int col) {
    for (int i = 1; i < row; i++)
        if (x[i] == col || abs(x[i] - col) == abs(i - row))
            return false;
    return true;
}

void nQueens(int row) {
    if (row > n) {
        solutions++;
        cout << "Solution " << solutions << ": ";
        for (int i = 1; i <= n; i++) cout << x[i] << " ";
        cout << endl;
        return;
    }
    for (int col = 1; col <= n; col++) {
        if (canPlace(row, col)) {
            x[row] = col;
            nQueens(row + 1);
        }
    }
}

int main() {
    nQueens(1);
    // Output: Solution 1: 2 4 1 3
    //         Solution 2: 3 1 4 2
}
Time: O(n!) in worst case  |  Total solutions for 4-Queens: 2
5M
Q15. Solve Fractional Knapsack: Profits = [10, 5, 15, 7, 6, 18, 3], Weights = [2, 3, 5, 7, 1, 4, 1], W = 15. Write C++ code.
Answer:

Step 1: Calculate Profit/Weight ratio for each item:
  Item  Profit  Weight  p/w Ratio
   1      10      2       5.0
   2       5      3       1.67
   3      15      5       3.0
   4       7      7       1.0
   5       6      1       6.0
   6      18      4       4.5
   7       3      1       3.0

Step 2: Sort by ratio (descending):
Item 5 (6.0) β†’ Item 1 (5.0) β†’ Item 6 (4.5) β†’ Item 3 (3.0) β†’ Item 7 (3.0) β†’ Item 2 (1.67) β†’ Item 4 (1.0)

Step 3: Greedy selection (W = 15):
  Pick  Item  Weight  Fraction  Profit Added      Remaining W
   1     5      1     1 (full)        6               14
   2     1      2     1 (full)       10               12
   3     6      4     1 (full)       18                8
   4     3      5     1 (full)       15                3
   5     7      1     1 (full)        3                2
   6     2      3     2/3       5Γ—(2/3) = 3.33         0

Total Profit = 6 + 10 + 18 + 15 + 3 + 3.33 = 55.33

C++ Code:
#include <iostream>
#include <algorithm>
using namespace std;

struct Item { int profit, weight; double ratio; };

bool compare(Item a, Item b) { return a.ratio > b.ratio; }

double fractionalKnapsack(Item items[], int n, int W) {
    for (int i = 0; i < n; i++)
        items[i].ratio = (double)items[i].profit / items[i].weight;
    sort(items, items + n, compare);
    double totalProfit = 0;
    int remaining = W;
    for (int i = 0; i < n && remaining > 0; i++) {
        if (items[i].weight <= remaining) {
            totalProfit += items[i].profit;
            remaining -= items[i].weight;
        } else {
            totalProfit += items[i].profit * ((double)remaining / items[i].weight);
            remaining = 0;
        }
    }
    return totalProfit;
}

int main() {
    Item items[] = {{10,2}, {5,3}, {15,5}, {7,7}, {6,1}, {18,4}, {3,1}};
    int n = 7, W = 15;
    cout << "Max Profit: " << fractionalKnapsack(items, n, W);
    // Output: Max Profit: 55.3333
}
Time: O(n log n) for sorting  |  Space: O(1)
MST-2 Proof
All Java, OS and Software Engineering questions in MST-2 were covered by these notes.

So read the Full Notes first, then do the Quick Revision, and finish with the Important Questions.