🧪 Chapter 1 – Software Testing
🧪 SOFTWARE TESTING – MIND MAP
Fundamentals → Objectives, Principles, Verification vs Validation
Strategy → Unit → Integration → System → Acceptance
Validation Testing → Alpha & Beta Testing
System Testing → Performance, Stress, Security, Recovery
Black-Box → Equivalence Partitioning, BVA, Decision Table
White-Box → Statement, Branch, Path Coverage
Basis Path → Cyclomatic Complexity V(G) = E − N + 2P
1.1 Software Testing Fundamentals
📘 Software Testing: The process of executing a program with the intent of FINDING ERRORS. It verifies that the software works correctly and meets specified requirements.
Testing ≠ Debugging:
Testing = finding that something is wrong (WHAT is wrong?)
Debugging = fixing what is wrong (HOW to fix it?)
💡 Real-life Analogy – Car Manufacturing:
Before selling a car, the factory tests: do the brakes work? does the engine start? does the AC cool? is the speedometer accurate? → This is TESTING.
When a brake fails → a mechanic finds and fixes the fault → This is DEBUGGING.
Objectives of Software Testing:
✓ 1. Find Errors: Uncover defects before the software reaches users.
✓ 2. Verify Requirements: Ensure the software does what it was supposed to do.
✓ 3. Build Confidence: Give stakeholders confidence that the software is reliable.
✓ 4. Prevent Future Defects: Improve the process using lessons learned from testing.
✓ 5. Reduce Risk: A bug found early is cheap to fix; a bug found in production is expensive.
7 Key Testing Principles:
1. Testing shows the presence of bugs, NOT their absence: even a 100% test pass doesn't mean bug-free.
2. Exhaustive testing is impossible: you can't test all input combinations → test smartly.
3. Early testing saves cost: a bug fixed in the design phase costs $1; in production, $100.
4. Defect clustering (80/20 Rule): 80% of bugs are found in 20% of the modules.
5. Pesticide Paradox: repeating the same tests won't find new bugs → update test cases!
6. Testing is context-dependent: ATM testing ≠ game testing.
7. Absence-of-errors fallacy: bug-free software that doesn't meet user needs is useless.
Verification vs Validation (PYQ Topic!):
| Aspect | Verification | Validation |
|---|---|---|
| Key Question | "Are we building the product RIGHT?" | "Are we building the RIGHT product?" |
| Focus | Process – follows specifications | Product – meets user needs |
| Activities | Reviews, walkthroughs, inspections | Testing with actual users |
| When | During development (each phase) | End of development (acceptance) |
| Who | Internal team | Customer / end users |
| Example | Code review against design doc | User acceptance testing |
💡 Easy Trick:
Verification = Internal (team checks the process). Validation = External (user checks the product).
"Verify = right process | Validate = right product"
1.2 Strategic Approach to Software Testing
📘 Testing Strategy: A structured plan – what to test, when, how much, and who tests. Software is tested from small units upward to the full system.
📘 Testing Strategy Levels
Unit Testing: Test each module/function alone. The developer tests their own code.
Example: Test that calculateSalary() gives correct output for all salary inputs.
Integration Testing: Test modules combined. Find interface defects between modules.
Example: Login module + database module – does the password pass correctly?
Approaches: Top-Down, Bottom-Up, Big Bang
System Testing: Test the complete integrated system against requirements (black-box).
Example: Entire e-commerce site – browse → add to cart → pay → receipt?
Acceptance Testing: The customer tests the system against their real business needs.
Alpha Testing = at the developer's site | Beta Testing = at the customer's site
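The unit level can be sketched with a tiny example. The calculate_salary() body and signature below are invented for illustration (the text only names the function); the point is that a unit test exercises one function in isolation:

```python
def calculate_salary(base, bonus_rate):
    """Hypothetical unit under test: total pay = base + bonus.
    bonus_rate is a fraction, e.g. 0.10 for a 10% bonus."""
    if base < 0 or bonus_rate < 0:
        raise ValueError("inputs must be non-negative")
    return base + base * bonus_rate

def test_calculate_salary():
    # Normal input, zero bonus, and an invalid input each get a test.
    assert calculate_salary(1000, 0.10) == 1100
    assert calculate_salary(1000, 0) == 1000
    try:
        calculate_salary(-1, 0.10)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_calculate_salary()
print("unit tests passed")
```

In practice such tests would live in a test framework (unittest, pytest) and run automatically after every change.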
1.3 Validation Testing
📘 Validation Testing: Ensures the software meets actual user needs. Uses black-box techniques against requirements. Answers: "Did we build the RIGHT product?"
| Aspect | Alpha Testing | Beta Testing |
|---|---|---|
| Location | Developer's site | Customer's site (real environment) |
| Users | Selected end-users | Real end-users |
| Developer Present? | Yes, observes & records | No |
| Environment | Controlled | Real-world |
| Happens When | Before beta | Before public release |
| Example | Google internal testing | Android Beta Program |
1.4 System Testing Types
📘 System Testing: Testing the complete integrated system against all requirements – both functional and non-functional.
1. Performance Testing: Tests speed & responsiveness under load.
Example: Does the page load under 2 seconds for 10,000 users?
2. Stress Testing: Push the system BEYOND its limits to find the breaking point.
Example: Add users until the server crashes. At what point?
3. Recovery Testing: Does the system recover properly after a crash/failure?
Example: Power cut during a DB write – is the data restored correctly?
4. Security Testing: Verify the system prevents unauthorized access.
Example: Is SQL injection protected against? Are passwords encrypted?
5. Usability Testing: Is the system user-friendly? Real users try tasks.
Example: Can a user check out in under 5 clicks?
6. Compatibility Testing: Works across browsers, OS, devices.
Example: Does it work on Chrome, Firefox, mobile AND desktop?
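The SQL-injection check under security testing can be made concrete. A minimal sketch using Python's built-in sqlite3 module and a hypothetical users table (all names here are invented for illustration), showing why a classic test payload succeeds against string-built SQL but fails against parameter binding:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'h4sh')")

def find_user_unsafe(name):
    # VULNERABLE: attacker input is spliced directly into the SQL string.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # SAFE: parameter binding keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row -> injection succeeded
print(find_user_safe(payload))    # returns [] -> injection blocked
```

A security test suite would run payloads like this against every input field and fail the build if any of them leaks data.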
1.5 Black-Box Testing
📘 Black-Box Testing (Behavioral Testing): Testing WITHOUT knowledge of the internal code. The tester only knows inputs and expected outputs. Tests WHAT the system does, not HOW.
💡 Analogy – TV Remote: Press Volume+ → volume increases. You don't know the electronics inside – just input → expected output. Black-box!
Black-Box Techniques:
1. Equivalence Partitioning: Divide inputs into groups where all values behave the same. Test one value per group.
Age (valid: 18–60): Test 10 (invalid), 35 (valid), 70 (invalid) – only 3 tests needed!
2. Boundary Value Analysis (BVA): Test AT and NEAR boundaries. Most bugs happen at boundaries.
Age (valid: 18–60): Test 17, 18, 19, 59, 60, 61
3. Decision Table Testing: A table of all condition combinations and expected actions.
Loan: Age ≥ 18 AND Salary ≥ 25000 AND Credit ≥ 700 → Approve
4. State Transition Testing: Test system state changes based on events.
ATM: Idle → Card In → PIN entered → Menu → Transaction → Done
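The equivalence-partitioning and BVA examples above can be sketched as a small helper. The particular partition representatives chosen below are one reasonable choice, not the only one:

```python
def age_test_values(low=18, high=60):
    """Pick test inputs for a valid range [low, high] using
    equivalence partitioning (EP) and boundary value analysis (BVA)."""
    # EP: one representative per partition (below / inside / above).
    partitions = [low - 8, (low + high) // 2, high + 10]
    # BVA: values at and immediately adjacent to each boundary.
    boundaries = [low - 1, low, low + 1, high - 1, high, high + 1]
    return partitions, boundaries

ep, bva = age_test_values()
print("EP values: ", ep)    # one value per equivalence class
print("BVA values:", bva)   # [17, 18, 19, 59, 60, 61]
```

Together the two techniques cut an infinite input space down to nine inputs that cover every partition and every boundary of the age rule.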
| Aspect | Black-Box | White-Box |
|---|---|---|
| Code Knowledge | None needed | Full code access |
| Also Called | Behavioral, Functional | Structural, Glass-Box |
| Tests | WHAT the system does | HOW the system works internally |
| Tester Skills | Non-programmer OK | Must know programming |
| Finds | Functional bugs | Logic & path bugs |
1.6 White-Box Testing
📘 White-Box Testing (Structural/Glass-Box Testing): Testing WITH full knowledge of the internal code. Tests cover all paths, branches, and statements. Tests HOW the system works.
💡 Analogy – Car Mechanic: The mechanic opens the hood, sees every part, and tests each component individually with full knowledge of the engine internals.
White-Box Techniques:
1. Statement Coverage: Every line of code is executed at least once. Goal: 100% statement coverage.
2. Branch Coverage (Decision Coverage): Every True/False branch of every decision is tested.
if (age > 18): test with 25 (True) AND 15 (False)
3. Path Coverage: Every possible path through the code is tested. Most thorough, but impractical for large code.
Problem: 10 decisions = 2^10 = 1024 paths!
4. Condition Coverage: Each condition in a decision takes True and False values independently.
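A minimal sketch of branch coverage, using the if (age > 18) example above:

```python
def classify(age):
    if age > 18:          # one decision -> two branches
        return "adult"
    return "minor"

# A single test, classify(25), executes only the True branch; the
# "minor" return is never run. Branch coverage forces one test per
# decision outcome:
assert classify(25) == "adult"   # True branch
assert classify(15) == "minor"   # False branch
print("both branches covered")
```

Coverage tools (e.g. coverage.py for Python) report exactly which branches a test suite missed.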
✓ Advantages: Tests internal logic, finds dead code, good for security testing, optimizes code paths.
✗ Disadvantages: Requires programming skills → expensive. Doesn't test the user's perspective. Can't find missing features.
1.7 Basis Path Testing
📘 Basis Path Testing: A white-box technique by Tom McCabe that uses the Control Flow Graph (CFG) to find the MINIMUM number of test cases needed to cover all independent paths. Uses Cyclomatic Complexity V(G) to calculate this number.
Steps:
Step 1: Draw the Control Flow Graph (CFG) from the code.
Step 2: Calculate Cyclomatic Complexity V(G).
Step 3: Identify independent paths (count = V(G)).
Step 4: Write one test case per independent path.
Cyclomatic Complexity Formulas:
Formula 1: V(G) = E − N + 2P
E = number of edges in the CFG
N = number of nodes in the CFG
P = number of connected components (usually P = 1)
Formula 2 (simpler): V(G) = number of decision nodes (if, while, for, switch) + 1
Formula 3: V(G) = number of regions/closed areas in the CFG + 1
Example: Code has 3 if-statements → V(G) = 3 + 1 = 4
→ Need a minimum of 4 independent test cases!
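The formulas can be checked in a few lines of code. The edge/node counts below are illustrative, not from the text: a CFG for three sequential if-then statements (entry node, then one decision, one statement, and one join node per if) has 10 nodes and 12 edges:

```python
def v_g(edges, nodes, components=1):
    """Cyclomatic complexity, formula 1: V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

def v_g_decisions(decision_nodes):
    """Formula 2: V(G) = number of decision nodes + 1."""
    return decision_nodes + 1

# Both formulas agree for the 3-if example: V(G) = 4.
assert v_g(12, 10) == 4          # 12 - 10 + 2*1 = 4
assert v_g_decisions(3) == 4     # 3 + 1 = 4
print("minimum independent test paths:", v_g(12, 10))
```

Whichever formula is easiest for the given CFG, the result is the same number of independent paths, and hence the same minimum test-case count.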
📘 CFG for Basis Path Testing
📘 Cyclomatic Complexity Scale:
• V(G) = 1–10: Simple, low risk ✓
• V(G) = 11–20: Moderate complexity
• V(G) = 21–50: Complex, high risk ⚠️
• V(G) > 50: Untestable → refactor the code! ✗
✓ Advantages: Guarantees every statement is executed at least once. Minimum test cases = efficient. Provides a measurable coverage metric. Identifies overly complex code.
⭐ Chapter 2 – Quality Management
⭐ QUALITY MANAGEMENT – MIND MAP
Software Quality → Correctness, Reliability, Efficiency, Maintainability
Software Reliability → MTTF, Failure rate, Availability
Software Reviews → Informal, Walkthrough, Inspection
Formal Technical Reviews (FTR) → Roles, Process, Output
Statistical SQA → Defect tracking, Pareto analysis
ISO 9000 → International quality standards
SQA Plan → Document listing QA activities
SEI CMM → 5 Maturity Levels
2.1 Software Quality
📘 Software Quality: The degree to which software possesses a desired combination of attributes – i.e., how well the software meets user requirements, standards, and expectations.
Simple definition: "Does the software do what it's supposed to do – correctly, reliably, and efficiently?"
💡 Analogy – Smartphone Quality:
A high-quality phone: calls work (correctness), battery lasts (reliability), apps open fast (performance), easy to use (usability), drops don't break it (robustness). Same for software!
McCall's Quality Factors (3 Categories):
| Category | Factor | Meaning |
|---|---|---|
| Product Operation (how it runs) | Correctness | Does it do what the user wants? |
| | Reliability | Does it work consistently without failure? |
| | Efficiency | Does it use minimum resources (CPU, RAM)? |
| Product Revision (ease of change) | Maintainability | Easy to find and fix bugs? |
| | Flexibility | Easy to modify/extend features? |
| Product Transition (new environments) | Portability | Works on different OS/hardware? |
| | Interoperability | Works with other systems? |
💡 Mnemonic – "CREMP I" for key quality factors:
Correctness · Reliability · Efficiency · Maintainability · Portability · Interoperability
2.2 Software Reliability
📘 Software Reliability: The probability that software will perform its required functions under stated conditions for a specified period of time WITHOUT FAILURE.
Example: "Our banking app has 99.99% reliability" = it fails only 0.01% of the time.
Key Reliability Measures:
1. MTTF (Mean Time To Failure): The average time the system works correctly before failing. Higher MTTF = more reliable system.
Example: MTTF = 500 hours → the system fails about once every 500 hours.
2. MTTR (Mean Time To Repair): The average time to fix the system after a failure. Lower MTTR = faster recovery.
3. MTBF (Mean Time Between Failures): MTBF = MTTF + MTTR. The time between one failure and the next.
4. Availability: Availability = MTTF / (MTTF + MTTR) × 100%
Example: MTTF = 90 h, MTTR = 10 h → Availability = 90/100 = 90%
5. Failure Rate (λ): λ = 1 / MTTF. The number of failures per unit time.
📘 High-Reliability Systems: Hospitals, banks, and airlines need 99.999% uptime ("Five Nines"). That means at most 5.26 minutes of downtime per YEAR!
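These measures can be computed directly. A minimal sketch using the MTTF = 90 h, MTTR = 10 h example above, plus the five-nines downtime figure:

```python
def reliability_metrics(mttf, mttr):
    """Compute MTBF, availability and failure rate from MTTF/MTTR (hours)."""
    mtbf = mttf + mttr                    # MTBF = MTTF + MTTR
    availability = mttf / mtbf * 100      # percent
    failure_rate = 1 / mttf               # lambda, failures per hour
    return mtbf, availability, failure_rate

mtbf, avail, lam = reliability_metrics(90, 10)
print(mtbf, avail, round(lam, 4))   # 100 90.0 0.0111

# "Five nines": allowed downtime per year at 99.999% availability.
downtime_min = (1 - 0.99999) * 365.25 * 24 * 60
print(round(downtime_min, 2), "minutes/year")   # 5.26
```

Note that availability improves either by raising MTTF (fewer failures) or by lowering MTTR (faster repairs); both levers matter.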
2.3 Software Reviews
📘 Software Reviews: A systematic examination of software artifacts (code, design, requirements) by a team to find errors early – before testing. Reviews are the MOST cost-effective way to find defects.
✓ Why Reviews?
Finding a bug in review = $1 cost | Finding it in testing = $10 | In production = $100+
Reviews can find 60–70% of ALL defects before testing even starts!
Types of Software Reviews:
1. Informal Review:
Ad-hoc review – a developer asks a colleague to look at the code. No formal process, no documentation.
Example: "Hey, can you quickly check my login function?"
Lightweight but catches obvious issues.
2. Walkthrough:
The author presents their work to a small team, "walking" reviewers through the material.
• The author drives the review – explains each part
• Reviewers ask questions and point out problems
• Not adversarial – learning-focused
Example: A developer walks the team through a new module design on a whiteboard.
3. Formal Technical Review (FTR) / Inspection:
The most rigorous type. A structured process with defined roles, checklists, and documentation.
More detail in the next section.
| Type | Formality | Leader | Documentation | Defects Found |
|---|---|---|---|---|
| Informal Review | Lowest | Author | None | Low |
| Walkthrough | Medium | Author | Meeting notes | Moderate |
| FTR/Inspection | Highest | Moderator | Full report | Highest (60–70%) |
2.4 Formal Technical Reviews (FTR)
📘 Formal Technical Review (FTR): A structured, formal meeting where a team (3–5 people) systematically reviews a software artifact using checklists to find defects. Also called an Inspection or Fagan Inspection.
FTR Roles:
1. Author/Producer: The person who created the work being reviewed. Presents the document but does NOT defend it.
2. Moderator: Leads the review meeting. Controls the discussion and ensures the process is followed.
3. Reviewers (2–3): Team members who read the material BEFORE the meeting and prepare a list of issues.
4. Recorder/Scribe: Documents all errors found during the meeting.
FTR Process (Steps): Plan → Prepare → Review meeting → Rework → Follow-up
✓ FTR Guidelines:
• Review the PRODUCT, not the producer (don't blame the person)
• Set an agenda and stick to it. Max 2 hours per session
• Limit to 3–5 reviewers (too many = inefficient)
• Distribute materials BEFORE the meeting
• Record errors – don't debate solutions in the meeting
• Maintain a checklist of common error types
2.5 Statistical SQA (Software Quality Assurance)
📘 SQA (Software Quality Assurance): A set of activities that ensure the software development PROCESS follows defined standards and procedures, and the software PRODUCT meets quality requirements.
SQA vs QC (Quality Control):
QA = process-focused (Are we following the right process?)
QC = product-focused (Does the product have defects?)
📘 Statistical SQA: Uses statistical methods to analyze defects and find their ROOT CAUSES, so the process can be improved systematically.
Statistical SQA Steps:
Step 1 – Collect Defect Data: Record every defect found during testing – what it was, where it was found, its severity.
Step 2 – Categorize Defects: Group defects by type (logic error, UI error, data error, etc.) and by source (requirements, design, code).
Step 3 – Pareto Analysis (80/20 Rule): Find the ~20% of causes responsible for 80% of defects. Focus effort on eliminating these top causes.
Example: 80% of bugs come from 3 specific modules → fix those modules first.
Step 4 – Root Cause Analysis: Dig deep – WHY did these defects occur? Unclear requirements? Lack of code review? Poor design?
Step 5 – Process Improvement: Fix the ROOT CAUSE in the process itself so the same category of defect never recurs.
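Steps 1–3 can be sketched in a few lines. The defect log below is invented sample data; the ranking logic is the Pareto part:

```python
from collections import Counter

# Hypothetical defect log collected in Step 1: (module, defect_type).
defects = [
    ("payments", "logic"), ("payments", "logic"), ("payments", "ui"),
    ("payments", "data"), ("checkout", "logic"), ("checkout", "data"),
    ("checkout", "logic"), ("profile", "ui"), ("search", "logic"),
    ("payments", "logic"),
]

# Step 2: categorize (here, count defects per module).
counts = Counter(module for module, _ in defects)
total = sum(counts.values())

# Step 3: Pareto - walk modules from worst to best until the
# cumulative share reaches ~80% of all defects.
cumulative = 0
top_causes = []
for module, n in counts.most_common():
    cumulative += n
    top_causes.append(module)
    if cumulative / total >= 0.8:
        break

print(top_causes)  # the "vital few" modules to fix first
```

On this sample log, two of the four modules account for 80% of the defects, which is exactly the concentration the 80/20 rule predicts.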
| Aspect | SQA | QC |
|---|---|---|
| Focus | Process | Product |
| Goal | Prevent defects | Find defects |
| When | Throughout development | After the product is built |
| Example | Code review standards | Testing the app |
2.6 ISO 9000 Standards
📘 ISO 9000: A family of international standards from the International Organization for Standardization (ISO) that defines requirements for QUALITY MANAGEMENT SYSTEMS. ISO 9001 is the most common – organizations get certified to prove they follow quality processes.
ISO 9001 for Software: Defines standards for software development processes to ensure consistent quality.
💡 Real-life Analogy:
ISO 9001 certification is like a "quality restaurant certification" – the restaurant doesn't just say "our food is good"; it proves it by following a documented, audited cooking process. ISO says: "We don't just claim we write quality software – we can PROVE our process ensures quality."
Key ISO 9000 Principles:
✓ 1. Customer Focus: All quality efforts target customer satisfaction.
✓ 2. Leadership: Management creates the environment for quality.
✓ 3. Process Approach: Results are achieved more efficiently when activities are managed as processes.
✓ 4. Continual Improvement: The organization constantly improves its overall performance.
✓ 5. Evidence-Based Decision Making: Decisions are based on analysis of data and information.
✓ 6. Relationship Management: Good supplier/partner relationships improve quality.
📘 ISO 9001 Certification Process:
Document processes → Implement → Internal audit → External audit by an ISO auditor → Certification issued → Annual audits to maintain it
2.7 SQA Plan
📘 SQA Plan: A document that describes the quality assurance activities to be performed for a specific software project. It defines WHAT quality checks will happen, WHEN, and WHO is responsible.
SQA Plan Contents:
1. Purpose & Scope: What software is covered? What quality standards
apply?
2. Reference Documents: Standards, guidelines being followed (IEEE, ISO,
etc.)
3. Management: Who is responsible for quality? Organization structure.
4. Documentation: What documents must be created (SRS, SDD, Test Plan)?
5. Standards, Practices, Conventions: Coding standards, naming
conventions.
6. Reviews and Audits: Schedule of FTRs and audits.
7. Testing: Test plan reference, types of testing to be done.
8. Problem Reporting: How defects are reported, tracked, and resolved.
9. Tools and Methodologies: Tools used for code review, testing, etc.
2.8 SEI CMM (Capability Maturity Model)
📘 SEI CMM: The Capability Maturity Model developed by the Software Engineering Institute (SEI) at Carnegie Mellon. A framework that describes the maturity of an organization's software development processes in 5 levels. Higher level = more mature = better-quality software.
💡 Analogy – Student Learning Levels:
Level 1: "I'll study whenever I feel like it" (chaotic)
Level 5: "I have a study system, track results, optimize continuously" (optimized)
CMM measures the same progression for software organizations!
| Level | Name | Key Characteristic | Example |
|---|---|---|---|
| 1 | Initial | Ad-hoc, chaotic. Works by heroic individual effort. | Startup with no process |
| 2 | Repeatable | Basic project management. Past success can be repeated. | Track cost & schedule |
| 3 | Defined | Standard documented process used across all projects. | Company-wide coding standards |
| 4 | Managed | Process measured with metrics. Quantitative control. | Track defects per KLOC |
| 5 | Optimizing | Continuous improvement. Uses data to prevent defects. | TCS, Infosys, Wipro |
💡 Memory trick – "I Remained Doing My Outstanding":
Initial → Repeatable → Defined → Managed → Optimizing
🔧 Chapter 3 – Software Maintenance & Reuse
🔧 MAINTENANCE & REUSE – MIND MAP
Definition → Post-delivery modification of software
Types → Corrective, Adaptive, Perfective, Preventive
Reverse Engineering → Recover design from code
Maintenance Models → Quick-fix, Iterative Enhancement, Boehm's
Reuse Issues → Not-invented-here syndrome, licensing, quality
Reuse Approach → Component libraries, COTS, Open Source
3.1 Definition of Software Maintenance
📘 Software Maintenance: The process of modifying a software system or component AFTER DELIVERY to correct faults, improve performance, adapt to a changed environment, or add new features.
Key fact: Maintenance is the LONGEST and MOST EXPENSIVE phase of the SDLC – typically 60–80% of total software cost is spent on maintenance!
💡 Real-life Analogy – House Maintenance:
After building a house: fix a broken pipe (corrective), renovate the kitchen (perfective), add a new room (adaptive), repaint the walls before they crack (preventive). Software maintenance is the same!
📘 Why Maintenance Is Expensive:
• Understanding someone else's code is hard
• The original developers may have left the company
• Poor documentation makes changes risky
• One change can break other parts of the system
3.2 Types of Software Maintenance
1. Corrective Maintenance (Bug Fixing) – ~20%:
Fixing defects discovered AFTER the software is delivered. Repairing errors in logic, design, or code.
Example: The app crashes when a user enters special characters → the developer fixes the validation bug.
Trigger: A user reports a bug or crash.
2. Adaptive Maintenance (Environment Changes) – ~25%:
Modifying software to work in a CHANGED ENVIRONMENT – a new OS, hardware, database, or laws.
Example: The GST law changed → the accounting software is updated with new tax rates and forms.
Trigger: The external environment changes.
3. Perfective Maintenance (Enhancement) – ~50%:
Adding NEW FEATURES or improving performance based on user feedback. The most common type!
Example: Users want dark mode → the developer adds dark mode to the existing app.
Trigger: Users request new features.
4. Preventive Maintenance (Restructuring) – ~5%:
Improving the software's internal structure to prevent future problems and make it easier to maintain.
Example: Refactoring messy code, adding documentation, improving database indexes.
Trigger: A programmer foresees future problems.
| Type | Trigger | Goal | % of Work |
|---|---|---|---|
| Corrective | Bug found after delivery | Fix defect | ~20% |
| Adaptive | Environment changed | Adapt to new environment | ~25% |
| Perfective | User wants more features | Enhance software | ~50% |
| Preventive | Future risk spotted | Improve structure | ~5% |
💡 Memory Trick – "CAPP":
Corrective · Adaptive · Perfective · Preventive
3.3 Software Reverse Engineering
📘 Software Reverse Engineering: The process of analyzing an existing software system to RECOVER its design, architecture, or requirements – used when the original design documents are lost or unavailable.
Direction: Normal engineering = Requirements → Design → Code
Reverse engineering = Code → Design → Requirements (BACKWARDS!)
💡 Analogy – Reverse Engineering a Recipe:
You eat a delicious cake but don't have the recipe. You analyze the taste, the texture, and the ingredients you can identify, and write down the recipe. That's reverse engineering – figuring out "how it was made" from the end product.
📘 Reverse Engineering Process
✓ When Reverse Engineering Is Needed:
- The original developers left → no one understands the system
- Documentation is lost or outdated
- Legacy systems need modernization
- Security analysis to find vulnerabilities
- Migrating an old system to a new platform
✗ Challenges of Reverse Engineering:
- Complex, poorly written code is very hard to understand
- Time-consuming and expensive
- May raise legal/ethical issues (competitors' software)
- The original design intent cannot always be recovered
3.4 Software Maintenance Models
📘 Maintenance Models: Different approaches/processes for carrying out maintenance activities. Choose the right model based on the type and urgency of the maintenance.
1. Quick-Fix Model:
The simplest model – make the fastest possible fix without worrying about design impact. Emergency patches.
Problem: Quick fixes create "band-aid" solutions that accumulate and make future
maintenance harder.
Use when: Production is down, need immediate fix NOW.
2. Iterative Enhancement Model:
Software is maintained through repeated cycles of small enhancements. Each iteration
adds/fixes something.
Process: Understand → Modify → Test → Release → Repeat
Best for: Systems that need continuous improvement (like websites, mobile apps).
3. Boehm's Model:
Based on economic cost/benefit analysis – only perform maintenance when the benefit justifies the cost.
Factors considered: Number of users affected, severity, cost of fix, risk of new bugs from
change.
Best for: Large enterprise systems where changes are expensive.
4. Reuse-Oriented Model:
During maintenance, replace old custom components with reusable/standard components.
Reduces future maintenance cost and improves reliability.
Example: Replace custom-built authentication with standard OAuth library.
3.5 Basic Issues in Any Reuse Program
📘 Software Reuse: The practice of using existing software components, libraries, frameworks, or designs in new projects instead of building everything from scratch. "Don't reinvent the wheel!"
💡 Analogy – Building with LEGO:
Instead of molding each plastic brick from scratch, LEGO reuses standard bricks across many sets. Software reuse = using proven components (bricks) in new projects.
Issues/Challenges in Software Reuse:
✗ 1. Not-Invented-Here (NIH) Syndrome:
Developers prefer writing their own code rather than using existing code ("I can do it better!").
Solution: Management must encourage and reward reuse. Set reuse targets.
✗ 2. Finding the Right Component:
It is difficult to search large libraries for a component that exactly fits the need.
Solution: Good cataloging, search tools, and classification systems.
✗ 3. Quality Assurance of Components:
How do we know a reused component is reliable and bug-free?
Solution: Maintain certified component libraries with quality ratings.
✗ 4. Legal and Licensing Issues:
Open-source licenses may restrict how code can be reused commercially (GPL, MIT, Apache).
Solution: Legal review of all reused components.
✗ 5. Adaptation Cost:
An existing component may need modification to fit the new context – adaptation can be expensive.
Solution: Build components with reuse in mind (modular, configurable, well-documented).
✗ 6. Maintenance of Reused Components:
When the original component is updated, all systems using it need updating too.
Solution: Versioning and dependency management.
3.6 Reuse Approaches
📘 Reuse Approaches: Different strategies and methods organizations use to achieve software reuse across projects.
1. Component Libraries:
Maintain a repository of reusable, tested, and documented components. Developers search and use components from this library.
Examples: Java class libraries, React component libraries, npm packages.
2. COTS (Commercial Off-The-Shelf Software):
Buy ready-made commercial software instead of building it. Integrate the purchased software into your system.
Examples: Using Microsoft Azure for cloud, Salesforce for CRM, Stripe for payments.
Pros: Faster, proven quality | Cons: Expensive, less control, vendor dependency
3. Open Source Reuse:
Use freely available open-source libraries and frameworks. A huge community maintains quality.
Examples: Using React.js, Spring Boot, or Django instead of building frameworks from scratch.
Pros: Free, community support | Cons: License restrictions, support risks
4. Design Pattern Reuse:
Reuse proven design solutions (not code, but architecture/design ideas).
Examples: Singleton, Factory, Observer patterns – reuse the design template.
5. Product Lines / Frameworks:
Build a common platform/framework once, then develop specific products on top of it.
Example: Google builds the Android platform (reusable); different phone makers build products on it.
| Approach | Cost | Control | Speed | Best For |
|---|---|---|---|---|
| Component Library | Low | High | High | Internal reuse |
| COTS | High | Low | Highest | Standard functions |
| Open Source | Free | Medium | High | Common functionality |
| Design Patterns | Low | High | Medium | Architecture design |
| Product Lines | High initially | High | High later | Family of products |
⚡ Quick Revision – Last-Minute Exam Prep!
📘 How to use: Read this 15 minutes before the exam! It is packed with mnemonics, difference tables, definitions, and the exact tricks you need to score maximum marks.
🧪 Chapter 1 – Software Testing
📘 What is Software Testing?
Executing a program with the INTENT to FIND ERRORS. Goal = find bugs before the customers do.
💡 Testing vs Debugging (100% PYQ):
Testing = finding errors (WHAT is wrong? done by testers)
Debugging = fixing errors (HOW to fix? done by developers)
📘 Verification vs Validation (PYQ!):
Verification = "Building the product RIGHT?" → process-focused, internal team, reviews, walkthroughs
Validation = "Building the RIGHT product?" → product-focused, customers/users, acceptance testing
Trick: Verify = right PROCESS | Validate = right PRODUCT
CYCLOMATIC COMPLEXITY FORMULAS:
V(G) = E − N + 2P (Edges − Nodes + 2 × Connected components)
V(G) = Decision nodes + 1
V(G) = Regions in CFG + 1
SCALE: 1–10 = Simple ✓ | 11–20 = Moderate | 21–50 = High Risk ⚠️ | 50+ = Untestable ✗
TESTING LEVELS (Bottom → Top):
Unit (Dev) → Integration (Dev) → System (Testers) → Acceptance (Users)
✓ 7 Principles Mnemonic – "EE DC PP A":
Exhaustive testing impossible
Early testing saves money
Defect clustering (80/20 rule)
Context-dependent
Pesticide paradox (tests wear out)
Presence of defects, not absence
Absence-of-errors fallacy
📘 Alpha vs Beta Testing (Real Example):
Alpha (laboratory): Internal testing by Google Maps developers inside the Google office before release.
Beta (wild): Giving early access to 10,000 real drivers out in the actual world (no devs present) to find bugs.
⚠️ Debugging Strategies:
1. Brute Force: Memory dumps and printf() tracing – least efficient.
2. Backtracking: Trace backward from where the error was observed.
3. Cause Elimination: Binary-partition the code to isolate the bug (like binary search).
📘 Testing Types Comparison:
| Testing | Also Called/Type | Code Knowledge | Tests/Example |
|---|---|---|---|
| Black-Box | Behavioral/Functional | None needed | Equivalence Partitioning, BVA |
| White-Box | Structural/Glass-Box | Full access | Basis Path, Loop Testing |
| Top-Down | Integration | Stubs needed | Main module tested first |
| Bottom-Up | Integration | Drivers needed | Leaf modules tested first |
⭐ Chapter 2 – Quality Management
📘 SQA vs QC (Never Confuse!):
SQA (Quality Assurance) = process-focused → PREVENT defects (proactive). E.g., setting up code review policies.
QC (Quality Control) = product-focused → DETECT defects (reactive). E.g., executing test cases to find bugs.
RELIABILITY FORMULAS:
MTBF (Mean Time Between Failures) = MTTF + MTTR
Availability = MTTF / MTBF × 100%
Failure Rate λ = 1 / MTTF
FTR Process (Formal Technical Review):
Plan → Prepare → Review (max 2 hrs) → Rework → Follow-up
Statistical SQA:
Collect defect info → Categorize → Pareto (80/20 rule) → Root cause → Improve
✓ McCall's Quality Factors – "CREMP I":
Correctness (Does it do what I want?)
Reliability (Does it do it accurately?)
Efficiency (Will it run on my hardware?)
Maintainability (Can I fix it?)
Portability (Can I run it elsewhere?)
Interoperability (Can it talk to other systems?)
📘 SEI CMM – 5 Levels (Mnemonic: "I Remained Doing My Outstanding"):
1 – Initial (chaotic, ad-hoc, heroics required)
2 – Repeatable (basic project tracking, can repeat past success)
3 – Defined (standardized process across the company)
4 – Managed (quantitative tracking, statistics, predictable)
5 – Optimizing (continuous improvement, piloting new tech)
FTR Roles (Very Important):
| Role | Responsibility in FTR |
|---|---|
| Producer | Author of the code/document being reviewed. |
| Review Leader | Moderates the meeting, enforces the 2-hour max rule. |
| Reviewer | Finds bugs; reviews the product (NOT the person). |
| Recorder | Logs all defects found during the meeting. |
| ISO 9000 | SEI CMM |
|---|---|
| Generic standard for ANY industry | Specific to SOFTWARE engineering |
| Focuses on quality-system audits | Focuses on process maturity & improvement |
| Pass/fail certification | Graded framework (Levels 1 to 5) |
π§ Chapter 3 β Software Maintenance & Reuse
π Key Facts:
β’ Maintenance = 60-80% of TOTAL software cost over its lifetime!
β’ It is the most expensive and longest phase of the SDLC.
β οΈ Why is Maintenance so Costly?
1. Staff Turnover: Original coders leave, new guys cannot understand the
unstructured code.
2. Spaghetti Code: Code becomes complicated without good documentation.
3. Ripple Effect: Fixing one bug causes two new bugs in another module.
π 4 Types of Maintenance (Mnemonic: CAPP):
Corrective (~20%) β Fix bugs identified after delivery (e.g. repairing checkout
glitch)
Adaptive (~25%) β Adapt to environment changes (e.g. updating app to support iOS
18)
Perfective (~50%) β Add new features requested by user (e.g. Dark Mode) - MOST
COMMON!
Preventive (~5%) β Restructure code (Re-engineering) to prevent future bugs.
REVERSE ENGINEERING vs FORWARD ENGINEERING:
Forward: Requirements → Design → Implementation (Code)
*Starting from scratch to build something new.*
Reverse: Implementation (Code) → Design → Requirements
*Analyzing old undocumented legacy code to extract the original design.*
Software Re-engineering: Reverse Engineering + Forward Engineering (take the old app apart, understand it, rebuild it better).
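The reverse step can be partly automated. As a minimal sketch (my own illustration; the `outline` helper and the sample `legacy_code` are hypothetical), Python's `ast` module can recover a rough design outline (which functions exist and what they call) from undocumented source:

```python
import ast

def outline(source: str) -> dict:
    """Reverse-engineering step: recover a rough design outline
    (each function and the calls it makes) from raw source code."""
    design = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            design[node.name] = [
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            ]
    return design

# Undocumented "legacy" code we want to understand:
legacy_code = """
def total(items):
    return sum(items)

def checkout(cart):
    return apply_tax(total(cart))
"""

print(outline(legacy_code))  # {'total': ['sum'], 'checkout': ['apply_tax', 'total']}
```

The recovered call map is a crude design document: it shows `checkout` depends on `total`, which is the kind of structure a maintainer would then redraw as a proper design before forward-engineering a replacement.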
Software Reuse (Pros & Cons):
Advantages: Saves massive development time; increases reliability (the reused code is already tested).
Obstacle 1 (NIH Syndrome): "Not Invented Here" – developers' egos prevent them from trusting others' code.
Obstacle 2: Licensing constraints, finding the right component, adaptation effort.
Lehman's Laws of Software Evolution (Top 3!):
1. Continuing Change: A system must continually adapt or it becomes less
useful.
2. Increasing Complexity: As it changes, its structure degrades. Extra effort is
needed to maintain structure.
3. Declining Quality: Quality will appear to decline unless rigorously maintained.
| Topic | Key Formula/Fact | Mnemonic/Trick |
| --- | --- | --- |
| Cyclomatic Complexity | V(G) = E - N + 2P, or Decisions + 1 | Calculates path complexity |
| Testing Levels | Unit → Integration → System → Acceptance | U I S A |
| Maintenance Types | Corrective ~20%, Perfective ~50% | CAPP |
| CMM Levels | 5 levels: Initial to Optimizing | I Remained Doing My Outstanding |
| McCall's Quality | Product Operations, Revision, Transition | CREMP I |
| Testing vs Debugging | Test = find bug, Debug = fix bug | QA vs Dev |
Important Questions – Expected Exam Pattern
Note: These 15 selected questions represent the exact exam pattern and the most highly-weighted topics for Unit 3.
Section A – 2 Marks Questions (Short Answer)
Q1. What is the fundamental difference between Alpha Testing and
Beta Testing?
Answer:
Alpha Testing: Carried out by the internal team/developers inside the
organization before releasing the software to the public.
Beta Testing: Carried out by a limited number of real users (customers) in a
real-world environment before the final release.
Q2. Define Cyclomatic Complexity.
Answer:
Cyclomatic Complexity is a software metric used to find the logical complexity of a program. It
measures the number of independent paths through the source code. Higher complexity means the
code is harder to test and maintain.
Formula: V(G) = E - N + 2P (Edges - Nodes + 2*Connected Components).
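As a quick sketch (my own example, not from the notes), the formula can be applied directly to a small control-flow graph, and the result matches the Decisions + 1 shortcut:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = E - N + 2P for a control-flow graph."""
    return len(edges) - len(nodes) + 2 * components

# Flow graph of a function with a while loop whose body holds an if/else:
# 1 entry -> 2 loop test -> 3 if test -> 4 or 5 branch -> back to 2 -> 6 exit
nodes = [1, 2, 3, 4, 5, 6]
edges = [(1, 2), (2, 3), (3, 4), (3, 5), (4, 2), (5, 2), (2, 6)]

print(cyclomatic_complexity(edges, nodes))  # 3 = two decisions (while, if) + 1
```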
Q3. What is Regression Testing and why is it important?
Answer:
Regression Testing: Re-testing the software after making changes, fixing bugs,
or adding new features.
Importance: It ensures that the recent code changes have not unintentionally
broken any existing, previously working features.
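A minimal sketch of the idea in Python (all names here are illustrative): keep assertions for behaviour that already works, and re-run them after every change.

```python
def apply_discount(price, percent):
    """Pricing function that is still being maintained."""
    return round(price * (1 - percent / 100), 2)

def regression_suite():
    """Assertions for behaviour that already works.
    Re-run after every bug fix or new feature."""
    assert apply_discount(100.0, 10) == 90.0   # normal case
    assert apply_discount(80.0, 0) == 80.0     # no discount
    assert apply_discount(50.0, 100) == 0.0    # full discount

regression_suite()  # a failure here would mean a change broke existing behaviour
print("regression suite passed")
```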
Q4. Differentiate between Black Box and White Box Testing.
Answer:
Black Box: Tester doesn't know the internal code. Checks only inputs and
outputs (e.g., trying to login with wrong password).
White Box: Tester knows the internal code. Checks loops, conditions, and
line-by-line logic inside the program.
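For example (a hypothetical `grade` function), a black-box tester works purely from the input/output specification, typically probing the boundaries (Boundary Value Analysis):

```python
def grade(score):
    """Spec (all the black-box tester sees): 0-39 -> 'fail',
    40-100 -> 'pass', anything else -> ValueError."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Boundary Value Analysis: test at and around each boundary of the spec,
# without ever reading the implementation above.
assert grade(0) == "fail"     # lowest valid input
assert grade(39) == "fail"    # just below the pass boundary
assert grade(40) == "pass"    # exactly on the boundary
assert grade(100) == "pass"   # highest valid input
for bad in (-1, 101):         # just outside the valid partition
    try:
        grade(bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
print("all boundary cases behave as specified")
```

A white-box tester looking at the same function would instead aim to execute both branches of each `if`, which is exactly what these boundary inputs happen to achieve.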
Q5. What is the difference between Quality Assurance (QA) and
Quality Control (QC)?
Answer:
Quality Assurance (QA): Process-oriented. Focuses on preventing defects by
improving the software development process.
Quality Control (QC): Product-oriented. Focuses on identifying and fixing
defects in the final software product (Testing).
Q6. Define Reliability in Software Engineering.
Answer:
Reliability: The probability that a software system will perform its required
functions without failure for a specified period of time under specific conditions.
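This definition is often quantified with the exponential failure model R(t) = e^(-λt), where λ is the failure rate. That model is a standard textbook assumption, not something stated in these notes:

```python
import math

def reliability(t_hours, failure_rate):
    """R(t) = e^(-lambda * t): probability of zero failures in t hours,
    assuming a constant failure rate lambda (exponential model)."""
    return math.exp(-failure_rate * t_hours)

# A system with 0.001 failures/hour (mean time between failures = 1000 h):
print(round(reliability(100, 0.001), 3))  # 0.905: ~90.5% chance of a failure-free 100 h run
```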
Q7. What is McCall's Quality Model? List its three main
categories.
Answer:
McCall's Model measures software quality based on 11 factors grouped into three categories:
1. Product Revision: Maintainability, Flexibility, Testability.
2. Product Transition: Portability, Reusability, Interoperability.
3. Product Operations: Correctness, Reliability, Efficiency, Integrity,
Usability.
Q8. List the four types of software maintenance.
Answer:
1. Corrective Maintenance: Fixing bugs/errors found by users.
2. Adaptive Maintenance: Adapting software to a new environment (e.g., new OS
version).
3. Perfective Maintenance: Adding new features or improving performance.
4. Preventive Maintenance: Refactoring code to prevent future bugs.
Q9. What is Software Re-engineering?
Answer:
Software Re-engineering: The process of examining and modifying an existing
software system to reconstitute it in a new form, improving its structure while keeping the
original functionality intact.
Q10. State two advantages of Software Reuse.
Answer:
1. Reduces Development Time: Using existing code cuts down the time needed to
write and test from scratch.
2. Increases Reliability: Reused code has already been tested and proven to
work in previous projects, meaning fewer bugs.
Section B – 5 Marks Questions (Long Answer)
Q11. Explain various levels of software testing in detail (Unit,
Integration, System, Acceptance testing).
Answer:
The four major levels of software testing are performed sequentially:
1. Unit Testing: Testing individual components or functions of the software in isolation. Usually done by developers using White Box techniques. Goal: verify that each piece of code works correctly.
2. Integration Testing: Combining the tested unit modules and testing them as a
group. Identifies interface defects between modules as they interact.
3. System Testing: Testing the complete, fully integrated application as a
whole against the functional and non-functional requirements. Takes place in an environment
mimicking production.
4. Acceptance Testing: The final test done by the client/end-user to determine
if the software is ready for release (e.g., Alpha and Beta testing).
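The first two levels can be sketched in a few lines of Python (hypothetical shop-cart functions): unit tests exercise each function alone, and an integration test checks the units working together:

```python
# Unit 1: price of one line item
def item_total(price, qty):
    return price * qty

# Unit 2: whole-cart price, built on top of item_total
def cart_total(cart):
    return sum(item_total(price, qty) for price, qty in cart)

# Unit tests: each function in isolation, with hand-picked inputs
assert item_total(2.5, 4) == 10.0
assert cart_total([]) == 0

# Integration test: the combined units, checking the interface between them
assert cart_total([(2.5, 4), (1.0, 3)]) == 13.0
print("unit and integration checks passed")
```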
Q12. Discuss White Box testing techniques (Basis Path Testing and
Control Structure Testing).
Answer:
White Box testing requires full access and knowledge of the internal source code.
1. Basis Path Testing:
A technique first proposed by Tom McCabe. The tester derives a logical complexity measure
(Cyclomatic Complexity) from the code, and uses it to define the exact number of independent
paths that must be executed to guarantee every statement has been executed at least
once.
2. Control Structure Testing:
Focuses on the logical structures (loops and conditions):
• Condition Testing: Exercises every logical condition in the program.
• Data Flow Testing: Selects test paths according to where variables are defined and used.
• Loop Testing: Focuses on the validity of simple, concatenated, nested, and unstructured loops.
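A worked sketch (my own example): the function below has two decisions, so V(G) = 3, and basis path testing demands one test case per independent path:

```python
def classify(n):
    """Two decisions -> V(G) = 2 + 1 = 3 -> three basis paths to cover."""
    if n < 0:        # decision 1
        return "negative"
    if n == 0:       # decision 2
        return "zero"
    return "positive"

# One test case per independent path guarantees every statement runs at least once:
assert classify(-5) == "negative"  # path 1: decision 1 taken
assert classify(0) == "zero"       # path 2: decision 1 false, decision 2 taken
assert classify(7) == "positive"   # path 3: both decisions false
print("all 3 basis paths executed")
```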
Q13. Explain the SEI Capability Maturity Model (CMM) and describe
its five levels.
Answer:
The SEI CMM is a framework used to assess the maturity and capability of a software
organization's development process.
Level 1: Initial: Chaotic and unorganized. Success depends purely on individual
heroic effort rather than documented processes.
Level 2: Repeatable: Basic project tracking processes are established.
Successes on earlier projects can be repeated.
Level 3: Defined: Processes for both management and engineering are documented,
standardized, and integrated across the whole organization.
Level 4: Managed: Detailed quantitative quality measures are collected. The
process and product quality are statistically understood and controlled.
Level 5: Optimizing: Continuous process improvement is enabled by quantitative
feedback from the process and from piloting innovative ideas.
Q14. Explain the different categories of software maintenance
with examples.
Answer:
Software maintenance happens after the software is deployed.
1. Corrective Maintenance (20%): Fixing bugs discovered by users after the
software is live. (Example: Fixing a broken checkout button).
2. Adaptive Maintenance (25%): Modification of a software product performed
after delivery to keep it usable in a changed or changing environment. (Example: Updating an iOS
app to support the newest iOS version).
3. Perfective Maintenance (50%): Improving performance or maintainability, or
adding new features requested by users. (Example: Adding a 'Dark Mode' or improving database
search speed).
4. Preventive Maintenance (5%): Refactoring code to prevent future bugs or
issues from occurring. (Example: Updating outdated libraries).
Q15. What is Reverse Engineering? How is it different from
Forward Engineering in context of Software Re-engineering?
Answer:
Reverse Engineering: The process of analyzing an existing software system to
identify its components and their interrelationships and create representations of the system in
another form or at a higher level of abstraction. (Going backwards from Source Code to
Design).
Differences:
• Forward Engineering: Starts from Requirements → Design → Implementation (Code). It focuses on building the system from scratch.
• Reverse Engineering: Starts from Implementation (Code) → Design → Requirements. It focuses on understanding how an existing undocumented system works.
Re-engineering Process: Both combined form "Software Re-engineering", where you reverse engineer to understand the old system, and then forward engineer to rebuild it better.