Manual testing is performed at different levels to ensure comprehensive software quality. Here’s a detailed explanation of each testing level, its purpose, activities, and real-world applications:
1. Unit Testing
Objective: Verify individual components/modules work correctly in isolation.
Key Characteristics:
- Tests smallest testable parts (functions, methods, classes)
- Usually performed by developers (white-box testing)
- Fastest and most granular level of testing
Testing Process:
- Identify units/modules to test
- Write test cases for each function
- Execute tests and verify outputs
- Fix defects and retest
Documents:
- Unit test cases
- Code coverage reports
Roles:
- Developers (primary)
- Sometimes QA for review
Real-World Example:
Testing a “Calculate Discount” function in Myntra’s pricing module:
def test_calculate_discount():
    assert calculate_discount(1000, 20) == 800  # 20% off ₹1000
    assert calculate_discount(500, 0) == 500    # No discount
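For context, the function under test might look something like the sketch below (the signature, validation, and rounding behaviour are assumptions for illustration, not Myntra's actual pricing code):

def calculate_discount(price, discount_percent):
    # Hypothetical implementation for illustration only; the real pricing
    # module may handle rounding, coupons, and taxes differently.
    if price < 0 or not 0 <= discount_percent <= 100:
        raise ValueError("Invalid price or discount percentage")
    return price - (price * discount_percent / 100)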
2. Integration Testing
Objective: Verify interactions between integrated modules/components.
Types:
- Big Bang: Test all components together at once
- Top-Down: Test from the main module downward, using stubs in place of lower-level modules that are not ready (see the sketch after this list)
- Bottom-Up: Test from sub-modules upward, using drivers in place of the higher-level modules that call them
- Sandwich: Combination of top-down and bottom-up
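For instance, in a top-down approach the cart logic can be exercised before the real Inventory Service exists by wiring in a stub. A minimal Python sketch (the class and method names are hypothetical, not Myntra's actual services):

class StubInventoryService:
    # Stand-in for the real Inventory Service during top-down integration.
    # Hypothetical stub for illustration; it always reports items as in stock.
    def is_in_stock(self, product_id, size):
        return True

class Cart:
    def __init__(self, inventory_service):
        self.inventory = inventory_service
        self.items = []

    def add(self, product_id, size):
        # Integration point under test: Cart -> Inventory Service
        if not self.inventory.is_in_stock(product_id, size):
            raise ValueError("Item is out of stock")
        self.items.append((product_id, size))

def test_add_to_cart_with_stubbed_inventory():
    cart = Cart(StubInventoryService())
    cart.add("MYN-12345", "M")
    assert ("MYN-12345", "M") in cart.items

Once the real Inventory Service is available, the stub is replaced and the same scenario is repeated against the actual integration.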
Testing Process:
- Identify integration points
- Create test scenarios for module interactions
- Execute tests with sample data
- Verify data flow and communication
Documents:
- Integration test plan
- Interface verification checklist
Roles:
- QA Engineers (primary)
- Developers assist
Real-World Example:
Testing Myntra’s “Add to Cart” integration:
- Product Service → Inventory Service → Cart Service
- Verify:
- Product availability check
- Price synchronization
- Cart updates correctly
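A sketch of how this three-service interaction could be verified, using Python's unittest.mock to stand in for the services (the service interfaces, method names, and prices below are assumptions; Myntra's internal APIs are not public):

from unittest.mock import Mock

def add_to_cart(product_id, size, product_svc, inventory_svc, cart_svc):
    # Hypothetical glue code that orchestrates Product -> Inventory -> Cart.
    if not inventory_svc.is_in_stock(product_id, size):
        return {"added": False, "reason": "out_of_stock"}
    price = product_svc.get_price(product_id)
    cart_svc.add_item(product_id, size=size, price=price)
    return {"added": True, "price": price}

def test_add_to_cart_checks_stock_and_syncs_price():
    product_svc = Mock()
    product_svc.get_price.return_value = 1499.0
    inventory_svc = Mock()
    inventory_svc.is_in_stock.return_value = True
    cart_svc = Mock()

    result = add_to_cart("MYN-98765", "L", product_svc, inventory_svc, cart_svc)

    inventory_svc.is_in_stock.assert_called_once_with("MYN-98765", "L")             # availability check
    cart_svc.add_item.assert_called_once_with("MYN-98765", size="L", price=1499.0)  # price sync and cart update
    assert result == {"added": True, "price": 1499.0}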
3. System Testing
Objective: Validate complete end-to-end system functionality.
Types of System Tests:
- Functional: Core feature testing
- Non-Functional: Performance, security, etc.
- Recovery: System failure handling
- Migration: Data transfer testing
Testing Process:
- Prepare full system test environment
- Execute end-to-end user scenarios
- Verify against requirements
- Log and track defects
Documents:
- System test cases
- Test data sheets
- Environment configuration docs
Roles:
- QA Team (primary)
- Business Analysts verify requirements
Real-World Example:
Testing Myntra’s complete order flow:
- Search product → Select size → Add to cart → Checkout → Payment → Order confirmation
- Verify all systems work together (frontend, backend, payment gateway, inventory)
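An end-to-end scenario like this is typically executed manually through the UI, but the same flow can be sketched as a script against a staging API. Everything below (base URL, endpoints, payloads, field names) is an assumption for illustration:

import requests

BASE_URL = "https://staging.example-shop.test/api"  # hypothetical staging environment

def test_end_to_end_order_flow():
    session = requests.Session()

    # Search product -> select size -> add to cart
    results = session.get(f"{BASE_URL}/search", params={"q": "running shoes"}).json()
    product_id = results["items"][0]["id"]
    session.post(f"{BASE_URL}/cart", json={"product_id": product_id, "size": "9"})

    # Checkout -> payment (test gateway) -> order confirmation
    checkout = session.post(f"{BASE_URL}/checkout", json={"payment_method": "test-card"})
    assert checkout.status_code == 200
    order = checkout.json()
    assert order["status"] == "CONFIRMED"                 # order confirmed end to end
    assert order["items"][0]["product_id"] == product_id  # cart and inventory stayed in sync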
4. Acceptance Testing
Objective: Determine if system meets business requirements.
Types:
- UAT (User Acceptance Testing): By end-users
- BAT (Business Acceptance Testing): By business stakeholders
- OAT (Operational Acceptance Testing): By the IT operations team
Testing Process:
- Validate against business requirements
- Use real-world scenarios
- Focus on usability and business flow
- Get sign-off before production
Documents:
- UAT test cases
- Sign-off checklist
- Business requirement validation report
Roles:
- End Users (for UAT)
- Business Analysts
- Product Owners
Real-World Example:
Myntra’s Fashion Stylist feature UAT:
- Actual stylists test recommendation engine
- Verify suggestions match fashion trends
- Check loading time is acceptable
- Business team approves for launch
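UAT itself is manual, but the scenarios and the final sign-off decision can be tracked in a lightweight structure. A sketch only (scenario wording, names, and fields are illustrative; real teams usually use a test-management tool):

from dataclasses import dataclass, field

@dataclass
class UatScenario:
    description: str
    tested_by: str      # e.g. the stylist or business user who ran it
    passed: bool = False
    notes: str = ""

@dataclass
class UatSignOff:
    feature: str
    scenarios: list = field(default_factory=list)

    def ready_for_launch(self):
        # Business sign-off requires every recorded scenario to pass.
        return bool(self.scenarios) and all(s.passed for s in self.scenarios)

signoff = UatSignOff(feature="Fashion Stylist recommendations")
signoff.scenarios.append(UatScenario("Suggestions match current fashion trends", "Stylist A", passed=True))
signoff.scenarios.append(UatScenario("Recommendations load within the agreed time", "Stylist B", passed=True))
print(signoff.ready_for_launch())  # True -> business team can approve the launch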
Comparison Table: Testing Levels
| Level | Scope | Performed By | Artifacts | Example |
|---|---|---|---|---|
| Unit | Single function/module | Developers | Unit test cases | Price calculation function |
| Integration | Module interactions | QA Engineers | Interface test reports | Cart ↔ Inventory sync |
| System | Complete application | QA Team | System test logs | End-to-end order placement |
| Acceptance | Business requirements | End Users | UAT sign-off document | Stylist feature validation |
Best Practices Across All Levels
- Early Testing: Start testing as early as possible
- Traceability: Maintain an RTM (Requirement Traceability Matrix); see the sketch after this list
- Environment: Match production as closely as possible
- Data: Use realistic test data
- Documentation: Maintain detailed test records
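As an illustration of traceability, an RTM can start as a simple mapping from requirement IDs to the test cases that cover them (all IDs and names below are made up):

# Minimal Requirement Traceability Matrix sketch; requirement and test IDs are illustrative only.
rtm = {
    "REQ-101 Apply percentage discount": ["TC-U-001 unit: calculate_discount"],
    "REQ-205 Add item to cart":          ["TC-I-010 integration: add_to_cart", "TC-S-042 system: order flow"],
    "REQ-310 Stylist recommendations":   ["TC-UAT-003 acceptance: stylist scenarios"],
}

# Any requirement with no linked test case is a coverage gap to close.
gaps = [req for req, tests in rtm.items() if not tests]
print("Untested requirements:", gaps)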
Common Mistakes to Avoid
❌ Testing only at system level (missing unit/integration)
❌ Using unrealistic test data
❌ Not involving business users in UAT
❌ Poor defect documentation
Real-World Workflow Example (Myntra New Feature)
- Unit: Developers test new “Size Recommender” algorithm
- Integration: QA tests size recommender ↔ product catalog integration
- System: Full test of size selection → add to cart → checkout flow
- UAT: Actual shoppers validate size recommendations before launch
Conclusion
Understanding these testing levels helps teams implement a structured, comprehensive quality assurance process. Each level verifies software quality from a different perspective, from individual code units to the complete business solution.
Key Insight:
“Just like building a house needs inspections at foundation, framing, and final stages – software needs testing at each development level.”