Unit IV: Testing
Fundamentals of Testing
Introduction to Software Testing
Software Testing is a critical phase in the software development life cycle that involves executing a program or application with the intent of finding software bugs or defects. It is a process used to identify the correctness, completeness, and quality of developed software.
- Definition:
- Testing is the process of evaluating a system or its components to determine whether it satisfies specified requirements.
- It involves the execution of a software component or system component to evaluate one or more properties of interest.
Purpose of Testing:
Defect Identification:
- Detect and fix errors that were made during the development phases.
- Ensure that the software product is defect-free and reliable.
Verification and Validation:
- Verification: Confirming that the software meets its specified requirements (Are we building the product right?).
- Validation: Ensuring the software meets the user's needs and expectations (Are we building the right product?).
Quality Assurance:
- Improve the overall quality of the software product.
- Deliver a product that is fit for use and satisfies customer requirements.
Objectives of Testing
Verification and Validation:
Verification:
- Involves reviews, inspections, and walkthroughs.
- Checks if the software conforms to specifications and design documents.
Validation:
- Ensures the final product meets the user's needs.
- Involves running the software and checking outputs against expected results.
Defect Prevention:
- Early Detection:
- Identifying defects early in the development process to reduce cost and effort in fixing them.
- Using methodologies like Test-Driven Development (TDD).
Quality Assurance:
- Process Improvement:
- Implementing testing strategies that improve the development process.
- Adhering to standards and best practices.
Reliability Estimation:
- Assessing Performance:
- Evaluating the system's performance under different conditions.
- Stress testing, load testing, and performance testing.
Principles of Testing
Understanding the fundamental principles of testing enhances the effectiveness and efficiency of the testing process.
Testing Shows Presence of Defects
- Concept:
- Testing can demonstrate that defects are present, but not that no defects exist.
- It reduces the probability of undiscovered defects but cannot guarantee their absence.
Exhaustive Testing is Impossible
- Explanation:
- Testing all possible inputs and scenarios is not feasible except for trivial cases.
- Resources should be focused on the most critical areas.
Early Testing
- Strategy:
- Testing activities should begin at the earliest stages of the software development life cycle (SDLC).
- Early testing reduces the cost of fixing defects.
Defect Clustering
- Observation:
- A small number of modules contain most of the defects.
- Focus testing efforts on these critical areas.
Pesticide Paradox
- Phenomenon:
- If the same test cases are repeated over and over, they eventually stop finding new defects.
- Test cases need to be regularly reviewed and updated.
Testing is Context Dependent
- Importance:
- Testing approach depends on the context of the software (e.g., safety-critical systems require rigorous testing).
Absence-of-Errors Fallacy
- Warning:
- A system may be free of defects but still fail to meet user expectations or business needs.
- Testing should ensure that the software fulfills its intended purpose.
Black Box Testing Techniques
Definition
Black Box Testing is a software testing method where the internal structure, design, or implementation of the item being tested is not known to the tester. The focus is on validating the functionality according to the requirements.
- Characteristics:
- Tests are based on requirements and functionality.
- Testers are unaware of the internal workings of the application.
Techniques
Several techniques are employed in black box testing to create effective test cases.
Equivalence Partitioning
Equivalence Partitioning divides input data into partitions of equivalent data from which test cases can be derived.
Process:
- Identify Input Domain:
- Determine the range of inputs expected by the software.
- Create Equivalence Classes:
- Partition inputs into valid and invalid classes such that the system is expected to handle every input within a class in the same way.
- Design Test Cases:
- Select representative values from each class.
Example:
- For an input field that accepts numbers 1-100:
- Valid Equivalence Class: 1-100
- Invalid Equivalence Classes: Values less than 1, values greater than 100, non-numeric inputs.
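A minimal sketch of how the 1-100 example could be automated with Python's unittest; validate_number is a hypothetical function standing in for the system under test, and one representative value is drawn from each partition:

```python
import unittest

def validate_number(value):
    """Hypothetical system under test: accepts integers 1-100."""
    if not isinstance(value, int):
        return False
    return 1 <= value <= 100

class EquivalencePartitionTests(unittest.TestCase):
    def test_valid_class(self):
        # One representative value from the valid partition 1-100.
        self.assertTrue(validate_number(50))

    def test_invalid_class_below_range(self):
        # Representative of the partition "values less than 1".
        self.assertFalse(validate_number(-5))

    def test_invalid_class_above_range(self):
        # Representative of the partition "values greater than 100".
        self.assertFalse(validate_number(150))

    def test_invalid_class_non_numeric(self):
        # Representative of the partition "non-numeric inputs".
        self.assertFalse(validate_number("abc"))

if __name__ == "__main__":
    unittest.main()
```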
Boundary Value Analysis
Boundary Value Analysis focuses on the values at the boundaries rather than those within the ranges.
Principle:
- Defects are more likely to occur at the boundaries of input ranges.
Test Cases:
- Just below the minimum (invalid)
- Minimum value
- Just above the minimum
- Nominal (middle) value
- Just below the maximum
- Maximum value
- Just above the maximum (invalid)
Example:
- For an input field accepting 1-10:
- Test inputs: 0, 1, 5, 10, 11
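The same example expressed as a small, self-contained Python check; accepts is a hypothetical validation function, and each boundary input is paired with the result a correct implementation should return:

```python
def accepts(value):
    """Hypothetical input check for a field that accepts 1-10."""
    return 1 <= value <= 10

# Boundary value analysis for the 1-10 range.
boundary_cases = [
    (0, False),   # just below the minimum
    (1, True),    # minimum value
    (5, True),    # nominal (middle) value
    (10, True),   # maximum value
    (11, False),  # just above the maximum
]

for value, expected in boundary_cases:
    assert accepts(value) == expected, f"Boundary case failed for {value}"
print("All boundary cases passed")
```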
Decision Table Testing
Decision Table Testing uses tabular representations to capture the different combinations of inputs and their corresponding system behaviors.
Components:
- Conditions: Input variables that can affect the output.
- Actions: The expected system behaviors or outputs.
- Rules: Combinations of conditions and their corresponding actions.
Steps:
- Identify all possible conditions.
- Determine all possible actions.
- Create a table mapping conditions to actions.
- Design test cases for each rule.
Example:
- A loan approval system where conditions are "Credit Score" and "Income Level."
- Actions are "Approve Loan," "Reject Loan," or "Request Additional Information."
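The loan example can be sketched as executable checks. The rules below are illustrative assumptions only, with credit score and income simplified to high/low flags so that each decision-table rule maps to exactly one test:

```python
def loan_decision(credit_score_high, income_high):
    """Hypothetical loan rules derived from a decision table."""
    if credit_score_high and income_high:
        return "Approve Loan"
    if credit_score_high or income_high:
        return "Request Additional Information"
    return "Reject Loan"

# Each rule in the decision table becomes one test case:
# (credit score high?, income high?) -> expected action
decision_table = [
    ((True,  True),  "Approve Loan"),
    ((True,  False), "Request Additional Information"),
    ((False, True),  "Request Additional Information"),
    ((False, False), "Reject Loan"),
]

for (credit, income), expected in decision_table:
    assert loan_decision(credit, income) == expected
print("All decision-table rules covered")
```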
State Transition Testing
State Transition Testing is used to test different states of the system and the transitions between those states based on events.
Components:
- States: Different modes in which the system can exist.
- Events: Triggers that cause transitions.
- Actions: Activities that result from transitions.
Application:
- Useful for systems where behavior changes based on the current state (e.g., ATMs, vending machines).
Example:
- A user account system where states are "Logged Out," "Logged In," and "Locked."
- Events include "Enter Correct Password," "Enter Incorrect Password," "Password Reset."
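A minimal Python model of the account example; the lock-after-three-failures rule is an illustrative assumption. Each test drives the model with an event and checks the resulting state:

```python
class AccountStateMachine:
    """Hypothetical model of the account states described above."""
    MAX_FAILURES = 3

    def __init__(self):
        self.state = "Logged Out"
        self.failures = 0

    def enter_correct_password(self):
        if self.state == "Logged Out":
            self.state = "Logged In"
            self.failures = 0

    def enter_incorrect_password(self):
        if self.state == "Logged Out":
            self.failures += 1
            if self.failures >= self.MAX_FAILURES:
                self.state = "Locked"

    def password_reset(self):
        if self.state == "Locked":
            self.state = "Logged Out"
            self.failures = 0

# State transition tests: one check per transition of interest.
account = AccountStateMachine()
account.enter_correct_password()
assert account.state == "Logged In"

account = AccountStateMachine()
for _ in range(3):
    account.enter_incorrect_password()
assert account.state == "Locked"

account.password_reset()
assert account.state == "Logged Out"
print("All exercised transitions reach the expected states")
```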
Use Case Testing
Use Case Testing derives test cases based on use cases that describe system interactions from the user's perspective.
Advantages:
- Ensures that the system meets user requirements.
- Covers end-to-end transactions.
Process:
- Identify use cases from requirements.
- For each use case, define the normal flow and alternative flows.
- Design test cases to cover all flows.
Example:
- An online shopping use case where the user searches for a product, adds it to the cart, and checks out.
Advantages of Black Box Testing
- User Perspective:
- Tests the system from the end-user's point of view.
- Tester Independence:
- Testers remain unbiased because they have no knowledge of the internal code structure.
- Detect Missing Functions:
- Helps identify discrepancies in expected functionality.
White Box Testing Techniques
Definition
White Box Testing, also known as clear box or structural testing, involves testing the internal structures or workings of an application, as opposed to its functionality (i.e., black box testing).
- Characteristics:
- The tester has knowledge of the internal workings.
- Focuses on code structure, internal design, and data flow.
Techniques
Various techniques help ensure thorough testing of the code's logical paths and structures.
Statement Coverage
Statement Coverage aims to execute all the executable statements in the source code at least once.
Objective:
- Ensure that every line of code is tested.
Measure:
- (Number of statements executed / Total number of statements) * 100%
Benefits:
- Identifies statements that are not executed, possibly indicating dead code.
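A small sketch of the measure using a hypothetical classify function: with only the first assertion, three of its four statements execute (75% statement coverage); the second assertion brings it to 100%. In practice, a tool such as coverage.py reports these percentages automatically.

```python
def classify(n):
    result = "non-negative"      # statement 1
    if n < 0:                    # statement 2 (decision)
        result = "negative"      # statement 3
    return result                # statement 4

# A single test with n >= 0 executes statements 1, 2 and 4 only:
# 3 of 4 statements -> 75% statement coverage.
assert classify(5) == "non-negative"

# Adding a test with n < 0 also executes statement 3,
# bringing statement coverage to 4/4 = 100%.
assert classify(-2) == "negative"
```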
Branch Coverage (Decision Coverage)
Branch Coverage ensures that all possible branches (true and false conditions) from each decision point are executed.
Objective:
- Validate that all decision outcomes (if-else, switch cases) are tested.
Measure:
- (Number of decision outcomes executed / Total number of decision outcomes) * 100%
Example:
- For an if-else condition, test cases should cover both the true and false paths.
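The hypothetical apply_discount function below illustrates why branch coverage is stronger than statement coverage: one test can execute every statement while covering only one of the two decision outcomes.

```python
def apply_discount(price, is_member):
    if is_member:              # decision with two outcomes
        price = price * 0.9    # true branch
    return price               # always executed

# A single test with is_member=True executes every statement,
# but only the True outcome of the decision (50% branch coverage).
assert apply_discount(100, True) == 90.0

# A second test exercises the False outcome, reaching
# 2/2 = 100% branch (decision) coverage.
assert apply_discount(100, False) == 100
```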
Path Coverage
Path Coverage involves testing all possible execution paths through the code.
Objective:
- Execute every unique path in the program.
Challenge:
- The number of paths can be exponential, making it impractical for complex programs.
Approach:
- Focus on critical or high-risk paths.
Condition Coverage
Condition Coverage tests each boolean expression in the code to ensure that it evaluates to both true and false.
Types:
- Simple Condition Coverage:
- Each condition in a decision takes on all possible outcomes.
- Multiple Condition Coverage:
- All possible combinations of conditions are tested.
- Condition/Decision Coverage:
- Combines condition coverage and decision coverage.
Objective:
- Detect errors in complex conditional statements.
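A small illustration with a hypothetical two-condition decision: the first three assertions give each individual condition both a true and a false evaluation (condition coverage), and the fourth completes all combinations (multiple condition coverage).

```python
def can_checkout(in_stock, logged_in):
    return in_stock and logged_in   # decision with two conditions

# Condition coverage: each condition evaluates to both True and False.
assert can_checkout(True, True) is True     # in_stock=T, logged_in=T
assert can_checkout(False, True) is False   # in_stock=F
assert can_checkout(True, False) is False   # logged_in=F

# Multiple condition coverage additionally requires the
# remaining combination.
assert can_checkout(False, False) is False
```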
Loop Testing
Loop Testing focuses on validating loops in the code to ensure they operate correctly under different conditions.
Considerations:
- Simple Loops:
- Test zero iterations, one iteration, typical number of iterations, and maximum iterations.
- Nested Loops:
- Start with the innermost loop and work outward.
- Concatenated Loops:
- Test loops in sequence.
Example:
- For a loop that should execute 1 to N times:
- Test with N=0, N=1, N=typical value, N=maximum value.
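A sketch of simple-loop testing against a hypothetical summing function, exercising zero, one, a typical number, and a large number of iterations:

```python
def total(values):
    """Hypothetical loop under test: sums a list of numbers."""
    result = 0
    for v in values:
        result += v
    return result

assert total([]) == 0                     # zero iterations
assert total([7]) == 7                    # one iteration
assert total([1, 2, 3, 4]) == 10          # typical number of iterations
assert total(list(range(10_000))) == sum(range(10_000))  # many iterations
print("Loop behaves correctly for all iteration counts")
```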
Advantages of White Box Testing
- Thoroughness:
- Allows for detailed examination of internal logic and structures.
- Optimization:
- Helps discover hidden errors and inefficiencies in code.
- Security:
- Identifies vulnerabilities and potential security breaches.
Levels of Testing
Software testing is performed at various levels during the development process to ensure that all components function correctly both individually and collectively.
Unit Testing
Unit Testing involves testing individual units or components of the software in isolation.
Purpose:
- Validate that each unit performs as intended.
Performed By:
- Usually developers who wrote the code.
Tools:
- JUnit for Java
- NUnit for .NET
- PyUnit for Python
Approach:
- Write test cases for each function or method.
- Mock external dependencies.
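A minimal illustration using Python's unittest and unittest.mock; get_display_name and its repository interface are hypothetical, and the mock isolates the unit from its external dependency:

```python
import unittest
from unittest.mock import Mock

def get_display_name(user_id, repository):
    """Hypothetical unit under test: formats a user's display name."""
    user = repository.find(user_id)
    return f"{user['first']} {user['last']}".title()

class GetDisplayNameTest(unittest.TestCase):
    def test_formats_name_from_repository(self):
        # Mock the external dependency (e.g. a database repository)
        # so the unit is tested in isolation.
        repo = Mock()
        repo.find.return_value = {"first": "ada", "last": "lovelace"}

        self.assertEqual(get_display_name(42, repo), "Ada Lovelace")
        repo.find.assert_called_once_with(42)

if __name__ == "__main__":
    unittest.main()
```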
Integration Testing
Integration Testing focuses on verifying the interactions between integrated units or components.
- Objective:
- Expose defects in the interfaces and interactions between integrated components or systems.
Approaches:
Big Bang Integration
- Method:
- Combines all components at once and tests them together.
- Advantages:
- Simple and straightforward.
- Disadvantages:
- Difficult to isolate defects.
- Requires that all components are ready.
Incremental Integration
Method:
- Components are integrated and tested one at a time until the entire system is integrated.
Types:
Top-Down Integration:
- Begin with high-level modules and integrate lower-level modules step by step.
- Stubs may be used to simulate lower modules.
Bottom-Up Integration:
- Start with lower-level modules and integrate upwards.
- Drivers may be used to simulate higher modules.
Sandwich/Hybrid Integration:
- Combines both top-down and bottom-up approaches.
Advantages:
- Easier defect isolation.
- Continuous testing of components.
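A sketch of top-down integration in Python: OrderService (a hypothetical higher-level module) is integrated and tested against a stub standing in for a payment gateway that is not yet available.

```python
class PaymentGatewayStub:
    """Stub simulating a lower-level module that is not yet integrated."""
    def charge(self, amount):
        # Always succeed so the higher-level module can be exercised.
        return {"status": "success", "amount": amount}

class OrderService:
    """Higher-level module under test in top-down integration."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "success"

# Integrate OrderService with the stub and verify the interaction
# across the interface before the real gateway is ready.
service = OrderService(PaymentGatewayStub())
assert service.place_order(25.0) is True
print("OrderService integrates correctly with the stubbed gateway")
```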
System Testing
System Testing involves testing the complete integrated system to verify that it meets specified requirements.
Objective:
- Evaluate the system's compliance with the functional and non-functional requirements.
Types:
- Functional Testing:
- Verifies functionalities described in requirements.
- Non-Functional Testing:
- Includes performance testing, security testing, usability testing, etc.
Performed By:
- Independent testing team.
Acceptance Testing
Acceptance Testing is the final level of testing before the system goes live.
- Purpose:
- Validate the end-to-end business flow.
- Ensure the system is ready for deployment.
Types:
1. User Acceptance Testing (UAT)
- Conducted By:
- End-users or clients.
- Focus:
- Validating that the system meets their needs and is acceptable for use.
2. Operational Acceptance Testing (OAT)
- Conducted By:
- System administrators or operations team.
- Focus:
- Ensuring system stability in the production environment.
- Testing backup/restore, disaster recovery, maintenance tasks.
Other Levels
Regression Testing
- Purpose:
- Confirm that recent program or code changes have not adversely affected existing features.
- Approach:
- Re-execute test cases that were previously run against the software.
Alpha and Beta Testing
Alpha Testing
- Performed By:
- Internal staff under controlled conditions.
- Purpose:
- Identify bugs before releasing to real users.
Beta Testing
- Performed By:
- Actual users in a real-world environment.
- Purpose:
- Gather feedback on product usage and discover defects missed during earlier tests.
Test Cases
Definition of Test Case
A Test Case is a set of conditions, inputs, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Components of a Test Case
Test Case ID
- Unique identifier for reference.
- Example: TC_Login_01
Test Description
- Brief explanation of the test case purpose.
- Example: Test login functionality with valid credentials.
Preconditions
- Any requirements that must be met before executing the test.
- Example: User account must exist.
Test Steps
- Detailed, step-by-step instructions to perform the test.
- Example:
- Navigate to the login page.
- Enter valid username.
- Enter valid password.
- Click the "Login" button.
Test Data
- Input values or data to be used in the test.
- Example: Username: user@example.com, Password: SecurePass123
Expected Result
- The expected outcome if the software behaves correctly.
- Example: User is redirected to the dashboard page.
Actual Result
- The actual outcome observed when the test is executed.
- Example: User receives an error message.
Postconditions
- The state of the system after test execution.
- Example: User is logged in and session is active.
Status
- Indicates whether the test passed, failed, or was blocked.
- Example: Pass/Fail
Comments
- Additional observations or remarks.
- Example: Login failed due to server timeout.
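As an illustration only, the components above could be captured in a structured record; the dataclass and field names below are assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Illustrative record holding the test case components listed above."""
    test_case_id: str
    description: str
    preconditions: str
    steps: list
    test_data: dict
    expected_result: str
    actual_result: str = ""
    postconditions: str = ""
    status: str = "Not Run"
    comments: str = ""

login_test = TestCase(
    test_case_id="TC_Login_01",
    description="Test login functionality with valid credentials.",
    preconditions="User account must exist.",
    steps=[
        "Navigate to the login page.",
        "Enter valid username.",
        "Enter valid password.",
        'Click the "Login" button.',
    ],
    test_data={"username": "user@example.com", "password": "SecurePass123"},
    expected_result="User is redirected to the dashboard page.",
)
print(login_test.test_case_id, "-", login_test.status)
```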
Writing Effective Test Cases
Clarity and Conciseness
- Use clear and unambiguous language.
- Keep steps simple and to the point.
Specificity
- Define specific inputs and expected results.
- Avoid general statements.
Traceability
- Link test cases to specific requirements or user stories.
- Helps ensure coverage and facilitates impact analysis.
Reusability
- Design test cases that can be used across multiple cycles or versions.
- Modularize test steps where possible.
Maintainability
- Keep test cases updated with changes in requirements or applications.
- Regularly review and revise test cases.
Test Case Management
Test Plan
- Definition:
- A document detailing the scope, approach, resources, and schedule of intended testing activities.
- Components:
- Test objectives, test scope, resources, schedule, deliverables.
Test Suites
- Definition:
- A collection of test cases intended to test a specific feature or functionality.
- Purpose:
- Organize test cases for efficient execution.
Test Management Tools
- Purpose:
- Facilitate the creation, execution, and tracking of test cases and test plans.
- Examples:
- TestRail: Comprehensive test case management.
- HP ALM (Application Lifecycle Management): Integrated tool for requirements, testing, and defects.
- Jira with Zephyr or Xray Add-ons: Issue tracking and test case management.
- Microsoft Azure DevOps: Provides test planning and execution.
Introduction to Selenium
Features of Selenium
Selenium is a widely-used open-source framework for automating web browser interactions.
Key Features:
Cross-Browser Compatibility:
- Works with major browsers like Chrome, Firefox, Safari, Edge.
Multi-Language Support:
- Supports scripting in languages such as Java, C#, Python, Ruby, JavaScript, and more.
Support for Various Operating Systems:
- Compatible with Windows, macOS, Linux.
Integration Capabilities:
- Can be integrated with tools like TestNG, JUnit, Maven for test management and reporting.
Community Support:
- Large user base and active community contributing to its continuous improvement.
Versions of Selenium
Selenium IDE (Integrated Development Environment)
Description:
- A browser plugin for Chrome and Firefox that allows recording and playback of user interactions with the browser.
Features:
- Record and Playback:
- Easy creation of test cases through recording actions.
- Simple Learning Curve:
- Suitable for beginners to learn basics of web automation.
Limitations:
- Best suited for simple scenarios.
- Lacks the flexibility of programming language support.
Selenium WebDriver
Description:
- A programming interface that allows more advanced and flexible testing by directly communicating with the web browser.
Features:
- Programming Language Support:
- Write tests in preferred languages.
- Direct Communication:
- Interacts with the browser natively through browser-specific drivers, which makes execution fast.
- Supports Dynamic Web Pages:
- Handles elements that change without page reload.
Usage:
- Preferred for creating robust, scalable test suites.
- Suitable for complex test scenarios and integration with frameworks.
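A minimal WebDriver sketch in Python; the URL and element IDs are hypothetical, and a locally installed Chrome with a compatible driver is assumed:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a browser session (assumes Chrome and a matching driver
# are available on the machine).
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical URL

    # Locate elements and interact with them; the element IDs
    # are assumptions for illustration only.
    driver.find_element(By.ID, "username").send_keys("user@example.com")
    driver.find_element(By.ID, "password").send_keys("SecurePass123")
    driver.find_element(By.ID, "login-button").click()

    # Verify the expected outcome of the test.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```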
Selenium Grid
Description:
- A tool used together with Selenium WebDriver to run parallel tests across different machines and browsers.
Features:
- Parallel Execution:
- Speeds up test execution by running tests concurrently.
- Distributed Testing:
- Execute tests on multiple environments simultaneously.
- Scalability:
- Supports large test suites requiring extensive resources.
Record and Playback with Selenium IDE
Recording Tests
Process:
- Start the Selenium IDE plugin.
- Navigate through the application while Selenium records actions.
- Actions like clicking, typing, and navigation are captured.
Best Practices:
- Use clear, deliberate actions so that interactions are recorded correctly.
- Annotate steps with comments if possible.
Playback Tests
- Process:
- Execute the recorded test cases directly within Selenium IDE.
- Observe the execution and check for errors or failures.
Editing Tests
- Capabilities:
- Modify recorded steps to fine-tune the test.
- Insert commands or adjust parameters.
Exporting Scripts
- Feature:
- Export recorded tests into scripts in various programming languages.
- Benefits:
- Allows further customization and integration with Selenium WebDriver.
- Enables version control and collaboration.
Use Cases of Selenium
Functional Testing:
- Automate testing of user interactions and application functionalities.
Regression Testing:
- Repeatedly run test suites to ensure new changes do not break existing functionality.
Cross-Browser Testing:
- Validate the application across different browsers for compatibility.
Data-Driven Testing:
- Execute tests with multiple sets of input data by integrating with data sources.
Continuous Integration and Delivery (CI/CD):
- Integrate with tools like Jenkins, Travis CI for automated testing in CI/CD pipelines.
Best Practices with Selenium
Use Explicit Waits:
- Purpose:
- Handle asynchronous web elements and synchronization issues.
- Techniques:
- WebDriverWait class to wait for conditions like visibility, clickability.
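A short sketch of an explicit wait using WebDriverWait and expected_conditions; the URL and element ID are assumptions for illustration:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")  # hypothetical URL

    # Explicit wait: block for up to 10 seconds until the element
    # (located by a hypothetical ID) becomes clickable.
    wait = WebDriverWait(driver, 10)
    button = wait.until(EC.element_to_be_clickable((By.ID, "submit")))
    button.click()
finally:
    driver.quit()
```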
Implement Page Object Model (POM):
- Concept:
- A design pattern that creates an object repository for web elements.
- Benefits:
- Enhances test maintenance and reduces code duplication.
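A minimal page-object sketch for a hypothetical login page; the locators are assumptions, and tests interact only with the LoginPage methods rather than repeating locators:

```python
from selenium.webdriver.common.by import By

class LoginPage:
    """Page object: locators and actions for a hypothetical login page."""
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    LOGIN_BUTTON = (By.ID, "login-button")

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.LOGIN_BUTTON).click()

# Tests call LoginPage(driver).login(...), which centralises
# maintenance when the UI changes.
```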
Exception Handling:
- Importance:
- Ensure tests are robust and can handle unexpected states.
- Strategies:
- Use try-catch blocks, custom exception classes.
Modularize Test Code:
- Approach:
- Break down tests into reusable functions or methods.
- Advantage:
- Simplifies maintenance and improves readability.
Use Descriptive Naming:
- Practice:
- Name test methods and variables clearly to reflect their purpose.
Logging and Reporting:
- Integration:
- Use libraries or frameworks to generate reports (e.g., TestNG reports, Allure).
Keep Tests Independent:
- Purpose:
- Ensure tests can run in any order and do not depend on each other's results.
Maintain Clean Test Data:
- Strategy:
- Set up and tear down test data before and after tests to maintain consistency.