Summary
Overview
This course session provides comprehensive, hands-on training on API testing and automation using Postman, with a focus on workflow design, environment management, dynamic variable handling, Collection Runner usage, scheduled runs, monitoring, and report generation via Newman. The instructor demonstrates end-to-end testing workflows for bank account operations, including GET, DELETE, and sampling-based request logic, and guides learners through automating test execution using CSV/JSON data files, CLI tools, and HTML report generation. The session emphasizes best practices for portable scripting, naming conventions, and environment configuration in Postman and PowerShell.
Topics (Timeline)
1. API Workflow Design and Delete Endpoint Implementation [00:00:09 - 00:05:37]
- Clarifies that the DELETE operation in the mock API uses a dynamic endpoint format, `/{id}` appended to the resource path, not a static collection endpoint.
- Emphasizes that deletion must be implemented within the workflow itself, not as a standalone collection action.
- Demonstrates the need to capture and reuse the customer/passbook ID within the workflow to enable deletion (see the sketch after this list).
- Identifies a key error: a direct, standalone DELETE request is unsupported; deletion must follow a retrieve-and-delete sequence within a defined workflow.
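A minimal test-script sketch of that retrieve-and-delete idea, assuming a `{{baseURL}}` variable and a passbook response that carries an `id` field (variable and path names are illustrative, not the course's exact ones):

```javascript
// Test script on the "GET passbook" request: capture the id returned by the
// API so a later DELETE step can reuse it. The name "passbookId" and the
// response shape are assumptions for illustration.
const passbook = pm.response.json();

pm.test("Passbook retrieved", function () {
    pm.response.to.have.status(200);
    pm.expect(passbook.id).to.exist;
});

// Store the id for the workflow's DELETE step, which then targets a dynamic
// URL such as {{baseURL}}/passbooks/{{passbookId}} instead of a static
// collection endpoint.
pm.environment.set("passbookId", passbook.id);
```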
2. Bank Account Closure Workflow Construction [00:05:37 - 00:07:33]
- Builds a workflow for closing a bank account: 1. GET customer → 2. GET passbook → 3. script (variable extraction) → 4. DELETE using the captured ID.
- Confirms environment context must be preserved: the “open bank account” environment is reused for the close workflow.
- Highlights the importance of conditional logic: if GET requests fail, an error must be triggered before proceeding.
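A sketch of that conditional guard as a test script on the "GET customer" step, assuming `postman.setNextRequest(null)` is used to stop the run so the DELETE never fires after a failed lookup:

```javascript
// Test script sketch for the "GET customer" step of the close-account
// workflow: record the failure and halt the run before the DELETE step.
pm.test("Customer lookup succeeded", function () {
    pm.response.to.have.status(200);
});

if (pm.response.code !== 200) {
    // No further requests in this run will execute.
    postman.setNextRequest(null);
}
```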
3. Variable Management and ID Capture [00:07:33 - 00:10:05]
- Demonstrates how to extract and store output values (e.g., passbook ID) as variables within the workflow.
- Identifies a common mistake: variables not being properly captured or reused, leading to null references.
- Shows how to use “get variable” and “set variable” blocks to pass data between steps.
- Notes that variable names must be explicitly defined and referenced in subsequent steps to avoid execution failure.
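A sketch of the "set variable" / "get variable" hand-off using the Postman scripting API (names are illustrative; the null check guards against the mistake described above):

```javascript
// Earlier step (test script): capture the id from the response and store it.
pm.environment.set("passbookId", pm.response.json().id);

// Later step (pre-request script): read the id back and fail fast if it was
// never set, rather than sending a DELETE with a null reference in the URL.
const passbookId = pm.environment.get("passbookId");
if (!passbookId) {
    throw new Error("passbookId was not captured in an earlier step");
}
```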
4. Dynamic ID Generation and Sampling Logic [00:32:34 - 00:36:10]
- Generates a random product ID by multiplying a random value by 100 and applying the ceiling function to ensure an integer result.
- Assigns the generated ID to an environment variable for reuse in subsequent requests.
- Implements 50% sampling logic: if the random value exceeds 50%, the run is stopped with `postman.setNextRequest(null)`; otherwise the request proceeds.
- Uses the product ID to construct a dynamic URL: `{{baseURL}}/product/{{productID}}`.
- Demonstrates how sampling reduces test load while maintaining coverage; a sketch of the pre-request script follows this list.
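A pre-request script sketch of the dynamic-ID and sampling logic above; names follow the `{{baseURL}}/product/{{productID}}` URL, and the exact skip mechanism used in the session may differ:

```javascript
// Pre-request script sketch: random product id plus ~50% sampling.

// Multiply a random value by 100 and take the ceiling to get an integer id.
const productID = Math.ceil(Math.random() * 100);
pm.environment.set("productID", productID);

// Sampling: on roughly half of the runs, stop the remainder of the iteration
// so no further requests are sent.
if (Math.random() > 0.5) {
    postman.setNextRequest(null);
}
// Otherwise the request proceeds to {{baseURL}}/product/{{productID}}.
```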
5. Collection Runner and Data-Driven Testing [00:36:10 - 00:41:08]
- Introduces data-driven testing using CSV and JSON files to feed multiple test cases (e.g., ID 1, 10, 30, etc.).
- Explains how to upload a data file (CSV/JSON) to the Collection Runner and map headers to variables.
- Configures a manual run with 5 iterations and 0.5-second delays between executions.
- Notes that test failures occur when IDs in the data file (e.g., ID 79) do not exist in the target system, returning 404s.
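A sketch of how the data file reaches the tests, assuming the CSV has an `id` column that the request URL references as `{{id}}` (the column name is an assumption):

```javascript
// Test script sketch for a data-driven Collection Runner iteration.
// Example data.csv (headers map to variables):
//   id
//   1
//   10
//   30
//   79
const id = pm.iterationData.get("id");

pm.test("Record " + id + " exists", function () {
    // Ids missing from the target system (e.g., 79) fail here with a 404.
    pm.response.to.have.status(200);
});
```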
6. Scheduled Runs and Monitoring [00:41:08 - 00:47:50]
- Configures scheduled runs (hourly/weekly) to automatically execute test collections.
- Sets up email notifications to alert on test failures or timeouts.
- Introduces Postman Monitor as a continuous, server-side tool for tracking API health, response times, and availability.
- Emphasizes that Monitor is resource-intensive and best used for critical endpoints, not high-volume testing.
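Scheduled runs and Monitors re-execute the collection's test scripts, so the health checks are ordinary assertions; a sketch (the 1000 ms threshold is an arbitrary example, not a figure from the session):

```javascript
// Test script sketch for a monitored endpoint: availability and latency
// checks evaluated on every scheduled run.
pm.test("Endpoint is available", function () {
    pm.response.to.have.status(200);
});

pm.test("Response time is acceptable", function () {
    // 1000 ms is an illustrative threshold, not a value from the course.
    pm.expect(pm.response.responseTime).to.be.below(1000);
});
```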
7. Report Generation with Newman and HTML Export [00:47:50 - 00:55:07]
- Demonstrates generating test reports locally using Newman CLI in PowerShell.
- Installs the required dependencies: `npm install newman newman-reporter-html`.
- Executes the run: `newman run "collection.json" -e "environment.json" -d "data.csv" --reporters html --reporter-html-export "report.html"`.
- Interprets the report output: all 12 iterations failed because the tests expected 100 records while the system returned 200, indicating mismatched data expectations.
- Highlights that reports are temporary unless explicitly exported to a persistent file.
8. Command-Line Best Practices and Naming Conventions [00:55:07 - 01:32:28]
- Reinforces correct Newman syntax: `-d` for the data file, `-e` for the environment, `--reporters html` for the output format.
- Warns against using backslashes (`\`) in file paths in PowerShell/Linux; use forward slashes (`/`) for portability.
- Advises using camelCase or snake_case for collection and file names to avoid CI/CD integration issues.
- Notes that `newman run` is the only essential CLI command; all other functionality (e.g., data files, environments) is passed as parameters.
- Recommends keeping all files (collection, environment, data) in the same working directory to prevent path errors.
Appendix
Key Principles
- Delete operations must be embedded in workflows using dynamic `/{id}` endpoints, not standalone collection actions.
- Variables must be explicitly captured and reused to maintain state across request steps.
- Sampling logic (e.g., 50% request termination) reduces test load while preserving coverage.
- Data-driven testing via CSV/JSON files enables scalable, repeatable test execution without manual input.
Tools Used
- Postman: For building, debugging, and running API collections.
- Postman Collection Runner: For executing data-driven test iterations.
- Postman Monitor: For continuous, scheduled API health monitoring.
- Newman (CLI): For automated, headless test execution and report generation.
- PowerShell: For running Newman commands and managing file paths.
Common Pitfalls
- Using backslashes (`\`) in file paths in PowerShell → use forward slashes (`/`).
- Not capturing IDs as variables → leads to null references in DELETE steps.
- Hardcoding IDs in collections → reduces reusability and portability.
- Ignoring naming conventions → causes CI/CD pipeline failures.
- Assuming direct DELETE endpoints exist → must use workflow-based deletion.
Practice Suggestions
- Rebuild the bank account closure workflow from scratch using only the GET → GET → DELETE pattern.
- Create a CSV file with 10 invalid customer IDs and run it through the Collection Runner to observe 404 failures.
- Generate an HTML report using Newman and verify the file is saved in the correct directory.
- Rename a collection using snake_case (e.g., `close_bank_account.json`) and test execution in a new environment.
- Set up a scheduled Postman Monitor for a critical endpoint and verify email alerts on failure.