Lesson: Test Plan Generation for Project Business Requirements Using AI

Module Overview

This module covers how to use AI effectively to generate detailed, comprehensive test plans from project requirements. By structuring prompts well, participants can streamline the planning process while ensuring thorough test coverage aligned with the project’s business needs. You can complete this course in ChatGPT, Gemini, or any other AI assistant you prefer.

Key Learning Objectives

  • Define project requirements so AI can interpret them effectively.
  • Specify testing resources, timelines, and tools within the prompt to create structured test plans.
  • Develop prompts to address different testing types (e.g., performance, compatibility) based on business requirements.
  • Understand why AI responses vary with each prompt iteration and how to handle these variations.

Example Requirements for Test Plan Generation

To illustrate how to generate a performance test plan using AI, we’ll use the following project requirements:

  • Requirements Document: AI and Software Testing GitHub Repository
  • Test Resources: 3 testers
  • Timeline: 45 days
  • Testing Type: Performance Testing
  • Operating Systems: Mac and Windows
  • Browsers: Chrome and Safari

Prompting for an AI-Generated Test Plan

Based on the project requirements above, we need to craft a clear and comprehensive prompt to ensure the generated test plan aligns with project goals. This involves copying relevant details from the requirements document and using these to guide the AI.

Example Prompt

Prompt:

“Generate a comprehensive performance test plan for the ‘Book Style’ online bookstore platform. The platform is a premium bookstore targeting book enthusiasts aged 30-55 with features like user accounts, book browsing, secure checkout, and a seller dashboard. The test plan should cover performance testing on both Mac and Windows, focusing on Chrome and Safari. Three testers will execute the tests over 45 days, with specific milestones. Focus is on performance testing, assessing page load speed, responsiveness, and system resource usage. Include metrics and benchmarks for response times and user experience.”
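If you script your workflow, a prompt like the one above can be assembled from the requirement fields rather than written by hand each time. The Python sketch below shows one way to do this; the field names and wording are illustrative, not a fixed schema.

```python
# Sketch: assemble a test-plan prompt from structured project requirements.
# Field names and phrasing are illustrative assumptions.
requirements = {
    "platform": "'Book Style' online bookstore platform",
    "testing_type": "performance",
    "testers": 3,
    "timeline_days": 45,
    "operating_systems": ["Mac", "Windows"],
    "browsers": ["Chrome", "Safari"],
    "focus_areas": ["page load speed", "responsiveness", "system resource usage"],
}

def build_prompt(req: dict) -> str:
    """Turn the requirement fields into a single prompt string."""
    return (
        f"Generate a comprehensive {req['testing_type']} test plan for the "
        f"{req['platform']}. The test plan should cover "
        f"{' and '.join(req['operating_systems'])}, focusing on "
        f"{' and '.join(req['browsers'])}. "
        f"{req['testers']} testers will execute the tests over "
        f"{req['timeline_days']} days, with specific milestones. "
        f"Assess {', '.join(req['focus_areas'])}, and include metrics "
        f"and benchmarks for response times and user experience."
    )

print(build_prompt(requirements))
```

Keeping the requirements in one structured place makes it easy to regenerate the prompt when a detail (such as the timeline or browser list) changes.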


Reasons for Variability in AI Responses

AI responses may vary slightly due to several factors:

  1. Randomness in Response Generation: Some AI models introduce slight variations in responses to make results more natural and diverse, resulting in small differences in phrasing or content emphasis.
  2. Clarifications and Context Adjustments: AI may adjust based on the nuances within the prompt. For example, emphasizing “user experience” or “resource usage” may lead the AI to focus more on these areas.
  3. Content Focus: If multiple aspects are described in the prompt, AI may prioritize certain elements, like performance metrics or milestones, depending on prompt wording and structure.

Note: To improve consistency, refine the wording, add specific directives to the prompt, and provide feedback iteratively. This reduces variance in responses and aligns the output with project expectations.
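The randomness in point 1 is often controlled by a "temperature" setting that many AI tools and APIs expose. The toy Python sketch below is not any specific model's implementation, but it illustrates the idea: at a temperature near zero the highest-scoring option is always picked, while higher temperatures let alternative phrasings appear.

```python
import math
import random

def sample_with_temperature(scores, temperature, rng):
    """Pick an index from raw scores using softmax sampling.
    Lower temperature sharpens the distribution; near zero it
    behaves like argmax, so the same option is chosen every time."""
    if temperature <= 1e-6:
        return max(range(len(scores)), key=lambda i: scores[i])
    weights = [math.exp(s / temperature) for s in scores]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(scores)), weights=probs)[0]

rng = random.Random(42)
scores = [2.0, 1.0, 0.5]  # a model's raw preference for three phrasings

# Near-zero temperature: deterministic, always the top-scored option.
low = {sample_with_temperature(scores, 0.0, rng) for _ in range(20)}

# Higher temperature: other phrasings show up as well.
high = {sample_with_temperature(scores, 1.5, rng) for _ in range(200)}

print(low, high)
```

This is why two runs of the same prompt can differ: unless the tool lets you set the temperature to zero, some sampling variation is built in.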


Sample Performance Test Plan

Below is a structured performance test plan for the “Book Style” online bookstore platform, based on the example requirements and prompt.


Performance Test Plan for “Book Style” Online Bookstore Platform

Project Overview

  • Platform: “Book Style” Premium Online Bookstore
  • Target Audience: Book enthusiasts aged 30-55
  • Focus: Performance testing to ensure a smooth user experience under anticipated traffic

Testing Objectives

  • Assess critical functionalities: User registration, browsing, recommendations, and checkout.
  • Measure page load speed for both desktop and mobile.
  • Evaluate responsiveness and resource usage across devices.
  • Confirm platform stability under expected traffic volumes.

Scope of Performance Testing

  1. Page Load Speed: Test home, catalog, detail, and checkout pages on Mac & Windows (desktop & mobile).
  2. Resource Usage: Monitor CPU, memory, and bandwidth usage.
  3. Responsiveness: Evaluate interactions across Chrome and Safari.
  4. Traffic Simulation: Simulate various levels of traffic, from typical to peak loads.
  5. Compatibility: Validate performance across both Mac and Windows on Chrome and Safari.

Testing Roles and Responsibilities

  • Lead Performance Tester: Oversees testing progress, ensures deadlines, and documents findings.
  • Tester 1: Focuses on user registration and browsing functionalities.
  • Tester 2: Tests checkout process and recommendations.

Test Environment and Tools

  • Devices: MacBook Pro (macOS), Windows 11 desktop, iPhone (Safari), Android (Chrome)
  • Software: Chrome, Safari (latest versions)
  • Tools:
    • JMeter: Load testing and throughput simulation
    • Lighthouse: Page load speed, accessibility analysis
    • Fiddler: Network performance analysis
    • BrowserStack: Compatibility testing

Performance Test Cases

  1. Page Load Speed

    • Homepage: Target ≤ 3 seconds on desktop, ≤ 5 seconds on mobile
    • Catalog Browsing: Target ≤ 3 seconds for load, with smooth pagination
    • Book Detail Page: Target ≤ 2 seconds on desktop, ≤ 4 seconds on mobile
    • Checkout Page: Target ≤ 3 seconds on desktop, ≤ 5 seconds on mobile
  2. Resource Usage

    • Browsing: CPU usage < 50%, Memory usage < 300MB
    • Checkout: CPU usage < 60%, Memory usage < 400MB
  3. Responsiveness

    • Desktop: Smooth browsing and scrolling with minimal lag
    • Mobile: Smooth interactions on mobile devices
    • Checkout: No delays during checkout interactions
  4. Load and Scalability

    • Average Load: 500 users with ≤ 3 seconds load time
    • Peak Load: 1000 users with ≤ 5 seconds load time
    • Stress Test: 2000+ users with controlled degradation
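The numeric targets above can also be captured as data so that measured results are checked mechanically rather than by eye. A minimal Python sketch follows; the page names and measurements are illustrative, not real test results.

```python
# Sketch: encode the page-load benchmarks from the test plan and
# check measured times against them. Measurements are made up.
TARGETS_SECONDS = {
    ("homepage", "desktop"): 3, ("homepage", "mobile"): 5,
    ("catalog", "desktop"): 3,
    ("detail", "desktop"): 2, ("detail", "mobile"): 4,
    ("checkout", "desktop"): 3, ("checkout", "mobile"): 5,
}

def meets_target(page, device, measured_seconds):
    """Return True if the measured load time is within the benchmark."""
    return measured_seconds <= TARGETS_SECONDS[(page, device)]

# Example measurements a tester might record:
results = [
    ("homepage", "desktop", 2.4),
    ("checkout", "mobile", 5.8),  # exceeds the 5-second target
]
for page, device, seconds in results:
    status = "PASS" if meets_target(page, device, seconds) else "FAIL"
    print(f"{page}/{device}: {seconds}s -> {status}")
```

Keeping the benchmarks in one table like this also makes it easy to ask the AI to regenerate test cases whenever a target changes.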

Test Execution Timeline

  • Duration: 45 days
  • Milestones:
    • Week 1: Setup and baseline measurements
    • Weeks 2-3: Page load and resource usage testing
    • Week 4: Responsiveness and compatibility testing
    • Weeks 5-6: Load and scalability testing
    • Week 7 (Days 43-45): Regression testing and reporting

Performance Metrics and Benchmarks

  • Page Load Time: ≤ 3 seconds for homepage on desktop, ≤ 5 seconds on mobile
  • CPU Usage: Average < 50%, Peak < 60%
  • Memory Usage: ≤ 300MB during browsing, ≤ 400MB during checkout

Success Criteria

  • All pages meet load time benchmarks, with resource usage within defined limits.
  • No critical errors or issues during peak load simulations.
  • System stability confirmed under peak and stress load conditions.

Summary

This structured performance test plan ensures the “Book Style” platform is rigorously tested against its performance and user experience standards, delivering a stable and responsive platform for its target audience.


Tips for Consistent AI Responses

To reduce variations and inconsistencies in AI responses, consider the following:

  1. Provide Specific, Clear Prompts: Be explicit about each aspect (e.g., “List exact load times in seconds” or “Outline tools for load testing only”).
  2. Break Down Complex Prompts: If a prompt is too detailed, break it down into smaller questions or sub-tasks for more controlled responses.
  3. Use Iterative Feedback: Tweak the prompt based on the output, guiding the AI to refine its focus in areas where responses vary.
  4. Specify the Format: Instruct the AI on preferred output formatting, such as listing sections or bullet points, to improve response structure.

Try It Yourself: Functional Test Plan Activity

Goal

Practice generating a functional test plan for the “Book Style” online bookstore platform using AI. This exercise will help you apply what you’ve learned about structuring prompts and creating test plans aligned with project requirements.

Instructions

  1. Use the Same Project Requirements:

    • Use the “Book Style” platform as the project.
    • The platform includes user accounts, book browsing, secure checkout, and a seller dashboard.
    • Focus on functionality across critical features instead of performance.
  2. Draft Your Prompt:

    • Craft a prompt to generate a functional test plan covering essential functionalities like user registration, browsing, checkout, and account management.

    • Use this example prompt to get started:

      Example Prompt:

      “Generate a detailed functional test plan for the ‘Book Style’ online bookstore platform. The platform includes features like user accounts, book browsing, secure checkout, and a seller dashboard. The test plan should ensure that each feature works as expected for users. Cover both positive and negative test cases for critical functionalities, including user registration, login, browsing, and checkout. Specify metrics, tools, and success criteria.”

  3. Run the Prompt Using AI:

    • Open your AI tool (ChatGPT, Gemini, or Microsoft Copilot).
    • Paste the prompt and review the generated test plan.
  4. Refine the Prompt (Optional):

    • If the output is missing certain details (like specific test cases or success metrics), modify the prompt with additional guidance.
    • For example:
      • “Add detailed test cases for each functionality.”
      • “List specific tools for functional testing.”
  5. Review the Functional Test Plan:

    • Ensure the AI-generated test plan includes:
      • Testing Objectives: Clear goals, such as verifying the login process, browsing functionality, and checkout workflow.
      • Scope: Test cases that cover both typical user interactions and edge cases.
      • Tools and Environment: Defined tools and environments for testing.
      • Success Criteria: Benchmarks for functionality, such as “successful login within 3 seconds” or “successful checkout without errors.”
  6. Save Your Test Plan:

    • Copy and save the AI-generated functional test plan for reference or further refinement.

Reflection Questions

  1. What details in the prompt helped produce a more comprehensive test plan?
  2. How did prompt modifications improve the AI’s response?
  3. Are there additional details or constraints you could include to refine future test plans?