Module 10: Using Git with AI Tools: Automation and Code Generation

Leverage AI tools to enhance your Git workflow for test automation. Use AI to write commit messages, review generated test code, create test scripts from requirements, and automate repository maintenance. Integrate ChatGPT, GitHub Copilot, and other AI tools into your daily Git operations.

Test Script Generation from Requirements Using AI

Why This Matters

As a test engineer, you’ve likely spent countless hours manually translating requirements documents, user stories, and acceptance criteria into test scripts. This repetitive process is time-consuming, error-prone, and often creates a bottleneck in agile development cycles. By the time you’ve finished writing comprehensive test coverage, requirements may have already changed.

The real-world problem: Test automation teams struggle to keep pace with rapid development cycles. Writing test scripts manually from requirements can take 40-60% of a test engineer’s time, and ensuring complete coverage becomes increasingly difficult as applications grow in complexity.

When you’ll use this skill:

  • When starting a new sprint with multiple user stories requiring test coverage
  • During requirement analysis sessions to quickly prototype test scenarios
  • When onboarding new team members who need to understand testing patterns
  • For generating regression test suites from legacy documentation
  • When refactoring existing tests to align with updated requirements

Common pain points addressed:

  • Slow test creation: Reduce the time from requirements to executable tests from days to hours
  • Incomplete coverage: AI can identify edge cases and scenarios you might overlook
  • Inconsistent test patterns: Generate standardized test scripts that follow team conventions
  • Documentation drift: Keep tests synchronized with requirements through regeneration
  • Context switching: Let AI handle boilerplate code while you focus on test strategy

AI tools like ChatGPT and GitHub Copilot have fundamentally changed how we approach test automation. Rather than replacing test engineers, these tools amplify your expertise, allowing you to focus on critical thinking, test strategy, and quality advocacy while automating the mechanical aspects of test script creation.

Learning Objectives Overview

In this lesson, you’ll transform from manually writing every test line-by-line to orchestrating AI tools that generate comprehensive test coverage from requirements. Here’s what you’ll accomplish:

Generate test scripts from requirements documents using ChatGPT: You’ll learn effective prompt engineering techniques to feed requirements, specifications, or feature descriptions into ChatGPT and receive structured, executable test scripts. We’ll cover how to format your prompts, specify your testing framework, and iterate on outputs to achieve production-ready tests.

Use GitHub Copilot to create test cases from user stories: You’ll discover how to leverage Copilot’s contextual awareness within your IDE to generate test cases inline as you work. We’ll explore how to structure comments and code context so Copilot suggests relevant test scenarios, assertions, and test data based on user story acceptance criteria.

Convert acceptance criteria into automated test code: You’ll master the process of taking Given-When-Then scenarios or bullet-point acceptance criteria and systematically transforming them into executable test automation code. This includes handling complex scenarios, parameterized tests, and ensuring traceability between requirements and test code.

Integrate AI-generated tests into Git workflows: You’ll learn best practices for committing, reviewing, and managing AI-generated test code in your repository. This includes creating meaningful commit messages for AI-assisted work, organizing test files, and using Git branches to experiment with AI-generated tests before merging.

Review and refine AI-generated test scripts: You’ll develop critical skills for evaluating AI output, identifying potential issues, enhancing test coverage, and applying your domain expertise to improve generated tests. We’ll cover common AI pitfalls, validation techniques, and strategies for maintaining high-quality test code standards even when using AI assistance.

By the end of this lesson, you’ll have a practical workflow for incorporating AI into your test automation process, complete with real examples, Git integration patterns, and quality assurance techniques that ensure AI-generated tests meet your team’s standards.


Core Content: Test Script Generation from Requirements Using AI

1. Core Concepts Explained

Understanding AI-Powered Test Generation

AI-powered test script generation leverages machine learning models to interpret natural language requirements and convert them into executable test code. This approach bridges the gap between business requirements and technical test automation, reducing manual coding effort and accelerating test creation.

Key Components:

  1. Requirements Analysis: AI models parse user stories, acceptance criteria, or test scenarios written in plain English
  2. Code Generation: The AI translates requirements into structured test scripts using frameworks like Selenium, Playwright, or Cypress
  3. Context Understanding: Modern AI models understand testing patterns, best practices, and common assertions
  4. Iterative Refinement: Generated scripts can be refined through prompt engineering and feedback loops

How AI Interprets Test Requirements

AI models trained on code repositories understand:

  • Test structure patterns (arrange-act-assert, given-when-then)
  • Common web interactions (click, type, navigate, wait)
  • Assertion strategies (element presence, text content, visibility)
  • Page object patterns and modular test design

The Test Generation Workflow

graph LR
    A[Requirements Document] --> B[AI Model Processing]
    B --> C[Generated Test Code]
    C --> D[Review & Validation]
    D --> E{Tests Pass?}
    E -->|No| F[Refine Requirements]
    F --> B
    E -->|Yes| G[Integrate to Suite]

2. Practical Implementation

Setting Up Your AI Test Generation Environment

Step 1: Install Required Dependencies

# Install Node.js test framework (Playwright example)
npm init -y
npm install -D @playwright/test
npm install -D dotenv

# Initialize Playwright
npx playwright install

Step 2: Configure AI Integration

Create a .env file for API keys:

# .env file
OPENAI_API_KEY=your_api_key_here
ANTHROPIC_API_KEY=your_api_key_here

Step 3: Create AI Helper Module

// ai-test-generator.js
const OpenAI = require('openai');
require('dotenv').config();

class AITestGenerator {
  constructor() {
    this.client = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY
    });
  }

  async generateTest(requirements, framework = 'playwright') {
    const systemPrompt = `You are an expert test automation engineer. 
    Generate ${framework} test scripts from requirements. 
    Use best practices including:
    - Page Object Model pattern
    - Proper waits and assertions
    - Clear test descriptions
    - Error handling`;

    const response = await this.client.chat.completions.create({
      model: "gpt-4",
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: this.formatRequirements(requirements) }
      ],
      temperature: 0.3, // Lower temperature for more consistent code
      max_tokens: 2000
    });

    return response.choices[0].message.content;
  }

  formatRequirements(requirements) {
    return `Generate a complete test script for the following requirements:

Requirements:
${requirements}

Target Website: https://practiceautomatedtesting.com
Framework: Playwright with JavaScript
Include: Imports, test setup, test case, and assertions`;
  }
}

module.exports = AITestGenerator;

Example 1: Generating a Login Test

Requirements Input:

// generate-login-test.js
const AITestGenerator = require('./ai-test-generator');

const requirements = `
User Story: User Login
As a registered user
I want to log in to my account
So that I can access my profile

Acceptance Criteria:
1. Navigate to login page
2. Enter valid email and password
3. Click login button
4. Verify user is redirected to account page
5. Verify welcome message is displayed
`;

async function generateLoginTest() {
  const generator = new AITestGenerator();
  const testCode = await generator.generateTest(requirements);
  
  console.log('Generated Test Code:');
  console.log(testCode);
  
  // Save to file (create the tests/ directory first if it doesn't exist)
  const fs = require('fs');
  fs.mkdirSync('tests', { recursive: true });
  fs.writeFileSync('tests/login.spec.js', testCode);
}

generateLoginTest();

Expected Generated Output:

// tests/login.spec.js
const { test, expect } = require('@playwright/test');

test.describe('User Login', () => {
  test('should allow registered user to login successfully', async ({ page }) => {
    // Navigate to login page
    await page.goto('https://practiceautomatedtesting.com/login');
    
    // Wait for page to load
    await expect(page.locator('h1')).toContainText('Login');
    
    // Enter valid credentials
    await page.fill('#email', 'test@example.com');
    await page.fill('#password', 'Password123');
    
    // Click login button
    await page.click('button[type="submit"]');
    
    // Verify redirect to account page
    await expect(page).toHaveURL(/.*account/);
    
    // Verify welcome message
    await expect(page.locator('.welcome-message')).toBeVisible();
    await expect(page.locator('.welcome-message')).toContainText('Welcome');
  });
});

Example 2: Generating E-commerce Cart Tests

Requirements with Multiple Scenarios:

// generate-cart-test.js
const AITestGenerator = require('./ai-test-generator');

const requirements = `
Feature: Shopping Cart Management

Scenario 1: Add product to cart
- Navigate to products page
- Click on first product
- Click "Add to Cart" button
- Verify cart badge shows quantity "1"
- Verify success message appears

Scenario 2: Remove product from cart
- Open cart page with one item
- Click remove button
- Verify cart is empty
- Verify "Your cart is empty" message appears
`;

async function generateCartTests() {
  const generator = new AITestGenerator();
  const testCode = await generator.generateTest(requirements);
  
  const fs = require('fs');
  fs.mkdirSync('tests', { recursive: true });
  fs.writeFileSync('tests/shopping-cart.spec.js', testCode);
  console.log('✅ Cart tests generated successfully!');
}

generateCartTests();

Expected Generated Output:

// tests/shopping-cart.spec.js
const { test, expect } = require('@playwright/test');

test.describe('Shopping Cart Management', () => {
  
  test('should add product to cart successfully', async ({ page }) => {
    // Navigate to products page
    await page.goto('https://practiceautomatedtesting.com/products');
    
    // Click on first product
    const firstProduct = page.locator('.product-item').first();
    await firstProduct.click();
    
    // Add to cart
    await page.click('button:has-text("Add to Cart")');
    
    // Verify cart badge quantity
    const cartBadge = page.locator('.cart-badge');
    await expect(cartBadge).toHaveText('1');
    
    // Verify success message
    await expect(page.locator('.success-message')).toBeVisible();
    await expect(page.locator('.success-message')).toContainText('added to cart');
  });

  test('should remove product from cart', async ({ page }) => {
    // Setup: Add item to cart first
    await page.goto('https://practiceautomatedtesting.com/products');
    await page.locator('.product-item').first().click();
    await page.click('button:has-text("Add to Cart")');
    
    // Navigate to cart
    await page.click('.cart-icon');
    await expect(page).toHaveURL(/.*cart/);
    
    // Remove item
    await page.click('.remove-button');
    
    // Verify cart is empty
    await expect(page.locator('.cart-items')).toHaveCount(0);
    await expect(page.locator('.empty-cart-message')).toBeVisible();
    await expect(page.locator('.empty-cart-message')).toContainText('Your cart is empty');
  });
});

Example 3: Creating a Prompt Template System

// prompt-templates.js
class TestPromptTemplates {
  static getBasePrompt(framework) {
    return `You are an expert QA automation engineer specializing in ${framework}.
Generate production-ready test code following these principles:
- Use async/await properly
- Include proper waits (no hardcoded sleeps)
- Add meaningful assertions
- Use descriptive variable names
- Include error handling
- Follow ${framework} best practices`;
  }

  static getE2ETestPrompt(requirements, baseUrl) {
    return `${this.getBasePrompt('Playwright')}

Create an end-to-end test for:
${requirements}

Base URL: ${baseUrl}
Include:
1. Complete test file with imports
2. beforeEach hook for navigation
3. Clear test descriptions
4. Multiple assertions per test
5. Proper element selectors (prefer data-testid)`;
  }

  static getAPITestPrompt(requirements) {
    return `${this.getBasePrompt('Playwright + API')}

Create API tests for:
${requirements}

Include:
1. API request setup
2. Response validation
3. Status code checks
4. Response body assertions
5. Error handling`;
  }
}

module.exports = TestPromptTemplates;

Using the Template System:

// advanced-test-generation.js
const AITestGenerator = require('./ai-test-generator');
const TestPromptTemplates = require('./prompt-templates');

async function generateAdvancedTest() {
  const generator = new AITestGenerator();
  
  const requirements = `
  Test: User Registration Form Validation
  - Verify email format validation
  - Verify password strength requirements
  - Verify matching password confirmation
  - Verify successful registration with valid data
  `;
  
  const prompt = TestPromptTemplates.getE2ETestPrompt(
    requirements,
    'https://practiceautomatedtesting.com'
  );
  
  const testCode = await generator.generateTest(prompt);
  
  const fs = require('fs');
  fs.mkdirSync('tests', { recursive: true });
  fs.writeFileSync('tests/registration-validation.spec.js', testCode);
}

generateAdvancedTest();

Running Generated Tests

# Run all generated tests
npx playwright test

# Run specific test file
npx playwright test tests/login.spec.js

# Run with UI mode for debugging
npx playwright test --ui

# Generate HTML report
npx playwright test --reporter=html

Expected Terminal Output:

$ npx playwright test

Running 3 tests using 3 workers

  ✓ tests/login.spec.js:3:3 › User Login › should allow registered user to login successfully (2.3s)
  ✓ tests/shopping-cart.spec.js:3:3 › Shopping Cart Management › should add product to cart successfully (1.8s)
  ✓ tests/shopping-cart.spec.js:18:3 › Shopping Cart Management › should remove product from cart (2.1s)

  3 passed (6.2s)

3. Best Practices for AI Test Generation

Structuring Requirements for Better Generation

✅ Good Requirements Format:

const goodRequirements = `
Feature: Product Search
Background: User is on homepage

Scenario: Search with valid keyword
  Given I am on the search page
  When I enter "laptop" in search box
  And I click search button
  Then I should see at least 1 product result
  And results should contain "laptop" in title or description
  
Expected Elements:
- Search input: #search-input
- Search button: button[type="submit"]
- Results container: .search-results
- Product title: .product-title
`;

❌ Poor Requirements Format:

const poorRequirements = `
Test the search functionality to make sure it works correctly
`;

Validating Generated Tests

// test-validator.js
class TestValidator {
  static validate(generatedCode) {
    const checks = {
      hasImports: /require\(|import/.test(generatedCode),
      hasTestStructure: /test\(|it\(/.test(generatedCode),
      hasAssertions: /expect\(/.test(generatedCode),
      // Check the keywords separately: `async` and `await` usually
      // appear on different lines, which /async.*await/ would miss
      hasAsyncAwait: /async/.test(generatedCode) && /await/.test(generatedCode),
      noHardcodedWaits: !/setTimeout|waitForTimeout\(\d+\)|sleep\(\d+\)/.test(generatedCode)
    };
    
    const issues = Object.entries(checks)
      .filter(([_, passes]) => !passes)
      .map(([check]) => check);
    
    if (issues.length > 0) {
      console.warn('⚠️ Validation issues:', issues);
      return false;
    }
    
    console.log('✅ Generated test passes validation');
    return true;
  }
}

module.exports = TestValidator;

4. Common Mistakes and Debugging

Common Mistakes to Avoid

Mistake 1: Over-reliance on AI without review

// ❌ BAD: Using generated code without review
const testCode = await generator.generateTest(requirements);
fs.writeFileSync('test.spec.js', testCode);
runTests(); // Dangerous!

// ✅ GOOD: Review and validate first
const testCode = await generator.generateTest(requirements);
console.log(testCode); // Review the code
if (TestValidator.validate(testCode)) {
  fs.writeFileSync('test.spec.js', testCode);
}

Mistake 2: Vague requirements leading to poor tests

// ❌ BAD: Too vague
"Test the login page"

// ✅ GOOD: Specific and detailed
"Navigate to /login, enter email 'user@test.com', 
password 'Pass123!', click submit, verify redirect to /dashboard 
and presence of element with data-testid='user-profile'"

Mistake 3: Not handling AI API errors

// ✅ GOOD: Proper error handling
async generateTest(requirements) {
  try {
    const response = await this.client.chat.completions.create({ /* same options as before */ });
    
    if (!response.choices || response.choices.length === 0) {
      throw new Error('No response from AI model');
    }
    
    return response.choices[0].message.content;
  } catch (error) {
    console.error('AI Generation Error:', error.message);
    
    // Fallback to template
    return this.getFallbackTemplate(requirements);
  }
}

Debugging Generated Tests

Issue: Generated test uses wrong selectors

// Before: AI-generated with generic selector
await page.click('button');

// After: Refine with specific requirement
const requirements = `
...
Click the submit button with data-testid="submit-form"
`;

Issue: Tests are too brittle

// Add to system prompt:
const improvedPrompt = `
Generate tests using this selector priority:
1. data-testid attributes
2. ARIA labels
3. Semantic HTML roles
4. CSS classes (as last resort)

Example: 
await page.click('[data-testid="login-button"]')
NOT: await page.click('.btn-primary')
`;

Quick Debugging Checklist:

  • ✅ Review generated code for syntax errors
  • ✅ Verify selectors exist on target website
  • ✅ Check async/await usage is correct
  • ✅ Ensure proper test isolation (no dependencies between tests)
  • ✅ Validate assertions are meaningful
  • ✅ Test with actual website before committing

Hands-On Practice


🎯 Learning Objectives

By the end of this lesson, you will be able to:

  • Analyze user requirements and identify testable scenarios
  • Craft effective AI prompts to generate test automation scripts
  • Evaluate and refine AI-generated test code for quality and coverage
  • Integrate AI-generated tests into existing test frameworks
  • Apply best practices for maintaining AI-assisted test suites

💪 Hands-On Exercise

Task: Generate and Refine Test Scripts for an E-commerce Checkout Feature

You’ve received the following requirement for an e-commerce platform’s checkout feature:

Requirement Document:

Feature: Shopping Cart Checkout
As a customer, I want to complete my purchase so that I can receive my items.

Acceptance Criteria:
1. Users must be logged in to proceed to checkout
2. Shopping cart must contain at least one item
3. Users can apply a valid discount code (10% off for code "SAVE10")
4. Invalid discount codes show an error message
5. Payment information (card number, CVV, expiry) is validated
6. Successful checkout displays an order confirmation with order number
7. Cart is emptied after successful checkout

Step-by-Step Instructions

Step 1: Analyze Requirements (10 minutes)

  • Identify all testable scenarios from the acceptance criteria
  • List edge cases and negative test scenarios
  • Document expected test inputs and outputs

Step 2: Craft AI Prompts (15 minutes)

  • Create a structured prompt for your AI tool (ChatGPT, GitHub Copilot, etc.)
  • Include context about your testing framework (Selenium with Python/pytest recommended)
  • Specify test data requirements and assertion expectations

Step 3: Generate Test Scripts (20 minutes)

  • Use your AI tool to generate test automation code
  • Generate at least 5 test cases covering positive and negative scenarios
  • Request page object model structure if needed

Step 4: Review and Refine (25 minutes)

  • Analyze the generated code for:
    • Correct assertions and validations
    • Proper wait strategies and element locators
    • Test data management
    • Error handling
    • Code maintainability
  • Refactor and improve the AI-generated code
  • Add comments and documentation

Step 5: Integration (15 minutes)

  • Organize tests into a proper structure
  • Add test fixtures and setup/teardown methods
  • Create a configuration file for test data
  • Verify tests follow your team’s coding standards

Starter Code

# conftest.py - Starter fixture
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    driver = webdriver.Chrome()
    driver.implicitly_wait(10)
    yield driver
    driver.quit()

@pytest.fixture
def logged_in_user(driver):
    # TODO: Implement login logic
    pass

# test_checkout.py - Template structure
import pytest
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class TestCheckout:
    
    def test_checkout_requires_login(self, driver):
        """Test that unauthenticated users cannot access checkout"""
        # Your AI-generated code here
        pass
    
    def test_valid_discount_code(self, driver, logged_in_user):
        """Test successful application of valid discount code"""
        # Your AI-generated code here
        pass
    
    # Add more test methods...

Expected Outcome

Your completed exercise should include:

  • ✅ 5-7 test methods covering all acceptance criteria
  • ✅ Page Object classes for Cart and Checkout pages
  • ✅ Proper assertions validating expected behaviors
  • ✅ Test data management (valid/invalid discount codes, payment info)
  • ✅ Documentation explaining each test’s purpose
  • ✅ Refactored code improving upon AI-generated output

Solution Approach


Test Scenarios to Generate:

  1. test_checkout_requires_login - Verify redirect to login
  2. test_empty_cart_cannot_checkout - Verify error message
  3. test_valid_discount_code_applied - Verify 10% discount
  4. test_invalid_discount_code_shows_error - Verify error message
  5. test_invalid_payment_info_rejected - Test card validation
  6. test_successful_checkout_flow - End-to-end happy path
  7. test_cart_emptied_after_checkout - Verify cart state

Example AI Prompt:

Generate pytest test automation code using Selenium for an e-commerce 
checkout feature. Requirements:
- Test framework: pytest with Selenium WebDriver
- Use Page Object Model pattern
- Include fixtures for authenticated user and cart with items
- Test scenarios: [list your scenarios]
- Include explicit waits and proper assertions
- Add docstrings for each test method

Key Refinements to Apply:

  • Replace time.sleep() with explicit waits
  • Extract magic strings into constants
  • Add validation of intermediate states
  • Improve error messages in assertions
  • Handle stale element exceptions

🎓 Key Takeaways

  • Requirements Analysis is Critical: AI generates better tests when given clear, structured requirements. Always identify testable scenarios, edge cases, and expected outcomes before prompting.

  • Prompt Engineering Matters: Specific, context-rich prompts that include your testing framework, patterns (like Page Object Model), and quality expectations yield significantly better results than generic requests.

  • AI Augments, Not Replaces: Generated code requires human review and refinement. Always evaluate for proper waits, maintainability, error handling, and alignment with coding standards before integration.

  • Iterative Refinement Works Best: Use AI as a collaborative partner—generate initial code, review it, then prompt for specific improvements rather than expecting perfect code in one attempt.

  • Maintain Test Suite Quality: AI-generated tests must follow the same quality standards as manually written tests. Implement proper test organization, documentation, and data management practices.


🚀 Next Steps

Practice These Skills:

  • Generate tests for API endpoints using different HTTP methods and status codes
  • Create data-driven tests by prompting AI to generate test data sets
  • Practice refactoring AI-generated code to improve maintainability
  • Experiment with different prompt structures and compare output quality
  • Practice with different AI tools (GitHub Copilot, ChatGPT, Amazon CodeWhisperer)
  • Review test automation design patterns to better guide AI outputs
  • Join communities discussing AI in testing (Ministry of Testing, Test Automation University)

Topics to Explore Next:

  • AI-Assisted Test Data Generation: Learn to create realistic, diverse test datasets
  • Visual Testing with AI: Explore AI tools for visual regression testing
  • Test Maintenance Strategies: Study how to update AI-generated tests as requirements evolve
  • CI/CD Integration: Implement generated tests in continuous integration pipelines
  • Advanced Prompt Engineering: Deep dive into few-shot learning and chain-of-thought prompting for better test generation

Remember: The goal isn’t to let AI write all your tests, but to accelerate your productivity while maintaining high-quality, maintainable test automation suites.