Module 10: Using Git with AI Tools: Automation and Code Generation
Leverage AI tools to enhance your Git workflow for test automation. Use AI for writing commit messages, generating test code reviews, creating test scripts from requirements, and automating repository maintenance. Integrate ChatGPT, GitHub Copilot, and other AI tools into your daily Git operations.
Automated Repository Maintenance with AI Assistants
Why This Matters
As test automation repositories grow in complexity, maintaining code quality, documentation, and consistency becomes increasingly challenging. Test engineers spend significant time on repetitive tasks: crafting descriptive commit messages, reviewing pull requests for common issues, generating boilerplate test code, and keeping documentation synchronized with code changes.
Real-world problems this solves:
- Time drain on manual tasks: Teams waste 20-30% of development time on routine Git operations that could be automated or AI-assisted
- Inconsistent commit messages: Without standards enforcement, commit histories become difficult to navigate, making debugging and audits painful
- Code review bottlenecks: Manual reviews slow down delivery pipelines, especially when checking for common test automation anti-patterns
- Test script creation overhead: Converting requirements documents into executable test code is time-consuming and error-prone
- Repository drift: Without automated maintenance, repositories accumulate technical debt, unused files, and outdated dependencies
When you’ll use these skills:
You’ll apply AI-assisted Git workflows when managing test automation projects with frequent updates, multiple contributors, or strict compliance requirements. These techniques are particularly valuable when:
- Managing large test suites across multiple microservices or applications
- Working in teams with varying levels of Git and testing expertise
- Automating CI/CD pipelines that require intelligent commit analysis
- Converting user stories or acceptance criteria into automated test cases
- Performing regular repository audits and cleanup operations
- Maintaining documentation that must stay synchronized with test code
Common pain points addressed:
This lesson tackles the frustrations test engineers face daily: spending too much time on Git housekeeping instead of actual test development, struggling with unclear commit histories during incident investigations, and manually translating requirements into test code. By integrating AI assistants into your Git workflow, you’ll reclaim hours of productive time while improving repository quality and team collaboration.
Learning Objectives Overview
This advanced lesson transforms how you interact with Git repositories by introducing AI-powered automation at every stage of your test engineering workflow.
What you’ll accomplish:
- AI Integration Setup: You’ll configure GitHub Copilot, ChatGPT, and other AI tools to work seamlessly with your Git environment, learning to set up API access, authentication, and workspace integration for various AI assistants.
- Intelligent Commit Messages: You’ll implement systems that analyze your staged changes and generate descriptive, convention-compliant commit messages automatically, reducing the cognitive load of context switching and ensuring consistent commit history (a minimal sketch follows this overview).
- Automated Code Reviews: You’ll create AI-powered review workflows that catch common test automation issues—flaky selectors, missing assertions, improper waits—before human reviewers even look at the code, accelerating your PR approval process.
- Requirements-to-Tests Translation: You’ll master prompt engineering techniques to convert user stories, acceptance criteria, and test scenarios into executable test code, dramatically reducing the time from specification to implementation.
- Repository Maintenance Automation: You’ll build scripts that leverage AI to identify unused test files, outdated dependencies, duplicate test cases, and documentation gaps, keeping your repository clean and maintainable.
- Custom AI Configurations: You’ll learn to train AI tools on your team’s specific conventions, coding standards, and test patterns, ensuring generated content aligns with your organization’s guidelines.
- Quality and Security Validation: You’ll develop critical evaluation skills to assess AI-generated content for correctness, security vulnerabilities, and adherence to best practices, understanding when to trust AI suggestions and when human judgment is required.
Throughout this lesson, you’ll work with hands-on examples from real test automation scenarios, building practical automation scripts you can immediately apply to your projects. By the end, you’ll have a complete AI-enhanced Git workflow that accelerates development while maintaining—or even improving—code quality.
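For example, the commit-message objective can be prototyped with a short Node script. This is a minimal sketch: it assumes the Anthropic client you will configure in the setup steps later in this lesson, and the file name commit-message.js and the Conventional Commits format are illustrative choices rather than requirements.
// commit-message.js (illustrative sketch; relies on the ai-client.js setup from this lesson)
import { execSync } from 'child_process';
import { anthropic } from './ai-client.js';

async function suggestCommitMessage() {
  // Collect the staged diff; an empty diff means there is nothing to describe
  const diff = execSync('git diff --cached', { encoding: 'utf-8' });
  if (!diff.trim()) {
    console.log('No staged changes.');
    return;
  }

  const message = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 300,
    messages: [
      {
        role: 'user',
        content: `Write a Conventional Commits message (type(scope): summary, then a short body) for this staged diff:\n\n${diff}`,
      },
    ],
  });

  // Print the suggestion; the engineer still reviews and edits before committing
  console.log(message.content[0].text);
}

suggestCommitMessage();
A script like this can be wired into a prepare-commit-msg Git hook so the suggestion is pre-filled when you run git commit, while the final message always stays under your control.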
Core Content
1. Core Concepts Explained
Understanding AI-Assisted Repository Maintenance
AI assistants (like GitHub Copilot, ChatGPT, and Claude) can automate repetitive repository maintenance tasks, reducing manual overhead and improving code quality consistency. This lesson focuses on leveraging AI for automated code reviews, dependency updates, test generation, and documentation maintenance.
Key Components of Automated Repository Maintenance
graph TD
A[Repository Event] --> B{AI Assistant}
B --> C[Code Analysis]
B --> D[Test Generation]
B --> E[Documentation Update]
B --> F[Dependency Check]
C --> G[Automated PR]
D --> G
E --> G
F --> G
G --> H[Human Review]
H --> I[Merge]
Architecture Overview
1. Event-Driven Automation
- GitHub Actions/GitLab CI triggers on commits, PRs, or schedules
- AI assistant receives repository context
- Generates maintenance tasks automatically
2. AI Integration Patterns
- API-based integration (OpenAI, Anthropic APIs)
- GitHub Actions with AI steps
- Custom scripts with AI SDKs
3. Quality Gates
- Automated tests run before AI suggestions
- Human review required for critical changes
- Rollback mechanisms for failed updates
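As a concrete illustration of the quality-gate idea, the sketch below runs the existing test suite after an AI-generated change has been committed and rolls the commit back if the suite fails. The function name and the assumption that the change arrives as a single commit are illustrative, not part of any specific tool.
// quality-gate.js (sketch: run the suite after an AI change, roll back on failure)
import { execSync } from 'child_process';

export function applyWithQualityGate(applyChange) {
  // applyChange() is expected to stage and commit the AI-generated change
  applyChange();

  try {
    // Gate 1: the existing test suite must still pass
    execSync('npm test', { stdio: 'inherit' });
    console.log('Quality gate passed; change kept for human review.');
  } catch (error) {
    // Gate failed: drop the AI-generated commit so the branch stays green
    console.error('Tests failed; rolling back the AI-generated commit.');
    execSync('git reset --hard HEAD~1', { stdio: 'inherit' });
  }
}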
2. Setting Up AI-Assisted Repository Maintenance
Step 1: Install Required Dependencies
# Create a new Node.js project for automation scripts
mkdir repo-maintenance-ai
cd repo-maintenance-ai
npm init -y
# Install core dependencies
npm install @anthropic-ai/sdk @octokit/rest dotenv
# Install testing and linting tools
npm install --save-dev jest eslint prettier
Step 2: Configure Environment Variables
Create a .env file:
# .env
ANTHROPIC_API_KEY=your_anthropic_api_key_here
GITHUB_TOKEN=your_github_personal_access_token
GITHUB_REPO_OWNER=your-username
GITHUB_REPO_NAME=your-repo-name
Step 3: Initialize AI Client
// ai-client.js
import Anthropic from '@anthropic-ai/sdk';
import { Octokit } from '@octokit/rest';
import dotenv from 'dotenv';

dotenv.config();

// Initialize Anthropic client
export const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// Initialize GitHub client
export const octokit = new Octokit({
  auth: process.env.GITHUB_TOKEN,
});

export const repoConfig = {
  owner: process.env.GITHUB_REPO_OWNER,
  repo: process.env.GITHUB_REPO_NAME,
};
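Optionally, a quick read-only sanity check (a hypothetical smoke-test.js, not required by the later scripts) can confirm that the token and repository settings are valid before you wire up any automation:
// smoke-test.js (optional sanity check for the clients configured above)
import { octokit, repoConfig } from './ai-client.js';

async function verifyAccess() {
  // A cheap read-only call confirms the token and repo settings are correct
  const { data: repo } = await octokit.repos.get(repoConfig);
  console.log(`Connected to ${repo.full_name} (default branch: ${repo.default_branch})`);
}

verifyAccess().catch(error => {
  console.error('GitHub access check failed:', error.message);
  process.exit(1);
});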
3. Automated Code Review with AI
Creating an Automated Code Reviewer
// code-reviewer.js
import { anthropic, octokit, repoConfig } from './ai-client.js';

async function reviewPullRequest(prNumber) {
  // Fetch PR details and diff
  const { data: pr } = await octokit.pulls.get({
    ...repoConfig,
    pull_number: prNumber,
  });

  const { data: files } = await octokit.pulls.listFiles({
    ...repoConfig,
    pull_number: prNumber,
  });

  // Build context for AI review
  const fileChanges = files
    .map(file => `
File: ${file.filename}
Status: ${file.status}
Changes:
${file.patch || 'Binary file or no changes'}
`)
    .join('\n---\n');

  // Request AI review
  const message = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 2000,
    messages: [
      {
        role: 'user',
        content: `Review this pull request for code quality, security issues, and best practices:

PR Title: ${pr.title}
PR Description: ${pr.body}

Changes:
${fileChanges}

Provide:
1. Critical issues (security, bugs)
2. Code quality suggestions
3. Best practice recommendations
4. Overall assessment (APPROVE/REQUEST_CHANGES/COMMENT)`,
      },
    ],
  });

  const review = message.content[0].text;

  // Post review as comment
  await octokit.issues.createComment({
    ...repoConfig,
    issue_number: prNumber,
    body: `## 🤖 AI Code Review\n\n${review}`,
  });

  return review;
}
// Usage: node code-reviewer.js <pr-number> (defaults to PR 42 for local experiments,
// matching the GitHub Actions workflow later in this lesson, which passes the PR number)
const prNumber = Number(process.argv[2] || 42);
reviewPullRequest(prNumber).then(review => {
  console.log('Review posted:', review);
});
Example Output
$ node code-reviewer.js
Review posted:
## Critical Issues
- Line 45: SQL query vulnerable to injection
- Missing error handling in async function
## Code Quality
- Consider extracting repeated logic into helper function
- Add JSDoc comments for public methods
## Assessment: REQUEST_CHANGES
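If you want the assessment line to drive an actual GitHub review state instead of a plain comment, a small extension might look like the sketch below. Parsing the verdict out of free text is an assumption here; asking the model for structured JSON output would be more robust.
// Optional extension: turn the AI's assessment line into a real GitHub review state
async function submitReviewWithState(prNumber, reviewText) {
  // Pull the verdict out of the free-text review (an assumption; structured output is safer)
  const match = reviewText.match(/APPROVE|REQUEST_CHANGES|COMMENT/);
  const event = match ? match[0] : 'COMMENT';

  await octokit.pulls.createReview({
    ...repoConfig,
    pull_number: prNumber,
    body: `## 🤖 AI Code Review\n\n${reviewText}`,
    event,
  });
}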
4. Automated Test Generation
Generating Tests for Untested Code
// test-generator.js
import { anthropic, octokit, repoConfig } from './ai-client.js';
import { promises as fs } from 'fs';

async function generateTestsForFile(filePath) {
  // Read source code
  const sourceCode = await fs.readFile(filePath, 'utf-8');

  // Request test generation from AI
  const message = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 3000,
    messages: [
      {
        role: 'user',
        content: `Generate comprehensive Jest tests for this code:

\`\`\`javascript
${sourceCode}
\`\`\`

Requirements:
1. Test all public functions
2. Include edge cases and error scenarios
3. Use Jest best practices
4. Add meaningful test descriptions
5. Mock external dependencies`,
      },
    ],
  });

  const testCode = message.content[0].text;

  // Extract code from markdown if present
  const codeMatch = testCode.match(/```(?:javascript|js)?\n([\s\S]+?)\n```/);
  const cleanTestCode = codeMatch ? codeMatch[1] : testCode;

  // Write test file
  const testFilePath = filePath.replace(/\.js$/, '.test.js');
  await fs.writeFile(testFilePath, cleanTestCode);

  return testFilePath;
}
// Batch generate tests for multiple files
async function generateTestsForUncoveredFiles() {
  // Get test coverage report (assumes Jest coverage is configured)
  const { execSync } = await import('child_process');

  try {
    execSync('npm test -- --coverage --silent', { stdio: 'pipe' });
  } catch (error) {
    // Coverage report is still generated even when some tests fail
  }

  // Parse coverage to find untested files
  const coverageData = JSON.parse(
    await fs.readFile('./coverage/coverage-summary.json', 'utf-8')
  );

  const uncoveredFiles = Object.entries(coverageData)
    .filter(([file]) => file !== 'total') // the summary includes an aggregate "total" entry
    .filter(([file, stats]) => stats.lines.pct < 80)
    .map(([file]) => file)
    .filter(file => !file.includes('.test.'));

  console.log(`Found ${uncoveredFiles.length} files needing tests`);

  // Generate tests for each file
  for (const file of uncoveredFiles) {
    console.log(`Generating tests for ${file}...`);
    const testFile = await generateTestsForFile(file);
    console.log(`✓ Created ${testFile}`);
  }
}

// Usage
generateTestsForUncoveredFiles();
Example Generated Test
// Before: calculator.js (no tests)
export function divide(a, b) {
  return a / b;
}

// After: calculator.test.js (AI-generated)
import { divide } from './calculator.js';

describe('divide', () => {
  test('divides two positive numbers correctly', () => {
    expect(divide(10, 2)).toBe(5);
  });

  test('handles division by zero', () => {
    expect(divide(10, 0)).toBe(Infinity);
  });

  test('handles negative numbers', () => {
    expect(divide(-10, 2)).toBe(-5);
  });

  test('handles decimal results', () => {
    expect(divide(5, 2)).toBe(2.5);
  });
});
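The same generation pattern also covers the requirements-to-tests objective from the overview: feed acceptance criteria instead of source code and ask for a test skeleton. The sketch below is illustrative; the story text, output path, and prompt wording are assumptions rather than a fixed API.
// requirement-to-test.js (sketch: acceptance criteria in, Jest test skeleton out)
import { anthropic } from './ai-client.js';
import { promises as fs } from 'fs';

async function generateTestFromStory(story, outputPath) {
  const message = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 2000,
    messages: [
      {
        role: 'user',
        content: `Convert this user story into a Jest test skeleton.
Use one describe block per acceptance criterion and leave TODO comments where selectors or API calls are unknown.

${story}`,
      },
    ],
  });

  // Strip a markdown fence if the model added one, then save the draft for human review
  const text = message.content[0].text;
  const match = text.match(/```(?:javascript|js)?\n([\s\S]+?)\n```/);
  await fs.writeFile(outputPath, match ? match[1] : text);
  return outputPath;
}

// Illustrative usage with a hypothetical story
const story = `As a user, I can reset my password.
Acceptance criteria:
- A reset email is sent to a registered address
- An unregistered address shows a generic confirmation message`;

generateTestFromStory(story, 'password-reset.test.js').then(path =>
  console.log(`Draft test written to ${path}`)
);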
5. Automated Dependency Updates
Creating Dependency Update Bot
// dependency-updater.js
import { anthropic, octokit, repoConfig } from './ai-client.js';
import { execSync } from 'child_process';
import { promises as fs } from 'fs';

async function checkAndUpdateDependencies() {
  // Check for outdated packages.
  // Note: `npm outdated` exits with a non-zero code when packages are outdated,
  // so execSync throws; the JSON report is still available on stdout.
  let outdated;
  try {
    outdated = execSync('npm outdated --json', {
      encoding: 'utf-8',
      stdio: ['pipe', 'pipe', 'ignore'],
    });
  } catch (error) {
    outdated = error.stdout;
  }

  let outdatedPackages;
  try {
    outdatedPackages = JSON.parse(outdated);
  } catch {
    console.log('All dependencies up to date');
    return;
  }

  // Analyze updates with AI
  const packageList = Object.entries(outdatedPackages)
    .map(([name, info]) =>
      `${name}: ${info.current} → ${info.latest} (wanted: ${info.wanted})`
    )
    .join('\n');

  const message = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1500,
    messages: [
      {
        role: 'user',
        content: `Analyze these dependency updates and categorize:

${packageList}

For each package, provide:
1. Risk level (LOW/MEDIUM/HIGH)
2. Breaking changes summary
3. Recommendation (UPDATE/WAIT/SKIP)

Format as a JSON array of objects with keys: name, risk, breakingChanges, recommendation.`,
      },
    ],
  });

  // The model may wrap the JSON in a markdown fence; pull out the array before parsing
  const raw = message.content[0].text;
  const jsonMatch = raw.match(/\[[\s\S]*\]/);
  const analysis = JSON.parse(jsonMatch ? jsonMatch[0] : raw);

  // Update low-risk dependencies automatically
  const safeUpdates = analysis.filter(pkg => pkg.risk === 'LOW');
  for (const pkg of safeUpdates) {
    console.log(`Updating ${pkg.name}...`);
    execSync(`npm update ${pkg.name}`, { stdio: 'inherit' });
  }

  // Create PR for medium/high risk updates
  if (analysis.some(pkg => pkg.risk !== 'LOW')) {
    await createDependencyUpdatePR(analysis);
  }
}
async function createDependencyUpdatePR(analysis) {
  // Create branch
  const branchName = `dependency-updates-${Date.now()}`;
  execSync(`git checkout -b ${branchName}`);
  execSync('git add package.json package-lock.json');
  execSync('git commit -m "chore: update dependencies"');
  execSync(`git push origin ${branchName}`);

  // Create PR with analysis
  const prBody = `## 🤖 Automated Dependency Updates

${analysis.map(pkg => `
### ${pkg.name}
- **Risk:** ${pkg.risk}
- **Changes:** ${pkg.breakingChanges}
- **Recommendation:** ${pkg.recommendation}
`).join('\n')}

Please review and merge if tests pass.`;

  await octokit.pulls.create({
    ...repoConfig,
    title: '🤖 Automated Dependency Updates',
    head: branchName,
    base: 'main',
    body: prBody,
  });
}
// Schedule with cron or GitHub Actions
checkAndUpdateDependencies();
6. GitHub Actions Integration
Complete Workflow File
# .github/workflows/ai-maintenance.yml
name: AI Repository Maintenance

on:
  pull_request:
    types: [opened, synchronize]
  schedule:
    - cron: '0 0 * * 1' # Weekly on Mondays
  workflow_dispatch: # Manual trigger

jobs:
  code-review:
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm ci

      - name: Run AI Code Review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: node code-reviewer.js ${{ github.event.pull_request.number }}

  test-generation:
    runs-on: ubuntu-latest
    if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm ci

      - name: Generate tests for uncovered code
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: node test-generator.js

      - name: Create PR with new tests
        uses: peter-evans/create-pull-request@v5
        with:
          commit-message: 'test: add AI-generated tests'
          title: '🤖 AI-Generated Tests'
          body: 'Automated test generation for improved coverage'
          branch: ai-test-generation

  dependency-updates:
    runs-on: ubuntu-latest
    if: github.event_name == 'schedule' || github.event_name == 'workflow_dispatch'
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Check and update dependencies
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: node dependency-updater.js
7. Common Mistakes and Troubleshooting
Mistake 1: Not Validating AI Output
Problem:
// Bad: Blindly accepting AI suggestions
const testCode = await generateTests(source);
await fs.writeFile('test.js', testCode); // May contain invalid code
Solution:
// Good: Validate generated code
const testCode = await generateTests(source);

// Syntax check. Note: new Function() cannot parse ES module `import` statements;
// for ESM test files, prefer `node --check` or a linter instead.
try {
  new Function(testCode); // Throws if the code is not parseable
} catch (error) {
  console.error('Generated code has syntax errors:', error);
  return;
}

// Write the file tentatively, run the generated tests, and roll back if they fail
const { execSync } = await import('child_process');
await fs.writeFile('test.js', testCode);
try {
  execSync('npm test -- test.js', { stdio: 'pipe' });
} catch (error) {
  console.error('Generated tests failed:', error);
  await fs.unlink('test.js'); // remove the failing AI-generated file
}
Mistake 2: Excessive API Costs
Problem: Running AI review on every push (and re-reviewing files that have not changed) multiplies API calls and quickly drives up cost on active repositories.
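Solution: gate AI calls behind a size threshold and cache results keyed by a hash of the diff, so an identical diff is never reviewed twice. The sketch below is one possible approach; the threshold, cache directory, and helper names are illustrative.
// Solution sketch: gate and cache AI reviews (thresholds and cache path are illustrative)
import crypto from 'crypto';
import { promises as fs } from 'fs';

async function reviewIfWorthwhile(diff, runAiReview) {
  // Skip trivial changes entirely
  if (diff.split('\n').length < 10) {
    console.log('Diff too small for AI review; skipping.');
    return null;
  }

  // Reuse a previous review for an identical diff instead of paying for it again
  const key = crypto.createHash('sha256').update(diff).digest('hex');
  const cachePath = `.ai-review-cache/${key}.md`;
  try {
    return await fs.readFile(cachePath, 'utf-8');
  } catch {
    // Cache miss: call the model once and store the result
    const review = await runAiReview(diff);
    await fs.mkdir('.ai-review-cache', { recursive: true });
    await fs.writeFile(cachePath, review);
    return review;
  }
}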
Hands-On Practice
Hands-On Exercise
🛠️ Task: Build an AI-Assisted Repository Health Monitor
Create an automated system that uses AI assistants to monitor and maintain a repository’s health by analyzing code quality, updating dependencies, and generating maintenance reports.
What You’ll Build
A Python-based automation tool that:
- Scans a repository for common issues (outdated dependencies, code smells, missing documentation)
- Uses an AI assistant API to generate fix recommendations
- Automatically creates pull requests for approved maintenance tasks
- Generates a weekly maintenance report
Prerequisites
- Python 3.9+
- GitHub account and personal access token
- OpenAI API key (or similar AI service)
- Git installed locally
Step-by-Step Instructions
Step 1: Set Up Your Project
mkdir repo-health-monitor
cd repo-health-monitor
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install openai PyGithub gitpython requests
Create a .env file:
GITHUB_TOKEN=your_github_token
OPENAI_API_KEY=your_openai_key
TARGET_REPO=username/repository
Step 2: Create the Repository Scanner
Create scanner.py:
import os
from github import Github
from git import Repo
import json

class RepoScanner:
    def __init__(self, repo_name):
        self.github = Github(os.getenv('GITHUB_TOKEN'))
        self.repo = self.github.get_repo(repo_name)

    def scan_dependencies(self):
        """Check for outdated dependencies in requirements.txt"""
        # TODO: Implement dependency scanning
        # Hint: Get requirements.txt content and parse versions
        pass

    def scan_code_quality(self):
        """Identify files that need refactoring"""
        # TODO: Implement code quality checks
        # Hint: Look for large files, duplicate code patterns
        pass

    def scan_documentation(self):
        """Find missing or outdated documentation"""
        # TODO: Check for README, docstrings, comments
        pass
Step 3: Integrate AI Assistant
Create ai_assistant.py:
import openai
import os

class MaintenanceAssistant:
    def __init__(self):
        openai.api_key = os.getenv('OPENAI_API_KEY')

    def analyze_issue(self, issue_type, context):
        """Send issue to AI for analysis and recommendations"""
        prompt = f"""
        Repository Maintenance Issue:
        Type: {issue_type}
        Context: {context}

        Provide:
        1. Severity assessment (low/medium/high)
        2. Recommended fix
        3. Implementation steps
        4. Potential risks
        """
        # TODO: Implement AI API call
        # Hint: Use chat completion with structured output
        pass

    def generate_pr_description(self, fixes):
        """Generate comprehensive PR description"""
        # TODO: Create detailed PR description from fixes
        pass
Step 4: Create Automation Workflow
Create maintenance_bot.py:
from scanner import RepoScanner
from ai_assistant import MaintenanceAssistant
from github import Github
import os

class MaintenanceBot:
    def __init__(self, repo_name):
        self.scanner = RepoScanner(repo_name)
        self.assistant = MaintenanceAssistant()
        self.github = Github(os.getenv('GITHUB_TOKEN'))

    def run_weekly_maintenance(self):
        """Execute full maintenance cycle"""
        issues = self.collect_issues()
        analyzed_issues = self.analyze_with_ai(issues)
        critical_fixes = self.prioritize_fixes(analyzed_issues)

        for fix in critical_fixes:
            if self.should_auto_fix(fix):
                self.create_fix_pr(fix)

        self.generate_report(analyzed_issues)

    def collect_issues(self):
        # TODO: Run all scanners and aggregate results
        pass

    def create_fix_pr(self, fix):
        # TODO: Create branch, apply fix, open PR
        pass
Step 5: Implement Safety Controls
Add validation and approval mechanisms:
class SafetyController:
    def validate_fix(self, fix):
        """Ensure fix meets safety criteria"""
        # TODO: Check if fix is safe to auto-apply
        # Consider: file type, change scope, test coverage
        pass

    def requires_human_review(self, fix):
        """Determine if human approval needed"""
        # TODO: Define rules for automatic vs manual approval
        pass
Step 6: Test Your System
Create test_maintenance.py:
from scanner import RepoScanner
from ai_assistant import MaintenanceAssistant

def test_scanning():
    scanner = RepoScanner('your-test-repo')
    issues = scanner.scan_dependencies()
    assert isinstance(issues, list)  # the scanner should always return a list, even if empty

def test_ai_analysis():
    assistant = MaintenanceAssistant()
    result = assistant.analyze_issue('outdated_dep', 'requests==2.0.0')
    assert 'severity' in result
    assert 'recommendation' in result

# Add more tests for each component
Expected Outcome
Your completed system should:
- ✅ Successfully scan a target repository for maintenance issues
- ✅ Use AI to analyze each issue and generate recommendations
- ✅ Automatically create PRs for low-risk fixes (like updating minor dependencies)
- ✅ Flag high-risk items for human review
- ✅ Generate a weekly maintenance report with metrics
- ✅ Include proper error handling and logging
Solution Approach
Key Implementation Tips:
- Scanning Strategy: Start with simple checks (file existence, basic parsing) before complex analysis
- AI Integration: Use structured prompts with clear output format expectations (JSON works well)
- Safety First: Implement a whitelist approach - only auto-fix specific, proven-safe scenarios
- Incremental Automation: Begin with reporting-only, then gradually add auto-fix capabilities
- Monitoring: Log all AI interactions and decisions for audit trails
Testing Strategy:
- Use a dedicated test repository to avoid production issues
- Start with read-only operations
- Test AI responses with various input scenarios
- Validate PR creation in a sandbox environment
Key Takeaways
🎓 What You’ve Learned:
- AI-Assisted Code Analysis: Leveraging LLMs to understand code context, identify issues, and generate actionable recommendations that go beyond simple pattern matching
- Automated Maintenance Workflows: Building end-to-end automation that combines repository scanning, AI analysis, and automated remediation with appropriate safety controls
- Safe Automation Practices: Implementing validation layers, approval mechanisms, and audit trails to ensure AI-driven changes maintain code quality and security standards
- Integration Patterns: Connecting multiple APIs (GitHub, AI services, Git) into a cohesive system that can operate autonomously while respecting human oversight requirements
- Balancing Automation and Control: Determining which maintenance tasks are safe for full automation versus those requiring human review based on risk assessment and change impact
Next Steps
🔄 Practice These Skills
- Extend Your Bot: Add more scanners (security vulnerabilities, test coverage, performance issues)
- Improve AI Prompts: Experiment with prompt engineering to get more accurate and actionable recommendations
- Add Observability: Implement metrics tracking (issues found, PRs created, merge rates) with dashboards
- Multi-Repo Support: Scale your solution to monitor multiple repositories simultaneously
📚 Related Topics to Explore
- Advanced AI Integration: Function calling, agent frameworks (LangChain, AutoGPT), and multi-step reasoning
- CI/CD Pipeline Integration: Incorporate maintenance checks into existing workflows (GitHub Actions, GitLab CI)
- Code Quality Tools: Integrate with Sonarqube, CodeClimate, or Snyk for deeper analysis
- Prompt Engineering: Learn advanced techniques for reliable AI outputs in production systems
- Policy as Code: Define maintenance policies declaratively using tools like Open Policy Agent
🚀 Challenge Yourself
- Build a self-improving system where the AI learns from merged vs. rejected PRs
- Create a Slack/Discord bot interface for interactive maintenance requests
- Implement rollback automation for failed automated fixes
- Add cost optimization by caching AI responses for similar issues