Module 3: Branching and Switching: Isolated Test Development
Learn to create and manage branches for developing new test features, fixing bugs, and experimenting with test frameworks without affecting the main test suite. Understand branch strategies specific to test automation projects and practice switching between different testing contexts.
Experiment Branches and Safe Framework Migration
Why This Matters
As a test automation engineer, you’ve likely experienced the anxiety of modifying your test suite while others depend on it. Perhaps you wanted to try a new testing framework but feared breaking the existing automation. Or maybe you needed to debug a flaky test without disrupting the team’s CI/CD pipeline. These scenarios represent one of the most common pain points in test automation: how do you innovate and experiment safely while maintaining a stable, reliable test suite?
Real-World Problem
Imagine your team runs a critical Selenium-based test suite that executes on every deployment. You’ve discovered Playwright offers better performance and stability for your web application, but you can’t just replace everything overnight. Meanwhile, QA engineers are adding new test cases daily, developers depend on the current tests for their pull requests, and you have three flaky tests that need immediate attention. How do you:
- Evaluate Playwright without breaking the existing Selenium tests?
- Fix urgent test failures without affecting ongoing feature development?
- Allow team members to work on different test scenarios simultaneously?
- Revert quickly if an experiment goes wrong?
This is where Git branching becomes your safety net.
When You’ll Use This Skill
You’ll apply these branching techniques whenever you:
- Migrate testing frameworks (Selenium → Playwright, JUnit → TestNG, Mocha → Jest)
- Experiment with new tools (trying different reporting libraries, assertion frameworks, or test runners)
- Develop new test features (adding API tests to your existing UI test suite)
- Fix flaky or failing tests (isolating your fixes from ongoing development)
- Test infrastructure changes (trying Docker-based test execution or cloud testing platforms)
- Work in teams (preventing your experiments from blocking teammates)
Common Pain Points Addressed
This lesson directly solves challenges that test automation engineers face daily:
❌ Without proper branching: You modify tests directly on the main branch, causing CI/CD failures that block the entire team
✅ With branching: You work in isolation, test thoroughly, and merge only when ready
❌ Without proper branching: Framework migrations are risky all-or-nothing deployments
✅ With branching: You can run old and new frameworks side-by-side, migrate incrementally, and compare results
❌ Without proper branching: One person’s broken test affects everyone’s work
✅ With branching: Each engineer works in their own context without interference
Learning Objectives Overview
By the end of this lesson, you’ll have hands-on experience with branching strategies tailored specifically for test automation projects. Here’s what you’ll accomplish:
🌿 Creating Feature Branches for Test Development
You’ll learn to create dedicated branches for developing new test cases, page objects, and automation utilities. We’ll cover naming conventions like `feature/login-tests` and `test/api-integration` that clearly communicate the branch purpose to your team.
🔬 Using Experiment Branches for Framework Evaluation
You’ll practice creating experiment branches to safely evaluate new testing tools. We’ll walk through a real scenario: evaluating a new assertion library while keeping your production tests running, complete with comparison strategies and decision-making criteria.
🐛 Implementing Bug-Fix Branches
You’ll create hotfix branches specifically for addressing test failures and flaky tests. You’ll learn when to branch from `main` versus `develop`, and how to get urgent fixes deployed quickly without waiting for feature development to complete.
🔄 Switching Between Testing Contexts
You’ll master the `git checkout` and `git switch` commands to move between branches, understanding how your test environment, dependencies, and configurations change with each context switch. We’ll cover handling uncommitted changes and avoiding common pitfalls.
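As a quick preview, a typical context switch with uncommitted work might look like this (the branch and stash names are illustrative):
# Park unfinished work before switching contexts
$ git stash push -m "WIP: checkout test refactor"
$ git switch experiment/playwright-evaluation
# ...experiment, then return to the previous branch and restore your work
$ git switch -
$ git stash pop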
📝 Branch Naming for Test Projects
You’ll understand and apply naming conventions that make sense for test automation: `experiment/playwright-evaluation`, `bugfix/flaky-checkout-test`, `feature/mobile-tests`. You’ll learn prefixes that integrate well with CI/CD and issue tracking systems.
🚀 Safe Framework Migration Strategies
You’ll practice the most powerful technique: running old and new frameworks in parallel branches. We’ll guide you through migrating a subset of tests to a new framework, comparing results, and gradually shifting your entire suite with zero downtime.
Throughout this lesson, you’ll work with realistic test automation scenarios, execute actual Git commands, and build muscle memory for safe, confident experimentation. Let’s transform how you approach test development and framework migrations.
Core Content
1. Core Concepts Explained
Understanding Experiment Branches
Experiment branches are isolated Git branches used to test new automation frameworks, tools, or approaches without affecting your main test suite. They serve as a safe sandbox for:
- Framework migrations (e.g., moving from WebDriverIO to Playwright)
- Tool evaluations (comparing test runners or assertion libraries)
- Proof-of-concept implementations
- Breaking changes that need validation before team-wide adoption
The Safe Migration Strategy
A safe framework migration follows this pattern:
Main Branch → Create Experiment Branch → Migrate Small Test Subset → Validate & Compare → Tests Pass?
- Yes → Expand Migration → Complete Migration → Merge to Main
- No → Fix Issues, then return to Validate & Compare
Key Principles:
- Incremental approach - Migrate tests in small batches
- Parallel validation - Run old and new tests side-by-side
- Rollback capability - Always maintain a working main branch
- Team communication - Document findings and blockers
2. Practical Implementation
Step 1: Creating an Experiment Branch
Start by creating a dedicated branch for your migration experiment:
# Ensure you're on the latest main branch
$ git checkout main
$ git pull origin main
# Create and switch to experiment branch
$ git checkout -b experiment/playwright-migration
# Verify you're on the new branch
$ git branch
main
* experiment/playwright-migration
Step 2: Setting Up a Parallel Framework Structure
Create a directory structure that allows both frameworks to coexist:
# Original structure
tests/
└── webdriverio/
├── login.test.js
├── checkout.test.js
└── profile.test.js
# Add new framework alongside
tests/
├── webdriverio/ # Keep original tests
│ ├── login.test.js
│ ├── checkout.test.js
│ └── profile.test.js
└── playwright/ # New framework tests
├── login.spec.js # Migrated version
└── package.json # Separate dependencies
Step 3: Migrating Your First Test
Let’s migrate a login test from WebDriverIO to Playwright:
Before (Original WebDriverIO test):
// tests/webdriverio/login.test.js
describe('Login functionality', () => {
it('should login with valid credentials', async () => {
await browser.url('https://practiceautomatedtesting.com/login');
await $('#username').setValue('testuser@example.com');
await $('#password').setValue('Test1234!');
await $('button[type="submit"]').click();
await expect($('.welcome-message')).toBeDisplayed();
});
});
After (Migrated Playwright test):
// tests/playwright/login.spec.js
const { test, expect } = require('@playwright/test');
test.describe('Login functionality', () => {
test('should login with valid credentials', async ({ page }) => {
// Navigate to login page
await page.goto('https://practiceautomatedtesting.com/login');
// Fill credentials
await page.fill('#username', 'testuser@example.com');
await page.fill('#password', 'Test1234!');
// Submit form
await page.click('button[type="submit"]');
// Verify successful login
await expect(page.locator('.welcome-message')).toBeVisible();
});
});
Step 4: Installing and Configuring the New Framework
# From the repo root (on your experiment branch), move into the new Playwright directory
$ cd tests/playwright
# Initialize package.json if not exists
$ npm init -y
# Install Playwright
$ npm install -D @playwright/test
# Install browsers
$ npx playwright install
Create a Playwright configuration file:
// tests/playwright/playwright.config.js
const { defineConfig } = require('@playwright/test');
module.exports = defineConfig({
testDir: './',
timeout: 30000,
use: {
baseURL: 'https://practiceautomatedtesting.com',
screenshot: 'only-on-failure',
video: 'retain-on-failure',
},
projects: [
{
name: 'chromium',
use: { browserName: 'chromium' },
},
],
});
Step 5: Running Parallel Validation
Create a comparison script to run both test suites:
#!/bin/bash
# scripts/validate-migration.sh
echo "=== Running Original Tests ==="
cd tests/webdriverio
npm test -- login.test.js
ORIGINAL_EXIT=$?
echo -e "\n=== Running Migrated Tests ==="
cd ../playwright
npx playwright test login.spec.js
MIGRATED_EXIT=$?
echo -e "\n=== Results Comparison ==="
if [ $ORIGINAL_EXIT -eq 0 ] && [ $MIGRATED_EXIT -eq 0 ]; then
echo "✓ Both test suites passed!"
exit 0
else
echo "✗ Test suite comparison failed"
echo "Original: $ORIGINAL_EXIT | Migrated: $MIGRATED_EXIT"
exit 1
fi
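Make the script executable once, then run it from the repository root:
# One-time setup
$ chmod +x scripts/validate-migration.sh
# Run both suites and compare exit codes
$ ./scripts/validate-migration.sh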
Step 6: Documenting Migration Progress
Create a migration tracking document:
# Migration Progress Tracker
## Completed ✓
- [x] login.test.js → login.spec.js (2024-01-15)
- All scenarios passing
- Performance: 2.3s (old) vs 1.8s (new)
## In Progress 🔄
- [ ] checkout.test.js
- Issue: Payment gateway timeout handling differs
- Blocker: Need to investigate Playwright's network mocking
## Pending ⏳
- [ ] profile.test.js
- [ ] search.test.js
- [ ] filters.test.js
## Findings & Blockers
- Playwright's auto-waiting reduces flakiness significantly
- File upload handling requires different approach
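To illustrate the file-upload difference noted above: in Playwright, the idiomatic pattern is page.setInputFiles. A minimal sketch (the page, selector, and file path are illustrative, not part of the suite above):
// tests/playwright/upload.spec.js (illustrative sketch)
const { test, expect } = require('@playwright/test');
test('uploads an avatar image', async ({ page }) => {
  // Resolved against baseURL from playwright.config.js
  await page.goto('/profile');
  // setInputFiles attaches the file straight to the <input type="file">;
  // there is no remote file transfer step as in Selenium-based runners
  await page.setInputFiles('#avatar-upload', 'fixtures/avatar.png');
  await expect(page.locator('.upload-success')).toBeVisible();
});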
Step 7: Committing Your Experiment
# Stage your changes
$ git add tests/playwright/
$ git add scripts/validate-migration.sh
$ git add MIGRATION_PROGRESS.md
# Commit with descriptive message
$ git commit -m "feat: add Playwright migration for login tests
- Migrated login.test.js to Playwright
- Added parallel validation script
- Documented migration approach and findings"
# Push experiment branch
$ git push origin experiment/playwright-migration
Step 8: Creating a Pull Request for Review
# Create PR from command line (using GitHub CLI)
$ gh pr create \
--title "Experiment: Playwright Migration - Login Tests" \
--body "## Purpose
Testing Playwright as potential replacement for WebDriverIO
## What's Changed
- Migrated login test suite
- Added parallel validation
- Performance improved by ~20%
## How to Test
1. Checkout branch
2. Run ./scripts/validate-migration.sh
3. Review test execution time
## Questions for Review
- Is the new syntax more maintainable?
- Should we proceed with full migration?" \
--draft
3. Common Mistakes and Debugging
Mistake 1: Migrating Everything at Once
❌ Wrong Approach:
# Deleting all old tests immediately
$ rm -rf tests/webdriverio/
$ git commit -m "Migrated to Playwright"
✅ Correct Approach:
# Keep both frameworks running
# Migrate incrementally, test by test
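For example, retire an old test only once its migrated counterpart has proven stable (file names follow the structure from Step 2):
# Remove the old test only after playwright/login.spec.js is trusted
$ git rm tests/webdriverio/login.test.js
$ git commit -m "chore: retire WebDriverIO login test after Playwright parity"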
Mistake 2: Not Documenting Framework Differences
❌ Missing Documentation: Tests fail in new framework, but no one knows why.
✅ Document Differences:
## Key Differences
| Feature | WebDriverIO | Playwright |
|---------|-------------|------------|
| Auto-wait | Manual waits | Built-in |
| Selectors | $() | page.locator() |
| Assertions | expect($(el)).toBeDisplayed() | expect(locator).toBeVisible() |
Mistake 3: Ignoring Performance Metrics
Always compare execution times:
// Add timing to your validation script (run from the repo root)
const { execSync } = require('node:child_process');

console.time('Original Tests');
execSync('npm test -- login.test.js', { cwd: 'tests/webdriverio', stdio: 'inherit' });
console.timeEnd('Original Tests');

console.time('Migrated Tests');
execSync('npx playwright test login.spec.js', { cwd: 'tests/playwright', stdio: 'inherit' });
console.timeEnd('Migrated Tests');
Debugging Common Issues
Issue: Tests pass locally but fail in CI
# Check browser versions
$ npx playwright --version
$ node --version
# Ensure CI uses same versions in .github/workflows/test.yml
Issue: Can’t find elements after migration
// Debug selector issues with Playwright Inspector
await page.pause(); // Opens inspector
await page.locator('selector').highlight(); // Visual debugging
Issue: Merge conflicts when syncing with main
# Regularly sync your experiment branch
$ git checkout experiment/playwright-migration
$ git fetch origin
$ git merge origin/main
# Resolve conflicts, then continue
$ git add .
$ git commit -m "chore: sync with main branch"
Rollback Strategy
If migration proves problematic:
# Simply switch back to main - your original tests are untouched
$ git checkout main
# Or archive the experiment for future reference
$ git branch -m experiment/playwright-migration archived/playwright-experiment
$ git push origin archived/playwright-experiment
Next Steps: Once you’ve successfully migrated a subset of tests, gather team feedback through your PR, measure test stability over a week, then decide whether to proceed with full migration or adjust your approach.
Hands-On Practice
Exercise and Conclusion
🏋️ Hands-On Exercise
Task: Implement a Safe Framework Migration Using Experiment Branches
You’re tasked with migrating your test suite from TestNG to JUnit 5 while ensuring zero disruption to your CI/CD pipeline. You’ll use experiment branches to safely validate the new framework before fully committing to the migration.
Scenario
Your team has 50 existing TestNG tests. You need to:
- Create an experiment branch to test JUnit 5 migration
- Implement dual framework support temporarily
- Compare results between frameworks
- Safely roll out the migration
Step-by-Step Instructions
Step 1: Set Up Your Experiment Branch Strategy
# Create feature branch for migration work
git checkout -b feature/junit5-migration
# Create experiment branch for testing
git checkout -b experiment/junit5-validation
Step 2: Implement Framework Abstraction Layer
Create a starter framework that supports both testing libraries:
// TestFrameworkAdapter.java
import java.util.List;

// TestResults is a simple results holder you will define
// (pass/fail counts, execution time, failure details)
public interface TestFrameworkAdapter {
    void runTests(List<Class<?>> testClasses);
    TestResults getResults();
    void generateReport(String outputPath);
}
// TestNGAdapter.java
public class TestNGAdapter implements TestFrameworkAdapter {
// TODO: Implement TestNG execution
}
// JUnit5Adapter.java
public class JUnit5Adapter implements TestFrameworkAdapter {
// TODO: Implement JUnit 5 execution
}
Step 3: Create Configuration-Driven Test Runner
// TestRunnerConfig.java
public class TestRunnerConfig {
private String framework; // "testng", "junit5", or "both"
private boolean compareResults;
public static TestRunnerConfig fromEnvironment() {
// TODO: Read from environment variables or config file
// FRAMEWORK_MODE=both should enable both frameworks and result comparison
}
}
// DualFrameworkRunner.java
public class DualFrameworkRunner {
public void execute(TestRunnerConfig config) {
// TODO: Execute tests based on configuration
// TODO: If mode is "both", run with both frameworks and compare
}
private void compareResults(TestResults testngResults,
TestResults junit5Results) {
// TODO: Compare test results and report discrepancies
}
}
Step 4: Implement Metrics Collection
// MigrationMetrics.java
public class MigrationMetrics {
public void recordTestExecution(String framework,
long duration,
int passed,
int failed) {
// TODO: Record metrics for comparison
}
public void generateComparisonReport() {
// TODO: Generate report comparing:
// - Execution time
// - Pass/fail rates
// - Memory usage
// - Any discrepancies in results
}
}
Step 5: Create Rollback Mechanism
// MigrationController.java
public class MigrationController {
private static final double FAILURE_THRESHOLD = 0.05; // 5%
public MigrationDecision evaluateMigration(
TestResults baseline,
TestResults experiment) {
// TODO: Calculate success rate difference
// TODO: Check if experiment exceeds failure threshold
// TODO: Return PROCEED, ROLLBACK, or NEEDS_REVIEW
}
}
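The starter code references a MigrationDecision type that is not defined above; a minimal version matching the outcomes named in the TODO might be:
// MigrationDecision.java (minimal sketch; extend with metadata as needed)
public enum MigrationDecision {
    PROCEED,      // experiment within threshold; continue the migration
    ROLLBACK,     // regression exceeds the failure threshold
    NEEDS_REVIEW  // ambiguous results; escalate to the team
}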
Step 6: Update CI/CD Pipeline Configuration
# .github/workflows/test-migration.yml
name: Framework Migration Experiment
on: [push, pull_request]
jobs:
baseline-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run TestNG (Baseline)
run: |
export FRAMEWORK_MODE=testng
./gradlew test
- name: Upload TestNG Results
uses: actions/upload-artifact@v4
with:
name: testng-results
path: build/test-results/testng/
experiment-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run JUnit 5 (Experiment)
run: |
export FRAMEWORK_MODE=junit5
./gradlew test
- name: Upload JUnit5 Results
uses: actions/upload-artifact@v4
with:
name: junit5-results
path: build/test-results/junit5/
compare-results:
needs: [baseline-tests, experiment-tests]
runs-on: ubuntu-latest
steps:
# TODO: Download both result sets
# TODO: Run comparison tool
# TODO: Fail if discrepancies exceed threshold
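One way to fill in those TODOs, using the standard actions/download-artifact action (the comparison step invokes a hypothetical ./gradlew compareResults task that you implement around MigrationController):
compare-results:
  needs: [baseline-tests, experiment-tests]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/download-artifact@v4
      with:
        name: testng-results
        path: results/testng/
    - uses: actions/download-artifact@v4
      with:
        name: junit5-results
        path: results/junit5/
    - name: Compare results and enforce threshold
      run: ./gradlew compareResults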
Expected Outcomes
After completing this exercise, you should have:
- ✅ Dual-Framework Support: Tests can run on either TestNG or JUnit 5 based on configuration (see the example commands after this list)
- ✅ Comparison Report: Detailed metrics showing differences between frameworks
- ✅ Safe Rollback: Automated decision-making about proceeding or rolling back
- ✅ CI/CD Integration: Pipeline runs both frameworks in parallel during experiment phase
- ✅ Zero Production Impact: Main branch remains on TestNG until migration is validated
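Once the dual-framework support is wired up, toggling locally might look like this (the FRAMEWORK_MODE handling is the part you implement in TestRunnerConfig):
# Baseline only
$ FRAMEWORK_MODE=testng ./gradlew test
# Experiment only
$ FRAMEWORK_MODE=junit5 ./gradlew test
# Dual run with result comparison
$ FRAMEWORK_MODE=both ./gradlew test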
Solution Approach
Phase 1: Parallel Execution (Weeks 1-2)
- Run both frameworks side-by-side
- Collect metrics and identify discrepancies
- Fix compatibility issues
Phase 2: Validation (Week 3)
- Analyze metrics: execution time, reliability, flakiness
- Get team approval based on data
- Document migration findings
Phase 3: Gradual Rollout (Week 4)
- Merge to feature branch first
- Deploy to staging environment
- Monitor for issues
- Merge to main branch
Phase 4: Cleanup (Week 5)
- Remove TestNG dependencies
- Delete adapter layer
- Update documentation
🎓 Key Takeaways
- Experiment branches isolate risk from your main development flow, allowing you to validate major changes (like framework migrations) without impacting production code or blocking other team members.
- Feature flags and configuration-driven testing enable you to run experiments in production-like environments by toggling between implementations, making it possible to compare old and new approaches with real data.
- Automated metrics and comparison tools are essential for making objective decisions about migrations: track execution time, pass rates, and reliability to determine whether your experiment should proceed or roll back.
- Gradual rollout strategies (canary deployments, percentage-based rollouts) minimize the blast radius when transitioning to new frameworks, allowing you to catch issues early and roll back quickly if needed.
- Always maintain a rollback plan with clear thresholds and automated decision points: know your failure criteria before starting the experiment, not when things go wrong.
🚀 Next Steps
What to Practice
- Implement a smaller migration in your current project (e.g., migrate a single test class to a new assertion library)
- Set up feature flags in your test configuration to toggle between two implementations (see the sketch after this list)
- Create a comparison dashboard that visualizes differences between experiment and control groups
- Practice emergency rollbacks by simulating failures and reverting changes quickly
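For the feature-flag practice item above, a minimal toggle in a JavaScript test setup might look like this (the module names and ASSERT_LIB variable are illustrative):
// test-setup.js (illustrative sketch)
// Both modules are assumed to expose the same interface, so tests stay unchanged
const useNewAssertions = process.env.ASSERT_LIB === 'new';

const assertions = useNewAssertions
  ? require('./assertions-new')  // hypothetical new implementation
  : require('./assertions-old'); // hypothetical current implementation

module.exports = { assertions };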
Related Topics to Explore
- Advanced Feature Flag Patterns: LaunchDarkly, Split.io, or custom solutions
- A/B Testing in Test Automation: Statistical significance and sample sizes
- Observability and Monitoring: Integrating test metrics with Grafana, Datadog, or similar tools
- Trunk-Based Development: How short-lived branches complement experiment strategies
- Chaos Engineering: Applying experiment principles to system reliability testing
- Database Migration Strategies: Parallel run patterns for schema changes
- Blue-Green Deployments: Infrastructure-level experiment patterns
Recommended Reading
- “Accelerate” by Forsgren, Humble & Kim (Chapter on continuous delivery practices)
- Martin Fowler’s “Feature Toggles” article
- Google’s “Testing on the Toilet” series on experimentation