Module 6: Merge vs Rebase: Choosing the Right Strategy
Compare merge and rebase strategies side by side using real test automation scenarios. Understand the trade-offs between preserving complete history and maintaining a clean, linear history. Learn team conventions and when each approach is most appropriate for test projects.
Real-World Scenario: Multi-Developer Test Development
Why This Matters
Picture this: You’re working on automated tests for a critical login feature. Meanwhile, your teammate is updating tests for the checkout flow. Both of you are making multiple commits, fixing bugs, and refining your work. When it’s time to integrate your changes into the main branch, you face a critical decision: merge or rebase?
This decision impacts more than just your Git history—it affects:
- Code review efficiency: Can reviewers easily understand what changed and why?
- Bug tracking: When a test fails in production, can you quickly trace which change introduced the issue?
- Team collaboration: Does everyone follow the same conventions, or is your repository a confusing maze of different approaches?
- Rollback capability: If something breaks, can you cleanly revert changes without affecting other work?
Real-World Problem This Solves
In test automation teams, you’ll frequently encounter these scenarios:
- Parallel test development: Multiple engineers creating tests for different features simultaneously
- Test maintenance conflicts: Two people updating the same test framework files or configuration
- Long-running test branches: Feature branches that need to stay updated with main branch changes over days or weeks
- Release coordination: Integrating test suites from multiple team members before a deployment
Without understanding merge vs. rebase strategies, teams often end up with:
- Cluttered commit histories that obscure the actual test changes
- Repeated merge conflicts that slow down development
- Difficulty identifying when and why a test was modified
- Inconsistent practices that confuse team members
When You’ll Use This Skill
You’ll apply these strategies daily when:
- Integrating your test changes with the main branch
- Updating your feature branch with the latest test framework changes
- Preparing pull requests for code review
- Coordinating test releases with multiple contributors
- Maintaining test suites across long-lived branches
- Establishing Git workflows for your test automation team
Learning Objectives Overview
This lesson takes you through realistic test automation scenarios to build practical decision-making skills. Here’s what you’ll accomplish:
1. Compare Merge and Rebase Strategies Side-by-Side
You’ll work through hands-on scenarios where you’ll:
- Execute both merge and rebase operations on the same test code
- Visualize the resulting commit histories using Git tools
- See exactly how each strategy handles the same integration challenge
- Compare the outcomes when multiple developers work on related test files
By the end, you’ll have concrete examples showing when each approach produces better results.
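As a quick preview, here is the basic shape of both operations (the branch name is illustrative):
# Strategy 1: merge (records an explicit merge commit)
git checkout main
git merge feature/login-tests

# Strategy 2: rebase (replays your commits onto main, then fast-forwards)
git checkout feature/login-tests
git rebase main
git checkout main
git merge --ff-only feature/login-tests

# Inspect the resulting histories side by side
git log --oneline --graph --all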
2. Understand the Trade-offs Between History Preservation and Linearization
You’ll analyze real examples demonstrating:
- How merge commits preserve the complete development timeline (including all your “work in progress” commits)
- How rebase creates a clean, linear story (as if you wrote perfect code the first time)
- The implications for debugging test failures months later
- How each approach affects team code reviews and pull request readability
We’ll examine actual Git graphs and commit logs so you can make informed decisions for your projects.
3. Learn Team Conventions and When Each Approach is Most Appropriate
You’ll explore:
- Industry-standard workflows (GitHub Flow, GitFlow) and their merge/rebase conventions
- Decision frameworks: When to merge (integration commits, release branches) vs. when to rebase (feature branches, local cleanup)
- How to establish and document team conventions for test automation projects (a small configuration example follows this list)
- Practical examples from real test engineering teams
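For instance, part of a convention can be encoded directly in Git configuration. A minimal sketch; these defaults are one possible starting point, not a universal rule:
# Per-repository defaults a team might standardize on
git config pull.rebase true        # replay local commits on top of upstream when pulling
git config rebase.autoStash true   # stash and restore uncommitted work around the rebase
# Document the companion rule: never rebase commits already pushed to a shared branch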
By the lesson’s end, you’ll be equipped to recommend and implement appropriate Git strategies for your team’s test development workflow.
Let’s dive into the scenarios that will make these concepts concrete and actionable.
Core Content: Multi-Developer Test Development
1. Core Concepts Explained
Understanding Multi-Developer Collaboration Challenges
When multiple developers work on test automation simultaneously, several challenges emerge:
- Code Conflicts: Two developers modifying the same test file
- Inconsistent Test Data: Tests interfering with each other’s data setup
- Environment Conflicts: Tests running simultaneously on shared resources
- Version Control Issues: Merge conflicts in test code and configurations
- Communication Gaps: Developers unaware of ongoing test development
The Multi-Developer Workflow
graph TD
    A[Main Branch] --> B[Developer 1: Feature Branch]
    A --> C[Developer 2: Feature Branch]
    B --> D[Local Tests]
    C --> E[Local Tests]
    D --> F[Pull Request]
    E --> G[Pull Request]
    F --> H[Code Review]
    G --> H
    H --> I[CI/CD Pipeline]
    I --> J{Tests Pass?}
    J -->|Yes| K[Merge to Main]
    J -->|No| L[Fix Issues]
    L --> D
    L --> E
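In day-to-day commands, one developer's path through that workflow looks roughly like this (the branch name is illustrative):
git checkout main
git pull                                  # start from the latest main
git checkout -b test/login-functionality  # isolated feature branch
# ...write and run tests locally, committing as you go...
git push -u origin test/login-functionality
# Open a pull request; CI runs the suite, reviewers approve, then it merges to main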
Key Principles for Collaborative Test Development
1. Test Isolation
Each test should be completely independent, with its own setup and teardown:
// createUser, updateUser, getUser, and cleanup are assumed framework helpers
import { test, expect } from '@playwright/test';

// ❌ BAD: Shared state between tests
let userId = null;

test('create user', async () => {
  userId = await createUser('test@example.com');
  expect(userId).toBeDefined();
});

test('update user', async () => {
  // Depends on previous test
  await updateUser(userId, { name: 'New Name' });
});

// ✅ GOOD: Each test is independent
test('create user', async () => {
  const userId = await createUser('test-create@example.com');
  expect(userId).toBeDefined();
  await cleanup(userId);
});

test('update user', async () => {
  const userId = await createUser('test-update@example.com');
  await updateUser(userId, { name: 'New Name' });
  const user = await getUser(userId);
  expect(user.name).toBe('New Name');
  await cleanup(userId);
});
2. Unique Test Data Generation
Prevent data collisions by using unique identifiers:
// helpers/testDataGenerator.js
export class TestDataGenerator {
  static generateUniqueEmail() {
    const timestamp = Date.now();
    const random = Math.floor(Math.random() * 10000);
    return `test-${timestamp}-${random}@example.com`;
  }

  static generateUniqueUsername() {
    // slice replaces the deprecated substr; same 9-character suffix
    return `user_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
  }

  static generateTestData() {
    return {
      email: this.generateUniqueEmail(),
      username: this.generateUniqueUsername(),
      password: 'TestPass123!',
      timestamp: Date.now()
    };
  }
}
// Usage in tests
import { test, expect } from '@playwright/test';
import { TestDataGenerator } from '../helpers/testDataGenerator';

test('register new user', async ({ page }) => {
  const testData = TestDataGenerator.generateTestData();
  await page.goto('https://practiceautomatedtesting.com/register');
  await page.fill('#email', testData.email);
  await page.fill('#username', testData.username);
  await page.fill('#password', testData.password);
  await page.click('#register-button');
  await expect(page.locator('.success-message')).toBeVisible();
});
2. Version Control Best Practices for Tests
Branch Naming Convention
# Feature branches for new tests
git checkout -b test/login-functionality
git checkout -b test/checkout-flow
# Bug fix branches for flaky tests
git checkout -b fix/flaky-payment-test
# Refactoring branches
git checkout -b refactor/page-objects-structure
Effective Commit Messages for Test Changes
# Good commit messages
git commit -m "test: add login validation test cases"
git commit -m "test: fix flaky search test by adding explicit waits"
git commit -m "refactor: extract common form helpers to utility"
git commit -m "test: update selectors after UI changes"
# Include context in commit body
git commit -m "test: add multi-step checkout test" -m "Covers full checkout flow including:
- Cart management
- Address entry
- Payment processing
- Order confirmation"
Handling Merge Conflicts in Test Code
// Example conflict in a test file
<<<<<<< HEAD
test('verify product search', async ({ page }) => {
  await page.goto('https://practiceautomatedtesting.com');
  await page.fill('#search-input', 'laptop');
  await page.click('#search-button');
  await expect(page.locator('.product-card')).toHaveCount(5);
=======
test('verify product search', async ({ page }) => {
  await page.goto('https://practiceautomatedtesting.com');
  await page.fill('[data-testid="search-input"]', 'laptop');
  await page.click('[data-testid="search-button"]');
  await expect(page.locator('[data-testid="product-card"]')).toBeVisible();
>>>>>>> feature/update-selectors
});

// Resolution: Combine both changes
test('verify product search', async ({ page }) => {
  await page.goto('https://practiceautomatedtesting.com');
  // Use updated selector from feature branch
  await page.fill('[data-testid="search-input"]', 'laptop');
  await page.click('[data-testid="search-button"]');
  // Keep the improved assertion from HEAD
  await expect(page.locator('[data-testid="product-card"]')).toHaveCount(5);
});
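After editing the file into the combined version, how you finish depends on whether the conflict surfaced during a merge or a rebase. A minimal sketch, assuming the conflicted file is tests/search.spec.js:
git add tests/search.spec.js   # mark the conflict as resolved
git commit                     # completes a merge (Git pre-fills the message)
# ...or, if the conflict appeared mid-rebase:
git rebase --continue

# If the resolution goes wrong, back out cleanly:
git merge --abort              # during a merge
git rebase --abort             # during a rebase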
3. Code Review for Test Automation
Test Code Review Checklist
## Test Review Checklist
### Test Quality
- [ ] Tests are independent and can run in any order
- [ ] Test names clearly describe what is being tested
- [ ] Assertions are specific and meaningful
- [ ] No hard-coded waits (sleep/setTimeout)
- [ ] Proper use of waits and synchronization
### Code Quality
- [ ] Follows project coding standards
- [ ] No code duplication
- [ ] Appropriate use of page objects/helpers
- [ ] Good variable and function naming
- [ ] Comments explain complex logic
### Data Management
- [ ] Uses unique test data generation
- [ ] Proper cleanup after tests
- [ ] No dependencies on external data state
### Maintainability
- [ ] Selectors are resilient (data-testid preferred)
- [ ] Easy to understand and modify
- [ ] Well-organized file structure
Example PR Review Comments
// tests/checkout.spec.js
// 💬 Reviewer Comment: Consider extracting this repeated setup
test('checkout with credit card', async ({ page }) => {
  await page.goto('https://practiceautomatedtesting.com/products');
  await page.click('[data-testid="add-to-cart-1"]');
  await page.click('[data-testid="cart-icon"]');
  // ... checkout steps
});

test('checkout with paypal', async ({ page }) => {
  await page.goto('https://practiceautomatedtesting.com/products');
  await page.click('[data-testid="add-to-cart-1"]');
  await page.click('[data-testid="cart-icon"]');
  // ... checkout steps
});

// 💬 Suggested improvement:
// helpers/checkoutHelper.js
export async function addProductAndGoToCart(page, productId = 1) {
  await page.goto('https://practiceautomatedtesting.com/products');
  await page.click(`[data-testid="add-to-cart-${productId}"]`);
  await page.click('[data-testid="cart-icon"]');
}

// Updated tests
import { addProductAndGoToCart } from '../helpers/checkoutHelper';

test('checkout with credit card', async ({ page }) => {
  await addProductAndGoToCart(page);
  // ... specific checkout steps
});
4. Parallel Test Execution Strategies
Configuring Playwright for Parallel Execution
// playwright.config.js
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  // Run tests in parallel across multiple workers
  fullyParallel: true,
  // Number of parallel workers
  workers: process.env.CI ? 2 : 4,
  // Retry failed tests
  retries: process.env.CI ? 2 : 0,
  // Configure projects for different browsers
  projects: [
    {
      name: 'chromium',
      use: {
        browserName: 'chromium',
        // Each worker gets an isolated context
        contextOptions: {
          ignoreHTTPSErrors: true,
        },
      },
    },
    {
      name: 'firefox',
      use: { browserName: 'firefox' },
    },
  ],
  // Shared timeout settings
  timeout: 30000,
  expect: {
    timeout: 5000,
  },
});
Writing Parallel-Safe Tests
// tests/parallel-safe-example.spec.js
import { test, expect } from '@playwright/test';
import { TestDataGenerator } from '../helpers/testDataGenerator';

// ✅ Each test worker gets an isolated browser context
test.describe('User Registration', () => {
  test('register with valid data', async ({ page }) => {
    // Generate unique data per test execution
    const userData = TestDataGenerator.generateTestData();
    await page.goto('https://practiceautomatedtesting.com/register');
    await page.fill('[data-testid="email"]', userData.email);
    await page.fill('[data-testid="username"]', userData.username);
    await page.fill('[data-testid="password"]', userData.password);
    await page.click('[data-testid="register-button"]');
    await expect(page.locator('[data-testid="success-message"]'))
      .toContainText('Registration successful');
  });

  test('register with existing email shows error', async ({ page }) => {
    const userData = TestDataGenerator.generateTestData();
    // First registration
    await page.goto('https://practiceautomatedtesting.com/register');
    await page.fill('[data-testid="email"]', userData.email);
    await page.fill('[data-testid="username"]', userData.username);
    await page.fill('[data-testid="password"]', userData.password);
    await page.click('[data-testid="register-button"]');
    // Try registering with the same email but a different username
    await page.goto('https://practiceautomatedtesting.com/register');
    await page.fill('[data-testid="email"]', userData.email);
    await page.fill('[data-testid="username"]', TestDataGenerator.generateUniqueUsername());
    await page.fill('[data-testid="password"]', userData.password);
    await page.click('[data-testid="register-button"]');
    await expect(page.locator('[data-testid="error-message"]'))
      .toContainText('Email already exists');
  });
});
Running Tests in Parallel
# Run all tests in parallel (default behavior)
npx playwright test
# Run with specific number of workers
npx playwright test --workers=4
# Run serially (one at a time)
npx playwright test --workers=1
# Run specific test file with parallelization
npx playwright test tests/checkout.spec.js --workers=2
# View parallel execution in UI mode
npx playwright test --ui
Expected terminal output:
$ npx playwright test
Running 24 tests using 4 workers
✓ tests/login.spec.js:5:1 › login with valid credentials (1s)
✓ tests/search.spec.js:8:1 › search for products (2s)
✓ tests/cart.spec.js:12:1 › add item to cart (1s)
✓ tests/checkout.spec.js:15:1 › complete checkout (3s)
24 passed (12s)
5. Common Mistakes and Debugging
Common Mistakes
1. Shared Test Data
// ❌ WRONG: Global shared data
const TEST_EMAIL = 'test@example.com'; // Will fail in parallel
// ✅ CORRECT: Generate unique data
const testEmail = TestDataGenerator.generateUniqueEmail();
2. Test Dependencies
// ❌ WRONG: Tests depend on execution order
test('create account', () => { /* ... */ });
test('login to account', () => { /* uses data from previous test */ });
// ✅ CORRECT: Each test is self-contained
test('login to account', () => {
  const account = createTestAccount(); // Setup within test
  login(account);
});
3. Poor Merge Conflict Resolution
// ❌ WRONG: Keeping only your changes
test('my test', () => { /* only your version */ });
// ✅ CORRECT: Evaluate both changes
test('integrated test', () => {
  /* combine improvements from both developers */
});
Debugging Parallel Test Issues
// playwright.config.js - Debugging configuration
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Disable parallelization for debugging
  workers: 1,
  // Enable detailed logging
  use: {
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
  // Increase timeout for debugging
  timeout: 60000,
});
Identifying Flaky Tests
# Run tests multiple times to identify flaky tests
npx playwright test --repeat-each=10
# Run with headed browser to observe behavior
npx playwright test --headed --workers=1
# Generate and view test report
npx playwright test
npx playwright show-report
Best Practices Summary
- Always use unique test data generation
- Write atomic, independent tests
- Use descriptive branch and commit names
- Review tests with the same rigor as production code
- Configure proper parallel execution settings
- Clean up test data after execution
- Communicate with team about ongoing test development
- Use proper waits instead of hard-coded delays (see the sketch below)
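To make that last point concrete, here is a minimal Playwright sketch (the selectors are hypothetical):
// ❌ Hard-coded delay: always burns the full 5 seconds and can still race
await page.waitForTimeout(5000);
await page.click('[data-testid="submit"]');

// ✅ Condition-based: the locator click auto-waits for actionability,
// and the web-first assertion retries until it passes or times out
await page.locator('[data-testid="submit"]').click();
await expect(page.locator('[data-testid="toast"]')).toBeVisible();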
Hands-On Practice
🎯 Hands-On Exercise
Task: Collaborative Test Suite Development
You’ll simulate a multi-developer environment by creating a test automation framework that multiple team members could work on simultaneously without conflicts.
Scenario
Your team is building tests for an e-commerce application. Three developers need to work on different features:
- Developer A: Login functionality
- Developer B: Product search
- Developer C: Shopping cart
Instructions
Step 1: Set Up Project Structure (15 minutes)
project-root/
├── tests/
│   ├── login/
│   ├── search/
│   └── cart/
├── pages/
│   ├── login_page.py
│   ├── search_page.py
│   └── cart_page.py
├── utils/
│   ├── driver_factory.py
│   └── test_data.py
├── config/
│   └── config.yaml
└── requirements.txt
Create this folder structure and initialize a Git repository.
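One way to scaffold it from a shell (assumes bash brace expansion):
mkdir -p tests/{login,search,cart} pages utils config
touch pages/{login_page,search_page,cart_page}.py
touch utils/{driver_factory,test_data}.py config/config.yaml requirements.txt
git init
git add .
git commit -m "chore: scaffold test automation project structure"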
Step 2: Create a Page Object Model (20 minutes)
Implement a base page class and one specific page object:
Starter Code - pages/base_page.py:
class BasePage:
    def __init__(self, driver):
        self.driver = driver

    def find_element(self, locator):
        # TODO: Implement with explicit wait
        pass

    def click(self, locator):
        # TODO: Implement
        pass
Your Task:
- Complete the BasePage class with proper waits
- Create a LoginPage that inherits from BasePage
- Add methods: enter_username(), enter_password(), click_login()
Step 3: Implement Feature-Specific Tests (25 minutes)
Create test files for each feature area:
Starter Code - tests/login/test_login.py:
import pytest
from pages.login_page import LoginPage

class TestLogin:
    def test_valid_login(self, driver):
        # TODO: Implement test
        # 1. Navigate to login page
        # 2. Enter valid credentials
        # 3. Click login
        # 4. Assert successful login
        pass

    def test_invalid_login(self, driver):
        # TODO: Implement test
        pass
Your Task:
- Complete the login tests
- Create at least 2 additional test files in different feature folders
- Use pytest fixtures for driver setup
Step 4: Implement Shared Utilities (15 minutes)
Starter Code - utils/driver_factory.py:
from selenium import webdriver

def get_driver(browser="chrome"):
    # TODO: Implement driver creation
    # Support chrome, firefox, and headless mode
    pass

def quit_driver(driver):
    # TODO: Implement cleanup
    pass
Your Task:
- Complete the driver factory
- Create a conftest.py with shared fixtures
- Add a test data loader in utils/test_data.py
Step 5: Add Version Control Best Practices (10 minutes)
Create necessary files:
- .gitignore - exclude the virtual env, screenshots, logs, and cache
- README.md - setup instructions and test execution commands
- requirements.txt - list all dependencies
Step 6: Simulate Parallel Development (15 minutes)
- Create 3 branches: feature/login, feature/search, feature/cart
- Make changes to different files in each branch
- Practice merging without conflicts
- Run the full test suite to ensure integration (a possible command flow is sketched below)
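If one branch lags while the others merge first, this is a natural spot to apply this module's theme. A sketch of keeping feature/cart current with rebase:
# main has moved on after feature/login and feature/search merged
git checkout feature/cart
git rebase main                   # replay cart commits on the updated main
# resolve any conflicts, then integrate linearly:
git checkout main
git merge --ff-only feature/cart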
Expected Outcome
✅ You should have:
- A well-organized test project structure
- At least 6 test cases across 3 feature areas
- Reusable page objects and utilities
- Working pytest fixtures and configuration
- Clean Git history with no merge conflicts
- All tests passing when run with:
pytest tests/ -v
Solution Approach
BasePage Implementation:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class BasePage:
    def __init__(self, driver, timeout=10):
        self.driver = driver
        self.wait = WebDriverWait(driver, timeout)

    def find_element(self, locator):
        return self.wait.until(EC.presence_of_element_located(locator))

    def click(self, locator):
        # One reasonable completion of the starter's TODO
        self.wait.until(EC.element_to_be_clickable(locator)).click()
conftest.py:
import pytest
from utils.driver_factory import get_driver

@pytest.fixture(scope="function")
def driver():
    driver = get_driver()
    yield driver
    driver.quit()
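driver_factory.py (one possible completion, assuming Selenium 4's options API):
from selenium import webdriver

def get_driver(browser="chrome", headless=False):
    if browser == "chrome":
        options = webdriver.ChromeOptions()
        if headless:
            options.add_argument("--headless=new")
        return webdriver.Chrome(options=options)
    if browser == "firefox":
        options = webdriver.FirefoxOptions()
        if headless:
            options.add_argument("-headless")
        return webdriver.Firefox(options=options)
    raise ValueError(f"Unsupported browser: {browser}")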
Git Workflow:
# Create branches
git checkout -b feature/login
# Make changes, commit
git checkout main
git merge feature/login
📚 Key Takeaways
What You Learned:
- Modular Architecture: Organizing tests by feature prevents conflicts and allows parallel development. Page Object Model separates test logic from UI implementation.
- Shared Resources Management: Centralized utilities (driver factory, fixtures) ensure consistency while minimizing duplication. Team members use the same foundation but work independently.
- Version Control Strategy: Feature branches, a clear folder structure, and a proper .gitignore configuration enable seamless collaboration without stepping on each other's toes.
- Test Independence: Each test should be self-contained with proper setup/teardown. Independent tests can be developed and executed in parallel without interference.
- Configuration Management: External configuration files (YAML, JSON) allow different developers to run tests in their preferred environments without code changes (a minimal example follows).
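As an illustration of that last takeaway, a minimal config/config.yaml might look like this (the keys are hypothetical and must match whatever your loader expects):
base_url: https://practiceautomatedtesting.com
browser: chrome      # chrome | firefox
headless: true
timeout: 10          # default explicit-wait timeout, in seconds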
When to Apply:
- Teams with 3+ automation engineers
- Projects with multiple features being developed simultaneously
- CI/CD pipelines requiring parallel test execution
- Legacy systems needing gradual test coverage expansion
🚀 Next Steps
Practice These Skills:
- Add More Complexity: Implement data-driven tests using external files (CSV, JSON)
- Enhance Reporting: Integrate Allure or pytest-html for better test reports
- Implement CI/CD: Set up GitHub Actions or Jenkins to run tests automatically
- Add Code Review Practices: Create pull request templates for test changes
Related Topics to Explore:
- Advanced Git Workflows: Gitflow, trunk-based development for test code
- Test Data Management: Factories, builders, and test data generation strategies
- Containerization: Running tests in Docker for environment consistency
- Cross-Browser Testing: Selenium Grid or cloud services (BrowserStack, Sauce Labs)
- API Test Integration: Combining UI and API tests in the same framework
- Performance Testing: Adding JMeter or Locust alongside functional tests
Recommended Resources:
- Martin Fowler’s “Page Object” pattern article
- “Continuous Delivery” by Jez Humble (testing strategies chapter)
- Selenium WebDriver documentation on best practices
- pytest documentation on fixtures and plugins
🎉 Congratulations! You’ve built a production-ready test automation framework designed for team collaboration. This foundation will scale as your team and application grow.