Module 9: Common Team Workflows: GitFlow and Trunk-Based Development

Implement professional Git workflows used by testing teams worldwide. Compare GitFlow (feature, develop, release, hotfix branches) with Trunk-Based Development for test projects. Learn when each workflow suits different team sizes and release cycles, with practical setup for test automation teams.

Trunk-Based Development: Continuous Testing with Feature Flags

Why This Matters

The Real-World Problem

You’re on a test automation team where multiple QA engineers work on different test suites simultaneously. Your team currently uses GitFlow with long-lived feature branches, but you’re experiencing painful merge conflicts every week. Test code sits in branches for days or weeks, becoming increasingly out of sync with the main codebase. When integration finally happens, tests break unexpectedly, and the team spends hours resolving conflicts instead of writing new tests.

Meanwhile, your developers have adopted trunk-based development and deploy multiple times per day. Your test code lags behind, creating a bottleneck in the release pipeline. You need a workflow that keeps pace with modern development practices while maintaining test quality.

When You’ll Use This Skill

Trunk-based development with feature flags is essential when:

  • Your team deploys frequently (multiple times per week or daily) and needs tests to keep pace
  • You’re working in cross-functional teams where test code must integrate continuously with application changes
  • Test suite stability is critical and you can’t afford broken tests sitting in long-lived branches
  • You need to test incomplete features without blocking the main test suite from running
  • Your organization practices continuous delivery and requires all code to be releasable at any moment

Common Pain Points Addressed

This lesson solves several persistent challenges in test automation workflows:

  • Merge hell: Eliminate week-long branches that create massive, error-prone merges
  • Test code drift: Keep test code synchronized with rapidly changing application code
  • Feature testing dilemmas: Test work-in-progress features without disrupting stable test suites
  • Integration delays: Discover integration issues within hours instead of days or weeks
  • Release coordination: Decouple test development from release schedules using feature flags
  • Team bottlenecks: Enable multiple testers to work independently without workflow conflicts

Learning Objectives Overview

By the end of this lesson, you’ll have hands-on experience implementing a professional trunk-based workflow for your test automation projects. Here’s what you’ll accomplish:

Understanding Trunk-Based Development Principles
We’ll start by examining the core philosophy behind trunk-based development and contrasting it with the GitFlow approach you learned previously. You’ll understand why major tech companies like Google, Facebook, and Netflix use this workflow for both application and test code, and how it fundamentally changes team collaboration dynamics.

Implementing Feature Flags for Test Control
You’ll learn to use feature flags (also called feature toggles) as your primary tool for managing incomplete or experimental tests. Through practical examples, you’ll configure flags that control which tests execute in different environments, allowing you to safely commit work-in-progress test code to the main branch without disrupting the CI pipeline.

Configuring CI Pipelines for Trunk-Based Workflows
We’ll walk through setting up continuous integration pipelines specifically designed for trunk-based test projects. You’ll configure automated checks that run on every commit to the trunk, implement fast feedback loops, and establish quality gates that maintain test suite stability even with frequent integrations.

Applying Best Practices for Test Code Management
You’ll master the techniques that make trunk-based development successful for test automation: keeping changes small, committing frequently, using short-lived branches (lasting hours, not days), and maintaining a relentlessly green build. We’ll cover practical strategies for refactoring test code incrementally without breaking existing tests.

Evaluating Workflow Appropriateness
Finally, you’ll develop decision-making criteria to determine when trunk-based development fits your team’s needs versus when GitFlow or other workflows might be more suitable. You’ll consider factors like team size, release frequency, test complexity, and organizational maturity to make informed workflow choices.

Let’s begin by setting up a practical trunk-based development environment for your test automation project.


Core Content

Trunk-Based Development with Feature Flags

Core Concepts Explained

Understanding Trunk-Based Development (TBD)

Trunk-Based Development is a version control practice where developers collaborate on code in a single branch called “trunk” (typically main or master). Unlike feature branching where developers work on isolated branches for extended periods, TBD emphasizes:

  • Short-lived branches: Feature branches (if used) last hours or days, not weeks
  • Frequent integration: Code merges into trunk multiple times per day
  • Always releasable trunk: The main branch remains deployable at all times
The diagram below (written in Mermaid) shows how changes flow into the trunk:

graph LR
    A[Developer 1] -->|Merge daily| C[Trunk/Main]
    B[Developer 2] -->|Merge daily| C
    C -->|Deploy| D[Production]
    E[Feature Flag] -.Controls.-> D

The Role of Feature Flags

Feature flags (also called feature toggles) are conditional statements that enable or disable functionality without deploying new code. They’re essential for TBD because they allow:

  • Merging incomplete features into trunk safely
  • Testing in production without exposing features to all users
  • Quick rollback without code deployment
  • A/B testing and gradual rollouts
// Simple feature flag example
if (isFeatureEnabled('new-checkout-flow')) {
    // New code path
    processCheckoutV2();
} else {
    // Old code path (fallback)
    processCheckoutV1();
}

Continuous Testing in TBD with Feature Flags

In TBD, continuous testing ensures that every commit keeps the trunk releasable. With feature flags, your testing strategy must cover four kinds of scenarios (a minimal example follows the list):

  1. Flag-on scenarios: New feature behavior
  2. Flag-off scenarios: Existing behavior (regression testing)
  3. Integration tests: Both code paths work correctly
  4. Flag removal tests: Prepare for flag cleanup
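
For the first two scenarios, the same check can simply run once per flag state. Here is a minimal sketch using the Playwright runner, the flag manager built in Step 2 below, and the selectors used throughout this lesson (file name and selectors are illustrative):

// tests/search.smoke.spec.js - sketch: exercise both flag states in one spec
const { test, expect } = require('@playwright/test');
const featureFlags = require('../config/featureFlags');

for (const newSearchEnabled of [true, false]) {
    test(`search is usable with new-search-ui=${newSearchEnabled}`, async ({ page }) => {
        featureFlags.setFlag('new-search-ui', newSearchEnabled);
        await page.goto('https://practiceautomatedtesting.com');

        const selector = newSearchEnabled
            ? '[data-testid="search-enhanced"]'   // flag-on scenario: new behavior
            : '[data-testid="search-standard"]';  // flag-off scenario: regression
        await expect(page.locator(selector)).toBeVisible();
    });
}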

Practical Implementation

Setting Up Feature Flags in Your Tests

Let’s implement a feature flag system for an e-commerce automation testing project.

Step 1: Install a feature flag library

# For JavaScript/Node.js projects
npm install launchdarkly-node-server-sdk --save-dev

# Or use a simple custom implementation
npm install dotenv --save-dev

Step 2: Create a simple feature flag configuration

// config/featureFlags.js
require('dotenv').config();

class FeatureFlagManager {
    constructor() {
        this.flags = {
            'new-search-ui': process.env.FEATURE_NEW_SEARCH === 'true',
            'enhanced-filtering': process.env.FEATURE_ENHANCED_FILTER === 'true',
            'payment-v2': process.env.FEATURE_PAYMENT_V2 === 'true'
        };
    }

    isEnabled(flagName) {
        return this.flags[flagName] || false;
    }

    // For testing: override flags
    setFlag(flagName, value) {
        this.flags[flagName] = value;
    }
}

module.exports = new FeatureFlagManager();

Step 3: Configure environment files

# .env.development
FEATURE_NEW_SEARCH=false
FEATURE_ENHANCED_FILTER=false
FEATURE_PAYMENT_V2=false

# .env.staging
FEATURE_NEW_SEARCH=true
FEATURE_ENHANCED_FILTER=true
FEATURE_PAYMENT_V2=false

# .env.production
FEATURE_NEW_SEARCH=false
FEATURE_ENHANCED_FILTER=false
FEATURE_PAYMENT_V2=false
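
Note that dotenv's default config() call (used in Step 2) only loads a file named .env. To pick up the environment-specific files above, one option is to resolve the path explicitly and require this module before the flag manager. A sketch, where TEST_ENV is an assumed variable name:

// config/loadEnv.js - sketch: load the .env file for the current environment
const path = require('path');

require('dotenv').config({
    path: path.resolve(__dirname, `../.env.${process.env.TEST_ENV || 'development'}`)
});

You would then run, for example, TEST_ENV=staging npm test.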

Writing Tests for Feature-Flagged Code

Test Structure: Testing Both Scenarios

// tests/search.spec.js
const { test, expect } = require('@playwright/test');
const featureFlags = require('../config/featureFlags');

test.describe('Product Search', () => {
    
    test.describe('with NEW search UI (flag ON)', () => {
        test.beforeEach(() => {
            featureFlags.setFlag('new-search-ui', true);
        });

        test('should display enhanced search bar', async ({ page }) => {
            await page.goto('https://practiceautomatedtesting.com');
            
            // New UI element
            const enhancedSearch = page.locator('[data-testid="search-enhanced"]');
            await expect(enhancedSearch).toBeVisible();
            
            // New feature: autocomplete
            await enhancedSearch.fill('hammer');
            const suggestions = page.locator('[data-testid="search-suggestions"]');
            await expect(suggestions).toBeVisible();
        });

        test('should filter results in real-time', async ({ page }) => {
            await page.goto('https://practiceautomatedtesting.com');
            
            const searchInput = page.locator('[data-testid="search-enhanced"]');
            await searchInput.fill('drill');
            
            // Real-time filtering in new UI: wait for results rather than a fixed timeout
            const results = page.locator('.product-card');
            await expect(results.first()).toBeVisible();
            expect(await results.count()).toBeGreaterThan(0);
        });
    });

    test.describe('with OLD search UI (flag OFF)', () => {
        test.beforeEach(() => {
            featureFlags.setFlag('new-search-ui', false);
        });

        test('should display standard search bar', async ({ page }) => {
            await page.goto('https://practiceautomatedtesting.com');
            
            // Old UI element
            const standardSearch = page.locator('[data-testid="search-standard"]');
            await expect(standardSearch).toBeVisible();
            
            // No autocomplete in old version
            const suggestions = page.locator('[data-testid="search-suggestions"]');
            await expect(suggestions).not.toBeVisible();
        });

        test('should require submit button click', async ({ page }) => {
            await page.goto('https://practiceautomatedtesting.com');
            
            const searchInput = page.locator('[data-testid="search-standard"]');
            await searchInput.fill('drill');
            
            // Old behavior: manual submit required
            const submitBtn = page.locator('[data-testid="search-submit"]');
            await submitBtn.click();
            
            await page.waitForURL('**/search?q=drill');
            const results = page.locator('.product-card');
            expect(await results.count()).toBeGreaterThan(0);
        });
    });
});

Continuous Integration Setup

Step 4: Configure the CI pipeline to test the key flag combinations

# .github/workflows/test-all-flags.yml
name: Test All Feature Flag Combinations

on: [push, pull_request]

jobs:
  test-flags-off:
    runs-on: ubuntu-latest
    name: Test with All Flags OFF
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '18'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Run tests (flags OFF)
        run: npm test
        env:
          FEATURE_NEW_SEARCH: false
          FEATURE_ENHANCED_FILTER: false
          FEATURE_PAYMENT_V2: false

  test-new-search-on:
    runs-on: ubuntu-latest
    name: Test with New Search ON
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '18'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Run tests (new search ON)
        run: npm test
        env:
          FEATURE_NEW_SEARCH: true
          FEATURE_ENHANCED_FILTER: false
          FEATURE_PAYMENT_V2: false

  test-all-flags-on:
    runs-on: ubuntu-latest
    name: Test with All Flags ON
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '18'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Run tests (all flags ON)
        run: npm test
        env:
          FEATURE_NEW_SEARCH: true
          FEATURE_ENHANCED_FILTER: true
          FEATURE_PAYMENT_V2: true

Page Object Pattern with Feature Flags

Step 5: Implement feature flag awareness in Page Objects

// pages/SearchPage.js
const featureFlags = require('../config/featureFlags');

class SearchPage {
    constructor(page) {
        this.page = page;
        
        // Dynamic locators based on feature flag
        this.searchInput = featureFlags.isEnabled('new-search-ui')
            ? page.locator('[data-testid="search-enhanced"]')
            : page.locator('[data-testid="search-standard"]');
    }

    async search(query) {
        await this.searchInput.fill(query);
        
        if (featureFlags.isEnabled('new-search-ui')) {
            // New UI: auto-submits
            await this.page.waitForResponse(resp => 
                resp.url().includes('/api/search') && resp.status() === 200
            );
        } else {
            // Old UI: needs manual submit
            const submitBtn = this.page.locator('[data-testid="search-submit"]');
            await submitBtn.click();
            await this.page.waitForURL('**/search**');
        }
    }

    async getResults() {
        return this.page.locator('.product-card');
    }
}

module.exports = SearchPage;
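
With the flag handling encapsulated in the page object, the tests themselves stay identical across UI variants. A brief usage sketch (file name is illustrative):

// tests/search-po.spec.js - sketch: the test doesn't care which search UI is active
const { test, expect } = require('@playwright/test');
const SearchPage = require('../pages/SearchPage');

test('search returns results regardless of UI variant', async ({ page }) => {
    await page.goto('https://practiceautomatedtesting.com');

    const searchPage = new SearchPage(page);
    await searchPage.search('drill');

    const results = await searchPage.getResults();
    expect(await results.count()).toBeGreaterThan(0);
});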

Testing Flag Transitions

Step 6: Create tests for flag toggle scenarios

// tests/flag-transitions.spec.js
const { test, expect } = require('@playwright/test');
const featureFlags = require('../config/featureFlags');

test.describe('Feature Flag Transitions', () => {
    
    test('should handle flag toggle without breaking user session', async ({ page }) => {
        // User starts with old UI
        featureFlags.setFlag('new-search-ui', false);
        await page.goto('https://practiceautomatedtesting.com');
        
        const oldSearch = page.locator('[data-testid="search-standard"]');
        await expect(oldSearch).toBeVisible();
        
        // Simulate flag toggle (e.g., via admin panel or API)
        featureFlags.setFlag('new-search-ui', true);
        
        // Refresh or navigate to new page
        await page.reload();
        
        // New UI should appear
        const newSearch = page.locator('[data-testid="search-enhanced"]');
        await expect(newSearch).toBeVisible();
        
        // User data should persist
        const cart = page.locator('[data-testid="cart-count"]');
        await expect(cart).toHaveText('0'); // Still accessible
    });

    test('should gracefully fallback if new feature fails', async ({ page }) => {
        featureFlags.setFlag('new-search-ui', true);
        
        await page.goto('https://practiceautomatedtesting.com');
        
        // Mock API failure for new feature
        await page.route('**/api/search/suggest', route => 
            route.abort('failed')
        );
        
        // New UI should still allow basic search
        const searchInput = page.locator('[data-testid="search-enhanced"]');
        await searchInput.fill('drill');
        
        // Fallback behavior kicks in
        const submitBtn = page.locator('[data-testid="search-submit-fallback"]');
        await expect(submitBtn).toBeVisible();
    });
});

Monitoring and Reporting

Step 7: Add flag coverage reporting

// utils/flagCoverageReporter.js
class FlagCoverageReporter {
    constructor() {
        this.flagUsage = new Map();
    }

    recordFlagCheck(flagName, value) {
        if (!this.flagUsage.has(flagName)) {
            this.flagUsage.set(flagName, { on: 0, off: 0 });
        }
        
        const stats = this.flagUsage.get(flagName);
        value ? stats.on++ : stats.off++;
    }

    generateReport() {
        console.log('\nšŸ“Š Feature Flag Coverage Report\n');

        this.flagUsage.forEach((stats, flagName) => {
            const total = stats.on + stats.off;
            // 100% when ON and OFF are exercised equally; 0% when one path is never tested
            const coverage = total > 0 ?
                ((Math.min(stats.on, stats.off) / total) * 200).toFixed(1) : 0;

            console.log(`${flagName}:`);
            console.log(`  ON:  ${stats.on} tests`);
            console.log(`  OFF: ${stats.off} tests`);

            if (stats.on === 0 || stats.off === 0) {
                console.log(`  Coverage: ${coverage}%`);
                console.log(`  āš ļø  WARNING: ${stats.on === 0 ? 'ON' : 'OFF'} path not tested!\n`);
            } else {
                console.log(`  Coverage: ${coverage}% (both paths tested)\n`);
            }
        });
    }
}

module.exports = new FlagCoverageReporter();
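
The reporter only produces numbers if flag checks are actually recorded. One natural place to do that is inside the flag manager's isEnabled method; the excerpt below is a sketch assuming the FeatureFlagManager from Step 2, with generateReport() called from a global teardown or afterAll hook:

// config/featureFlags.js - excerpt (sketch): record every flag check
const flagCoverage = require('../utils/flagCoverageReporter');

class FeatureFlagManager {
    // ...constructor and setFlag as shown in Step 2...

    isEnabled(flagName) {
        const value = this.flags[flagName] || false;
        flagCoverage.recordFlagCheck(flagName, value);
        return value;
    }
}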
# Example output
$ npm test

šŸ“Š Feature Flag Coverage Report

new-search-ui:
  ON:  8 tests
  OFF: 8 tests
  Coverage: 100.0% (both paths tested)

enhanced-filtering:
  ON:  5 tests
  OFF: 5 tests
  Coverage: 100.0% (both paths tested)

payment-v2:
  ON:  3 tests
  OFF: 0 tests
  Coverage: 0.0%
  āš ļø  WARNING: OFF path not tested!

Common Mistakes and How to Avoid Them

Mistake 1: Testing Only the New Feature

Wrong approach:

// āŒ Only testing flag ON scenario
test('new search works', async ({ page }) => {
    featureFlags.setFlag('new-search-ui', true);
    // ... test new feature
});

Correct approach:

// āœ… Test BOTH scenarios
test.describe('Search functionality', () => {
    test('with new UI', async ({ page }) => {
        featureFlags.setFlag('new-search-ui', true);
        // ... test new feature
    });
    
    test('with old UI (regression)', async ({ page }) => {
        featureFlags.setFlag('new-search-ui', false);
        // ... test existing feature still works
    });
});

Mistake 2: Not Cleaning Up Old Flags

Feature flags should be temporary. Create a tracking system:

// flag-registry.js
module.exports = {
    flags: [
        {
            name: 'new-search-ui',
            createdDate: '2024-01-15',
            targetRemovalDate: '2024-03-15',
            status: 'active'
        },
        {
            name: 'old-checkout', // Flag to remove old code
            createdDate: '2023-11-01',
            targetRemovalDate: '2024-01-01',
            status: 'overdue' // āš ļø Should be removed!
        }
    ]
};
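
With a registry in place, a small test can fail the build once a flag outlives its removal date, which keeps cleanup visible. A sketch using the same Playwright test runner (file name is illustrative):

// tests/flag-hygiene.spec.js - sketch: fail when a flag is past its target removal date
const { test, expect } = require('@playwright/test');
const { flags } = require('../flag-registry');

test('no active feature flag is past its target removal date', () => {
    const overdue = flags
        .filter(flag => flag.status !== 'removed')
        .filter(flag => new Date(flag.targetRemovalDate) < new Date());

    expect(overdue.map(flag => flag.name)).toEqual([]);
});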

Mistake 3: Hardcoding Flag Values in Tests

Wrong:

// āŒ Hardcoded - difficult to change
if (true) { // new feature flag
    // test code
}

Correct:

// āœ… Centralized configuration
if (featureFlags.isEnabled('new-search-ui')) {
    // test code
}

Mistake 4: Forgetting Default Values

Always provide a safe default so that an unknown or misconfigured flag falls back to the existing behavior (flag off) instead of breaking the run, as shown below.
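
A minimal sketch of the difference, reusing the isEnabled pattern from Step 2:

Wrong:

// āŒ No default - an unregistered flag returns undefined and hides configuration mistakes
isEnabled(flagName) {
    return this.flags[flagName];
}

Correct:

// āœ… Safe default - unknown or unset flags behave as OFF
isEnabled(flagName) {
    return this.flags[flagName] || false;
}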


Hands-On Practice

Exercise and Conclusion

šŸŽÆ Hands-On Exercise

Exercise: Implement Feature-Flagged Testing Pipeline

Objective: Build a trunk-based development workflow with feature flags and automated tests that verify both enabled and disabled feature states.

Task

Create a simple e-commerce cart feature with feature flags and automated tests that run in CI/CD, ensuring the application works correctly regardless of flag state.

Step-by-Step Instructions

Step 1: Set Up Feature Flag Configuration

// config/featureFlags.js
class FeatureFlags {
  constructor() {
    this.flags = {
      NEW_CHECKOUT_FLOW: process.env.FEATURE_NEW_CHECKOUT === 'true',
      EXPRESS_SHIPPING: process.env.FEATURE_EXPRESS_SHIPPING === 'true'
    };
  }

  isEnabled(flagName) {
    return this.flags[flagName] || false;
  }
}

module.exports = new FeatureFlags();

Step 2: Implement Feature-Flagged Code

// src/checkout.js
const featureFlags = require('../config/featureFlags');

class CheckoutService {
  calculateTotal(items, shippingOption = 'standard') {
    const subtotal = items.reduce((sum, item) => sum + item.price, 0);
    
    if (featureFlags.isEnabled('NEW_CHECKOUT_FLOW')) {
      // New checkout logic with different tax calculation
      const tax = subtotal * 0.08;
      let shipping = this.getShippingCost(shippingOption);
      return { subtotal, tax, shipping, total: subtotal + tax + shipping };
    } else {
      // Legacy checkout
      const tax = subtotal * 0.10;
      return { subtotal, tax, total: subtotal + tax + 5.99 };
    }
  }

  getShippingCost(option) {
    if (featureFlags.isEnabled('EXPRESS_SHIPPING') && option === 'express') {
      return 15.99;
    }
    return 5.99;
  }
}

module.exports = CheckoutService;
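
Before writing the tests, it can help to sanity-check the numbers by running the service directly. A sketch (scripts/checkTotals.js is an assumed path; run it with node from the project root):

// scripts/checkTotals.js - sketch: print totals for the legacy path
const CheckoutService = require('../src/checkout');

const items = [{ id: 1, price: 50 }, { id: 2, price: 30 }];
const checkout = new CheckoutService();

// With no FEATURE_* variables set, the legacy path applies:
// { subtotal: 80, tax: 8, total: 93.99 }
console.log(checkout.calculateTotal(items));

// Re-run with FEATURE_NEW_CHECKOUT=true to see the new flow's figures instead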

Step 3: Write Feature Flag Tests

// tests/checkout.test.js
const CheckoutService = require('../src/checkout');
const featureFlags = require('../config/featureFlags');

describe('Checkout Service with Feature Flags', () => {
  let checkout;
  const testItems = [
    { id: 1, price: 50 },
    { id: 2, price: 30 }
  ];

  beforeEach(() => {
    checkout = new CheckoutService();
  });

  describe('Legacy Checkout Flow (Flag OFF)', () => {
    beforeAll(() => {
      featureFlags.flags.NEW_CHECKOUT_FLOW = false;
      featureFlags.flags.EXPRESS_SHIPPING = false;
    });

    test('should calculate total with 10% tax and fixed shipping', () => {
      const result = checkout.calculateTotal(testItems);
      
      expect(result.subtotal).toBe(80);
      expect(result.tax).toBe(8); // 10% tax
      expect(result.total).toBe(93.99); // 80 + 8 + 5.99
    });
  });

  describe('New Checkout Flow (Flag ON)', () => {
    beforeAll(() => {
      featureFlags.flags.NEW_CHECKOUT_FLOW = true;
      featureFlags.flags.EXPRESS_SHIPPING = false;
    });

    test('should calculate total with 8% tax and standard shipping', () => {
      const result = checkout.calculateTotal(testItems, 'standard');
      
      expect(result.subtotal).toBe(80);
      expect(result.tax).toBe(6.4); // 8% tax
      expect(result.shipping).toBe(5.99);
      expect(result.total).toBe(92.39);
    });
  });

  describe('Express Shipping Feature (Flag ON)', () => {
    beforeAll(() => {
      featureFlags.flags.NEW_CHECKOUT_FLOW = true;
      featureFlags.flags.EXPRESS_SHIPPING = true;
    });

    test('should apply express shipping cost when selected', () => {
      const result = checkout.calculateTotal(testItems, 'express');
      
      expect(result.shipping).toBe(15.99);
      expect(result.total).toBe(102.39);
    });
  });
});
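
Because these describe blocks mutate a shared singleton, it is worth restoring the original values when the file finishes so flag state does not leak into other specs. A minimal sketch to add at the top level of the test file:

// Snapshot the flags before any tests run and restore them afterwards (sketch)
const originalFlags = { ...featureFlags.flags };

afterAll(() => {
  Object.assign(featureFlags.flags, originalFlags);
});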

Step 4: Create CI/CD Pipeline Configuration

# .github/workflows/trunk-testing.yml
name: Trunk-Based Testing with Feature Flags

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test-all-flag-combinations:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        checkout_flag: ['true', 'false']
        shipping_flag: ['true', 'false']
    
    steps:
      - uses: actions/checkout@v2
      
      - name: Setup Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16'
      
      - name: Install dependencies
        run: npm install
      
      - name: Run tests with flag combination
        env:
          FEATURE_NEW_CHECKOUT: ${{ matrix.checkout_flag }}
          FEATURE_EXPRESS_SHIPPING: ${{ matrix.shipping_flag }}
        run: |
          echo "Testing with NEW_CHECKOUT=${{ matrix.checkout_flag }}, EXPRESS_SHIPPING=${{ matrix.shipping_flag }}"
          npm test
      
      - name: Report coverage
        run: npm run coverage

  integration-test:
    runs-on: ubuntu-latest
    needs: test-all-flag-combinations
    
    steps:
      - uses: actions/checkout@v3
      
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      
      - name: Install dependencies
        run: npm install
      
      - name: Run integration tests (all flags enabled)
        env:
          FEATURE_NEW_CHECKOUT: 'true'
          FEATURE_EXPRESS_SHIPPING: 'true'
        run: npm run test:integration

Step 5: Add Feature Flag Monitoring Test

// tests/featureFlag.monitoring.test.js
const featureFlags = require('../config/featureFlags');

describe('Feature Flag Monitoring', () => {
  test('should have all expected flags defined', () => {
    const expectedFlags = ['NEW_CHECKOUT_FLOW', 'EXPRESS_SHIPPING'];
    const definedFlags = Object.keys(featureFlags.flags);
    
    expectedFlags.forEach(flag => {
      expect(definedFlags).toContain(flag);
    });
  });

  test('should return boolean values for all flags', () => {
    Object.values(featureFlags.flags).forEach(flagValue => {
      expect(typeof flagValue).toBe('boolean');
    });
  });

  test('should log current flag states for debugging', () => {
    console.log('Current Feature Flag States:', featureFlags.flags);
    expect(true).toBe(true); // Always passes, but logs state
  });
});

Expected Outcome

  • āœ… Tests pass for all feature flag combinations (4 combinations)
  • āœ… CI/CD pipeline runs automatically on commits to main branch
  • āœ… Application works correctly with flags ON and OFF
  • āœ… Clear test reports showing which flag combinations were tested
  • āœ… No broken code merged to trunk

Solution Approach

  1. Feature flags control behavior without code branching
  2. Matrix testing ensures all combinations work
  3. Continuous integration validates every commit
  4. Monitoring tests track flag states and prevent configuration issues
  5. Gradual rollout possible by changing environment variables in production

šŸŽ“ Key Takeaways

  • Trunk-based development with feature flags allows multiple developers to commit incomplete features to the main branch safely, enabling true continuous integration without long-lived branches

  • Test all feature flag states systematically using matrix testing strategies in CI/CD to ensure your application works correctly regardless of which features are enabled or disabled

  • Feature flags decouple deployment from release, allowing you to deploy code to production with features hidden, then enable them gradually through configuration changes rather than code deployments

  • Automated testing must cover flag combinations to prevent integration issues; use combinatorial testing strategies when you have multiple flags to avoid exponential test explosion

  • Monitor and clean up feature flags regularly after full rollout to prevent technical debt accumulation and maintain code clarity


šŸš€ Next Steps

What to Practice

  1. Add more complex flag scenarios - Implement dependent flags (Flag B only works if Flag A is enabled)
  2. Create A/B testing setup - Extend flags to support percentage-based rollouts (e.g., 20% of users get the new feature)
  3. Build a flag dashboard - Create a simple UI to view and toggle feature flags in different environments
  4. Implement flag expiration - Add timestamps to flags and create tests that warn when flags are older than 30 days
Topics to Explore Next

  • Advanced Feature Flag Patterns: Kill switches, ops flags, permission flags, and experiment flags
  • Progressive Delivery: Canary releases, blue-green deployments, and ring-based deployment strategies
  • Contract Testing: Using Pact or similar tools to test service boundaries with different flag states
  • Observability & Monitoring: Adding metrics and logging around feature flag usage in production
  • Feature Flag Management Platforms: LaunchDarkly, Split.io, Unleash, and when to use them vs. custom solutions

Pro Tip: Start with simple boolean flags, but plan for migration to a feature flag management system once you have more than 5-10 active flags or need advanced targeting capabilities.