Module 7: Remote Repositories: Collaborating on Test Automation
Connect your local test repository to remote platforms like GitHub and GitLab. Learn to push test changes, pull updates from teammates, manage remote branches, and collaborate effectively on shared test automation projects. Set up CI/CD integration basics for automated test execution.
CI/CD Integration: Automated Test Execution on Push
Why This Matters
Picture this: It’s Friday afternoon, and a developer pushes code that breaks critical functionality. The bug slips through to production because no one ran the full test suite locally. By Monday morning, customer complaints flood in, and your team spends the entire week firefighting instead of delivering new features.
This is the exact problem CI/CD automation solves.
In modern software development, manual test execution is a bottleneck that doesn’t scale. When teams grow, features multiply, and release cycles accelerate, you need automated safety nets that catch issues before they reach production. CI/CD integration transforms your test suite from a tool you remember to run into an automated gatekeeper that validates every code change.
Real-World Applications
You’ll use CI/CD test automation when:
- Every code push needs validation before merging to main branches
- Multiple team members are contributing simultaneously and tests must run consistently
- Different environments (Python versions, browsers, OS platforms) need parallel testing
- Quality gates must be enforced before deployments can proceed
- Test reports need to be automatically generated and shared with stakeholders
- Regression suites are too time-consuming to run manually before each release
Pain Points This Lesson Addresses
Common challenges test engineers face without CI/CD automation:
- ❌ “It worked on my machine” syndrome where tests pass locally but fail elsewhere
- ❌ Forgotten test runs leading to broken code reaching shared branches
- ❌ Inconsistent testing across team members with different local setups
- ❌ Slow feedback loops requiring manual coordination for test results
- ❌ Limited visibility into what broke and when across the development timeline
- ❌ Manual environment setup that’s error-prone and time-consuming
By the end of this lesson, you’ll eliminate these pain points by establishing fully automated test execution pipelines.
Learning Objectives Overview
This advanced lesson takes you from manual test execution to fully automated CI/CD pipelines. Here’s what you’ll accomplish:
GitHub Actions Mastery → You’ll set up complete GitHub Actions workflows, creating configuration files under .github/workflows/ that automatically execute your tests whenever code is pushed or a pull request is opened.
GitLab CI/CD Implementation → You’ll configure GitLab’s powerful CI/CD system using .gitlab-ci.yml files, understanding how runners execute your tests and how to optimize pipeline performance.
YAML Pipeline Configuration → You’ll become proficient at writing pipeline-as-code, creating maintainable configuration files that define exactly how your tests should run in the cloud.
Intelligent Triggering → You’ll implement smart triggers that run different test suites based on what changed—quick smoke tests on every push, full regression on pull requests, and scheduled nightly runs.
Environment Management → You’ll configure dependencies, install packages, set environment variables, and ensure your CI environment perfectly replicates production conditions.
Strategic Pipeline Stages → You’ll organize tests into logical stages (unit, integration, E2E) that run in parallel or sequentially, optimizing for both speed and reliability.
Robust Reporting → You’ll set up automated test result collection, artifact storage, and notification systems so teams immediately know when something breaks.
Pipeline Troubleshooting → You’ll learn to diagnose and fix common failures like timeout issues, flaky tests, and environment configuration problems that arise in CI/CD contexts.
Each objective builds upon the previous one, taking you through a complete journey from basic pipeline setup to production-ready automated testing infrastructure. By the end, you’ll have hands-on experience with both major CI/CD platforms and the confidence to implement automated testing in any project.
Core Content
Core Concepts Explained
Understanding CI/CD and Automated Testing
Continuous Integration/Continuous Deployment (CI/CD) is a development practice where code changes are automatically built, tested, and deployed. When integrated with automated testing, every code push triggers a suite of tests, ensuring code quality before merging or deployment.
The CI/CD Test Automation Workflow
graph LR
A[Developer Push] --> B[CI/CD Trigger]
B --> C[Clone Repository]
C --> D[Install Dependencies]
D --> E[Run Tests]
E --> F{Tests Pass?}
F -->|Yes| G[Merge/Deploy]
F -->|No| H[Notify Developer]
H --> I[Fix Issues]
I --> A
Key Components of CI/CD Test Automation
- Version Control System (VCS): GitHub, GitLab, or Bitbucket
- CI/CD Platform: GitHub Actions, Jenkins, CircleCI, Travis CI
- Test Framework: Selenium, Playwright, Cypress, or similar
- Test Scripts: Your automated test suite
- Configuration Files: YAML/JSON files defining CI/CD pipelines
Setting Up GitHub Actions for Test Automation
Step 1: Create the Workflow Directory Structure
First, create the necessary directory structure in your repository:
# Navigate to your project root
cd your-project-directory
# Create GitHub Actions workflow directory
mkdir -p .github/workflows
# Create a workflow file
touch .github/workflows/test-automation.yml
Step 2: Configure Basic Workflow File
Create a complete GitHub Actions workflow configuration:
# .github/workflows/test-automation.yml
name: Automated Tests on Push

# Trigger configuration - runs on push and pull requests
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

# Define jobs to run
jobs:
  test:
    # Use Ubuntu as the runner environment
    runs-on: ubuntu-latest

    # Define the steps to execute
    steps:
      # Step 1: Checkout the repository code
      - name: Checkout code
        uses: actions/checkout@v3

      # Step 2: Set up Node.js environment
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'

      # Step 3: Install project dependencies
      - name: Install dependencies
        run: npm ci

      # Step 4: Install Playwright browsers (if using Playwright)
      - name: Install Playwright Browsers
        run: npx playwright install --with-deps

      # Step 5: Run automated tests
      - name: Run tests
        run: npm test

      # Step 6: Upload test results as artifacts
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: test-results
          path: test-results/
          retention-days: 30
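The on: block above runs the full suite for every push and pull request. You can make triggering smarter — skipping runs when only unrelated files change, and adding a scheduled nightly run. A sketch, where the paths and the cron expression are illustrative values to adapt to your project:

# Illustrative trigger block - paths and schedule are example values
on:
  push:
    branches: [ main ]
    # Only run when test code or dependencies change
    paths:
      - 'tests/**'
      - 'package.json'
      - 'package-lock.json'
  schedule:
    # Full nightly run at 02:00 UTC
    - cron: '0 2 * * *'

# Cancel superseded runs on the same branch to save CI minutes
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true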
Step 3: Configure Package.json Test Scripts
Ensure your package.json has the correct test commands:
{
  "name": "automated-testing-project",
  "version": "1.0.0",
  "scripts": {
    "test": "playwright test",
    "test:headed": "playwright test --headed",
    "test:chrome": "playwright test --project=chromium",
    "test:report": "playwright show-report"
  },
  "devDependencies": {
    "@playwright/test": "^1.40.0"
  }
}
Creating Advanced CI/CD Test Configurations
Multi-Browser Testing Configuration
# .github/workflows/multi-browser-tests.yml
name: Multi-Browser Test Suite

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      # Don't cancel other jobs if one fails
      fail-fast: false
      matrix:
        # Test across multiple browsers
        browser: [chromium, firefox, webkit]
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm ci

      - name: Install Playwright
        run: npx playwright install --with-deps ${{ matrix.browser }}

      - name: Run tests on ${{ matrix.browser }}
        run: npx playwright test --project=${{ matrix.browser }}

      - name: Upload ${{ matrix.browser }} test results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: test-results-${{ matrix.browser }}
          path: playwright-report/
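The matrix mechanism is not limited to browsers. If you also need coverage across operating systems, you can combine both axes — the sketch below would spawn one job per OS/browser pair, reusing the same checkout, setup, and install steps as above:

jobs:
  test:
    # One job per OS/browser combination (9 in total)
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        browser: [chromium, firefox, webkit]
    steps:
      # ... same checkout, setup, and install steps as above
      - name: Run tests on ${{ matrix.os }} / ${{ matrix.browser }}
        run: npx playwright test --project=${{ matrix.browser }}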
Parallel Test Execution with Sharding
# .github/workflows/parallel-tests.yml
name: Parallel Test Execution

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        # Split tests into 4 parallel shards
        shardIndex: [1, 2, 3, 4]
        shardTotal: [4]
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm ci

      - name: Install Playwright browsers
        run: npx playwright install --with-deps

      - name: Run tests (Shard ${{ matrix.shardIndex }}/${{ matrix.shardTotal }})
        run: npx playwright test --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }}

      - name: Upload shard results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: test-results-shard-${{ matrix.shardIndex }}
          path: test-results/
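Each shard uploads its own artifact, which leaves you with four separate reports. Playwright (1.37+) can merge them into one, provided every shard also runs with the blob reporter (npx playwright test --reporter=blob) and uploads its blob-report/ directory instead — a sketch of a follow-up job under those assumptions:

  merge-reports:
    # Run after all shards, even if some failed
    needs: [test]
    if: always()
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '18'
      - run: npm ci
      # Each shard's artifact lands in all-blob-reports/<artifact-name>/
      - uses: actions/download-artifact@v3
        with:
          path: all-blob-reports
      # Flatten the per-shard blob files into one directory and merge
      - run: |
          mkdir -p blob-reports
          cp all-blob-reports/*/* blob-reports/
          npx playwright merge-reports --reporter html ./blob-reports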
Practical Test Examples with CI/CD Integration
Sample Test Suite for practiceautomatedtesting.com
// tests/login.spec.js
import { test, expect } from '@playwright/test';

test.describe('Login Functionality Tests', () => {
  test.beforeEach(async ({ page }) => {
    // Navigate to the practice website
    await page.goto('https://practiceautomatedtesting.com/login');
  });

  test('should successfully login with valid credentials', async ({ page }) => {
    // Fill in login form
    await page.fill('#username', 'testuser@example.com');
    await page.fill('#password', 'ValidPassword123');
    // Click login button
    await page.click('button[type="submit"]');
    // Verify successful login
    await expect(page).toHaveURL(/dashboard/);
    await expect(page.locator('.welcome-message')).toBeVisible();
  });

  test('should display error for invalid credentials', async ({ page }) => {
    // Attempt login with invalid credentials
    await page.fill('#username', 'invalid@example.com');
    await page.fill('#password', 'WrongPassword');
    await page.click('button[type="submit"]');
    // Verify error message appears
    await expect(page.locator('.error-message')).toContainText('Invalid credentials');
  });

  test('should validate required fields', async ({ page }) => {
    // Click submit without filling fields
    await page.click('button[type="submit"]');
    // Check for validation messages
    await expect(page.locator('#username-error')).toBeVisible();
    await expect(page.locator('#password-error')).toBeVisible();
  });
});
Playwright Configuration for CI/CD
// playwright.config.js
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  // Test directory
  testDir: './tests',
  // Maximum time one test can run
  timeout: 30 * 1000,
  // Run tests in parallel
  fullyParallel: true,
  // Fail build on CI if you accidentally left test.only
  forbidOnly: !!process.env.CI,
  // Retry failed tests on CI
  retries: process.env.CI ? 2 : 0,
  // Number of parallel workers
  workers: process.env.CI ? 2 : undefined,
  // Reporter configuration
  reporter: [
    ['html', { outputFolder: 'playwright-report' }],
    ['json', { outputFile: 'test-results/results.json' }],
    ['junit', { outputFile: 'test-results/junit.xml' }]
  ],
  // Shared settings for all projects
  use: {
    baseURL: 'https://practiceautomatedtesting.com',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
    trace: 'on-first-retry',
  },
  // Configure projects for different browsers
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'firefox',
      use: { ...devices['Desktop Firefox'] },
    },
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'] },
    },
  ],
});
Environment Variables and Secrets Management
Setting Up GitHub Secrets
# Access GitHub repository settings via web interface:
# Repository → Settings → Secrets and variables → Actions → New repository secret
Using Secrets in Workflow
# .github/workflows/secure-tests.yml
name: Tests with Secure Credentials

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm ci

      - name: Run tests with secrets
        env:
          # Access secrets from GitHub repository settings
          TEST_USERNAME: ${{ secrets.TEST_USERNAME }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
          API_KEY: ${{ secrets.API_KEY }}
        run: npm test
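Repository-level secrets are visible to every workflow. For more sensitive credentials you can scope secrets to a GitHub environment with its own protection rules — a minimal sketch, assuming an environment named staging has been created under Repository → Settings → Environments:

jobs:
  test:
    runs-on: ubuntu-latest
    # Secrets defined on the 'staging' environment are only exposed to this job
    environment: staging
    steps:
      # ... same checkout and install steps as above
      - name: Run tests against staging
        env:
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: npm test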
Accessing Environment Variables in Tests
// tests/secure-login.spec.js
import { test, expect } from '@playwright/test';

test('login with credentials from environment', async ({ page }) => {
  // Access environment variables (fail fast if they were not injected)
  const username = process.env.TEST_USERNAME;
  const password = process.env.TEST_PASSWORD;
  if (!username || !password) {
    throw new Error('TEST_USERNAME and TEST_PASSWORD must be set');
  }

  await page.goto('https://practiceautomatedtesting.com/login');
  // Use credentials from CI/CD secrets
  await page.fill('#username', username);
  await page.fill('#password', password);
  await page.click('button[type="submit"]');
  await expect(page).toHaveURL(/dashboard/);
});
Status Badges and Reporting
Adding Status Badge to README
GitHub generates a badge image for every workflow at https://github.com/<owner>/<repo>/actions/workflows/<workflow-file>/badge.svg. Reference it in your README (append ?branch=main to pin it to a specific branch):

<!-- README.md -->
# My Test Automation Project

![Tests](https://github.com/username/repo/actions/workflows/test-automation.yml/badge.svg)

## Project Description
Automated testing suite with CI/CD integration.
Generating Test Reports
# Add to your workflow file
# Note: the HTML reporter already writes playwright-report/ during `npm test`,
# so no separate generate step is needed (npx playwright show-report would
# start a local web server and hang a CI job)

- name: Publish Test Report
  if: always()
  uses: actions/upload-artifact@v3
  with:
    name: playwright-report
    path: playwright-report/
    retention-days: 30

- name: Comment PR with Test Results
  if: github.event_name == 'pull_request'
  uses: actions/github-script@v6
  with:
    script: |
      const fs = require('fs');
      // Field names depend on your JSON reporter's format; Playwright's
      // exposes stats.expected (passed) and stats.unexpected (failed)
      const { stats } = JSON.parse(fs.readFileSync('test-results/results.json'));
      github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: `## Test Results\n✅ Passed: ${stats.expected}\n❌ Failed: ${stats.unexpected}`
      });
Common Mistakes and Debugging
Mistake 1: Not Handling Timeouts in CI Environment
Problem:
// Tests pass locally but timeout in CI
test('slow operation', async ({ page }) => {
  await page.click('.load-data'); // Times out in CI
  await expect(page.locator('.data')).toBeVisible();
});
Solution:
// Increase timeout for CI environment
test('slow operation', async ({ page }) => {
  // Set a longer timeout for this test (60 seconds)
  test.setTimeout(60000);
  await page.click('.load-data');
  await expect(page.locator('.data')).toBeVisible({ timeout: 30000 });
});
Mistake 2: Missing Browser Installation
Problem:
# Error in CI logs:
# Error: browserType.launch: Executable doesn't exist
Solution:
# Always include browser installation step
- name: Install Playwright Browsers
  run: npx playwright install --with-deps
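Reinstalling browsers on every run also slows pipelines down. One common optimization — sketched here with an illustrative cache key — is to cache Playwright’s browser download directory (~/.cache/ms-playwright on Linux runners) between runs:

- name: Cache Playwright browsers
  uses: actions/cache@v3
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}

# Keep the install step: it skips already-cached browsers,
# and --with-deps still ensures system dependencies are present
- name: Install Playwright Browsers
  run: npx playwright install --with-deps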
Mistake 3: Hardcoded Base URLs
Problem:
// Hardcoded URL prevents testing different environments
await page.goto('https://practiceautomatedtesting.com/login');
Solution:
// Use baseURL from config
await page.goto('/login'); // Uses baseURL from playwright.config.js
// Or use environment variables
const BASE_URL = process.env.BASE_URL || 'https://practiceautomatedtesting.com';
await page.goto(`${BASE_URL}/login`);
Debugging CI/CD Test Failures
# Enable debug logging
- name: Run tests with debug info
  run: DEBUG=pw:api npm test

# Upload videos and traces on failure
- name: Upload failure artifacts
  if: failure()
  uses: actions/upload-artifact@v3
  with:
    name: failure-artifacts
    path: |
      test-results/
      playwright-report/
      videos/
      traces/
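If logs and artifacts still don’t reveal the problem, a community action such as mxschmitt/action-tmate can open a live SSH session on the failed runner. A sketch — use it cautiously on public repositories, since anyone with the connection string printed in the log can attach:

# Open an SSH session into the runner when a previous step failed
- name: Debug session on failure
  if: failure()
  # Step-level timeout so a forgotten session can't block the job forever
  timeout-minutes: 15
  uses: mxschmitt/action-tmate@v3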
Viewing Test Logs
# Access workflow logs via GitHub interface:
# Repository → Actions → Select workflow run → View logs
# Or inspect runs from the terminal with the GitHub CLI (gh):
gh run list
gh run view <run-id> --log
# Download artifacts to local machine:
# Actions → Workflow run → Artifacts section → Download
gh run download <run-id>
This comprehensive guide covers CI/CD integration for automated test execution, providing you with the configuration files, test examples, and best practices needed to implement continuous testing in your projects.
Hands-On Practice
🎯 Learning Objectives
- Configure CI/CD pipelines to trigger automated tests on code push
- Integrate test frameworks with popular CI/CD platforms (GitHub Actions, GitLab CI, Jenkins)
- Implement parallel test execution and test result reporting
- Handle test failures and implement quality gates
- Set up notifications and test metrics dashboards
📝 Hands-On Exercise
Task: Build a Complete CI/CD Pipeline for E-commerce Test Suite
You’ll create a comprehensive CI/CD pipeline that automatically runs your test suite whenever code is pushed, executes tests in parallel, reports results, and blocks deployment if tests fail.
Scenario
Your e-commerce application has:
- API tests (15 tests, ~5 min execution)
- UI tests (30 tests, ~15 min execution)
- Integration tests (10 tests, ~8 min execution)
You need to set up a pipeline that runs all tests efficiently and provides clear feedback.
Step-by-Step Instructions
Step 1: Set Up Project Structure
project/
├── .github/
│   └── workflows/
│       └── test-pipeline.yml
├── tests/
│   ├── api/
│   ├── ui/
│   └── integration/
├── test-reports/
├── package.json
└── pytest.ini (or similar config)
Step 2: Create GitHub Actions Workflow
Create .github/workflows/test-pipeline.yml:
name: Automated Test Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  # Job 1: API Tests
  api-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18.x]
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}

      - name: Cache dependencies
        uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}

      - name: Install dependencies
        run: npm ci

      - name: Run API tests
        run: npm run test:api -- --reporters=default --reporters=jest-junit
        env:
          API_URL: ${{ secrets.API_URL }}
          API_KEY: ${{ secrets.API_KEY }}

      - name: Upload API test results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: api-test-results
          path: test-reports/api/

  # Job 2: UI Tests (Parallel)
  ui-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3]
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 18.x

      - name: Install dependencies
        run: npm ci

      - name: Install Playwright browsers
        run: npx playwright install --with-deps

      - name: Run UI tests (Shard ${{ matrix.shard }})
        run: npx playwright test --shard=${{ matrix.shard }}/3
        env:
          BASE_URL: ${{ secrets.BASE_URL }}

      - name: Upload UI test results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: ui-test-results-${{ matrix.shard }}
          path: test-reports/ui/

  # Job 3: Integration Tests
  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:14
        env:
          POSTGRES_PASSWORD: postgres
        # Map the port so the database is reachable on localhost
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install pytest pytest-html pytest-xdist

      - name: Run integration tests
        run: pytest tests/integration -n auto --html=test-reports/integration/report.html
        env:
          DB_HOST: localhost
          DB_PASSWORD: postgres

      - name: Upload integration test results
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: integration-test-results
          path: test-reports/integration/

  # Job 4: Test Report and Quality Gate
  test-summary:
    needs: [api-tests, ui-tests, integration-tests]
    runs-on: ubuntu-latest
    if: always()
    steps:
      - uses: actions/checkout@v3

      - name: Download all test results
        uses: actions/download-artifact@v3
        with:
          path: all-test-reports

      - name: Publish test results
        uses: EnricoMi/publish-unit-test-result-action@v2
        if: always()
        with:
          files: |
            all-test-reports/**/*.xml
            all-test-reports/**/*.json

      - name: Check test pass rate
        run: |
          # Custom script to calculate pass rate
          PASS_RATE=$(node scripts/calculate-pass-rate.js)
          echo "Test pass rate: $PASS_RATE%"
          if [ "$(echo "$PASS_RATE < 95" | bc)" -eq 1 ]; then
            echo "❌ Test pass rate below threshold (95%)"
            exit 1
          fi
          echo "✅ Test pass rate meets threshold"

      - name: Send Slack notification
        if: always()
        uses: 8398a7/action-slack@v3
        with:
          status: ${{ job.status }}
          text: 'Test Pipeline: ${{ job.status }}'
        env:
          # action-slack reads the webhook from this environment variable
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
Step 3: Create Supporting Scripts
scripts/calculate-pass-rate.js:
const fs = require('fs');
const path = require('path');

// Recursively collect *.json result files from the reports directory
function collectReports(dir) {
  return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) return collectReports(full);
    return entry.name.endsWith('.json') ? [full] : [];
  });
}

// Directory can be passed as an argument (defaults to the GitHub Actions path)
const reportsDir = process.argv[2] || './all-test-reports';
let totalTests = 0;
let passedTests = 0;
for (const file of collectReports(reportsDir)) {
  // Playwright's JSON reporter exposes stats.expected (passed) and
  // stats.unexpected (failed); adjust these fields for other frameworks
  const { stats } = JSON.parse(fs.readFileSync(file, 'utf8'));
  passedTests += stats?.expected ?? 0;
  totalTests += (stats?.expected ?? 0) + (stats?.unexpected ?? 0);
}
if (totalTests === 0) {
  console.error('No test results found');
  process.exit(1);
}
console.log(((passedTests / totalTests) * 100).toFixed(2));
Step 4: Configure GitLab CI Alternative
Create .gitlab-ci.yml:
stages:
  - test
  - report

variables:
  PASS_RATE_THRESHOLD: 95

api_tests:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm run test:api
  artifacts:
    when: always
    reports:
      junit: test-reports/api/junit.xml
    paths:
      - test-reports/api/

ui_tests:
  stage: test
  image: mcr.microsoft.com/playwright:v1.40.0
  parallel: 3
  script:
    - npm ci
    - npx playwright test --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL
  artifacts:
    when: always
    paths:
      - test-reports/ui/

integration_tests:
  stage: test
  image: python:3.11
  services:
    - postgres:14
  variables:
    # Job variables are passed to service containers,
    # so this also initializes the postgres service
    POSTGRES_PASSWORD: postgres
  script:
    - pip install -r requirements.txt
    - pytest tests/integration -n auto
  artifacts:
    when: always
    paths:
      - test-reports/integration/

quality_gate:
  stage: report
  image: node:18
  script:
    # Artifacts from the test stage are restored at their original paths
    - PASS_RATE=$(node scripts/calculate-pass-rate.js test-reports)
    - echo "Test pass rate: $PASS_RATE%"
    # awk handles the floating-point comparison portably
    - |
      if awk "BEGIN { exit !($PASS_RATE < $PASS_RATE_THRESHOLD) }"; then
        echo "Quality gate failed"
        exit 1
      fi
  only:
    - main
    - develop
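GitLab can also host the HTML reports directly: any job named pages that publishes a public/ directory is deployed via GitLab Pages. A minimal sketch, assuming the UI job wrote its HTML report into test-reports/ui/:

pages:
  stage: report
  script:
    # Pages serves whatever ends up in public/
    - mkdir -p public
    - cp -r test-reports/ui public/
  artifacts:
    paths:
      - public
  only:
    - main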
Expected Outcome
After completing this exercise, you should have:
- ✅ A working CI/CD pipeline that triggers on push
- ✅ Tests running in parallel (UI tests across 3 shards)
- ✅ Test artifacts collected and published
- ✅ Quality gate that fails pipeline if pass rate < 95%
- ✅ Notifications sent to Slack on completion
- ✅ Test execution time reduced by ~60% through parallelization
Solution Approach
- Pipeline Configuration: Use matrix strategy for parallel execution
- Artifact Management: Upload/download test results between jobs
- Quality Gates: Implement pass rate calculation and threshold checking
- Optimization: Cache dependencies, use appropriate sharding
- Reporting: Aggregate results and publish unified reports
- Notifications: Integrate with communication tools for visibility
🎓 Key Takeaways
- Trigger Configuration: CI/CD pipelines should trigger on relevant events (push, PR, merge) to provide fast feedback on code changes. Branch filtering prevents unnecessary runs.
- Parallel Execution Strategy: Sharding tests across multiple runners dramatically reduces execution time. For large test suites, calculate optimal shard count based on test duration and runner availability.
- Quality Gates Are Critical: Automated quality gates prevent broken code from reaching production. Set realistic thresholds (90-95% pass rate) and fail the pipeline when tests don’t meet standards.
- Comprehensive Reporting: Aggregate test results from all jobs into a unified report. Store test artifacts for debugging failures and tracking trends over time.
- Fast Feedback Loops: Optimize pipeline execution through caching, parallel jobs, and conditional execution. Developers should receive test results within 10-15 minutes maximum.
🚀 Next Steps
Practice These Skills
- Implement flaky test detection: Add retry logic and track tests that fail intermittently
- Set up test result trends: Use tools like Allure or ReportPortal to track test metrics over time
- Create custom quality gates: Beyond pass rate, check coverage, performance, and security scan results
- Optimize pipeline costs: Implement smart test selection to run only affected tests on feature branches
Related Topics to Explore
- Advanced Topics:
  - Container-based test environments with Docker Compose
  - Test distribution across different OS and browser combinations
  - Blue-green deployments with automated rollback on test failure
  - Infrastructure as Code (IaC) for test environment provisioning
- Tools to Learn:
  - Jenkins Pipeline as Code (Jenkinsfile)
  - CircleCI test splitting and parallelism
  - Azure DevOps test plans integration
  - Kubernetes-based test execution with Testkube
- Best Practices:
  - Implementing test quarantine for flaky tests
  - Progressive delivery with feature flags and canary testing
  - Contract testing in microservices CI/CD pipelines