Implementing Smart Test Automation with Testim & Mabl
Why This Matters
Test automation teams spend an estimated 30–50% of their time maintaining broken tests rather than creating new coverage. Every UI change—a redesigned button, a relocated element, an updated workflow—triggers a cascade of test failures that demand immediate attention. Under this maintenance burden, teams face an unappealing choice: abandon automation altogether, or accept that their test suites grow increasingly brittle over time.
Modern AI-powered testing platforms fundamentally change this equation. Testim and Mabl represent the current generation of tools delivering measurable ROI today—not theoretical future benefits. These platforms use AI to automatically repair broken tests, validate visual appearance across browsers without hard-coded assertions, and accelerate test creation through intelligent assistance.
Real-World Problems Solved
The Maintenance Crisis: When your e-commerce site redesigns its checkout flow, traditional test automation requires developers to manually update dozens of element locators across multiple test scripts. With self-healing automation, tests automatically adapt to the new UI structure, identifying elements through multiple AI-powered strategies that go beyond fragile XPath selectors.
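To make the "multiple strategies" idea concrete, here is a minimal sketch of how a self-healing locator might score candidate elements against a stored fingerprint. The fingerprint fields and scoring weights are illustrative only—they are not Testim's or Mabl's actual algorithm:

```javascript
// Conceptual sketch of multi-strategy element matching behind self-healing
// locators. Fingerprint fields and weights are illustrative, not any
// vendor's real algorithm.
function scoreCandidate(candidate, fingerprint) {
  let score = 0;
  if (candidate.dataTest && candidate.dataTest === fingerprint.dataTest) score += 5; // most stable signal
  if (candidate.text === fingerprint.text) score += 3;
  if (candidate.tag === fingerprint.tag) score += 2;
  if (candidate.cssClass === fingerprint.cssClass) score += 1; // least stable
  return score;
}

// Pick the best-matching element on the current page; give up below a
// confidence floor rather than "heal" onto the wrong element.
function healLocator(candidates, fingerprint, minScore = 5) {
  let best = null;
  for (const candidate of candidates) {
    const score = scoreCandidate(candidate, fingerprint);
    if (score >= minScore && (!best || score > best.score)) {
      best = { candidate, score };
    }
  }
  return best ? best.candidate : null;
}
```

Because the match is scored across several attributes, a renamed CSS class alone no longer breaks the test—the locator still matches on the stable attributes.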
Visual Regression at Scale: Validating that your application renders correctly across Chrome, Firefox, Safari, and Edge traditionally requires either tedious manual testing or complex screenshot comparison code with brittle pixel-matching logic. Visual AI platforms handle this automatically, distinguishing meaningful visual differences from acceptable rendering variations.
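The core thresholding idea can be sketched in a few lines: flag a regression only when the fraction of differing pixels exceeds a tolerance. Real visual-AI engines go much further (layout analysis, anti-aliasing and font-rendering awareness), so treat this purely as an illustration of tolerance-based comparison:

```javascript
// Minimal sketch of threshold-based screenshot comparison. `baseline` and
// `current` are flat arrays of pixel values; real engines are far more
// sophisticated than raw pixel counting.
function diffRatio(baseline, current) {
  if (baseline.length !== current.length) return 1; // size change: treat as fully different
  let differing = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== current[i]) differing++;
  }
  return baseline.length === 0 ? 0 : differing / baseline.length;
}

function isVisualRegression(baseline, current, tolerance = 0.05) {
  return diffRatio(baseline, current) > tolerance;
}
```

A 1% rendering variation passes under a 5% tolerance, while a 10% difference is flagged for review—the same accept/flag decision you will configure later with Mabl's visual checkpoints.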
The Speed vs. Quality Tradeoff: Fast-moving development teams often skip automation because creating comprehensive test coverage takes too long. AI copilots reduce this burden by generating test logic from natural language descriptions, transforming “verify the user can add items to cart and complete checkout” into working automation in minutes rather than hours.
When You’ll Use These Skills
You’ll apply this knowledge whenever you need to:
- Build robust automation quickly for web applications with frequently changing UIs
- Reduce test maintenance overhead that’s consuming your team’s productivity
- Implement visual testing without specialized image comparison infrastructure
- Enable non-technical team members (QA analysts, product managers) to contribute to test automation
- Make build-vs-buy decisions about test automation tooling with data-driven criteria
- Demonstrate automation ROI to stakeholders through reduced maintenance time metrics
Learning Objectives Overview
This lesson provides hands-on implementation experience with both Testim and Mabl, enabling you to evaluate and deploy these platforms effectively.
Environment Setup (Objective 1): You’ll create accounts on both platforms, configure test environments, and connect them to sample web applications. This foundation ensures you can immediately begin building tests and understand each platform’s architecture.
Self-Healing Implementation (Objective 2): You’ll build tests that deliberately target elements likely to change, then modify the application UI to trigger self-healing behavior. You’ll examine how AI-powered locators work under the hood and configure healing sensitivity for your reliability requirements.
Visual AI Validation (Objective 3): You’ll implement cross-browser visual testing using visual AI checkpoints, configure acceptable variation thresholds, and create baseline images. You’ll see how these tools distinguish real bugs from acceptable rendering differences between browsers.
AI Copilot Usage (Objective 4): You’ll use natural language commands to generate test steps, refine test logic through conversational interaction, and understand the capabilities and limitations of current AI assistance features.
Platform Comparison (Objective 5): You’ll evaluate both no-code and coded approaches through practical exercises, building the same test scenario in different ways. You’ll develop decision criteria for choosing between platforms based on team composition, application complexity, and maintenance requirements.
End-to-End Testing (Objective 6): You’ll create comprehensive test suites that combine UI and API testing, implementing realistic scenarios like user registration flows that require both backend data setup and frontend validation.
ROI Measurement (Objective 7): You’ll track and analyze test maintenance time before and after implementing AI features, calculate time savings from self-healing capabilities, and build business cases for these tools using concrete metrics.
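The ROI calculation in Objective 7 reduces to simple arithmetic. This back-of-envelope model uses example figures—replace every input with your own tracked metrics:

```javascript
// Back-of-envelope maintenance ROI model. All inputs are example figures
// to be replaced with your team's tracked metrics.
function maintenanceRoi({ hoursBeforePerMonth, hoursAfterPerMonth, hourlyRate, toolCostPerMonth }) {
  const savedHours = hoursBeforePerMonth - hoursAfterPerMonth;
  const monthlySavings = savedHours * hourlyRate;
  return {
    savedHours,
    monthlySavings,
    netMonthlyBenefit: monthlySavings - toolCostPerMonth,
  };
}
```

For example, a team that cuts maintenance from 40 to 10 hours per month at a $75/hour loaded rate, against a hypothetical $1,000/month license, nets $1,250/month—the kind of concrete figure stakeholders respond to.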
By the end of this lesson, you’ll have working test suites in both platforms and the expertise to recommend and implement the right AI testing solution for your organization’s needs.
Core Content
1. Core Concepts Explained
Understanding Smart Test Automation
Smart test automation platforms like Testim and Mabl leverage AI and machine learning to create more resilient and maintainable tests. Unlike traditional automation tools that rely solely on static locators, these platforms:
- Self-heal tests by adapting to UI changes automatically
- Generate locators intelligently using multiple attributes
- Provide visual testing capabilities out of the box
- Enable codeless test creation while supporting custom code
- Integrate seamlessly with CI/CD pipelines
Testim Overview
Testim is an AI-powered test automation platform that combines codeless test creation with the flexibility of custom JavaScript. Key features include:
- Smart Locators: AI-driven element identification that adapts to changes
- Test Recorder: Chrome extension for capturing user interactions
- Custom JavaScript: Ability to extend tests with custom code
- Cross-browser Testing: Execute tests across multiple browsers
- Grid Integration: Built-in grid for parallel execution
Mabl Overview
Mabl is an intelligent test automation platform focused on continuous testing. Key features include:
- Auto-healing: Automatically adjusts to application changes
- Visual Testing: Built-in screenshot comparison
- Performance Testing: Integrated performance metrics
- API Testing: Combine UI and API tests in unified workflows
- Insights: ML-driven analytics on test results
2. Setting Up Testim
Step 1: Installation and Account Setup
Create a Testim Account
- Navigate to https://app.testim.io
- Sign up for a free account
- Verify your email address
Install Testim Chrome Extension
- Go to Chrome Web Store
- Search for “Testim Recorder”
- Click “Add to Chrome”
- Pin the extension to your toolbar
Create Your First Project
- Log into Testim dashboard
- Click “Create New Project”
- Name your project (e.g., “Practice Test Suite”)
- Select project type: “Web Application”
Step 2: Recording Your First Test in Testim
Start Recording
- Click “New Test” in Testim dashboard
- Enter test name: “User Login Flow”
- Click the Testim extension icon
- Click “Record”
Capture Test Steps
// Testim automatically captures these actions as you perform them:
// 1. Navigate to URL
// 2. Click elements
// 3. Type text
// 4. Validate text/elements
// Example: Login test on practiceautomatedtesting.com
// Base URL: http://practiceautomatedtesting.com/login
// Step 1: Navigate (automatically captured)
// Step 2: Fill username field
// Testim creates smart locator: input[data-test="username"]
// Step 3: Fill password field
// Smart locator: input[data-test="password"]
// Step 4: Click login button
// Smart locator: button[data-test="login-button"]
// Step 5: Verify success message
// Validation: element contains text "Welcome"
Step 3: Adding Custom JavaScript in Testim
Testim allows you to enhance tests with custom code:
// Custom step: Validate specific cart total
// Click "Add Step" → "Custom Code" in Testim editor
// Access page elements
const cartTotal = document.querySelector('[data-test="cart-total"]');
if (!cartTotal) {
  throw new Error('Cart total element not found');
}
const totalValue = parseFloat(cartTotal.textContent.replace('$', ''));
// Custom validation
if (totalValue < 100) {
  throw new Error(`Cart total ${totalValue} is below minimum`);
}
// Store value for later use
exportsTest.cartValue = totalValue;
// Custom step: Generate dynamic test data
const timestamp = Date.now();
const testEmail = `test.user.${timestamp}@example.com`;
// Store in test variables
exportsTest.email = testEmail;
exportsTest.timestamp = timestamp;
// Use in subsequent steps via {{email}}
Step 4: Configuring Testim Grid Execution
// Testim configuration in testim.json
{
  "projectId": "your-project-id",
  "token": "your-api-token",
  "grid": "testim-grid",
  "browsers": [
    { "type": "chrome", "version": "latest" },
    { "type": "firefox", "version": "latest" }
  ],
  "parallelism": 5,
  "baseUrl": "http://practiceautomatedtesting.com"
}
3. Setting Up Mabl
Step 1: Installation and Workspace Setup
Create Mabl Account
- Visit https://app.mabl.com
- Sign up for account
- Create workspace
Install Mabl Trainer
- Download Mabl Desktop Trainer for your OS
- Install and launch application
- Log in with your credentials
# For macOS (if using CLI installer)
$ curl -o mabl-trainer.dmg https://app.mabl.com/downloads/mabl-trainer.dmg
$ open mabl-trainer.dmg
# For Windows
# Download from: https://app.mabl.com/downloads/mabl-trainer.exe
Step 2: Creating Your First Mabl Test
Launch Trainer
- Open Mabl Desktop Trainer
- Click “Create new journey”
- Enter URL:
http://practiceautomatedtesting.com
Record Test Flow
// Mabl automatically captures interactions and creates steps
// Example: E-commerce checkout flow
// 1. Visit homepage - auto-captured
// 2. Search for product
// - Click: input[data-test="search"]
// - Type: "laptop"
// - Click: button[type="submit"]
// 3. Add assertion (manual)
// - Click eye icon
// - Select "Assert element visible"
// - Target: div.product-results
// 4. Add product to cart
// - Click: button[data-test="add-to-cart-1"]
// 5. Navigate to cart
// - Click: a[href="/cart"]
// 6. Add variable for verification
// - Create variable: cartItemCount
// - Extract from: span.cart-count
Step 3: Adding Variables and Logic in Mabl
// Creating a JavaScript step in Mabl
// Click "+" → "JavaScript"
// Note: the `mabl.*` helpers below are illustrative pseudocode; consult
// Mabl's snippet documentation for the exact API available in your workspace.
// Example: Calculate expected discount
const productPrice = mabl.env.getValue('productPrice');
const discountPercent = 0.15;
const expectedTotal = productPrice * (1 - discountPercent);
// Store for validation
mabl.env.setValue('expectedDiscountedPrice', expectedTotal.toFixed(2));
// Conditional logic in Mabl
// Click "+" → "If/Else"
// Check if the user is logged in
if (await mabl.findElement({ css: '.user-profile' })) {
  // Navigate to dashboard
  await mabl.click({ css: 'a[href="/dashboard"]' });
} else {
  // Navigate to login
  await mabl.click({ css: 'a[href="/login"]' });
}
Step 4: Configuring Mabl API Tests
Mabl allows combining UI and API tests:
// Create API test step in Mabl
// Click "+" → "API"
// Configure API request
{
  "method": "POST",
  "url": "{{baseUrl}}/api/cart/add",
  "headers": {
    "Content-Type": "application/json",
    "Authorization": "Bearer {{authToken}}"
  },
  "body": {
    "productId": "{{productId}}",
    "quantity": 1
  }
}
// Add assertions on response
// Status code equals: 200
// Response body path: $.success equals true
// Response body path: $.cartTotal > 0
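The three response assertions above can be expressed as plain checks against a parsed JSON body (the status code and field names mirror the example config):

```javascript
// The three assertions above as plain checks: status code, $.success,
// and $.cartTotal. Returns an array of failure messages; empty means pass.
function checkCartResponse(status, body) {
  const failures = [];
  if (status !== 200) failures.push(`expected status 200, got ${status}`);
  if (body.success !== true) failures.push('$.success is not true');
  if (!(body.cartTotal > 0)) failures.push('$.cartTotal is not > 0');
  return failures;
}
```

Collecting all failures rather than throwing on the first mirrors how test platforms report every failed assertion in one run.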
4. CI/CD Integration
Testim CLI Integration
# Install Testim CLI
$ npm install -g @testim/testim-cli
# Run tests from command line
$ testim --token "your-token" \
--project "your-project-id" \
--grid "testim-grid" \
--suite "Regression Suite"
# Expected output:
# Running 15 tests on Testim Grid...
# ✓ Login Flow (2.3s)
# ✓ Checkout Process (5.1s)
# ✓ Search Functionality (1.8s)
# ...
# 15 passed, 0 failed
Mabl CLI Integration
# Install Mabl CLI
$ npm install -g @mablhq/mabl-cli
# Authenticate
$ mabl auth login --api-key "your-api-key"
# Run deployment trigger
$ mabl deployments create \
--application-id "app-abc123" \
--environment-id "env-xyz789" \
--await-results
# Expected output:
# Deployment created: deployment-456
# Running 12 journeys...
# ✓ User Registration (3.2s)
# ✓ Product Search (2.1s)
# ✓ Checkout Flow (6.5s)
# ...
# All journeys passed
GitHub Actions Integration
# .github/workflows/testim-tests.yml
name: Testim Test Execution
on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Run Testim Tests
        uses: testim-io/testim-action@v1
        with:
          token: ${{ secrets.TESTIM_TOKEN }}
          project: ${{ secrets.TESTIM_PROJECT_ID }}
          suite: 'Smoke Tests'
          grid: 'testim-grid'
# .github/workflows/mabl-tests.yml
name: Mabl Test Execution
on:
  deployment_status:
jobs:
  test:
    runs-on: ubuntu-latest
    if: github.event.deployment_status.state == 'success'
    steps:
      - name: Run Mabl Tests
        uses: mablhq/mabl-action@v1
        with:
          application-id: ${{ secrets.MABL_APP_ID }}
          environment-id: ${{ secrets.MABL_ENV_ID }}
          api-key: ${{ secrets.MABL_API_KEY }}
5. Advanced Features
Testim: Dynamic Locators
// Create a flexible locator that adapts to changes
// In a Testim custom step:
// Instead of a static selector
// ❌ const button = document.querySelector('#submit-btn-123');
// Use a smart locator with multiple strategies
// (the `testimPage.findElement` helper below is illustrative pseudocode;
// in practice Testim manages smart locators through its visual editor)
// ✅
const button = testimPage.findElement({
  type: 'button',
  attributes: {
    'data-test': 'submit',
    'aria-label': 'Submit form'
  },
  text: 'Submit',
  position: 'last' // if multiple matches
});
Mabl: Visual Testing
// Enable visual testing in Mabl
// Click "+" → "Take Screenshot"
// Configure visual checkpoint
{
  "name": "Homepage Hero Section",
  "selector": ".hero-banner",
  "hideSelectors": [".dynamic-ad", ".timestamp"],
  "threshold": 0.05,      // 5% difference tolerance
  "compareMode": "layout" // or "pixel" for exact match
}
// Mabl automatically compares against baseline
// Flags differences for review
Comparison Architecture
graph TB
A[Test Creation] --> B{Platform}
B -->|Testim| C[Chrome Extension]
B -->|Mabl| D[Desktop Trainer]
C --> E[Smart Locators]
D --> F[Auto-healing]
E --> G[Test Execution]
F --> G
G --> H[Testim Grid]
G --> I[Mabl Cloud]
H --> J[Results & Reports]
I --> J
J --> K[CI/CD Integration]
6. Common Mistakes
Testim Common Pitfalls
Mistake 1: Over-relying on Auto-generated Locators
// ❌ Accepting first auto-generated locator
// Testim might suggest: body > div:nth-child(3) > button
// ✅ Customize to use stable attributes
// Edit locator to: button[data-test="submit"]
Mistake 2: Not Using Test Parameters
// ❌ Hardcoded values in every test
const username = "testuser@example.com";
// ✅ Use parameters for reusability
const username = testimParams.username; // Set at suite level
Mistake 3: Ignoring Waits
// ❌ Assuming elements load instantly
click(submitButton);
// ✅ Add explicit validation step
waitForElement(submitButton, 10000);
click(submitButton);
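The `waitForElement` call above is a platform built-in; the generic polling idea behind it looks like this (the `check` callback stands in for any readiness probe):

```javascript
// Generic polling wait illustrating what an explicit wait does under the
// hood: re-check a condition until it's truthy or a timeout expires.
async function waitFor(check, timeoutMs = 10000, intervalMs = 100) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const result = check();        // any truthy value means "ready"
    if (result) return result;
    if (Date.now() >= deadline) {
      throw new Error(`Condition not met within ${timeoutMs}ms`);
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}
```

The failure mode it prevents is exactly the one in the ❌ example: clicking an element that hasn't rendered yet.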
Mabl Common Pitfalls
Mistake 1: Not Naming Variables Clearly
// ❌ Generic variable names
mabl.env.setValue('val1', price);
// ✅ Descriptive names
mabl.env.setValue('originalProductPrice', price);
Mistake 2: Skipping Visual Baseline Updates
// Problem: Tests fail after intentional UI changes
// Solution: Review visual diffs in Mabl dashboard
// Click "Accept as Baseline" for intentional changes
Mistake 3: Not Using Data-Driven Testing
// ❌ Creating separate journey for each data set
// Journey: "Login Test User 1"
// Journey: "Login Test User 2"
// ✅ Create data table in Mabl
// Single journey iterates through data
// Users: [user1@test.com, user2@test.com, user3@test.com]
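The data-driven pattern above is one journey body run once per data row. A language-level sketch (with `loginAs` as a placeholder for the recorded journey steps):

```javascript
// One journey body, many data rows. `loginAs` is a stand-in for the
// recorded steps a real journey would execute per row.
function runDataDriven(rows, journeyBody) {
  return rows.map(row => ({ row, passed: journeyBody(row) }));
}

const users = ['user1@test.com', 'user2@test.com', 'user3@test.com'];
const loginAs = email => email.includes('@'); // placeholder check, not real steps
const results = runDataDriven(users, loginAs);
```

Adding a fourth user is then a one-line data change instead of a whole new journey.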
Debugging Tips
# Testim: View detailed execution logs
$ testim --token "token" --project "id" --report-file results.json
$ cat results.json | jq '.tests[] | select(.status=="failed")'
# Mabl: Download execution artifacts
# Navigate to: Results → Failed Journey → Download Screenshots
// Add console logging in custom steps
console.log('Current URL:', window.location.href);
console.log('Element found:', !!document.querySelector('[data-test="target"]'));
// Logs appear in execution details
Performance Tip: Both platforms work best with stable test environments. Avoid testing against constantly changing staging servers.
Maintenance Tip: Review auto-healed tests weekly to ensure the AI made correct decisions about element identification.
Hands-On Practice
Exercise: Build an End-to-End Smart Test Suite for an E-Commerce Application
Objective
Create a comprehensive test automation suite using either Testim or Mabl that demonstrates AI-powered test creation, self-healing capabilities, and cross-browser testing for a sample e-commerce website.
Task Overview
You’ll automate testing for a demo e-commerce site (use https://demo.evershop.io or similar) covering user registration, product search, cart management, and checkout process.
Step-by-Step Instructions
Part 1: Initial Test Creation (30 minutes)
Set up your testing environment
- Sign up for a free trial account (Testim.io or Mabl.com)
- Install the browser extension (Chrome recommended)
- Configure your project with the target URL:
https://demo.evershop.io
Create your first AI-powered test
- Create a new test called “User Registration Flow”
- Use the recorder to capture these actions:
- Navigate to the homepage
- Click on “Sign In” or “Register”
- Fill in registration form (email, password, name)
- Submit the form
- Verify success message appears
- Add at least 3 smart assertions using AI locators
- Save and execute the test
Build additional test cases
- Test Case 2: “Product Search and Filter”
- Search for a product category
- Apply filters (price range, brand, etc.)
- Verify filtered results display correctly
- Test Case 3: “Add to Cart Journey”
- Select a product
- Choose product options (size, color)
- Add to cart
- Verify cart icon updates with item count
Part 2: Implement Self-Healing (20 minutes)
Configure self-healing capabilities
- Review the AI locators automatically generated
- Manually adjust one element locator to use multiple attributes
- Enable self-healing in your project settings
- Document which elements are using smart locators
Test self-healing behavior
- Simulate a UI change scenario:
- If possible, note an element’s ID or class
- Re-run the test
- Check the test execution logs for self-healing actions
- Review the healing suggestions in the platform dashboard
Part 3: Cross-Browser & Mobile Testing (25 minutes)
Configure cross-browser test execution
- Create a test suite combining your 3 test cases
- Configure the suite to run on:
- Chrome (latest)
- Firefox (latest)
- Safari or Edge
- Set up at least one mobile viewport (iOS or Android)
Execute and analyze results
- Run the complete test suite
- Document any browser-specific failures
- Review execution time across different browsers
Part 4: Reporting & CI Integration Setup (15 minutes)
Configure reporting
- Set up email notifications for test failures
- Create a custom test report showing:
- Pass/fail status
- Execution time
- Screenshots of failures
- Self-healing instances
Prepare for CI/CD integration
- Generate an API key for your project
- Document the CLI command or API endpoint needed
- (Bonus) If you have access to GitHub Actions, Jenkins, or similar, create a basic integration configuration
Expected Outcomes
By completing this exercise, you should have:
✅ A working test suite with 3+ automated test cases
✅ Evidence of AI-powered element detection in use
✅ Self-healing configuration documented
✅ Cross-browser test results from at least 3 browsers
✅ A test execution report with visual evidence
✅ Basic CI/CD integration configuration ready
Solution Approach
Key Implementation Tips:
Smart Locator Strategy
Priority order for element identification:
1. AI-generated smart locators (let the tool learn)
2. data-testid attributes (if available)
3. Semantic HTML (aria-labels, roles)
4. Multiple fallback attributes (id, class, text content)
Self-Healing Verification
- Check test logs for “healed” or “auto-fixed” indicators
- Review the element change history in the dashboard
- Compare original vs. healed locators
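The locator priority order listed above can be expressed as a fallback chain. Attribute names follow common conventions (data-testid, aria-label); adjust them to your application:

```javascript
// Fallback chain for the locator priority order: most stable attribute
// first, CSS class only as a last resort. Field names are conventions,
// not a specific platform's API.
function pickSelector(el) {
  if (el.dataTestId) return `[data-testid="${el.dataTestId}"]`; // most stable
  if (el.ariaLabel) return `[aria-label="${el.ariaLabel}"]`;    // semantic
  if (el.id) return `#${el.id}`;
  return `.${el.cssClass}`;                                     // least stable
}
```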
Test Organization Best Practices
Test Suite Structure:
├── Smoke Tests (critical paths)
│   ├── User Registration
│   └── Login Flow
├── Feature Tests
│   ├── Product Search
│   ├── Cart Management
│   └── Checkout Process
└── Regression Suite (all tests combined)
CI/CD Integration Template
# Example GitHub Actions workflow
name: E2E Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Run Testim/Mabl Tests
        run: |
          # Use platform CLI or API
          # testim --token ${{ secrets.TESTIM_TOKEN }} --project "ProjectID" --suite "SuiteID"
Key Takeaways
🎓 What You Learned:
AI-powered test creation significantly reduces test maintenance effort by automatically generating robust element locators that adapt to minor UI changes, making tests more resilient than traditional CSS or XPath selectors
Self-healing capabilities automatically detect and fix broken tests when elements change, reducing the time spent on test maintenance from hours to minutes, though they should be monitored to ensure fixes align with actual application behavior
Codeless automation platforms like Testim and Mabl enable both technical and non-technical team members to contribute to test automation, democratizing QA while still providing extensibility through custom code when needed
Cross-browser and mobile testing becomes streamlined through cloud-based execution grids, allowing parallel test runs across multiple environments without maintaining local infrastructure
Smart reporting and CI/CD integration provides immediate feedback on application quality, enabling teams to catch issues early in the development cycle and maintain confidence in continuous deployment practices
When to Apply These Tools:
- ✅ Agile teams with frequent UI changes requiring low-maintenance tests
- ✅ Organizations scaling QA efforts across multiple team members with varying technical skills
- ✅ Projects requiring extensive cross-browser and mobile coverage
- ✅ CI/CD pipelines needing fast, reliable feedback on every commit
- ⚠️ Complex custom applications may still benefit from traditional Selenium/Playwright for edge cases
Next Steps
Practice Exercises
- Expand your test coverage: Add tests for edge cases like error handling, form validation, and negative scenarios
- Optimize test performance: Implement parallel execution and identify opportunities to reduce test execution time
- Advanced self-healing: Intentionally modify UI elements and document how the self-healing responds
- API integration: Combine UI tests with API tests for complete end-to-end coverage
Related Topics to Explore
Deepen Your Knowledge:
- Visual regression testing integration (Percy, Applitools)
- Performance testing alongside functional tests
- Test data management strategies for AI-powered tools
- Advanced reporting with test analytics and metrics
Complementary Technologies:
- Traditional frameworks: Selenium, Playwright, Cypress (understand when to use each)
- BDD frameworks: Cucumber integration with codeless tools
- Continuous testing strategies and shift-left practices
- API testing tools: Postman, REST Assured for comprehensive coverage
Certification & Community:
- Pursue vendor certifications (Testim Certified Professional, etc.)
- Join QA communities and forums to share experiences
- Contribute to test automation best practices documentation
- Explore other AI-powered testing tools (Functionize, Katalon Studio, Autify)