Overview
Test automation integration allows you to connect your automated test suites with ConductorQA, providing centralized visibility into both manual and automated test results. This guide covers setup, configuration, and best practices for automation workflows.
Automation Concepts
Hybrid Testing Approach
ConductorQA supports a hybrid model combining:
- Manual Tests: Exploratory, usability, and ad-hoc testing
- Automated Tests: Regression, API, and repetitive functional tests
- Semi-Automated Tests: Manual steps with automated validation
- Integration Tests: Cross-system and end-to-end scenarios
Test Result Integration
Result Reporting Flow
Automated Test Execution
↓
Test Results (JUnit, TestNG, etc.)
↓
ConductorQA API Integration
↓
Unified Test Dashboard
↓
Analytics and Reporting
Supported Test Frameworks
- JavaScript: Jest, Mocha, Jasmine, Cypress, Playwright
- Python: pytest, unittest, nose2
- Java: JUnit 4/5, TestNG, Spock
- C#: NUnit, xUnit, MSTest
- Ruby: RSpec, Test::Unit, Minitest
- Go: Built-in testing package
- Custom: Any framework generating XML/JSON results
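For a framework without a built-in integration, the common path is to parse its JUnit-style XML output into a list of result objects before sending them to the API. The sketch below is a deliberately simplified, regex-based parse for illustration only; production code should use a real XML parser (for example `fast-xml-parser`) to handle nested suites, CDATA sections, and escaping correctly.

```javascript
// Minimal sketch: extract test results from a JUnit XML string.
// Simplified on purpose — use a proper XML parser in real pipelines.
function parseJUnitXml(xml) {
  const results = [];
  // Match both self-closing and open/close <testcase> elements
  const testcaseRe = /<testcase\b([^>]*?)(\/>|>([\s\S]*?)<\/testcase>)/g;
  const attrRe = /(\w+)="([^"]*)"/g;
  let match;
  while ((match = testcaseRe.exec(xml)) !== null) {
    const attrs = {};
    let a;
    while ((a = attrRe.exec(match[1])) !== null) attrs[a[1]] = a[2];
    const body = match[3] || '';
    let status = 'passed';
    if (/<failure\b|<error\b/.test(body)) status = 'failed';
    else if (/<skipped\b/.test(body)) status = 'skipped';
    results.push({
      name: attrs.name,
      status,
      duration: parseFloat(attrs.time || '0') // seconds, per JUnit convention
    });
  }
  return results;
}
```

The resulting objects match the `{ name, status, duration }` shape used by the reporting examples later in this guide.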
Prerequisites for Automation Setup
Technical Requirements
API Access
- ✅ ConductorQA project with API access enabled
- ✅ Generated API key with appropriate permissions
- ✅ Network access from your CI/CD environment to ConductorQA
Test Environment
- ✅ Existing automated test suite
- ✅ CI/CD pipeline (GitHub Actions, GitLab CI, Jenkins, etc.)
- ✅ Test result output in supported format (JUnit XML, JSON)
Project Configuration
- ✅ Test suites created in ConductorQA
- ✅ Test cases mapped to automated tests (optional but recommended)
- ✅ Team members with appropriate permissions
Planning Your Integration
Integration Strategy
- Start Small: Begin with one test suite or project
- Gradual Expansion: Add more tests and projects incrementally
- Result Validation: Verify data accuracy before full rollout
- Team Training: Ensure team understands new workflows
Test Result Mapping
- Direct Mapping: One automated test = one ConductorQA test case
- Suite Mapping: Automated test suite = ConductorQA test suite
- Mixed Approach: Combine direct and suite mapping as needed
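Under the mixed approach, each automated test is first checked for a direct test-case mapping, then falls back to a suite-level mapping. The mapping-file shape below is hypothetical — adapt the keys and IDs to however your team stores the mapping.

```javascript
// Sketch: resolve an automated test to a ConductorQA target under the
// mixed approach. The mapping structure here is an assumption.
const mapping = {
  direct: {                  // one automated test -> one test case
    'login renders form': 'TC-101',
    'login rejects bad password': 'TC-102'
  },
  suites: {                  // fallback: file-path prefix -> test suite
    'checkout/': 'SUITE-20',
    'api/': 'SUITE-31'
  }
};

function resolveMapping(testName, filePath, map) {
  if (map.direct[testName]) {
    return { type: 'direct', id: map.direct[testName] };
  }
  const prefix = Object.keys(map.suites).find(p => filePath.startsWith(p));
  if (prefix) return { type: 'suite', id: map.suites[prefix] };
  return { type: 'unmapped', id: null }; // report by name only
}
```

Unmapped tests can still be reported by name, which keeps the integration working while the mapping is filled in incrementally.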
API Key Setup and Configuration
Creating API Keys for Automation
Generate Automation-Specific API Key
- Navigate to Settings → API Keys
- Click “Create New API Key”
- Configure the key:
  - Name: "CI/CD Automation" or a similar descriptive name
  - Purpose: External test result reporting
  - Projects: Select projects that will report automated results
  - Permissions: Read/Write access for test execution data
API Key Security Best Practices
- Environment Variables: Store keys in CI/CD environment variables
- Scope Limitation: Only grant access to required projects
- Regular Rotation: Change keys periodically
- Access Monitoring: Review API usage regularly
- Secure Storage: Never commit keys to version control
Environment Variable Setup
CI/CD Environment Variables
# Required variables
CONDUCTORQA_API_KEY=cqa_live_your_api_key_here
CONDUCTORQA_BASE_URL=https://api.conductorqa.com/v1
CONDUCTORQA_PROJECT_ID=your_project_id
# Optional variables
CONDUCTORQA_SUITE_ID=your_suite_id
CONDUCTORQA_ENVIRONMENT=staging
CONDUCTORQA_BUILD_NUMBER=$BUILD_NUMBER
Local Development Setup
# .env file (never commit this)
CONDUCTORQA_API_KEY=cqa_live_your_api_key_here
CONDUCTORQA_BASE_URL=https://api.conductorqa.com/v1
CONDUCTORQA_PROJECT_ID=your_project_id
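Because missing variables otherwise surface as confusing API errors mid-run, it helps to validate configuration up front. A minimal sketch:

```javascript
// Fail fast when required ConductorQA variables are missing, so a
// misconfigured pipeline errors before tests run rather than after.
const REQUIRED_VARS = [
  'CONDUCTORQA_API_KEY',
  'CONDUCTORQA_BASE_URL',
  'CONDUCTORQA_PROJECT_ID'
];

function missingConfig(env) {
  return REQUIRED_VARS.filter(name => !env[name]);
}

// Example startup check:
// const missing = missingConfig(process.env);
// if (missing.length > 0) {
//   console.error(`Missing required variables: ${missing.join(', ')}`);
//   process.exit(1);
// }
```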
Integration Implementation
Direct API Integration
Basic Result Reporting
// Node.js example
const axios = require('axios');

async function reportTestResults(results) {
  const response = await axios.post(
    `${process.env.CONDUCTORQA_BASE_URL}/projects/${process.env.CONDUCTORQA_PROJECT_ID}/test-runs`,
    {
      name: `Automated Run - ${new Date().toISOString()}`,
      environment: process.env.CONDUCTORQA_ENVIRONMENT || 'development',
      build_number: process.env.BUILD_NUMBER,
      results: results.map(test => ({
        test_case_name: test.name,
        status: test.status, // 'passed', 'failed', 'skipped'
        duration: test.duration,
        error_message: test.error,
        artifacts: test.screenshots || []
      }))
    },
    {
      headers: {
        'Authorization': `Bearer ${process.env.CONDUCTORQA_API_KEY}`,
        'Content-Type': 'application/json'
      }
    }
  );
  console.log('Test results reported:', response.data.id);
}
Bulk Result Reporting
# Python example
import os
import requests
from datetime import datetime

def report_bulk_results(test_results):
    """Report multiple test results in a single API call"""
    payload = {
        'test_run': {
            'name': f'Automated Tests - {datetime.now().isoformat()}',
            'environment': os.getenv('CONDUCTORQA_ENVIRONMENT', 'development'),
            'build_number': os.getenv('BUILD_NUMBER'),
            'executed_by': 'automation',
            'started_at': datetime.now().isoformat(),
        },
        'results': []
    }

    for test in test_results:
        payload['results'].append({
            'test_case_name': test['name'],
            'status': test['status'],
            'duration_seconds': test['duration'],
            'error_message': test.get('error'),
            'steps': test.get('steps', []),
            'artifacts': test.get('artifacts', [])
        })

    response = requests.post(
        f"{os.getenv('CONDUCTORQA_BASE_URL')}/projects/{os.getenv('CONDUCTORQA_PROJECT_ID')}/bulk-results",
        json=payload,
        headers={
            'Authorization': f"Bearer {os.getenv('CONDUCTORQA_API_KEY')}",
            'Content-Type': 'application/json'
        }
    )

    if response.status_code == 201:
        print(f"Successfully reported {len(test_results)} test results")
        return response.json()
    else:
        print(f"Failed to report results: {response.status_code} - {response.text}")
        return None
Framework-Specific Integrations
Jest Integration
// jest-conductorqa-reporter.js
class ConductorQAReporter {
  constructor(globalConfig, options) {
    this.globalConfig = globalConfig;
    this.options = options;
    this.results = [];
  }

  onTestResult(test, testResult) {
    testResult.testResults.forEach(result => {
      this.results.push({
        name: result.title,
        status: result.status === 'passed' ? 'passed' : 'failed',
        duration: (result.duration || 0) / 1000, // Convert ms to seconds; duration can be null for skipped tests
        error: result.failureMessages[0],
        file_path: test.path
      });
    });
  }

  async onRunComplete() {
    await this.reportResults(this.results);
  }

  async reportResults(results) {
    // Implementation from previous example
    // ... API call to ConductorQA
  }
}

module.exports = ConductorQAReporter;
// jest.config.js (a JS module, so export the config object)
module.exports = {
  reporters: [
    'default',
    ['./jest-conductorqa-reporter.js', {
      projectId: 'your_project_id',
      suiteId: 'your_suite_id'
    }]
  ]
};
Cypress Integration
// cypress/plugins/conductorqa.js
const axios = require('axios');
// Note: plugin files run in Node, so configuration comes from
// process.env here — Cypress.env() is only available in test code.
async function reportToConductorQA(results) {
  const response = await axios.post(
    `${process.env.CONDUCTORQA_BASE_URL}/projects/${process.env.CONDUCTORQA_PROJECT_ID}/test-runs`,
    {
      name: `Cypress Run - ${new Date().toISOString()}`,
      environment: process.env.CONDUCTORQA_ENVIRONMENT || 'test',
      // Flatten all spec files' runs, not just the first one
      results: results.runs.flatMap(run => run.tests).map(test => ({
        test_case_name: test.title.join(' > '),
        status: test.state === 'passed' ? 'passed' : 'failed',
        duration: test.duration / 1000,
        error_message: test.err ? test.err.message : null
      }))
    },
    {
      headers: {
        'Authorization': `Bearer ${process.env.CONDUCTORQA_API_KEY}`,
        'Content-Type': 'application/json'
      }
    }
  );
  return response.data;
}

module.exports = { reportToConductorQA };
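To invoke the reporter automatically, register it on Cypress's `after:run` event from your plugin setup. The sketch below injects the reporting function as a parameter so the wiring stays testable without network access; in `cypress.config.js` (Cypress 10+), call it inside `setupNodeEvents` and pass the reporting function exported by the plugin file.

```javascript
// Sketch: register a reporter on Cypress's after:run event.
// `report` is injected so this wiring can be exercised without a network.
function registerReporter(on, report) {
  on('after:run', async (results) => {
    if (!results) return; // interactive `cypress open` can emit no results
    try {
      await report(results);
    } catch (err) {
      // Never fail the test run because reporting failed
      console.error('ConductorQA reporting failed:', err.message);
    }
  });
}

module.exports = { registerReporter };
```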
CI/CD Pipeline Integration
GitHub Actions
# .github/workflows/test.yml
name: Run Tests and Report Results

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm run test:ci
        env:
          CONDUCTORQA_API_KEY: ${{ secrets.CONDUCTORQA_API_KEY }}
          CONDUCTORQA_BASE_URL: https://api.conductorqa.com/v1
          CONDUCTORQA_PROJECT_ID: ${{ secrets.CONDUCTORQA_PROJECT_ID }}
          CONDUCTORQA_ENVIRONMENT: ci
          BUILD_NUMBER: ${{ github.run_number }}

      - name: Report test results
        if: always()
        run: npm run report-results
        env:
          CONDUCTORQA_API_KEY: ${{ secrets.CONDUCTORQA_API_KEY }}
          CONDUCTORQA_BASE_URL: https://api.conductorqa.com/v1
          CONDUCTORQA_PROJECT_ID: ${{ secrets.CONDUCTORQA_PROJECT_ID }}
GitLab CI
# .gitlab-ci.yml
stages:
  - test
  - report

variables:
  CONDUCTORQA_BASE_URL: "https://api.conductorqa.com/v1"
  CONDUCTORQA_ENVIRONMENT: "ci"

test:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm run test:ci
  artifacts:
    reports:
      junit: test-results.xml
    paths:
      - test-results.json
    expire_in: 1 week
    when: always

report_results:
  stage: report
  image: node:18
  dependencies:
    - test
  script:
    - npm run report-results
  variables:
    CONDUCTORQA_API_KEY: $CONDUCTORQA_API_KEY
    CONDUCTORQA_PROJECT_ID: $CONDUCTORQA_PROJECT_ID
    BUILD_NUMBER: $CI_PIPELINE_ID
  when: always
Jenkins Pipeline
pipeline {
  agent any

  environment {
    CONDUCTORQA_API_KEY = credentials('conductorqa-api-key')
    CONDUCTORQA_BASE_URL = 'https://api.conductorqa.com/v1'
    CONDUCTORQA_PROJECT_ID = 'your_project_id'
    CONDUCTORQA_ENVIRONMENT = 'jenkins'
  }

  stages {
    stage('Install Dependencies') {
      steps {
        sh 'npm ci'
      }
    }

    stage('Run Tests') {
      steps {
        sh 'npm run test:ci'
      }
      post {
        always {
          junit 'test-results.xml'
          archiveArtifacts artifacts: 'test-results.json', fingerprint: true
        }
      }
    }

    stage('Report to ConductorQA') {
      // Declarative `when` has no always() condition; to report even when
      // tests fail, move this step into a pipeline-level post { always { } }
      steps {
        sh 'npm run report-results'
      }
    }
  }
}
Advanced Automation Features
Test Result Enrichment
Adding Context and Metadata
// Enhanced result reporting with additional context
const enhancedResults = testResults.map(test => ({
  test_case_name: test.name,
  status: test.status,
  duration: test.duration,
  error_message: test.error,

  // Additional context
  browser: process.env.BROWSER || 'chrome',
  os: process.platform,
  test_type: 'e2e', // or 'unit', 'integration'
  priority: test.priority || 'medium',

  // Execution details
  retry_count: test.retries || 0,
  flaky: (test.retries || 0) > 0,
  environment_details: {
    url: process.env.BASE_URL,
    version: process.env.APP_VERSION,
    database: process.env.DB_VERSION
  },

  // Artifacts and evidence (guard against missing arrays)
  artifacts: [
    ...(test.screenshots || []),
    ...(test.videos || []),
    ...(test.logs || [])
  ],

  // Performance metrics
  performance: {
    page_load_time: test.pageLoadTime,
    memory_usage: test.memoryUsage,
    network_requests: test.networkRequests
  }
}));
Screenshot and Video Capture
// Cypress example with artifact upload. File access requires Node, so
// the upload itself belongs in a plugin task rather than in test code:
cy.task('uploadScreenshot', { testName, type: 'screenshot' });

// In the plugin file (Node side):
const fs = require('fs');
const axios = require('axios');
const FormData = require('form-data');

function uploadArtifact(filePath, type, testName) {
  const formData = new FormData();
  formData.append('file', fs.createReadStream(filePath));
  formData.append('type', type);
  formData.append('test_case', testName);

  return axios.post(
    `${process.env.CONDUCTORQA_BASE_URL}/projects/${process.env.CONDUCTORQA_PROJECT_ID}/artifacts`,
    formData,
    {
      headers: {
        ...formData.getHeaders(), // sets the multipart boundary
        'Authorization': `Bearer ${process.env.CONDUCTORQA_API_KEY}`
      }
    }
  );
}
Parallel and Distributed Testing
Parallel Test Execution
# GitHub Actions matrix strategy
strategy:
  matrix:
    browser: [chrome, firefox, safari]
    environment: [staging, production]

steps:
  - name: Run tests
    run: npm run test:ci
    env:
      BROWSER: ${{ matrix.browser }}
      ENVIRONMENT: ${{ matrix.environment }}
      CONDUCTORQA_TEST_RUN_NAME: "Tests-${{ matrix.browser }}-${{ matrix.environment }}"
Result Aggregation
// Aggregate results from multiple parallel runs
// (fetchTestRun and reportAggregatedResults are helpers you implement
// against the test-run endpoints shown earlier)
async function aggregateParallelResults() {
  const testRunIds = process.env.PARALLEL_RUN_IDS.split(',');

  const aggregatedResults = {
    name: `Aggregated Run - ${new Date().toISOString()}`,
    parallel_runs: testRunIds,
    environment: process.env.ENVIRONMENT,
    total_duration: 0,
    summary: {
      passed: 0,
      failed: 0,
      skipped: 0
    }
  };

  // Collect results from all parallel runs
  for (const runId of testRunIds) {
    const run = await fetchTestRun(runId);
    // Parallel runs overlap, so wall-clock duration is the longest run
    aggregatedResults.total_duration = Math.max(
      aggregatedResults.total_duration,
      run.duration
    );
    aggregatedResults.summary.passed += run.summary.passed;
    aggregatedResults.summary.failed += run.summary.failed;
    aggregatedResults.summary.skipped += run.summary.skipped;
  }

  return reportAggregatedResults(aggregatedResults);
}
Monitoring and Analytics
Automation Health Monitoring
Key Metrics to Track
- Execution Success Rate: Percentage of successful automation runs
- Test Stability: Flaky test identification and trending
- Execution Time: Performance trends and optimization opportunities
- Coverage Metrics: Automated vs manual test coverage
- Integration Reliability: API call success rates and response times
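The flaky-test metric above can be approximated from recent run history: a test whose outcome flips frequently between passed and failed is a flakiness candidate. A minimal sketch, assuming outcomes are stored as an ordered list of status strings (the 0.2 threshold is an arbitrary starting point to tune):

```javascript
// Sketch: flag a test as flaky when its recent outcomes are mixed.
// `history` is an array of 'passed'/'failed' strings, oldest first.
function flakinessRate(history) {
  if (history.length === 0) return 0;
  let flips = 0;
  for (let i = 1; i < history.length; i++) {
    if (history[i] !== history[i - 1]) flips++;
  }
  // Fraction of consecutive runs where the outcome changed
  return flips / (history.length - 1 || 1);
}

function isFlaky(history, threshold = 0.2) {
  return flakinessRate(history) >= threshold;
}
```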
Setting Up Monitoring
// Health check for automation integration
async function healthCheck() {
  try {
    const response = await axios.get(
      `${process.env.CONDUCTORQA_BASE_URL}/health`,
      {
        headers: {
          'Authorization': `Bearer ${process.env.CONDUCTORQA_API_KEY}`
        }
      }
    );
    if (response.status === 200) {
      console.log('✅ ConductorQA API is healthy');
      return true;
    }
    return false;
  } catch (error) {
    console.error('❌ ConductorQA API health check failed:', error.message);
    return false;
  }
}
Dashboard and Reporting
Custom Dashboard Configuration
- Automation Overview: Success rates, execution times, trends
- Flaky Test Detection: Tests with inconsistent results
- Environment Comparison: Results across different environments
- CI/CD Integration Status: Pipeline success rates and bottlenecks
Automated Reporting
// Generate and send automated reports
async function generateDailyReport() {
  const report = await generateTestSummary({
    period: 'last_24_hours',
    include_trends: true,
    include_flaky_tests: true
  });

  await sendSlackReport(report);
  await sendEmailSummary(report);
}
Troubleshooting Automation Issues
Common Integration Problems
API Authentication Errors
Problem: 401 Unauthorized responses
Solutions:
- Verify API key format and validity
- Check project permissions for the API key
- Ensure proper Authorization header format
- Confirm API key hasn’t expired or been revoked
Test Result Upload Failures
Problem: Results not appearing in ConductorQA
Solutions:
- Check network connectivity from CI/CD environment
- Verify JSON payload format and required fields
- Review API response codes and error messages
- Confirm project and suite IDs are correct
Performance Issues
Problem: Slow test result reporting affecting CI/CD pipelines
Solutions:
- Use bulk result reporting endpoints
- Implement asynchronous result uploading
- Optimize artifact upload sizes
- Consider result batching strategies
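The batching strategy above amounts to splitting a large result set into fixed-size chunks before upload. A minimal sketch (the batch size of 200 is an assumption — check your plan's payload limits):

```javascript
// Sketch: split results into fixed-size batches so each
// bulk-results request stays within payload limits.
function chunkResults(results, batchSize = 200) {
  const batches = [];
  for (let i = 0; i < results.length; i += batchSize) {
    batches.push(results.slice(i, i + batchSize));
  }
  return batches;
}

// Usage: report batches sequentially to avoid bursts of API calls
// for (const batch of chunkResults(allResults)) {
//   await report_bulk_results(batch);
// }
```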
Debugging Tips
Enable Detailed Logging
// Add comprehensive logging for debugging
const debug = require('debug')('conductorqa:integration');

async function reportResults(results) {
  debug('Preparing to report %d test results', results.length);

  try {
    const payload = {
      test_run: {
        name: `Test Run ${Date.now()}`,
        // ... other fields
      },
      results: results
    };

    debug('Request payload: %O', payload);

    const response = await axios.post(endpoint, payload, config);

    debug('Response status: %d', response.status);
    debug('Response data: %O', response.data);

    return response.data;
  } catch (error) {
    debug('Error reporting results: %O', error.response?.data || error.message);
    throw error;
  }
}
Validate Integration Health
#!/bin/bash
# integration-health-check.sh

echo "🔍 Running ConductorQA Integration Health Check"

# Check environment variables
if [ -z "$CONDUCTORQA_API_KEY" ]; then
  echo "❌ CONDUCTORQA_API_KEY not set"
  exit 1
fi

if [ -z "$CONDUCTORQA_PROJECT_ID" ]; then
  echo "❌ CONDUCTORQA_PROJECT_ID not set"
  exit 1
fi

# Test API connectivity
response=$(curl -s -w "%{http_code}" -o /dev/null \
  -H "Authorization: Bearer $CONDUCTORQA_API_KEY" \
  "$CONDUCTORQA_BASE_URL/projects/$CONDUCTORQA_PROJECT_ID")

if [ "$response" = "200" ]; then
  echo "✅ API connectivity successful"
else
  echo "❌ API connectivity failed (HTTP $response)"
  exit 1
fi

echo "✅ Integration health check passed"
Best Practices
Development Workflow Integration
Branch-Based Testing
- Feature Branches: Run subset of tests relevant to changes
- Main Branch: Run full regression suite
- Release Branches: Execute comprehensive test coverage
- Hotfix Branches: Focus on affected functionality testing
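The branch-based strategy above can be encoded as a small lookup used by the pipeline to choose which suite to run. The branch-naming conventions (`feature/`, `hotfix/`, `release/`) and suite names below are assumptions — match them to your repository's conventions.

```javascript
// Sketch: pick a test scope from the branch name.
function suiteForBranch(branch) {
  if (branch === 'main') return 'full-regression';
  if (branch.startsWith('release/')) return 'comprehensive';
  if (branch.startsWith('hotfix/')) return 'targeted';
  return 'smoke'; // feature branches: fast subset for quick feedback
}
```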
Test Selection Strategies
// Smart test selection based on code changes
function selectTestsForChanges(changedFiles) {
  const relevantTests = [];

  changedFiles.forEach(file => {
    // Map file changes to relevant tests
    if (file.includes('/auth/')) {
      relevantTests.push('authentication-suite');
    }
    if (file.includes('/api/')) {
      relevantTests.push('api-integration-suite');
    }
    // ... additional mappings
  });

  return relevantTests;
}
Continuous Integration Best Practices
- Fast Feedback: Run critical tests first
- Parallel Execution: Utilize multiple agents/runners
- Smart Retries: Retry flaky tests with exponential backoff
- Result Caching: Cache results for unchanged code
- Progressive Testing: Expand test coverage based on risk
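The "smart retries" idea above is usually implemented as a retry wrapper with exponential backoff. A minimal sketch, with delays in milliseconds and a cap to prevent runaway waits:

```javascript
// Exponential backoff: attempt 0 -> 500ms, 1 -> 1000ms, 2 -> 2000ms, ...
// capped at maxMs so late attempts never wait unreasonably long.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Retry an async operation (e.g. a flaky test or API call) with backoff.
async function withRetries(fn, maxAttempts = 3) {
  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise(resolve => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
  throw lastError;
}
```

Adding random jitter to the delay is a common refinement when many runners retry against the same API at once.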
Data Management
Test Data Strategy
- Isolated Data: Each test run uses fresh data
- Data Cleanup: Remove test data after execution
- Shared Fixtures: Use consistent baseline data
- Environment Parity: Maintain similar data across environments
Result Retention
- Archive Old Results: Move historical data to long-term storage
- Trend Analysis: Keep aggregated metrics for performance tracking
- Compliance: Meet regulatory requirements for audit trails
- Storage Optimization: Balance detail with storage costs
Next Steps
After setting up test automation:
- Explore Analytics and Reporting - Monitor automation effectiveness
- Set Up Team Collaboration - Share automation insights
- Review Best Practices - Optimize your automation workflows
- Configure Troubleshooting - Handle automation issues
Need help with automation setup? Check the API documentation or contact support for assistance with complex integration scenarios.