Creating a Synthetic Monitoring Test
Who it's for: Engineers and QA specialists modelling user journeys for proactive monitoring.
You’ll learn: Test setup, fail conditions, locations, alerting, and troubleshooting workflows.
Overview
Synthetic monitoring simulates real user activity by running scripted journeys at scheduled intervals. Each execution captures request timings, screenshots, video, console logs, and performance insights so you can catch regressions before customers do.
📌 Prerequisites: Ensure you have a project created, credentials for any protected areas you plan to monitor, and the Chrome recording extension installed if you want to capture journeys automatically.
Start the Setup Process
- Click Create New Synthetic Test inside your chosen project.
- Give the test a descriptive name (example: Checkout - Production).
- Choose the Project that will contain the test if prompted.
- Enter the Startup Page URL – the page where the journey should begin (example: https://yourbusinesswebsite.com/login).
- Optionally provide a Base Domain URL to use as a variable inside scripted steps (test.test.base.pageurl("/account")).
- Select the Emulated Device (Desktop, Mobile, or Tablet). This sets the viewport size and user agent.
Configure the Schedule
- Choose how often the test should run. Frequency depends on your subscription tier (from every minute to once per day).
- Toggle Enable Schedule to start automated runs.
- Consider time-zone coverage. For global services, schedule from multiple regions at staggered intervals.
📌 Tip: Run critical revenue flows (checkout, signup) every 5 minutes; less critical journeys can run every 15–30 minutes.
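As a rough sketch only (the field names below are illustrative, not the product's actual configuration format), a staggered multi-region schedule for a critical flow might look like this:

```javascript
// Hypothetical schedule sketch; field names are illustrative.
const checkoutSchedule = {
  journey: "Checkout - Production",
  intervalMinutes: 5, // critical revenue flow: run every 5 minutes
  locations: [
    { region: "London",    offsetMinutes: 0 },
    { region: "Frankfurt", offsetMinutes: 1 },
    { region: "Ohio",      offsetMinutes: 2 },
    { region: "Singapore", offsetMinutes: 3 },
  ],
};
// Offsetting each region spreads probes across the interval instead of
// firing every location at the same moment.
```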
Define Fail Conditions
Fail conditions decide when a run is marked as failed:
- JavaScript Errors: Flag runs if console errors appear.
- HTTP Status Thresholds: Configure counts for 200, 300, 400, and 500 level responses. Example: fail after 1×500 or 3×404.
- Response Validation: Assert that a specific string exists in the DOM or response body (a scripted version of this check follows this list).
Optional filters:
- Ignore Non-Domain Errors: Exclude third-party hosts from failure counts.
- Request Type Monitoring: Track Fetch/XHR, JS, CSS, IMG individually to catch specific asset failures.
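If you prefer to express a response-validation check as a scripted step, a Run JavaScript step can make the same assertion against the rendered DOM. This is a minimal sketch using only standard browser APIs; the expected string is just an example, and how a thrown error is surfaced depends on the product:

```javascript
// Fail the run if an expected string is missing from the rendered page.
const expected = "Order confirmed"; // example string
const bodyText = document.body ? document.body.innerText : "";

if (!bodyText.includes(expected)) {
  // Throwing is a common way for a scripted step to signal failure.
  throw new Error(`Response validation failed: "${expected}" not found in the DOM`);
}
```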
Security Monitoring Options
- Domain & IP Monitoring: Fail the run if requests route through unexpected domains or addresses.
- Domain & IP Whitelist: Explicitly allow only trusted origins.
- Request Blocking: Prevent loading analytics, ads, or noisy third-party scripts that might distort performance numbers.
Use these controls to ensure your journeys stay deterministic and to detect supply-chain compromises.
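For illustration only (the actual matching syntax and option names may differ in the product), the intent of these controls can be summarised as a small allow/block policy:

```javascript
// Hypothetical allow/block policy; origins and patterns are examples.
const networkPolicy = {
  // Domain & IP whitelist: only these origins are expected during the journey.
  allowedOrigins: [
    "https://yourbusinesswebsite.com",
    "https://cdn.yourbusinesswebsite.com",
  ],
  // Request blocking: keep noisy third parties out of the timings.
  blockedPatterns: ["*.google-analytics.com", "*.doubleclick.net"],
  // Domain & IP monitoring: any request outside allowedOrigins fails the run,
  // which also helps surface unexpected (possibly compromised) dependencies.
  failOnUnexpectedOrigin: true,
};
```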
Enable Lighthouse Audits
- Toggle Lighthouse to collect performance, accessibility, SEO, and best practice scores during each run.
- Review the results on the Synthetic Test Details page to track how deployments impact Core Web Vitals.
Running Lighthouse adds a few seconds to the run; enable only on journeys where the insights are valuable.
Select Testing Locations
- Choose geographic locations (e.g., London, Frankfurt, Ohio, Singapore) to emulate users worldwide.
- Mix regions to detect CDN issues, regional downtime, or compliance-related blocking.
- Adjust per-location scheduling if you need to reduce traffic in specific areas.
Build the User Journey
User journeys simulate click-by-click behaviour:
- Manual Steps: Add actions such as Click, Fill, Select, Hover, Upload, Wait, Screenshot, See Element, Run JavaScript, Extract Email Text, and Assert Email Received (a Run JavaScript sketch appears at the end of this section).
- Journey Library: Import an existing journey template to reuse across environments.
- Chrome Extension: Record interactions directly from your browser and save them as reusable steps.
- Variables & Secrets: Reference environment variables for credentials, tokens, or dynamic data.
When finished, click Save Journey and optionally add it to your shared library for other tests.
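Scripted steps can also extract and validate data mid-journey. The snippet below is a plain Run JavaScript sketch using only standard DOM APIs; the selector is an example, not a guaranteed attribute on your pages:

```javascript
// Read the order total from the confirmation page and fail fast if it looks wrong.
const el = document.querySelector('[data-testid="order-total"]'); // example selector

if (!el) {
  throw new Error("Order total element not found on confirmation page");
}

const total = parseFloat(el.textContent.replace(/[^0-9.]/g, ""));
if (!Number.isFinite(total) || total <= 0) {
  throw new Error(`Unexpected order total: "${el.textContent}"`);
}
```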
Save Your Test
Once all settings are configured, click Save. You will be returned to the Project dashboard, where the new synthetic test appears with its next scheduled run time.
Setting Up Alerts for Synthetic Monitoring
Enable Alerts by Request Type
- Click Set Alert on the Synthetic Test Dashboard.
- Choose request categories to monitor (Fetch/XHR, JavaScript errors, CSS failures, image load failures, console errors, security issues).
- For each channel (Email, Slack, Microsoft Teams, Desktop, Webhook) provide recipients and configure the minimum fail count.
Test Your Alerts
- Click Test Alert to send sample payloads to all enabled channels.
- Confirm delivery in each tool and adjust fail counts as needed to reduce noise.
📌 If you do not receive the test alert, check your fail count, credentials, or webhook response logs.
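If you route alerts to a webhook, a throwaway receiver that simply logs incoming requests is a quick way to confirm delivery. This is a generic Node.js sketch (no dependencies); the alert payload's shape is product-specific, so it is logged verbatim rather than parsed into assumed fields:

```javascript
// Minimal webhook receiver for verifying Test Alert delivery.
const http = require("http");

http
  .createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => { body += chunk; });
    req.on("end", () => {
      // Log method, path, and raw body so you can inspect whatever is sent.
      console.log(`${new Date().toISOString()} ${req.method} ${req.url}`);
      console.log(body);
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end('{"received":true}');
    });
  })
  .listen(8080, () => console.log("Listening on http://localhost:8080"));
```

Point the Webhook channel at this endpoint (through a tunnel if the monitor runs from the cloud), click Test Alert, and confirm a request arrives before relying on it in production.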
Validation & Maintenance
- Run On-Demand Tests: Use the Run button after deploying changes to verify fixes before the next scheduled execution.
- Version Journeys: When flows change, duplicate the test and update steps so you retain historical data for comparison.
- Rotate Credentials: Update stored secrets whenever you rotate test accounts or API keys.
- Review Logs: Inspect the Synthetic Test Details view regularly for trends in response times, errors, and Lighthouse scores.
Best Practices
- Model journeys after real customer paths, including edge cases (failed logins, optional steps).
- Keep steps resilient: prefer selectors tied to stable semantics (roles, labels, dedicated test attributes) over brittle, layout-dependent CSS selectors when possible (see the sketch after this list).
- Combine assertions with visual checks (screenshots, video) for faster diagnosis.
- Limit the number of concurrent synthetic tests hitting login forms to avoid rate limits or CAPTCHA triggers.
- Pair synthetic alerts with uptime alerts to differentiate page failures from infrastructure outages.
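To illustrate the selector advice above, compare a brittle, layout-dependent selector with alternatives tied to stable semantics. The selectors are generic examples, not taken from any particular page:

```javascript
// Brittle: breaks whenever the markup or styling framework changes.
document.querySelector("div.container > div:nth-child(3) > button.btn.btn-primary");

// More resilient: tied to meaning rather than layout.
document.querySelector('[data-testid="checkout-submit"]');      // dedicated test hook
document.querySelector('button[aria-label="Place order"]');     // accessible name
document.querySelector('#checkout-form button[type="submit"]'); // stable form semantics
```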
Troubleshooting
- Test Fails Immediately: Verify the startup URL resolves and authentication credentials are valid.
- Element Not Found: Increase wait times, use Wait steps, or adjust selectors to account for dynamic content (see the polling sketch after this list).
- Random Redirects: Add request blocking or adjust fail conditions for third-party marketing scripts.
- Slow Runs: Review the Requests tab and Lighthouse diagnostics to isolate heavy assets. Consider blocking ads/analytics during testing.
- Alert Noise: Increase the fail count or scope alerts to specific request types.
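Where a built-in Wait step is not enough, a Run JavaScript step can poll explicitly for dynamic content. A minimal sketch using only standard browser APIs; the selector and timeout are examples, and support for top-level await depends on how scripted steps are executed:

```javascript
// Poll for a dynamically rendered element; fail with a clear error after a timeout.
async function waitForElement(selector, timeoutMs = 10000, intervalMs = 250) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const el = document.querySelector(selector);
    if (el) return el;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Timed out after ${timeoutMs}ms waiting for ${selector}`);
}

await waitForElement('[data-testid="search-results"]'); // example selector
```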