Setting Up Alerts for All Monitoring Types

Who it's for: Incident managers and engineers configuring notification policies.
You'll learn: Channel setup, advanced routing, testing tips, and troubleshooting strategies.

Acumen Logs provides alerting mechanisms to notify you of failures, missed checks, or upcoming expirations, depending on the monitoring type. Alert rules are scoped to each monitor so you can tailor severity and recipients to the business impact.


Prerequisites

  • Ensure your monitor is active and generating results (synthetic journey, uptime check, heartbeat, SSL/WHOIS, API test).
  • Confirm your project has at least one user with editor or admin permissions.
  • Gather delivery details ahead of time (Slack channel webhook, Teams connector URL, email distribution list, webhook target).
  • Optional: Configure alerting maintenance windows for planned downtime to avoid noise.

General Alert Configuration

The following notification channels are available across multiple monitoring types:

  • Email Notifications
  • Microsoft Teams Notifications
  • Slack Notifications
  • Desktop Notifications
  • Webhook Notifications

Each notification channel has its own minimum fail count. This threshold defines how many consecutive failures must occur before that channel sends a notification.

Setup Process

  1. Open a monitor and click Set Alert (synthetic and uptime) or Configure Alerts (heartbeat, SSL/WHOIS, API).
  2. Choose the request types or event categories you want to watch (e.g., JavaScript errors, 500 responses, missed heartbeats, certificate expiry days).
  3. For each enabled notification channel:
    • Toggle the channel on.
    • Provide delivery details (email address, Slack webhook, Teams connector, webhook URL).
    • Set the minimum fail count (or days before expiration for SSL/WHOIS).
  4. Click Save. The configuration is applied immediately to subsequent executions.

📌 Tip: Set different fail counts by channel to control noise. For example, email after 2 fails, Slack after 4 fails, webhook after 5 fails.
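A quick way to sanity-check these thresholds is to multiply each channel's fail count by the monitor's check interval; that product is roughly how long an outage lasts before the channel fires. The sketch below illustrates the arithmetic only; the 60-second interval and per-channel counts are assumptions for the example, not product defaults.

```python
# Rough time-to-first-alert per channel.
# The check interval and fail counts below are illustrative assumptions,
# not Acumen Logs defaults.
CHECK_INTERVAL_SECONDS = 60  # hypothetical: the monitor runs once per minute

channel_fail_counts = {
    "email": 2,    # notify early on a low-noise channel
    "slack": 4,    # escalate if the failure persists
    "webhook": 5,  # page on-call only for sustained outages
}

for channel, fail_count in channel_fail_counts.items():
    # A channel fires only after this many consecutive failed checks.
    minutes_until_alert = fail_count * CHECK_INTERVAL_SECONDS / 60
    print(f"{channel}: fires about {minutes_until_alert:.0f} min after the outage starts")
```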


Advanced Options

  • Quiet Hours / Maintenance Windows: Temporarily mute alerts during change windows so planned work does not trigger incidents.
  • Escalation Chains: Use multiple channels with increasing fail counts to escalate if incidents persist (email first, then Slack, then on-call webhook).
  • Custom Payloads: Webhook notifications include JSON payloads describing the monitor, failure reason, timestamps, and a deeplink back to the dashboard. Use these fields to power automation in PagerDuty, Jira, ServiceNow, or custom tooling (see the receiver sketch after this list).
  • Alert Templates: Customize subject lines and message bodies per channel to align with team conventions (e.g., include project name, environment, severity).
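As a rough illustration of consuming the webhook payload, here is a minimal Flask receiver that pulls out a few fields and hands them to downstream tooling. The field names (monitor, reason, dashboard_url) and the route are assumptions for this sketch; inspect an actual test delivery to confirm the real schema before wiring it into PagerDuty, Jira, or ServiceNow.

```python
# Minimal webhook receiver sketch (Flask). Field names are assumed from the
# payload description above (monitor, failure reason, timestamps, deeplink);
# check a real test delivery for the exact schema.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/acumen-alerts", methods=["POST"])
def handle_alert():
    payload = request.get_json(silent=True) or {}

    monitor = payload.get("monitor", "unknown monitor")    # hypothetical field
    reason = payload.get("reason", "no reason provided")   # hypothetical field
    deeplink = payload.get("dashboard_url", "")            # hypothetical field

    # Hand off to your incident tooling here (PagerDuty, Jira, ServiceNow, ...).
    print(f"Alert for {monitor}: {reason} ({deeplink})")

    # Respond with a 2xx quickly so the sender records the delivery as successful.
    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    app.run(port=8080)
```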

📌 Fail Count vs. Expiration-Based Alerts:

  • Uptime, API, and Heartbeat Monitoring: Alerts trigger after a minimum fail count.
  • SSL & WHOIS Monitoring: Alerts trigger based on days before expiration.

📌 Testing Your Alerts: Use the "Test Alert" button to send a test notification to all selected recipients. Each channel displays a success banner when the payload is accepted.

📌 Best Practice: Run a test whenever you add a new channel, rotate credentials, or change the fail count.
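Before relying on "Test Alert" for a new webhook channel, you can also confirm your endpoint accepts a payload at all by posting a dummy request yourself. The sketch below assumes the hypothetical receiver from the earlier example is running locally; the URL and fields are placeholders.

```python
# Smoke-test your own webhook endpoint before clicking "Test Alert".
# The URL and payload fields are placeholders, not Acumen Logs specifics.
import requests

dummy_payload = {
    "monitor": "example-uptime-check",
    "reason": "manual smoke test",
    "dashboard_url": "https://example.invalid/monitors/123",
}

response = requests.post(
    "http://localhost:8080/acumen-alerts", json=dummy_payload, timeout=5
)
print(response.status_code, response.text)
assert response.ok, "endpoint did not accept the payload"
```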


Monitoring-Specific Alert Conditions

Uptime, API, and Heartbeat Monitoring

✅ Alerts triggered by minimum fail count
✅ Supports Email, Teams, Slack, Desktop, and Webhooks

Suggested Fail Count Baselines: Adjust to your SLA and traffic profile.

  • Email Alerts: Start at 2-3 consecutive failures for production services. Use higher thresholds in staging.
  • Microsoft Teams / Slack Alerts: Start at 2 to surface urgent incidents to the war room quickly.
  • Desktop Notifications: Ideal for on-duty engineers actively watching the dashboard.
  • Webhooks: Pair with incident tools; consider higher thresholds (3-5) to avoid paging for transient blips.

SSL and WHOIS Monitoring

✅ Alerts triggered by days before expiration
✅ Supports Email, Teams, Slack, and Webhooks

📌 Expiration Alert Timeframe Options:

  • By Day: Alert 5 days before expiration.
  • By Week: Alert 1 week before expiration.
  • By Month: Alert 2 months before expiration.

You can combine multiple timeframes, e.g., notify via email 30 days prior and escalate to Slack 7 days prior.
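If you want to cross-check the thresholds you pick, a certificate's expiry date can also be read directly with the Python standard library. This is an independent sanity check, not part of Acumen Logs; the hostname is a placeholder.

```python
# Independent cross-check: read a certificate's days-to-expiry with the stdlib.
import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(hostname: str, port: int = 443) -> int:
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # notAfter looks like "Jun  1 12:00:00 2025 GMT"
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    print(days_until_cert_expiry("example.com"), "days until expiry")
```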


Final Steps & Testing Your Alerts

  1. Review your enabled alerts with stakeholders to confirm ownership.
  2. Click "Test Alert" to verify notifications.
  3. Monitor logs to ensure alerts are sent as expected. Use the activity log to confirm successful delivery.

📌 Efficient alerting ensures you're notified only when necessary.


Troubleshooting

  • No Alert Received: Confirm the channel toggle is enabled and the fail count threshold was met. For webhooks, inspect the receiving service's logs.
  • Too Many Alerts: Increase the fail count, shorten maintenance windows, or filter by specific request types.
  • Alert Delivered but Empty: Regenerate the webhook secret or review custom templates for missing placeholders; the debug listener sketched after this list can show exactly what your endpoint received.
  • Rotated Credentials: Update stored webhook URLs or email lists whenever Slack/Teams connectors are regenerated.
  • Need Historical Context: Export alert history from the monitor timeline to correlate with deployments or incidents.
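When a delivery arrives empty, it helps to capture exactly what your endpoint received. A minimal debug listener sketch, assuming you can temporarily point the webhook at a host you control:

```python
# Temporary debug listener: logs the raw headers and body of every POST so you
# can see exactly what an "empty" alert delivery contained. Stdlib only.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DebugHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("--- incoming alert delivery ---")
        print(self.headers)
        print(body.decode("utf-8", errors="replace") or "<empty body>")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DebugHandler).serve_forever()
```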

Related Guides