Monitoring · 12 January 2025 · 4 min read

Monitoring Rate Limits: Yours and Your Dependencies

Rate limiting failures are a category of error that traditional monitoring misses completely. Your service returns 429s? No alert fires, because the service is technically working. Your upstream returns 429s? Requests fail silently in ways that look like your code is broken.


You'll get rate limited — both by the APIs you call and by your own rate limiter. The teams that recover fastest are the ones who know about it before their users file tickets.

Two types of rate limit problems

You are the rate limitee — an upstream API (Stripe, Twilio, GitHub) is rejecting your requests with 429. Symptoms: specific operations fail intermittently, retry storms start, user-visible errors that appear non-deterministic.

You are the rate limiter — your own rate limiter is rejecting legitimate user traffic. Symptoms: users reporting 'too many requests' errors, spike in 429 responses in your API metrics.

Monitoring your own rate limiter

Track 429 responses from your API separately from other 4xx errors:

```
alert when: 429_rate > 5% of total requests for 5 minutes
```

This catches scenarios where your rate limits are set too aggressively for normal usage patterns, such as a legitimate traffic spike that exhausts user quotas.
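The 5%-over-5-minutes rule can be sketched as a rolling-window counter. This is a minimal illustration; the `RateLimitAlert` class and its thresholds are hypothetical, not an AlertsDock API:

```python
import time
from collections import deque

class RateLimitAlert:
    """Tracks the share of 429 responses over a rolling window
    and flags when it exceeds a threshold."""

    def __init__(self, threshold=0.05, window_seconds=300):
        self.threshold = threshold
        self.window = window_seconds
        self.events = deque()  # (timestamp, was_429)

    def record(self, status_code, now=None):
        now = time.time() if now is None else now
        self.events.append((now, status_code == 429))
        # Drop events that have aged out of the window
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()

    def should_alert(self):
        if not self.events:
            return False
        rate_429 = sum(1 for _, is_429 in self.events if is_429) / len(self.events)
        return rate_429 > self.threshold
```

In a real deployment you would record every response from your API middleware and evaluate `should_alert()` on a timer; the deque keeps memory bounded to the window.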

Monitoring upstream rate limits

Instrument your HTTP client to track 429s from each upstream:

```python
import time
import requests

def make_request(service, url):
    response = requests.get(url)
    if response.status_code == 429:
        metrics.increment(f'rate_limited.{service}')
        # Honor the server's Retry-After header, defaulting to 60 seconds
        retry_after = int(response.headers.get('Retry-After', 60))
        time.sleep(retry_after)
        response = requests.get(url)  # retry once after backing off
    return response
```

Alert when any upstream 429 rate exceeds 1%.

Cron jobs and rate limits

Scheduled jobs that process large batches are common rate limit culprits. A nightly job that calls an API 10,000 times without rate awareness will exhaust daily quotas.
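One way to keep a batch job under an upstream's limit is to pace calls and count the 429s you still hit. A sketch, assuming a hypothetical `call_api` function that returns a response with `status_code` and `headers`:

```python
import time

def process_batch(items, call_api, max_per_second=5):
    """Pace API calls so a batch job stays under the upstream's
    rate limit, and count the 429s encountered anyway."""
    interval = 1.0 / max_per_second
    rate_limited = 0
    for item in items:
        response = call_api(item)
        if response.status_code == 429:
            rate_limited += 1
            # Back off for as long as the upstream asks
            time.sleep(int(response.headers.get('Retry-After', 60)))
        time.sleep(interval)  # steady pacing between calls
    return rate_limited  # log this alongside the heartbeat
```

Returning the 429 count makes it easy to log and to attach to the heartbeat ping at the end of the run.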

Monitor: add a heartbeat to the end of your batch job on AlertsDock, and log the number of 429s encountered during the run.

Circuit breaker for rate-limited services

Implement a circuit breaker that opens when rate limit errors exceed a threshold:

- 10 consecutive 429s → open circuit for 60 seconds
- During circuit open: return cached data or a graceful error
- After 60s → test with a single request (half-open)

This prevents retry storms that worsen the rate limit situation.
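The policy above can be sketched as a small state holder. This is a minimal illustration (the class name and `record`/`allow_request` interface are my own, not from any library):

```python
import time

class RateLimitCircuitBreaker:
    """Opens after a run of consecutive 429s; half-opens after a cooldown."""

    def __init__(self, failure_threshold=10, cooldown=60):
        self.failure_threshold = failure_threshold
        self.cooldown = cooldown
        self.consecutive_429s = 0
        self.opened_at = None  # None means the circuit is closed

    def allow_request(self, now=None):
        now = time.time() if now is None else now
        if self.opened_at is None:
            return True  # closed: traffic flows normally
        # Half-open: let a single probe through after the cooldown
        return now - self.opened_at >= self.cooldown

    def record(self, status_code, now=None):
        now = time.time() if now is None else now
        if status_code == 429:
            self.consecutive_429s += 1
            if self.consecutive_429s >= self.failure_threshold:
                self.opened_at = now  # open the circuit
        else:
            # Any success closes the circuit and resets the count
            self.consecutive_429s = 0
            self.opened_at = None
```

While the circuit is open, the caller skips the upstream entirely and serves cached data or a graceful error instead of piling on retries.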


AlertsDock Team