rate-limit

Control message throughput by counting requests per time window with drop, queue, or delay strategies.

Overview

The rate-limit node controls message throughput by enforcing a maximum number of messages per time window. Unlike the delay node, which spaces messages evenly, rate-limit uses a counter-based approach: it tracks how many messages have passed within the current window and applies a strategy (drop, queue, or delay) once the limit is exceeded. It is well suited to API rate compliance, notification throttling, and protecting downstream services from burst traffic.
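The counter-based approach described above can be sketched as a fixed-window counter. This is an illustrative sketch, not the node's actual implementation; the class and method names (FixedWindowLimiter, tryAcquire) are assumptions.

```javascript
// Minimal fixed-window counter sketch (illustrative; names are assumed).
class FixedWindowLimiter {
  constructor(rate, windowMs) {
    this.rate = rate;           // max messages per window
    this.windowMs = windowMs;   // window length in milliseconds
    this.count = 0;             // messages seen in the current window
    this.windowStart = Date.now();
  }

  // Returns true if the message may pass in the current window.
  tryAcquire(now = Date.now()) {
    if (now - this.windowStart >= this.windowMs) {
      // Window expired: start a new one and reset the counter.
      this.windowStart = now;
      this.count = 0;
    }
    if (this.count < this.rate) {
      this.count++;
      return true;
    }
    return false; // limit reached; caller decides drop/queue/delay
  }
}

// 3 msgs per 1000 ms window: the first three pass, the rest are rejected.
const limiter = new FixedWindowLimiter(3, 1000);
const results = [1, 2, 3, 4, 5].map(() => limiter.tryAcquire());
console.log(results); // [true, true, true, false, false]
```

The boolean returned here is exactly the decision point where the configured overflow strategy takes over.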

Count-based limiting
Drop excess messages
Queue for later
Delay until window

Properties

Property    Type     Default    Description
rate        number   100        Maximum messages allowed per time window
timeWindow  string   "minute"   Window size: "second", "minute", or "hour"
strategy    string   "drop"     Overflow strategy: "drop", "queue", or "delay"
maxQueue    number   1000       Maximum queued messages (queue strategy only)
septopics   boolean  false      Track rate limits per topic separately

Overflow Strategies

Drop

Silently discard excess messages

// Rate: 3/window
In:  1 2 3 4 5 | 6 7 8
Out: 1 2 3 X X | 6 7 8
     ^^^^^^^^^^   ^^^^^
     window 1     window 2
(4 and 5 are dropped)

Queue

Buffer excess messages for the next window

// Rate: 3/window
In:  1 2 3 4 5 | next window
Out: 1 2 3     | 4 5 ...
     (4, 5 queued and sent
      when window resets)
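The queue strategy above can be sketched as a buffer that drains when the window resets. Names (QueueingLimiter, send, onWindowReset) are assumptions for illustration; the node's internals may differ.

```javascript
// Queue strategy sketch: excess messages wait for the next window.
class QueueingLimiter {
  constructor(rate, maxQueue) {
    this.rate = rate;         // max messages per window
    this.maxQueue = maxQueue; // cap on buffered messages
    this.count = 0;
    this.queue = [];
  }

  // Called per incoming message; returns the messages to emit now.
  send(msg) {
    if (this.count < this.rate) {
      this.count++;
      return [msg];
    }
    if (this.queue.length < this.maxQueue) this.queue.push(msg);
    // else: queue full, message is dropped
    return [];
  }

  // Called when the window resets; drains up to `rate` queued messages.
  onWindowReset() {
    const drained = this.queue.splice(0, this.rate);
    this.count = drained.length;
    return drained;
  }
}

const q = new QueueingLimiter(3, 1000);
const out = [];
for (const m of [1, 2, 3, 4, 5]) out.push(...q.send(m));
console.log(out);            // [1, 2, 3]
const drained = q.onWindowReset();
console.log(drained);        // [4, 5]
```

Note that drained messages count against the new window's budget, so a sustained overload fills the queue rather than bypassing the limit.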

Delay

Hold messages until capacity is available

// Rate: 3/window
In:  1 2 3 4 5
Out: 1 2 3 ─wait─ 4 5
     (4 waits for next window,
      back-pressure applied)
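The delay strategy's back-pressure can be sketched with an awaited promise: the sender blocks until the next window has capacity. This is an assumed shape (DelayingLimiter, acquire), not the node's source.

```javascript
// Delay strategy sketch: hold the sender until capacity is available.
class DelayingLimiter {
  constructor(rate, windowMs) {
    this.rate = rate;
    this.windowMs = windowMs;
    this.count = 0;
    this.windowStart = Date.now();
  }

  // Resolves when the message may proceed; awaiting applies back-pressure.
  async acquire() {
    for (;;) {
      const now = Date.now();
      if (now - this.windowStart >= this.windowMs) {
        this.windowStart = now; // new window
        this.count = 0;
      }
      if (this.count < this.rate) {
        this.count++;
        return;
      }
      // Limit reached: sleep until the current window ends, then retry.
      const wait = this.windowMs - (now - this.windowStart);
      await new Promise((resolve) => setTimeout(resolve, wait));
    }
  }
}

// 3 msgs / 200 ms window: msgs 4 and 5 wait roughly 200 ms for the reset.
(async () => {
  const limiter = new DelayingLimiter(3, 200);
  const start = Date.now();
  for (const m of [1, 2, 3, 4, 5]) {
    await limiter.acquire();
    console.log(`msg ${m} sent at +${Date.now() - start}ms`);
  }
})();
```

Because `acquire()` is awaited per message, an upstream producer naturally slows down instead of overrunning the limit.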

Rate Limit vs Delay Node

rate-limit (this node)

Counter-based window approach

Counts messages per window.
100 msgs/minute = allows bursts
of up to 100, then blocks until
the minute resets.

Best for: API rate limits,
quota enforcement.

delay (rate mode)

Even spacing approach

Spaces messages evenly.
100 msgs/minute = 1 message
every 600ms, steady flow.

Best for: smooth output,
LED sequences, UI updates.
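The arithmetic behind the comparison is simple: for the same 100 msgs/minute budget, rate-limit permits a burst of 100 back-to-back, while delay's rate mode derives a fixed spacing from the window length.

```javascript
// Same budget, two shapes of traffic.
const rate = 100;
const windowMs = 60000; // one minute

// rate-limit: up to `rate` messages may pass back-to-back,
// then none until the window resets.
const maxBurst = rate;

// delay (rate mode): messages are spaced evenly across the window.
const spacingMs = windowMs / rate;

console.log(maxBurst);  // 100
console.log(spacingMs); // 600
```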

Example: Limit API Calls to 100/Minute

Enforce external API rate limits to avoid HTTP 429 errors.

// Rate-limit node configuration
{
  "rate": 100,
  "timeWindow": "minute",
  "strategy": "queue",
  "maxQueue": 500
}

// Flow: sensor data → rate-limit → http-request (external API)
//
// Sensors generate ~200 msgs/min during peak
// API allows max 100 requests/minute
//
// Behavior:
//   0:00 - 0:30  → 100 messages pass (burst allowed)
//   0:30 - 1:00  → excess queued (limit reached)
//   1:00 onward  → queued messages drain, no 429 errors

Example: Throttle Notification Emails

Prevent alert fatigue by limiting email notifications to 5 per hour.

// Rate-limit node configuration
{
  "rate": 5,
  "timeWindow": "hour",
  "strategy": "drop",
  "septopics": true    // Separate limits per alert type
}

// Flow: alert-engine → rate-limit → email node
//
// Without rate-limit:
//   Temperature alarm fires every 30 seconds
//   = 120 emails per hour!
//
// With rate-limit (drop strategy):
//   First 5 alerts → email sent
//   Next 115 alerts → silently dropped
//   Next hour → counter resets, 5 more allowed
//
// septopics: true means each alert type
// (temperature, humidity, pressure) gets its own
// 5/hour allowance independently
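The per-topic behavior of septopics: true can be sketched as a map of independent counters keyed by topic. The structure below (PerTopicLimiter, allow) is an assumption for illustration.

```javascript
// septopics sketch: one independent counter per topic (assumed names).
class PerTopicLimiter {
  constructor(rate) {
    this.rate = rate;
    this.counts = new Map(); // topic -> count in the current window
  }

  // Returns true if this topic still has budget in the current window.
  allow(topic) {
    const n = this.counts.get(topic) || 0;
    if (n >= this.rate) return false;
    this.counts.set(topic, n + 1);
    return true;
  }

  // Window reset clears every topic's counter at once.
  onWindowReset() {
    this.counts.clear();
  }
}

const limiter = new PerTopicLimiter(5);
// temperature and humidity draw from separate 5-per-window allowances
const temp = Array.from({ length: 6 }, () => limiter.allow('temperature'));
const hum = limiter.allow('humidity');
console.log(temp); // [true, true, true, true, true, false]
console.log(hum);  // true
```

Exhausting one topic's allowance leaves every other topic unaffected, which is exactly why per-type alert throttling works in the example above.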

Common Use Cases

API Quota Compliance

Stay within third-party API request limits

Notification Throttling

Prevent alert storms and email flooding

Device Protection

Limit commands to actuators and relays

Cost Control

Cap cloud API calls to manage billing

Related Nodes