Overview
The compress node reduces data size using standard
compression algorithms. It can compress data before storage or network transmission and
decompress incoming compressed data. It supports gzip (most common), zlib, and raw
deflate streams with configurable compression levels, which can significantly reduce
bandwidth usage and storage costs for large payloads.
Properties
| Property | Type | Default | Description |
|---|---|---|---|
| operation | string | "compress" | Operation: "compress" or "decompress" |
| algorithm | string | "gzip" | Algorithm: "gzip", "zlib", or "deflate" |
| level | number | 6 | Compression level: 1 (fastest) to 9 (smallest) |
| encoding | string | "buffer" | Output encoding: "buffer", "base64", or "hex" |
| property | string | "payload" | Message property to compress or decompress |
Algorithm Comparison
Gzip
Most widely supported. Includes headers with metadata (filename, timestamp). Used by HTTP Content-Encoding and file archives.
// Best for: HTTP APIs, file storage
// Header: 10-byte gzip header
// Compatible with .gz files
Zlib
Includes a compact header with checksum. Common in network protocols and database storage. Less framing overhead than gzip (6 bytes vs. 18).
// Best for: Protocols, databases
// Header: 2-byte zlib header
// Includes Adler-32 checksum
Deflate
Raw compressed data with no headers or checksums. Smallest output. Use when you manage integrity checks separately.
// Best for: Minimal overhead
// Header: None (raw stream)
// Smallest output size
Example: Compress Log Data Before Storage
Compress large JSON log entries before writing them to a database or file system to save storage space.
// Compress node configuration
{
"operation": "compress",
"algorithm": "gzip",
"level": 6,
"encoding": "base64",
"property": "payload"
}
// Flow: [Collect Logs] -> [Batch: 100 msgs] -> [Compress] -> [DB Write]
//
// Before compression (JSON log batch):
// msg.payload = {
// "logs": [
// { "ts": "2025-01-15T10:00:01Z", "level": "info", "msg": "Request received", ... },
// { "ts": "2025-01-15T10:00:02Z", "level": "info", "msg": "Processing...", ... },
// // ... 98 more entries
// ]
// }
// Size: ~45 KB (JSON text)
//
// After compression:
// msg.payload = "H4sIAAAAAAAAA6tWKkktLlGyUlAqS8..."
// Size: ~8 KB (base64 of gzip)
// Compression ratio: ~82% reduction
Example: Decompress API Response Body
Handle compressed responses from APIs that use gzip content encoding.
// Decompress node configuration
{
"operation": "decompress",
"algorithm": "gzip",
"encoding": "buffer",
"property": "payload"
}
// Flow: [HTTP Request] -> [Decompress] -> [JSON Parse] -> [Process]
//
// HTTP Request returns compressed response:
// Response header: Content-Encoding: gzip
// msg.payload = (gzip binary)
//
// After decompression:
// msg.payload = (raw JSON bytes)
//
// After JSON parse:
// msg.payload = {
// "devices": [ ... ],
// "total": 1500,
// "page": 1
// }
//
// Useful when HTTP node does not auto-decompress
Common Use Cases
Log Compression
Compress batched log entries before writing to databases, reducing storage costs by 70-90%.
Bandwidth Optimization
Compress payloads before sending over MQTT or HTTP to reduce bandwidth on constrained networks.
API Response Handling
Decompress gzip-encoded responses from APIs and web services.
Archive Processing
Decompress data from archived storage or compressed file formats for analysis.