bunqueue vs BullMQ

Real benchmark results comparing bunqueue with BullMQ on identical workloads.

Summary

  • 36x faster push: 341,362 vs 9,488 ops/sec (single job push operations)
  • 28x faster bulk push: 1,023,376 vs 36,656 ops/sec (bulk push, 100 jobs per batch)
  • 8x faster processing: 131,923 vs 15,947 ops/sec (job processing throughput)
  • Zero infrastructure: embedded SQLite, no Redis server required


Throughput Comparison

Throughput comparison chart
Operation | bunqueue | BullMQ | Speedup
Push | 341,362 ops/sec | 9,488 ops/sec | 36x faster
Bulk Push | 1,023,376 ops/sec | 36,656 ops/sec | 28x faster
Process | 131,923 ops/sec | 15,947 ops/sec | 8.3x faster

Latency Comparison

Latency comparison chart
Operation | bunqueue p99 | BullMQ p99 | Improvement
Push | 0.01 ms | 0.54 ms | 54x lower
Bulk Push | 0.55 ms | 6.77 ms | 12x lower
Process | 56.8 ms | 614.5 ms | 11x lower

Speedup by Operation

Speedup comparison chart

Real-World Scenarios

Performance comparison on typical production workloads.

Real-world scenarios comparison chart
Scenario | bunqueue | BullMQ | Speedup
Email Queue (5k emails, mixed priorities) | 324,462 ops/sec | 11,658 ops/sec | 27.8x faster
Webhook Burst (3k webhooks with retries) | 646,964 ops/sec | 44,648 ops/sec | 14.5x faster
Image Processing (1k large payloads) | 190,491 ops/sec | 8,740 ops/sec | 21.8x faster
Order Processing (5k orders, priorities) | 387,220 ops/sec | 13,281 ops/sec | 29.2x faster

Niche Scenarios

Edge cases and stress tests that push the limits.

Niche scenarios comparison chart
Scenario | bunqueue | BullMQ | Speedup
Massive Delayed (10k scheduled jobs) | 950,890 ops/sec | 47,914 ops/sec | 19.8x faster
Priority Stress (100 priority levels) | 419,013 ops/sec | 14,382 ops/sec | 29.1x faster
IoT Tiny Payloads (50k minimal jobs) | 1,244,372 ops/sec | 52,268 ops/sec | 23.8x faster
Deduplication (5k jobs, 1k unique keys) | 362,328 ops/sec | 14,438 ops/sec | 25.1x faster
High Concurrency (50 workers) | 189,236 ops/sec | 27,379 ops/sec | 6.9x faster

Scenario Speedups

Scenario speedup comparison chart

Why is bunqueue Faster?

No Network Overhead

bunqueue uses embedded SQLite with direct FFI bindings. BullMQ requires network round-trips to Redis.

Optimized Data Structures

Skip lists, a min-heap, and an LRU cache keep common operations at O(log n) or O(1).
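
As an illustration of why this matters (a sketch only, not bunqueue's actual implementation), a binary min-heap keyed on a job's run-at timestamp lets a scheduler peek the next due job in O(1) and insert or pop in O(log n):

// Illustrative min-heap for delayed jobs, keyed on runAt (epoch ms).
// bunqueue's internal structures may differ; this only shows the complexity argument.
interface DelayedJob {
  id: number;
  runAt: number; // when the job becomes ready
}

class MinHeap {
  private heap: DelayedJob[] = [];

  push(job: DelayedJob): void {
    this.heap.push(job);
    let i = this.heap.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.heap[parent].runAt <= this.heap[i].runAt) break;
      [this.heap[parent], this.heap[i]] = [this.heap[i], this.heap[parent]];
      i = parent; // sift up: O(log n)
    }
  }

  peek(): DelayedJob | undefined {
    return this.heap[0]; // next job to become ready: O(1)
  }

  pop(): DelayedJob | undefined {
    const top = this.heap[0];
    const last = this.heap.pop();
    if (this.heap.length > 0 && last !== undefined) {
      this.heap[0] = last;
      let i = 0;
      for (;;) {
        const l = 2 * i + 1;
        const r = 2 * i + 2;
        let smallest = i;
        if (l < this.heap.length && this.heap[l].runAt < this.heap[smallest].runAt) smallest = l;
        if (r < this.heap.length && this.heap[r].runAt < this.heap[smallest].runAt) smallest = r;
        if (smallest === i) break;
        [this.heap[smallest], this.heap[i]] = [this.heap[i], this.heap[smallest]];
        i = smallest; // sift down: O(log n)
      }
    }
    return top;
  }
}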

Batch Transactions

SQLite transactions group many queue operations into a single disk write instead of one write per job.
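
A minimal sketch of the idea using Bun's built-in bun:sqlite driver (the table name and columns here are hypothetical, not bunqueue's real schema): wrapping 100 inserts in one transaction means SQLite commits them with a single journal write rather than 100 separate commits.

import { Database } from "bun:sqlite";

// Hypothetical schema for illustration only.
const db = new Database("queue.db");
db.run("CREATE TABLE IF NOT EXISTS jobs (id INTEGER PRIMARY KEY, payload TEXT)");

const insert = db.prepare("INSERT INTO jobs (payload) VALUES (?)");

// db.transaction() wraps the callback in BEGIN/COMMIT, so all inserts
// share one commit instead of paying the disk-sync cost per job.
const insertBatch = db.transaction((payloads: string[]) => {
  for (const p of payloads) insert.run(p);
});

insertBatch(Array.from({ length: 100 }, (_, i) => JSON.stringify({ n: i })));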

32-Way Sharding

Lock contention is minimized by distributing work across 32 independent shards.
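
Conceptually (again a sketch, not bunqueue's exact code), each job key is hashed to one of 32 shards, so workers touching different shards never contend on the same lock:

const SHARD_COUNT = 32;

// Simple FNV-1a hash; any stable hash that spreads keys evenly works.
function shardFor(jobId: string): number {
  let hash = 2166136261;
  for (let i = 0; i < jobId.length; i++) {
    hash ^= jobId.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return (hash >>> 0) % SHARD_COUNT;
}

// Each shard owns its own state, so pushes that land on different shards
// proceed in parallel without blocking each other.
const shards = Array.from({ length: SHARD_COUNT }, () => ({ jobs: [] as string[] }));

function push(jobId: string, payload: string): void {
  shards[shardFor(jobId)].jobs.push(payload);
}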


Memory Comparison

Metric | bunqueue | BullMQ
Base Memory | ~50 MB | ~80 MB + Redis
Per Job | ~100 bytes | ~500 bytes
10K Jobs | ~65 MB | ~120 MB
External Services | None | Redis server

Feature Comparison

Feature | bunqueue | BullMQ
Queue Types | ✅ Standard, Priority, LIFO | ✅ Standard, Priority, LIFO
Delayed Jobs | ✅ Yes | ✅ Yes
Retries & Backoff | ✅ Exponential | ✅ Exponential
Dead Letter Queue | ✅ Built-in | ✅ Built-in
Rate Limiting | ✅ Per-queue | ✅ Per-queue
Cron Jobs | ✅ Built-in | ✅ Via scheduler
Job Dependencies | ✅ Parent-child flows | ✅ Parent-child flows
Persistence | ✅ SQLite (embedded) | ✅ Redis
Horizontal Scaling | ⚠️ Single process | ✅ Multi-process
External Dependencies | ✅ None | ❌ Redis required
S3 Backup | ✅ Built-in | ❌ Manual

Run Your Own Benchmarks

Terminal window
git clone https://github.com/egeominotti/bunqueue.git
cd bunqueue
bun install
# Start Redis (required for BullMQ)
redis-server --daemonize yes
# Run core benchmark (push, bulk, process)
bun run bench/comparison/run.ts
# Run scenario benchmarks (real-world & niche)
bun run bench/comparison/scenarios.ts

Benchmark Environment

Hardware:

  • Mac Studio, Apple M1 Max
  • 32GB RAM
  • SSD storage

Software:

  • macOS 26.2 (Tahoe)
  • Bun 1.3.8
  • Node.js 24.3.0
  • Redis 7.x (localhost)

Configuration:

  • 10,000 iterations per test
  • Bulk size: 100 jobs
  • Concurrency: 10 workers
  • Payload: 100 bytes per job
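
The actual benchmark scripts live in bench/comparison/; as a rough illustration of what these settings mean for the push and bulk-push tests on the BullMQ side (the queue name and payload shape below are assumptions, not taken from the scripts), each iteration adds one job with a ~100-byte payload, and bulk mode groups 100 jobs per addBulk call:

import { Queue } from "bullmq";

// Assumed setup: local Redis on the default port, as started above.
const queue = new Queue("bench", { connection: { host: "127.0.0.1", port: 6379 } });
const payload = { data: "x".repeat(100) }; // ~100-byte payload per job

// Single push: one round-trip to Redis per job.
for (let i = 0; i < 10_000; i++) {
  await queue.add("job", payload);
}

// Bulk push: 100 jobs per addBulk call, amortizing the round-trip cost.
const batch = Array.from({ length: 100 }, () => ({ name: "job", data: payload }));
for (let i = 0; i < 100; i++) {
  await queue.addBulk(batch);
}

await queue.close();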

When to Use BullMQ Instead

While bunqueue is faster for most use cases, BullMQ may be better when:

  • Horizontal scaling is required across multiple processes/servers
  • Redis is already part of your infrastructure
  • Redis-specific features like pub/sub or Lua scripts are needed
  • Multi-language workers need to share the same queue