bunqueue includes a comprehensive benchmark suite for performance testing, stress testing, and production validation.
| Highlight | Result | Notes |
|---|---|---|
| Push Throughput | 1.2M+ ops/sec | With batch operations (1000+ jobs) |
| Process Rate | 494,000 ops/sec | Full cycle with 16 workers |
| Scale Tested | 1 Million Jobs | Verified data integrity at scale |
| Memory Efficient | ~100 bytes/job | At scale with `removeOnComplete: true` |
These results were measured on Mac Studio M1 Max, 32GB RAM, Bun 1.3.8.
| Operation | Throughput | Notes |
|---|---|---|
| PUSH | 579,603 ops/sec | Single job insertion |
| PULL | 420,835 ops/sec | Single job retrieval |
| ACK | 464,857 ops/sec | Single job acknowledgment |
| BATCH(100) | 1,259,234 ops/sec | Batch push, 100 jobs |
| BATCH(1000) | 1,452,193 ops/sec | Batch push, 1000 jobs |
| Batch Size | Throughput | Speedup vs Individual |
|---|---|---|
| Individual | 579,603/s | 1x |
| 100 | 1,259,234/s | 2.2x |
| 1,000 | 1,452,193/s | 2.5x |
| 5,000 | 1,393,745/s | 2.4x |
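The plateau above is what you would expect when a fixed per-call cost is amortized over the batch: once per-job work dominates, larger batches stop helping. A toy model of this effect (the overhead and per-job costs below are illustrative guesses, not measured values from bunqueue):

```typescript
// Toy amortization model of batch throughput.
// OVERHEAD_US_PER_CALL and COST_US_PER_JOB are made-up illustrative numbers.
const OVERHEAD_US_PER_CALL = 1.0; // fixed cost paid once per call (µs)
const COST_US_PER_JOB = 0.68;     // marginal cost per job (µs)

function modeledOpsPerSec(batchSize: number): number {
  const usPerCall = OVERHEAD_US_PER_CALL + batchSize * COST_US_PER_JOB;
  return (batchSize / usPerCall) * 1_000_000;
}

for (const b of [1, 100, 1000, 5000]) {
  console.log(`batch=${b}: ~${Math.round(modeledOpsPerSec(b)).toLocaleString()} ops/sec`);
}
```

As batch size grows, throughput approaches the per-job limit (1 / per-job cost), which is why the measured speedup flattens between 1,000 and 5,000.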
| Scale | Push Rate | Process Rate | Total Events |
|---|---|---|---|
| 10,000 jobs | 519,445/s | 217,980/s | ✅ 10,000 |
| 50,000 jobs | 604,918/s | 243,945/s | ✅ 50,000 |
| 100,000 jobs | 611,819/s | 232,613/s | ✅ 100,000 |
| Metric | Result |
|---|---|
| Total Jobs | 1,000,000 |
| Push Rate | 1,210,654 ops/sec |
| Process Rate | 494,805 ops/sec |
| Overall Rate | 351,247 ops/sec |
| Total Time | 2.85 seconds |
| Data Integrity | ✅ PASSED (100%) |
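As a quick sanity check, the overall rate in the summary is consistent with total jobs divided by wall-clock time (the small difference comes from the reported time being rounded):

```typescript
// Overall rate ≈ total jobs / total wall-clock time.
const totalJobs = 1_000_000;
const totalTimeSec = 2.85; // as reported (rounded)
const overallRate = totalJobs / totalTimeSec;
console.log(`~${Math.round(overallRate).toLocaleString()} ops/sec`); // ≈ 351K, matching the table
```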
| Test | Configuration | Throughput | Status |
|---|---|---|---|
| High Volume | 100K jobs, single queue | 140,032/s | ✅ PASS |
| Concurrent Queues | 10 queues × 10K jobs | 138,336/s | ✅ PASS |
| Large Payloads | 5K jobs × 10KB each | 384,781 MB/s | ✅ PASS |
| Priority Stress | 50K jobs, random priorities | 179,983/s | ✅ PASS |
| Retry Storm | 10K jobs, 50% fail rate | - | ✅ PASS |
| Batch Operations | 100K jobs, batch=1000 | 266,619/s | ✅ PASS |
bunqueue includes 6 specialized benchmarks:
| Benchmark | Purpose | Jobs | Duration |
|---|---|---|---|
| throughput.bench.ts | Individual operation speeds | 10K-50K | ~30s |
| worker.bench.ts | Realistic worker simulation | 10K-100K | ~60s |
| stress.bench.ts | Production validation (7 tests) | 5K-100K | ~3min |
| million-jobs.bench.ts | High-volume integrity test | 1M | ~5min |
| 10million-jobs.bench.ts | Extreme stress test | 10M | ~15min |
| completed-events.bench.ts | Event delivery verification | 10K | ~30s |
```bash
# Clone and install
git clone https://github.com/egeominotti/bunqueue.git
cd bunqueue
bun install
```
```bash
# Individual benchmarks
bun run src/benchmark/throughput.bench.ts
bun run src/benchmark/worker.bench.ts
bun run src/benchmark/stress.bench.ts
bun run src/benchmark/million-jobs.bench.ts
bun run src/benchmark/10million-jobs.bench.ts
bun run src/benchmark/completed-events.bench.ts
```

File: src/benchmark/throughput.bench.ts
Measures raw throughput of individual queue operations with warmup.
| Operation | Description | Measured Speed |
|---|---|---|
| PUSH | Single job insertion | 579,603/s |
| PULL | Single job retrieval | 420,835/s |
| ACK | Single job acknowledgment | 464,857/s |
| FULL CYCLE | Push → Pull → Ack | 182,822/s |
| BATCH PUSH (100) | Batch insertion | 1,259,234/s |
| BATCH PUSH (1000) | Batch insertion | 1,452,193/s |
```typescript
// 1. Warmup (100 iterations)
for (let i = 0; i < 100; i++) {
  await fn();
}

// 2. Timed execution
const start = performance.now();
for (let i = 0; i < totalJobs; i++) {
  await fn();
}
const durationMs = performance.now() - start;

// 3. Calculate throughput
const jobsPerSecond = (totalJobs / durationMs) * 1000;
```

File: src/benchmark/worker.bench.ts
Simulates realistic worker behavior with event verification.
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Push Phase    │────▶│  Process Phase  │────▶│   Event Check   │
│                 │     │                 │     │                 │
│ • Push N jobs   │     │ • Pull job      │     │ • Count events  │
│ • Sequential    │     │ • Ack with data │     │ • Verify count  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```

| Size | Push Rate | Process Rate | Events |
|---|---|---|---|
| 10,000 | 519,445/s | 217,980/s | ✅ 10,000 |
| 50,000 | 604,918/s | 243,945/s | ✅ 50,000 |
| 100,000 | 611,819/s | 232,613/s | ✅ 100,000 |
File: src/benchmark/stress.bench.ts
Comprehensive production validation with 7 different stress scenarios.
| Test | Configuration | Result |
|---|---|---|
| High Volume | 100K jobs | 140,032/s ✅ |
| Concurrent Queues | 10 × 10K | 138,336/s ✅ |
| Large Payloads | 5K × 10KB | 384,781 MB/s ✅ |
| Priority Stress | 50K random | 0 violations ✅ |
| Retry Storm | 10K, 50% fail | 75% retry ratio ✅ |
| Memory Stability | 100 × 1K | Stable ✅ |
| Batch Operations | 100K batch | 1,393,745/s push ✅ |
File: src/benchmark/million-jobs.bench.ts
High-volume test with data integrity verification.
| Parameter | Value |
|---|---|
| Total Jobs | 1,000,000 |
| Batch Size (push) | 5,000 |
| Batch Size (pull) | 500 |
| Workers | 16 |
| Queues | 16 |
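The push phase of a run like this can be sketched generically. Note that `pushBatch` below is a caller-supplied stand-in, not bunqueue's actual API, and the job shape is illustrative:

```typescript
// Generic batched push loop: split totalJobs into batchSize-sized chunks
// and hand each chunk to a batch-insert callback.
async function pushAll(
  pushBatch: (jobs: { id: number }[]) => Promise<void>,
  totalJobs: number,
  batchSize: number,
): Promise<void> {
  for (let offset = 0; offset < totalJobs; offset += batchSize) {
    const n = Math.min(batchSize, totalJobs - offset);
    const jobs = Array.from({ length: n }, (_, i) => ({ id: offset + i }));
    await pushBatch(jobs);
  }
}
```

With totalJobs = 1,000,000 and batchSize = 5,000 this issues 200 batch calls instead of a million individual pushes.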
```
📊 Summary
==========================
Total jobs: 1,000,000
Completed: 1,000,000
Total time: 2.85s

Push rate: 1,210,654 jobs/sec
Process rate: 494,805 jobs/sec
Overall rate: 351,247 jobs/sec

✅ Data Integrity: PASSED
   - Pull:   1,000,000 / 1,000,000 unique
   - Events: 1,000,000 / 1,000,000 unique
```

File: src/benchmark/10million-jobs.bench.ts
Extreme stress test to validate system limits.
| Parameter | Value |
|---|---|
| Total Jobs | 10,000,000 |
| Workers | 32 |
| Queues | 32 |
| Shards | 32 |
```
👷 Processing 10M jobs...
[██████████████░░░░░░░░░░░░░░░░] 45.2% | 428,571 jobs/sec
[████████████████████░░░░░░░░░░] 67.8% | 435,897 jobs/sec
[██████████████████████████████] 100.0% | 441,176 jobs/sec
```

File: src/benchmark/completed-events.bench.ts
Verifies that every COMPLETED event is reliably emitted.
```
📊 RESULTS
══════════════════════════════════════════════════
Total jobs: 10,000
COMPLETED events: 10,000
Unique job IDs: 10,000
──────────────────────────────────────────────────
Push time: 20ms (506,778 jobs/sec)
Process time: 47ms (211,589 jobs/sec)
──────────────────────────────────────────────────
✅ TEST PASSED: All COMPLETED events received!
```

| Scenario | Push Batch | Pull Batch |
|---|---|---|
| Low latency (real-time) | 100-500 | 10-50 |
| High throughput | 1,000-5,000 | 500-1,000 |
| Extreme volume | 5,000-10,000 | 1,000+ |
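These recommendations can be captured as simple presets. The names and object shape below are ours for illustration, not part of bunqueue's API:

```typescript
// Illustrative batch-size presets for the three scenarios above
// (values taken from the upper end of each recommended range).
const batchPresets = {
  lowLatency:     { pushBatch: 500,    pullBatch: 50 },
  highThroughput: { pushBatch: 5_000,  pullBatch: 1_000 },
  extremeVolume:  { pushBatch: 10_000, pullBatch: 1_000 },
} as const;
```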
Optimal workers = min(CPU cores, shard count)

bunqueue uses 16 shards by default.
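Under that rule, picking a worker count can be sketched as follows (16 is assumed here as the default shard count stated in these docs):

```typescript
import os from "node:os";

// Optimal workers = min(CPU cores, shard count).
const SHARD_COUNT = 16; // bunqueue's default shard count
const optimalWorkers = Math.min(os.cpus().length, SHARD_COUNT);
console.log(`spawning ${optimalWorkers} workers`);
```

More workers than shards would contend for the same shard locks without adding parallelism.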
| Setting | Throughput |
|---|---|
| `removeOnComplete: true` | Maximum throughput (~495K ops/sec) |
| `removeOnComplete: false` | ~25% slower (~370K ops/sec) |
- **Native SQLite**: Direct FFI bindings via `bun:sqlite`. WAL mode for concurrent reads/writes. Memory-mapped I/O.
- **16 Shards**: Minimizes lock contention. Each shard handles ~6% of queues. Parallel processing by design.
- **Batch Optimization**: Single transaction for bulk operations. Prepared statements reused. `ackBatchWithResults` for throughput.
- **Efficient Structures**: Skip lists for O(log n) priority. MinHeap for delayed jobs. Bounded collections prevent leaks.
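To illustrate the delayed-job structure, here is a minimal binary min-heap keyed on a run-at timestamp. This is a generic sketch of the technique, not bunqueue's actual implementation:

```typescript
// Minimal binary min-heap keyed on runAt (ms since epoch): O(log n) push/pop.
interface Delayed { runAt: number; id: string }

class MinHeap {
  private h: Delayed[] = [];

  push(job: Delayed): void {
    this.h.push(job);
    let i = this.h.length - 1;
    while (i > 0) {
      const p = (i - 1) >> 1; // parent index
      if (this.h[p].runAt <= this.h[i].runAt) break;
      [this.h[p], this.h[i]] = [this.h[i], this.h[p]];
      i = p;
    }
  }

  // Pop the earliest job if it is due, else return undefined.
  popDue(now: number): Delayed | undefined {
    const top = this.h[0];
    if (!top || top.runAt > now) return undefined;
    const last = this.h.pop()!;
    if (this.h.length > 0) {
      this.h[0] = last;
      let i = 0;
      for (;;) {
        const l = 2 * i + 1, r = 2 * i + 2;
        let s = i; // index of smallest among node and children
        if (l < this.h.length && this.h[l].runAt < this.h[s].runAt) s = l;
        if (r < this.h.length && this.h[r].runAt < this.h[s].runAt) s = r;
        if (s === i) break;
        [this.h[s], this.h[i]] = [this.h[i], this.h[s]];
        i = s;
      }
    }
    return top;
  }
}
```

A scheduler loop only ever inspects the root, so checking whether any job is due is O(1) regardless of how many delayed jobs are queued.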
For reproducible results, note your environment:
```bash
# System info
uname -a
bun --version

# CPU info (macOS)
sysctl -n machdep.cpu.brand_string

# CPU info (Linux)
cat /proc/cpuinfo | grep "model name" | head -1
```

Reference hardware (used for these results): Mac Studio M1 Max, 32GB RAM, Bun 1.3.8.
| Benchmark | Pass Condition | Result |
|---|---|---|
| Throughput | Operations complete | ✅ |
| Worker | events === jobs | ✅ |
| Stress: High Volume | events === jobs | ✅ |
| Stress: Concurrent | events === totalJobs | ✅ |
| Stress: Priority | orderViolations === 0 | ✅ |
| Stress: Retry | completed + dlq === jobs | ✅ |
| Million Jobs | Data integrity 100% | ✅ |
| Events | completedEvents === jobs | ✅ |
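The pass conditions above reduce to a couple of invariants. A sketch of checking them (the field names are illustrative, not taken from the benchmark code):

```typescript
// Illustrative invariant checks mirroring the benchmark pass conditions:
// every job must end as a COMPLETED event or in the DLQ, and no
// priority-order violations may be observed.
interface RunStats {
  jobs: number;            // jobs pushed
  completedEvents: number; // COMPLETED events received
  dlq: number;             // jobs that exhausted retries
  orderViolations: number; // priority-order violations observed
}

function passed(s: RunStats): boolean {
  const everyJobAccountedFor = s.completedEvents + s.dlq === s.jobs;
  const priorityRespected = s.orderViolations === 0;
  return everyJobAccountedFor && priorityRespected;
}
```

For tests without retries the DLQ count is zero, so the first invariant collapses to `completedEvents === jobs`, matching the Worker and Events rows.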
Run benchmarks on your hardware and share results via GitHub Discussions to help others compare.
Include: