# Production Deployment
This guide covers deploying bunqueue in production. bunqueue is designed as a single-instance job queue - it doesn’t support clustering or horizontal scaling.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────┐
│                      Your Application                        │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐        │
│  │   Web App   │   │     API     │   │   Workers   │        │
│  └──────┬──────┘   └──────┬──────┘   └──────┬──────┘        │
│         │                 │                 │               │
│         └─────────────────┼─────────────────┘               │
│                           │                                 │
│                           ▼                                 │
│               ┌───────────────────────┐                     │
│               │       bunqueue        │ ◄── Single instance │
│               │    (embedded mode)    │                     │
│               └───────────┬───────────┘                     │
│                           │                                 │
│                           ▼                                 │
│               ┌───────────────────────┐                     │
│               │    SQLite Database    │ ◄── Local file      │
│               │   (./data/bunq.db)    │                     │
│               └───────────────────────┘                     │
└─────────────────────────────────────────────────────────────┘
                            │
                            ▼ (optional)
                ┌───────────────────────┐
                │       S3 Backup       │
                │  (disaster recovery)  │
                └───────────────────────┘
```

## Deployment Options
### Option 1: Embedded Mode (Recommended)
Run bunqueue directly in your application process. No separate server needed.
```ts
import { Hono } from 'hono';
import { Queue, Worker } from 'bunqueue/client';

// Your web framework (Hono, Elysia, Express...)
const app = new Hono();

// Queue is embedded in the same process
const emailQueue = new Queue('emails');

app.post('/send-email', async (c) => {
  const { to, subject } = await c.req.json();
  await emailQueue.add('send', { to, subject });
  return c.json({ queued: true });
});

// Worker runs in the same process
new Worker('emails', async (job) => {
  await sendEmail(job.data);
  return { sent: true };
}, { concurrency: 5 });

export default app;
```

Pros:
- Simplest setup
- No network latency
- Single deployment unit
Cons:
- Queue dies if app dies
- Harder to scale workers independently
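To soften the "queue dies if app dies" trade-off, drain in-flight jobs before the process exits. A minimal sketch, assuming the `Worker` and `Queue` classes expose a BullMQ-style `close()` method; check the bunqueue API reference for the exact shutdown API:

```ts
import { Queue, Worker } from 'bunqueue/client';

const emailQueue = new Queue('emails');
const emailWorker = new Worker('emails', async (job) => {
  // ... send the email here ...
  return { sent: true };
}, { concurrency: 5 });

// Drain gracefully on deploys/restarts so in-flight jobs can finish.
// close() is an assumption (BullMQ-style API); verify against the bunqueue docs.
async function shutdown() {
  await emailWorker.close();
  await emailQueue.close();
  process.exit(0);
}

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);
```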
### Option 2: Separate Worker Process
Run your API and workers as separate processes sharing the same database.
```ts
// api.ts - Your web server
import { Queue } from 'bunqueue/client';

const queue = new Queue('tasks');

app.post('/task', async (c) => {
  await queue.add('process', { data: '...' });
  return c.json({ ok: true });
});
```

```ts
// worker.ts - Separate process
import { Worker } from 'bunqueue/client';

new Worker('tasks', async (job) => {
  // Heavy processing here
  return { done: true };
}, { concurrency: 10 });

console.log('Worker started');
```

```bash
# Run both
bun run api.ts &
bun run worker.ts &
```

Pros:
- Workers can be restarted independently
- Better resource isolation
Cons:
- Two processes to manage
- Still single SQLite file (no true distribution)
### Option 3: Server Mode (CLI)
Run bunqueue as a standalone server. Interact via CLI or HTTP API.
```bash
# Start server
bunqueue start --tcp-port 6789 --http-port 6790

# Add jobs via CLI
bunqueue push emails '{"to": "user@example.com", "subject": "Hello"}'

# Or via HTTP API
curl -X POST http://localhost:6790/queues/emails/jobs \
  -H "Content-Type: application/json" \
  -d '{"data": {"to": "user@example.com"}}'
```
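The same enqueue call works from any HTTP client. A minimal TypeScript sketch mirroring the curl command above (the endpoint and payload shape are taken from that example; add authentication as required by your `AUTH_TOKENS` setup):

```ts
// Enqueue a job through bunqueue's HTTP API (same request as the curl above).
const res = await fetch('http://localhost:6790/queues/emails/jobs', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    data: { to: 'user@example.com', subject: 'Hello' },
  }),
});

if (!res.ok) {
  throw new Error(`Failed to enqueue job: ${res.status}`);
}

console.log(await res.json());
```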
## Docker Deployment

### Dockerfile
```dockerfile
FROM oven/bun:1

WORKDIR /app

# Copy package files
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile --production

# Copy application
COPY . .

# Create data directory
RUN mkdir -p /app/data

# Environment
ENV DATA_PATH=/app/data/bunq.db
ENV NODE_ENV=production

# Health check
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost:6790/health || exit 1

EXPOSE 6789 6790

CMD ["bun", "run", "start"]
```

### Docker Compose
```yaml
version: '3.8'

services:
  bunqueue:
    build: .
    ports:
      - "6789:6789"  # TCP
      - "6790:6790"  # HTTP
    volumes:
      - bunqueue-data:/app/data
    environment:
      - DATA_PATH=/app/data/bunq.db
      - AUTH_TOKENS=${AUTH_TOKENS}
      - S3_BACKUP_ENABLED=1
      - S3_ACCESS_KEY_ID=${S3_ACCESS_KEY_ID}
      - S3_SECRET_ACCESS_KEY=${S3_SECRET_ACCESS_KEY}
      - S3_BUCKET=${S3_BUCKET}
      - S3_REGION=${S3_REGION}
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

volumes:
  bunqueue-data:
```

## Systemd Service
For bare-metal or VM deployments:
```ini
[Unit]
Description=bunqueue Job Queue
After=network.target

[Service]
Type=simple
User=bunqueue
Group=bunqueue
WorkingDirectory=/var/lib/bunqueue
ExecStart=/usr/local/bin/bunqueue start
Restart=always
RestartSec=5

# Environment
Environment=NODE_ENV=production
Environment=DATA_PATH=/var/lib/bunqueue/bunq.db
EnvironmentFile=/etc/bunqueue/env

# Security
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/bunqueue

# Resource limits
MemoryMax=512M
CPUQuota=200%

[Install]
WantedBy=multi-user.target
```

```bash
# Install
sudo systemctl daemon-reload
sudo systemctl enable bunqueue
sudo systemctl start bunqueue

# Check status
sudo systemctl status bunqueue
sudo journalctl -u bunqueue -f
```

## Building from Source
Compile bunqueue into a standalone executable for production deployment.
### Build Command
```bash
# Clone the repository
git clone https://github.com/egeominotti/bunqueue.git
cd bunqueue

# Install dependencies
bun install

# Build standalone binary
bun run build
```

This creates `dist/bunqueue` (~56 MB), a self-contained executable with no runtime dependencies.
### Verify Build
```bash
# Check version
./dist/bunqueue --version

# Show help
./dist/bunqueue --help

# Start server
./dist/bunqueue start
```

### Install Globally
```bash
# Copy to system path
sudo cp dist/bunqueue /usr/local/bin/

# Verify installation
bunqueue --version
```

## PM2 Process Manager
For cross-platform process management with PM2:
### Compiled Binary (Recommended)
First, build the standalone executable:
```bash
bun run build
```

Then configure PM2:

```js
module.exports = {
  apps: [{
    name: 'bunqueue',
    script: '/usr/local/bin/bunqueue',  // Compiled binary
    args: 'start',
    instances: 1,        // Single instance only - no cluster mode
    exec_mode: 'fork',
    autorestart: true,
    watch: false,
    max_memory_restart: '512M',
    env: {
      NODE_ENV: 'production',
      DATA_PATH: '/var/lib/bunqueue/bunq.db',
      TCP_PORT: 6789,
      HTTP_PORT: 6790,
    },
    error_file: '/var/log/bunqueue/error.log',
    out_file: '/var/log/bunqueue/out.log',
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
    merge_logs: true,
  }]
};
```

### Development Mode (with Bun)
For development or when using the source directly:
```js
module.exports = {
  apps: [{
    name: 'bunqueue',
    script: 'bun',
    args: 'run start',
    cwd: '/opt/bunqueue',
    instances: 1,
    exec_mode: 'fork',
    autorestart: true,
    max_memory_restart: '512M',
    env: {
      NODE_ENV: 'production',
      DATA_PATH: '/var/lib/bunqueue/bunq.db',
      TCP_PORT: 6789,
      HTTP_PORT: 6790,
    },
  }]
};
```

### PM2 Commands
```bash
# Start
pm2 start ecosystem.config.js

# Restart
pm2 restart bunqueue

# Stop
pm2 stop bunqueue

# View logs
pm2 logs bunqueue

# Monitor
pm2 monit

# Save process list for startup
pm2 save
pm2 startup
```

## Environment Variables
| Variable | Description | Default |
|---|---|---|
| `DATA_PATH` | SQLite database path | `./data/bunq.db` |
| `TCP_PORT` | TCP server port | `6789` |
| `HTTP_PORT` | HTTP server port | `6790` |
| `AUTH_TOKENS` | Comma-separated auth tokens | - |
| `S3_BACKUP_ENABLED` | Enable S3 backups | `0` |
| `S3_ACCESS_KEY_ID` | S3 access key | - |
| `S3_SECRET_ACCESS_KEY` | S3 secret key | - |
| `S3_BUCKET` | S3 bucket name | - |
| `S3_REGION` | S3 region | `us-east-1` |
| `S3_ENDPOINT` | Custom S3 endpoint | - |
| `S3_BACKUP_INTERVAL` | Backup interval (ms) | `21600000` (6h) |
| `S3_BACKUP_RETENTION` | Backups to keep | `7` |
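A fail-fast startup check can catch missing configuration before the queue starts. A minimal sketch using the variable names from the table above; the `requireEnv` helper is illustrative and not part of bunqueue:

```ts
// Validate required environment variables at startup and fail fast.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

requireEnv('AUTH_TOKENS');

// When S3 backups are enabled, the S3 credentials become mandatory too.
if (process.env.S3_BACKUP_ENABLED === '1') {
  for (const name of ['S3_ACCESS_KEY_ID', 'S3_SECRET_ACCESS_KEY', 'S3_BUCKET']) {
    requireEnv(name);
  }
}
```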
## S3 Backup Configuration
### AWS S3
```bash
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=AKIA...
S3_SECRET_ACCESS_KEY=...
S3_BUCKET=my-bunqueue-backups
S3_REGION=us-east-1
S3_BACKUP_INTERVAL=3600000   # Every hour
S3_BACKUP_RETENTION=24       # Keep 24 backups
```

### Cloudflare R2
```bash
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=...
S3_SECRET_ACCESS_KEY=...
S3_BUCKET=bunqueue-backups
S3_ENDPOINT=https://ACCOUNT_ID.r2.cloudflarestorage.com
S3_REGION=auto
```

### MinIO (Self-hosted)
```bash
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_BUCKET=bunqueue
S3_ENDPOINT=http://minio:9000
S3_REGION=us-east-1
```

## Health Checks
bunqueue exposes health endpoints:
```bash
# HTTP health check (detailed)
curl http://localhost:6790/health
# {"ok":true,"status":"healthy","uptime":3600,"version":"1.0.4",
#  "queues":{"waiting":5,"active":2},"connections":{"ws":0,"sse":0},
#  "memory":{"heapUsed":45,"heapTotal":64,"rss":128}}

# Simple liveness probe
curl http://localhost:6790/healthz
# OK

# Readiness probe
curl http://localhost:6790/ready
# {"ok":true,"ready":true}

# Queue stats
curl http://localhost:6790/stats
# {"ok":true,"stats":{"waiting":5,"active":2,"completed":1000,"dlq":0}}

# Prometheus metrics (text format)
curl http://localhost:6790/prometheus
```
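Outside Kubernetes, a small watchdog can poll the liveness endpoint and page when it stops answering. A minimal sketch against the `/healthz` endpoint shown above; the 5-second interval and the logging-only reaction are illustrative:

```ts
// Poll bunqueue's liveness endpoint and log when it stops responding.
const HEALTH_URL = 'http://localhost:6790/healthz';

async function checkHealth(): Promise<boolean> {
  try {
    const res = await fetch(HEALTH_URL, { signal: AbortSignal.timeout(3000) });
    return res.ok;
  } catch {
    return false;
  }
}

setInterval(async () => {
  if (!(await checkHealth())) {
    // Hook your alerting here (PagerDuty, Slack webhook, ...)
    console.error(`[${new Date().toISOString()}] bunqueue liveness check failed`);
  }
}, 5000);
```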
### Kubernetes Probes

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 6790
  initialDelaySeconds: 5
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /ready
    port: 6790
  initialDelaySeconds: 5
  periodSeconds: 5
```

## Resource Requirements
### Memory
| Workload | Recommended RAM |
|---|---|
| Light (<1k jobs/day) | 128 MB |
| Medium (1k-10k jobs/day) | 256 MB |
| Heavy (10k-100k jobs/day) | 512 MB |
| Very Heavy (>100k jobs/day) | 1 GB+ |
### Disk
SQLite database size depends on:
- Number of jobs retained
- Job data size
- `removeOnComplete` setting
```ts
// Reduce disk usage
new Queue('tasks', {
  defaultJobOptions: {
    removeOnComplete: true,  // Don't keep completed jobs
    removeOnFail: false,     // Keep failed for debugging
  }
});
```

### CPU
bunqueue is I/O bound, not CPU bound. A single core handles most workloads.
## Production Checklist
- **Enable S3 backups**

  Don't skip this. SQLite corruption = data loss.

- **Set auth tokens**

  ```bash
  AUTH_TOKENS=token1,token2,token3
  ```

- **Configure resource limits**

  Prevent runaway memory/CPU usage.

- **Set up monitoring**

  Scrape `/prometheus` with Prometheus or similar.

- **Configure log aggregation**

  Send logs to a central system.

- **Test backup restoration**

  ```bash
  # List backups first
  bunqueue backup list

  # Then restore by key
  bunqueue backup restore backups/bunq-2026-01-30T12:00:00.db --force
  ```
- **Set up alerts** (see the sketch after this list)
  - DLQ count > threshold
  - Waiting jobs growing
  - Worker not processing
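As a starting point for the alert items above, the `/stats` endpoint from the Health Checks section can be polled and compared against thresholds. A minimal sketch; the thresholds and the `notify` callback are placeholders to adapt to your alerting system:

```ts
// Poll queue stats and flag the alert conditions from the checklist.
const STATS_URL = 'http://localhost:6790/stats';
const DLQ_THRESHOLD = 10;
const WAITING_THRESHOLD = 1000;

async function checkQueueAlerts(notify: (msg: string) => void) {
  const res = await fetch(STATS_URL);
  const { stats } = await res.json() as {
    stats: { waiting: number; active: number; completed: number; dlq: number };
  };

  if (stats.dlq > DLQ_THRESHOLD) notify(`DLQ has ${stats.dlq} jobs`);
  if (stats.waiting > WAITING_THRESHOLD) notify(`${stats.waiting} jobs waiting`);
  if (stats.waiting > 0 && stats.active === 0) notify('Jobs waiting but no active workers');
}

// Example: check every minute and log alerts
setInterval(() => checkQueueAlerts(console.error).catch(console.error), 60_000);
```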
## Scaling Limitations
### What this means
- ❌ No multi-node deployment
- ❌ No automatic failover
- ❌ No distributed processing across machines
- ✅ Multiple workers in same process (concurrency)
- ✅ Multiple worker processes on same machine (shared SQLite)
### When bunqueue is enough
| Scenario | Jobs/day | bunqueue? |
|---|---|---|
| Small SaaS | <10k | ✅ Perfect |
| Medium app | 10k-100k | ✅ Fine |
| Large app | 100k-1M | ✅ Tested |
| Enterprise | >1M | ⚠️ Test first |
### When to use something else
If you need:
- High availability → Redis + BullMQ with Sentinel
- Distributed processing → Kafka, RabbitMQ
- Multi-region → Managed queues (SQS, Cloud Tasks)
- Complex workflows → Temporal, Inngest
### Vertical scaling
bunqueue scales vertically well:
- More RAM = more jobs in memory
- Faster disk (NVMe) = faster SQLite
- More CPU cores = more worker concurrency
```ts
// Scale worker concurrency with available CPUs
import { cpus } from 'os';
import { Worker } from 'bunqueue/client';

new Worker('tasks', processor, {
  concurrency: cpus().length * 2
});
```

## Disaster Recovery
### Backup Strategy
```
Every 1 hour   → S3 backup
Every 6 hours  → Verify backup integrity
Every day      → Test restore in staging
```

### Recovery Steps
1. **Stop bunqueue**

   ```bash
   systemctl stop bunqueue
   ```

2. **List available backups**

   ```bash
   bunqueue backup list
   ```

3. **Restore from backup**

   ```bash
   bunqueue backup restore backups/bunq-2024-01-30T12:00:00.db --force
   ```

4. **Start bunqueue**

   ```bash
   systemctl start bunqueue
   ```
### Point-in-Time Recovery
SQLite WAL mode allows recovery to recent states:
```bash
# Backup includes WAL file
cp data/bunq.db data/bunq.db-wal /backup/

# Restore
cp /backup/bunq.db* data/
```

## Security
### Network
- Run behind reverse proxy (nginx, Caddy)
- Use TLS for external connections
- Firewall TCP/HTTP ports
### Authentication
```bash
# Generate strong tokens
AUTH_TOKENS=$(openssl rand -hex 32),$(openssl rand -hex 32)
```

### File Permissions
```bash
# Restrict database access
chmod 600 /var/lib/bunqueue/bunq.db
chown bunqueue:bunqueue /var/lib/bunqueue/bunq.db
```

## Monitoring with Prometheus
```yaml
scrape_configs:
  - job_name: 'bunqueue'
    static_configs:
      - targets: ['localhost:6790']
    metrics_path: /prometheus
```

### Key Metrics
| Metric | Alert Threshold |
|---|---|
| `bunqueue_jobs_waiting` | > 1000 for 5 min |
| `bunqueue_jobs_dlq` | > 10 |
| `bunqueue_jobs_active` | 0 for 5 min (workers dead?) |
| `bunqueue_jobs_failed_total` | increasing rapidly |
## Example: Full Production Setup
```bash
NODE_ENV=production
DATA_PATH=/var/lib/bunqueue/bunq.db
TCP_PORT=6789
HTTP_PORT=6790
AUTH_TOKENS=prod-token-abc123,deploy-token-xyz789

# S3 Backups
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=AKIA...
S3_SECRET_ACCESS_KEY=...
S3_BUCKET=company-bunqueue-backups
S3_REGION=eu-west-1
S3_BACKUP_INTERVAL=3600000
S3_BACKUP_RETENTION=48
```

```ts
import { Queue, Worker } from 'bunqueue/client';

const queue = new Queue('production-tasks', {
  defaultJobOptions: {
    attempts: 5,
    backoff: 5000,
    removeOnComplete: true,
  }
});

// Configure DLQ alerts
queue.setDlqConfig({
  maxEntries: 1000,
  maxAge: 7 * 24 * 60 * 60 * 1000, // 7 days
});

// Worker with production settings
new Worker('production-tasks', async (job) => {
  await job.updateProgress(0, 'Starting...');

  try {
    const result = await processJob(job.data);
    await job.log(`Completed: ${JSON.stringify(result)}`);
    return result;
  } catch (error) {
    await job.log(`Error: ${error.message}`);
    throw error;
  }
}, {
  concurrency: 10,
  heartbeatInterval: 5000,
});
```