Production Deployment

This guide covers deploying bunqueue in production. bunqueue is designed as a single-instance job queue - it doesn’t support clustering or horizontal scaling.

Architecture Overview

┌──────────────────────────────────────────────────────────────┐
│                       Your Application                        │
│                                                               │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐       │
│   │   Web App   │    │     API     │    │   Workers   │       │
│   └──────┬──────┘    └──────┬──────┘    └──────┬──────┘       │
│          │                  │                  │              │
│          └──────────────────┼──────────────────┘              │
│                             │                                 │
│                             ▼                                 │
│                ┌───────────────────────┐                      │
│                │       bunqueue        │ ◄── Single instance  │
│                │    (embedded mode)    │                      │
│                └───────────┬───────────┘                      │
│                            │                                  │
│                            ▼                                  │
│                ┌───────────────────────┐                      │
│                │    SQLite Database    │ ◄── Local file       │
│                │   (./data/bunq.db)    │                      │
│                └───────────────────────┘                      │
└──────────────────────────────────────────────────────────────┘
                             ▼  (optional)
                ┌───────────────────────┐
                │       S3 Backup       │
                │  (disaster recovery)  │
                └───────────────────────┘

Deployment Options

Option 1: Embedded Mode

Run bunqueue directly in your application process. No separate server needed.

app.ts
import { Hono } from 'hono';
import { Queue, Worker } from 'bunqueue/client';

// Your web framework (Hono, Elysia, Express...)
const app = new Hono();

// Queue is embedded in the same process
const emailQueue = new Queue('emails');

app.post('/send-email', async (c) => {
  const { to, subject } = await c.req.json();
  await emailQueue.add('send', { to, subject });
  return c.json({ queued: true });
});

// Worker runs in the same process
new Worker('emails', async (job) => {
  await sendEmail(job.data); // your email-sending function
  return { sent: true };
}, { concurrency: 5 });

export default app;

Pros:

  • Simplest setup
  • No network latency
  • Single deployment unit

Cons:

  • Queue dies if app dies (see the shutdown sketch below)
  • Harder to scale workers independently
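
If you stay embedded, handle SIGTERM so in-flight jobs can finish during deploys. A sketch under one loud assumption: close() as a drain-and-stop method is hypothetical here - check bunqueue's API for the real shutdown call.

shutdown.ts
// Graceful shutdown sketch - close() is an ASSUMED method name
import { Worker } from 'bunqueue/client';

const worker = new Worker('emails', async (job) => {
  return { sent: true };
}, { concurrency: 5 });

process.on('SIGTERM', async () => {
  await worker.close(); // assumption: waits for active jobs, then stops polling
  process.exit(0);
});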

Option 2: Separate Worker Process

Run your API and workers as separate processes sharing the same SQLite database (both processes must point at the same DATA_PATH).

// api.ts - Your web server
import { Queue } from 'bunqueue/client';

const queue = new Queue('tasks');

app.post('/task', async (c) => {
  await queue.add('process', { data: '...' });
  return c.json({ ok: true });
});

// worker.ts - Separate process
import { Worker } from 'bunqueue/client';

new Worker('tasks', async (job) => {
  // Heavy processing here
  return { done: true };
}, { concurrency: 10 });

console.log('Worker started');

# Run both
bun run api.ts &
bun run worker.ts &

Pros:

  • Workers can be restarted independently
  • Better resource isolation

Cons:

  • Two processes to manage
  • Still single SQLite file (no true distribution)

Option 3: Server Mode (CLI)

Run bunqueue as a standalone server. Interact via CLI or HTTP API.

# Start server
bunqueue start --tcp-port 6789 --http-port 6790

# Add jobs via CLI
bunqueue push emails '{"to": "user@example.com", "subject": "Hello"}'

# Or via HTTP API
curl -X POST http://localhost:6790/queues/emails/jobs \
  -H "Content-Type: application/json" \
  -d '{"data": {"to": "user@example.com"}}'

Docker Deployment

Dockerfile

FROM oven/bun:1
WORKDIR /app
# Copy package files
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile --production
# Copy application
COPY . .
# Create data directory
RUN mkdir -p /app/data
# Environment
ENV DATA_PATH=/app/data/bunq.db
ENV NODE_ENV=production
# Health check
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost:6790/health || exit 1
EXPOSE 6789 6790
CMD ["bun", "run", "start"]

Docker Compose

version: '3.8'

services:
  bunqueue:
    build: .
    ports:
      - "6789:6789"  # TCP
      - "6790:6790"  # HTTP
    volumes:
      - bunqueue-data:/app/data
    environment:
      - DATA_PATH=/app/data/bunq.db
      - AUTH_TOKENS=${AUTH_TOKENS}
      - S3_BACKUP_ENABLED=1
      - S3_ACCESS_KEY_ID=${S3_ACCESS_KEY_ID}
      - S3_SECRET_ACCESS_KEY=${S3_SECRET_ACCESS_KEY}
      - S3_BUCKET=${S3_BUCKET}
      - S3_REGION=${S3_REGION}
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

volumes:
  bunqueue-data:
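
Bring the stack up and check it (assumes a .env file next to the compose file supplying the ${...} values):

# Start in the background
docker compose up -d
# Tail logs
docker compose logs -f bunqueue
# Confirm liveness
curl -f http://localhost:6790/healthz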

Systemd Service

For bare-metal or VM deployments:

/etc/systemd/system/bunqueue.service
[Unit]
Description=bunqueue Job Queue
After=network.target
[Service]
Type=simple
User=bunqueue
Group=bunqueue
WorkingDirectory=/var/lib/bunqueue
ExecStart=/usr/local/bin/bunqueue start
Restart=always
RestartSec=5
# Environment
Environment=NODE_ENV=production
Environment=DATA_PATH=/var/lib/bunqueue/bunq.db
EnvironmentFile=/etc/bunqueue/env
# Security
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/bunqueue
# Resource limits
MemoryMax=512M
CPUQuota=200%
[Install]
WantedBy=multi-user.target
# Install
sudo systemctl daemon-reload
sudo systemctl enable bunqueue
sudo systemctl start bunqueue
# Check status
sudo systemctl status bunqueue
sudo journalctl -u bunqueue -f
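
The unit above expects a bunqueue user and a writable state directory. One way to provision them before the first start (paths match the unit file):

# Create a system user and the directories the unit references
sudo useradd --system --home /var/lib/bunqueue --shell /usr/sbin/nologin bunqueue
sudo mkdir -p /var/lib/bunqueue /etc/bunqueue
sudo chown bunqueue:bunqueue /var/lib/bunqueue
sudo touch /etc/bunqueue/env && sudo chmod 600 /etc/bunqueue/env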

Building from Source

Compile bunqueue into a standalone executable for production deployment.

Build Command

# Clone the repository
git clone https://github.com/egeominotti/bunqueue.git
cd bunqueue
# Install dependencies
bun install
# Build standalone binary
bun run build

This creates dist/bunqueue (~56 MB), a self-contained executable with no runtime dependencies.

Verify Build

# Check version
./dist/bunqueue --version
# Show help
./dist/bunqueue --help
# Start server
./dist/bunqueue start

Install Globally

# Copy to system path
sudo cp dist/bunqueue /usr/local/bin/
# Verify installation
bunqueue --version

PM2 Process Manager

For cross-platform process management with PM2:

First, build the standalone executable:

bun run build

Then configure PM2:

ecosystem.config.js
module.exports = {
  apps: [{
    name: 'bunqueue',
    script: '/usr/local/bin/bunqueue', // Compiled binary
    args: 'start',
    instances: 1, // Single instance only - no cluster mode
    exec_mode: 'fork',
    autorestart: true,
    watch: false,
    max_memory_restart: '512M',
    env: {
      NODE_ENV: 'production',
      DATA_PATH: '/var/lib/bunqueue/bunq.db',
      TCP_PORT: 6789,
      HTTP_PORT: 6790,
    },
    error_file: '/var/log/bunqueue/error.log',
    out_file: '/var/log/bunqueue/out.log',
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
    merge_logs: true,
  }]
};

Development Mode (with Bun)

For development or when using the source directly:

ecosystem.config.js
module.exports = {
  apps: [{
    name: 'bunqueue',
    script: 'bun',
    args: 'run start',
    cwd: '/opt/bunqueue',
    instances: 1,
    exec_mode: 'fork',
    autorestart: true,
    max_memory_restart: '512M',
    env: {
      NODE_ENV: 'production',
      DATA_PATH: '/var/lib/bunqueue/bunq.db',
      TCP_PORT: 6789,
      HTTP_PORT: 6790,
    },
  }]
};

PM2 Commands

# Start
pm2 start ecosystem.config.js
# Restart
pm2 restart bunqueue
# Stop
pm2 stop bunqueue
# View logs
pm2 logs bunqueue
# Monitor
pm2 monit
# Save process list for startup
pm2 save
pm2 startup

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| DATA_PATH | SQLite database path | ./data/bunq.db |
| TCP_PORT | TCP server port | 6789 |
| HTTP_PORT | HTTP server port | 6790 |
| AUTH_TOKENS | Comma-separated auth tokens | - |
| S3_BACKUP_ENABLED | Enable S3 backups | 0 |
| S3_ACCESS_KEY_ID | S3 access key | - |
| S3_SECRET_ACCESS_KEY | S3 secret key | - |
| S3_BUCKET | S3 bucket name | - |
| S3_REGION | S3 region | us-east-1 |
| S3_ENDPOINT | Custom S3 endpoint | - |
| S3_BACKUP_INTERVAL | Backup interval (ms) | 21600000 (6h) |
| S3_BACKUP_RETENTION | Backups to keep | 7 |

S3 Backup Configuration

AWS S3

S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=AKIA...
S3_SECRET_ACCESS_KEY=...
S3_BUCKET=my-bunqueue-backups
S3_REGION=us-east-1
S3_BACKUP_INTERVAL=3600000 # Every hour
S3_BACKUP_RETENTION=24 # Keep 24 backups

Cloudflare R2

S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=...
S3_SECRET_ACCESS_KEY=...
S3_BUCKET=bunqueue-backups
S3_ENDPOINT=https://ACCOUNT_ID.r2.cloudflarestorage.com
S3_REGION=auto

MinIO (Self-hosted)

S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_BUCKET=bunqueue
S3_ENDPOINT=http://minio:9000
S3_REGION=us-east-1
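
Running MinIO next to bunqueue in Docker Compose might look like this sketch (the credentials match the defaults above and must be changed in production). Add it under services: alongside the bunqueue service, plus a minio-data entry in the top-level volumes block:

  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=minioadmin
    volumes:
      - minio-data:/data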

Health Checks

bunqueue exposes health endpoints:

# HTTP health check (detailed)
curl http://localhost:6790/health
# {"ok":true,"status":"healthy","uptime":3600,"version":"1.0.4",
# "queues":{"waiting":5,"active":2},"connections":{"ws":0,"sse":0},
# "memory":{"heapUsed":45,"heapTotal":64,"rss":128}}
# Simple liveness probe
curl http://localhost:6790/healthz
# OK
# Readiness probe
curl http://localhost:6790/ready
# {"ok":true,"ready":true}
# Queue stats
curl http://localhost:6790/stats
# {"ok":true,"stats":{"waiting":5,"active":2,"completed":1000,"dlq":0}}
# Prometheus metrics (text format)
curl http://localhost:6790/prometheus
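
These endpoints make an external watchdog simple. A minimal sketch that exits non-zero when the liveness probe fails (host and port assume the defaults; run it from cron or your uptime monitor):

watchdog.ts
// Exit 1 if bunqueue stops answering /healthz
const res = await fetch('http://localhost:6790/healthz').catch(() => null);
if (!res || !res.ok) {
  console.error('bunqueue liveness check failed');
  process.exit(1);
}
console.log('bunqueue is healthy');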

Kubernetes Probes

livenessProbe:
  httpGet:
    path: /healthz
    port: 6790
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 6790
  initialDelaySeconds: 5
  periodSeconds: 5
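
Because bunqueue is single-instance, a Kubernetes deployment should pin replicas to 1 and use the Recreate strategy so two pods never open the same SQLite file. A sketch (image name and PVC are assumptions; the probes above go in the container spec):

deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bunqueue
spec:
  replicas: 1        # single instance only
  strategy:
    type: Recreate   # avoid two pods sharing the SQLite file
  selector:
    matchLabels:
      app: bunqueue
  template:
    metadata:
      labels:
        app: bunqueue
    spec:
      containers:
        - name: bunqueue
          image: registry.example.com/bunqueue:latest  # hypothetical image
          ports:
            - containerPort: 6789
            - containerPort: 6790
          env:
            - name: DATA_PATH
              value: /app/data/bunq.db
          volumeMounts:
            - name: data
              mountPath: /app/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: bunqueue-data  # hypothetical PVC name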

Resource Requirements

Memory

| Workload | Recommended RAM |
| --- | --- |
| Light (<1k jobs/day) | 128 MB |
| Medium (1k-10k jobs/day) | 256 MB |
| Heavy (10k-100k jobs/day) | 512 MB |
| Very Heavy (>100k jobs/day) | 1 GB+ |

Disk

SQLite database size depends on:

  • Number of jobs retained
  • Job data size
  • removeOnComplete setting

// Reduce disk usage
new Queue('tasks', {
  defaultJobOptions: {
    removeOnComplete: true, // Don't keep completed jobs
    removeOnFail: false,    // Keep failed for debugging
  }
});

CPU

bunqueue is I/O bound, not CPU bound. A single core handles most workloads.

Production Checklist

  1. Enable S3 backups

    Don’t skip this. SQLite corruption = data loss.

  2. Set auth tokens

    AUTH_TOKENS=token1,token2,token3
  3. Configure resource limits

    Prevent runaway memory/CPU usage.

  4. Set up monitoring

    Scrape /prometheus with Prometheus or similar.

  5. Configure log aggregation

    Send logs to a central system.

  6. Test backup restoration

    # List backups first
    bunqueue backup list
    # Then restore by key
    bunqueue backup restore backups/bunq-2026-01-30T12:00:00.db --force
  7. Set up alerts

    • DLQ count > threshold
    • Waiting jobs growing
    • Worker not processing

Scaling Limitations

bunqueue runs as exactly one instance against one local SQLite file - plan your topology around that.

What this means

  • ❌ No multi-node deployment
  • ❌ No automatic failover
  • ❌ No distributed processing across machines
  • ✅ Multiple workers in same process (concurrency)
  • ✅ Multiple worker processes on same machine (shared SQLite)

When bunqueue is enough

| Scenario | Jobs/day | bunqueue? |
| --- | --- | --- |
| Small SaaS | <10k | ✅ Perfect |
| Medium app | 10k-100k | ✅ Fine |
| Large app | 100k-1M | ✅ Tested |
| Enterprise | >1M | ⚠️ Test first |

When to use something else

If you need:

  • High availability → Redis + BullMQ with Sentinel
  • Distributed processing → Kafka, RabbitMQ
  • Multi-region → Managed queues (SQS, Cloud Tasks)
  • Complex workflows → Temporal, Inngest

Vertical scaling

bunqueue scales vertically well:

  • More RAM = more jobs in memory
  • Faster disk (NVMe) = faster SQLite
  • More CPU cores = more worker concurrency

// Scale worker concurrency with available CPUs
import { cpus } from 'os';

new Worker('tasks', processor, {
  concurrency: cpus().length * 2
});

Disaster Recovery

Backup Strategy

Every 1 hour → S3 backup
Every 6 hours → Verify backup integrity
Every day → Test restore in staging

Recovery Steps

  1. Stop bunqueue

    systemctl stop bunqueue
  2. List available backups

    bunqueue backup list
  3. Restore from backup

    bunqueue backup restore backups/bunq-2024-01-30T12:00:00.db --force
  4. Start bunqueue

    systemctl start bunqueue

Point-in-Time Recovery

SQLite WAL mode allows recovery to recent states:

# Back up the database together with its WAL file
# (stop bunqueue first so the copy is consistent)
cp data/bunq.db data/bunq.db-wal /backup/

# Restore
cp /backup/bunq.db* data/

Security

Network

  • Run behind a reverse proxy (nginx, Caddy) - see the sketch below
  • Use TLS for external connections
  • Firewall the TCP/HTTP ports (6789/6790)
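
A minimal Caddy config that terminates TLS in front of the HTTP API (the domain is hypothetical; Caddy obtains certificates automatically):

Caddyfile
queue.example.com {
    reverse_proxy localhost:6790
}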

Authentication

# Generate strong tokens
AUTH_TOKENS=$(openssl rand -hex 32),$(openssl rand -hex 32)

File Permissions

# Restrict database access
chmod 600 /var/lib/bunqueue/bunq.db
chown bunqueue:bunqueue /var/lib/bunqueue/bunq.db

Monitoring with Prometheus

prometheus.yml
scrape_configs:
  - job_name: 'bunqueue'
    static_configs:
      - targets: ['localhost:6790']
    metrics_path: /prometheus

Key Metrics

| Metric | Alert Threshold |
| --- | --- |
| bunqueue_jobs_waiting | > 1000 for 5 min |
| bunqueue_jobs_dlq | > 10 |
| bunqueue_jobs_active | 0 for 5 min (workers dead?) |
| bunqueue_jobs_failed_total | increasing rapidly |
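
Translated into Prometheus alerting rules, the table might look like the following sketch (thresholds are starting points; the failure-rate expression and its value are our assumptions, tune for your workload):

alerts.yml
groups:
  - name: bunqueue
    rules:
      - alert: BunqueueBacklog
        expr: bunqueue_jobs_waiting > 1000
        for: 5m
      - alert: BunqueueDLQNotEmpty
        expr: bunqueue_jobs_dlq > 10
      - alert: BunqueueWorkersIdle
        expr: bunqueue_jobs_active == 0
        for: 5m
      - alert: BunqueueFailureSpike
        expr: rate(bunqueue_jobs_failed_total[5m]) > 0.5  # assumed threshold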

Example: Full Production Setup

/etc/bunqueue/env
NODE_ENV=production
DATA_PATH=/var/lib/bunqueue/bunq.db
TCP_PORT=6789
HTTP_PORT=6790
AUTH_TOKENS=prod-token-abc123,deploy-token-xyz789
# S3 Backups
S3_BACKUP_ENABLED=1
S3_ACCESS_KEY_ID=AKIA...
S3_SECRET_ACCESS_KEY=...
S3_BUCKET=company-bunqueue-backups
S3_REGION=eu-west-1
S3_BACKUP_INTERVAL=3600000
S3_BACKUP_RETENTION=48
production.ts
import { Queue, Worker } from 'bunqueue/client';

const queue = new Queue('production-tasks', {
  defaultJobOptions: {
    attempts: 5,
    backoff: 5000,
    removeOnComplete: true,
  }
});

// Configure DLQ alerts
queue.setDlqConfig({
  maxEntries: 1000,
  maxAge: 7 * 24 * 60 * 60 * 1000, // 7 days
});

// Worker with production settings
new Worker('production-tasks', async (job) => {
  await job.updateProgress(0, 'Starting...');
  try {
    const result = await processJob(job.data); // your processing logic
    await job.log(`Completed: ${JSON.stringify(result)}`);
    return result;
  } catch (error) {
    await job.log(`Error: ${error.message}`);
    throw error;
  }
}, {
  concurrency: 10,
  heartbeatInterval: 5000,
});