
Queuebase vs BullMQ

Overview

BullMQ is a mature, battle-tested message queue and job processing library for Node.js, built on top of Redis. It has been the go-to solution for background job processing in the Node.js ecosystem for years and powers production workloads at scale.

Queuebase is a background job processing system purpose-built for Next.js and serverless-friendly architectures. It uses a callback model where jobs run on your existing infrastructure, eliminating the need for Redis or a separate worker process. It provides a tRPC-style, fully type-safe TypeScript API with Zod validation.

Architecture Comparison

BullMQ: Redis + Persistent Workers

BullMQ uses Redis as its backbone. Producers add jobs to a queue; workers pull them from Redis and run processor functions. This requires:

  1. A Redis instance — always running, properly configured, and maintained
  2. Persistent worker processes — long-lived Node.js processes that poll for jobs
  3. Separate deployment — workers are typically deployed as standalone services
[Your App] --> [Redis] <-- [Worker Process(es)]

The worker process must stay alive to process jobs. This is fundamentally incompatible with serverless platforms like Vercel, where functions are stateless and short-lived.
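The pull model can be sketched in a few lines. This is an illustration of the pattern, not BullMQ's internals: the worker is a loop that must outlive any single request, which is exactly what a short-lived serverless function cannot provide.

```typescript
// Illustrative sketch of the pull model (not BullMQ's actual internals):
// the worker is a loop that must keep running for the process lifetime.
type Job = { name: string; data: unknown };

function runWorker(queue: Job[], maxIdleTicks: number): string[] {
  const processed: string[] = [];
  let idle = 0;
  while (idle < maxIdleTicks) { // a real worker never exits this loop
    const job = queue.shift();  // BullMQ does a blocking read from Redis here
    if (job) {
      processed.push(job.name); // invoke the processor function
      idle = 0;
    } else {
      idle++;                   // no work yet; keep waiting
    }
  }
  return processed;
}

runWorker([{ name: 'sendWelcome', data: {} }], 1); // → ['sendWelcome']
```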

Queuebase: Callback Model

Queuebase inverts the model. Instead of pulling jobs from a queue, the Queuebase service calls back to your application’s HTTP endpoint:

  1. Your app enqueues a job via the SDK
  2. Queuebase stores the job (SQLite locally, Postgres in production)
  3. Queuebase’s worker calls back to your /api/queuebase route handler
  4. Your handler executes the job and returns the result
[Your App] --> [Queuebase API] --> [Your App's /api/queuebase endpoint]

No Redis. No separate worker process. Jobs execute inside your existing Next.js route handlers.
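The callback model amounts to a dispatcher inside your HTTP endpoint. The sketch below illustrates the idea with a plain handler map; the payload shape and dispatch logic are assumptions for illustration, not Queuebase's actual wire format.

```typescript
// Illustrative sketch of the callback model (payload shape is assumed, not
// Queuebase's wire format): the service POSTs a job to your endpoint, and a
// dispatcher routes it to the matching handler.
type Handler = (input: unknown) => unknown;

const handlers: Record<string, Handler> = {
  sendEmail: (input) => {
    const { to } = input as { to: string };
    return `sent to ${to}`; // stand-in for real email delivery
  },
};

// What a route handler would do with an incoming callback request body.
function dispatch(body: { name: string; input: unknown }): unknown {
  const handler = handlers[body.name];
  if (!handler) throw new Error(`Unknown job: ${body.name}`);
  return handler(body.input);
}

dispatch({ name: 'sendEmail', input: { to: 'user@example.com' } });
// → 'sent to user@example.com'
```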

Developer Experience

BullMQ

import { Queue, Worker } from 'bullmq';
import IORedis from 'ioredis';

const connection = new IORedis({ host: 'localhost', port: 6379, maxRetriesPerRequest: null });

// Define queue
const emailQueue = new Queue('email', { connection });

// Add a job (no input validation by default)
await emailQueue.add('sendWelcome', {
  to: 'user@example.com',
  subject: 'Welcome',
});

// Define worker in a separate file/process
const worker = new Worker('email', async (job) => {
  // job.data is `any` — no type safety without manual typing
  const { to, subject } = job.data;
  await sendEmail(to, subject);
}, { connection });
  • Queue names are strings; typos fail at runtime
  • job.data is untyped by default
  • Redis connection must be passed to every Queue and Worker
  • Worker must run as a persistent process
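The missing type safety can be mitigated by hand: BullMQ's Queue and Worker accept a type parameter for job data, but nothing validates the payload at runtime. A stdlib-only sketch of that do-it-yourself validation at the processor boundary (the parser function here is hypothetical, not part of BullMQ):

```typescript
// Sketch (not part of BullMQ): validate job.data by hand before trusting it,
// since BullMQ hands it to the processor untyped.
interface EmailJobData {
  to: string;
  subject: string;
}

function parseEmailJobData(data: unknown): EmailJobData {
  const d = data as Partial<EmailJobData> | null;
  if (typeof d?.to !== 'string' || !d.to.includes('@')) {
    throw new Error('invalid "to" field');
  }
  if (typeof d.subject !== 'string') {
    throw new Error('invalid "subject" field');
  }
  return { to: d.to, subject: d.subject };
}

// Inside a processor: const { to, subject } = parseEmailJobData(job.data);
parseEmailJobData({ to: 'user@example.com', subject: 'Welcome' });
```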

Queuebase

import { createJobRouter, job } from '@queuebase/nextjs';
import { z } from 'zod';

export const jobs = createJobRouter({
  sendEmail: job({
    input: z.object({
      to: z.string().email(),
      subject: z.string(),
    }),
    handler: async ({ input, jobId, attempt }) => {
      // input is fully typed as { to: string; subject: string }
      await sendEmail(input.to, input.subject);
    },
    defaults: { retries: 3, backoff: 'exponential' },
  }),
});

// Enqueue — type error if input doesn't match schema
await jobClient.sendEmail.enqueue({
  to: 'user@example.com',
  subject: 'Welcome',
});
  • Job names are object keys; typos caught at compile time
  • Input validated by Zod at runtime and typed at compile time
  • No Redis, no connection objects
  • Runs inside your existing Next.js app

Feature Comparison

| Feature | BullMQ | Queuebase |
| --- | --- | --- |
| Queue backend | Redis (required) | SQLite (local), Postgres (prod) |
| Worker model | Persistent process (pull-based) | Callback to your HTTP endpoint |
| Type safety | Manual (job.data is any) | Built-in (Zod schema + inference) |
| Input validation | DIY | Built-in Zod validation |
| Retries | Yes, configurable backoff | Yes, linear/exponential backoff |
| Delayed jobs | Yes | Yes |
| Priority queues | Yes (fine-grained levels) | Not yet |
| Rate limiting | Yes (global, per-queue, per-group) | Not yet |
| Cron / repeatable jobs | Yes | Coming soon |
| Job flows (parent-child) | Yes (unlimited nesting) | Not yet |
| Sandboxed processors | Yes (worker threads/child processes) | N/A (runs in route handler) |
| Concurrency control | Yes (per-worker, horizontal scaling) | Yes (per-job-type) |
| Dashboard | Bull Board (OSS), Taskforce.sh (paid) | Built-in dashboard |
| Serverless compatible | No (requires persistent connections) | Yes (designed for it) |
| Framework support | Any Node.js (NestJS, Express, etc.) | Next.js (more planned) |
| Local dev | Requires local Redis | queuebase dev (zero-config CLI) |
| Language support | Node.js, Python, Elixir | TypeScript |
| Maturity | Battle-tested, large ecosystem | Early stage, actively developed |
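Both columns list retries with backoff, and the two schedules named above behave quite differently as attempts accumulate. A minimal sketch of the two schedules (the formulas and base delay are common conventions, assumed here for illustration; each library's actual defaults and configuration differ):

```typescript
// Sketch of common backoff schedules (assumed formulas; each library's
// actual defaults differ).
function backoffDelay(
  kind: 'linear' | 'exponential',
  attempt: number,
  baseMs: number,
): number {
  return kind === 'linear'
    ? baseMs * attempt             // 1000, 2000, 3000, ...
    : baseMs * 2 ** (attempt - 1); // 1000, 2000, 4000, ...
}

backoffDelay('linear', 3, 1000);      // → 3000
backoffDelay('exponential', 3, 1000); // → 4000
```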

Pricing and Infrastructure

BullMQ

  • BullMQ: Free and open source (MIT). Pro edition ~$95/month.
  • Redis: Must provision and pay for. Managed Redis starts at $10-30/month.
  • Worker hosting: Persistent server or container (EC2, Railway, Fly.io). Cannot run on Vercel.
  • Monitoring: Bull Board (free OSS) or Taskforce.sh (paid).
  • Minimum overhead: Redis + worker hosting = typically $20-50+/month

Queuebase

  • Free tier: 10,000 jobs/month, 5 concurrent, 7-day retention
  • No Redis required
  • No separate worker hosting — jobs run on your existing app
  • Built-in dashboard — included
  • Local dev: Free via queuebase dev with SQLite

When to Choose BullMQ

  • You need advanced queue features now — priority queues, rate limiting, cron jobs, parent-child flows are mature in BullMQ
  • You’re not on Next.js — BullMQ works with any Node.js framework, plus Python and Elixir
  • You already have Redis — adding BullMQ is incremental
  • You need extreme throughput — BullMQ handles 50k+ jobs/second with horizontal scaling
  • You need sandboxed processing — CPU-heavy jobs in worker threads or child processes
  • You run persistent servers — VMs, containers, Railway, Fly.io

When to Choose Queuebase

  • You’re on Next.js with Vercel or serverless — Queuebase’s callback model is designed for this. BullMQ does not work on Vercel.
  • You want type safety without boilerplate — Zod validation and TypeScript inference built in
  • You don’t want to manage Redis — no Redis to provision, secure, monitor, or pay for
  • You don’t want to manage workers — no separate deployment, no process managers
  • You want fast local dev — queuebase dev with zero config, no Docker, no local Redis
  • Your volume fits serverless economics — free tier or modest plans cost less than Redis + worker hosting
  • You value simplicity over feature breadth — retries, delays, and concurrency without the operational overhead

Summary

BullMQ is the mature, feature-complete option with years of production use and advanced capabilities that Queuebase does not yet offer. The cost is operational complexity: Redis, persistent workers, and the infrastructure to run them. It does not work on serverless platforms.

Queuebase is the simpler, serverless-native option built for Next.js. It trades BullMQ’s breadth for zero-infrastructure operation, built-in type safety, and a DX designed around how Next.js developers work. If you need background jobs without spinning up Redis and a worker fleet, Queuebase removes an entire layer of infrastructure.

If you need what BullMQ offers beyond the basics, BullMQ is the right tool. If you want the simplest path to background jobs in Next.js, Queuebase gets you there with significantly less operational overhead.