SQLite is known for being lightweight, reliable, and surprisingly fast. In the Node.js ecosystem, one of the most popular libraries for working with it is better-sqlite3. The performance is excellent, but there is one important detail many developers overlook: the library is synchronous.
When a heavy query is executed, the Node.js event loop stops until the database finishes its work. For small side projects, this rarely becomes noticeable. But once you introduce API routes, server-side rendering, analytics, background processing, or any data-heavy task, the synchronous nature starts to show its limits.
In this guide, I’ll walk you through a practical way to eliminate those bottlenecks by moving your SQLite queries into worker threads. This approach keeps the simplicity of better-sqlite3 while removing the risk of blocking your entire server.
Why Synchronous Queries Hurt Server Performance
Node.js runs JavaScript on a single main thread. If a synchronous operation takes time, the thread cannot respond to any new incoming requests. A single query such as:
```js
const rows = db.prepare("SELECT * FROM logs").all();
```
may seem harmless, but if it takes 200–400 ms and multiple clients hit the same route, every other request queues up behind it. The application still works, but everything feels slower.
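To make the effect concrete, here is a minimal sketch (the database file and the size of the logs table are assumptions): a timer scheduled before the query cannot fire until the synchronous call returns.

```js
const db = require("better-sqlite3")("database.db");

// Scheduled for 10 ms from now, but it cannot fire while the thread is busy
setTimeout(() => console.log("timer fired"), 10);

console.time("blocking query");
const rows = db.prepare("SELECT * FROM logs").all(); // synchronous: the event loop is paused here
console.timeEnd("blocking query");

// "timer fired" is only logged after the query completes,
// even if the query took far longer than 10 ms.
```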
Shifting long-running queries into worker threads solves this by letting heavy work run in separate CPU threads while the main server keeps responding.
Introducing Worker Threads for SQLite
Worker threads in Node.js let JavaScript run on multiple threads in parallel. Instead of blocking your main server, your app can delegate database work to a worker and remain free to handle incoming requests.
Each worker maintains its own SQLite connection and receives instructions from the main thread. This results in a setup where the main server behaves like a coordinator, and workers act as dedicated SQL processors.
Below is an example of how to structure this system.
worker.js: Dedicated Query Executor
This file handles the actual SQL work. Each worker opens its own SQLite connection and waits for tasks from the main thread.
```js
const { parentPort } = require("worker_threads");
const db = require("better-sqlite3")("database.db");

// Receive SQL tasks from the main thread
parentPort.on("message", ({ sql, params }) => {
  try {
    const stmt = db.prepare(sql);
    // .all() assumes the statement returns rows (e.g., a SELECT)
    const output = stmt.all(...params);
    parentPort.postMessage({ ok: true, output });
  } catch (err) {
    parentPort.postMessage({ ok: false, error: err.message });
  }
});
```
Each worker:
• opens its own database connection
• listens for queries
• runs them synchronously
• returns the result through postMessage()
Because workers run in parallel, one slow query never blocks another.
master.js: Pooling Workers and Managing Jobs
The main thread manages workers and distributes SQL tasks. Instead of creating a new worker for every request, we build a worker pool.
```js
const { Worker } = require("worker_threads");
const os = require("os");

const queue = [];
let workers = [];

// Public API: an async wrapper around worker execution
exports.runQuery = (sql, ...params) => {
  return new Promise((resolve, reject) => {
    queue.push({ resolve, reject, payload: { sql, params } });
    processQueue();
  });
};

// Nudge every idle worker to pick up queued jobs
function processQueue() {
  for (const w of workers) {
    w.tryWork();
  }
}

// Start one worker per available CPU core
new Array(os.availableParallelism()).fill(null).forEach(function spawn() {
  const instance = new Worker("./worker.js");
  let activeJob = null;
  let workerError = null;

  // Claim the next queued job if this worker is idle
  function tryWork() {
    if (!activeJob && queue.length > 0) {
      activeJob = queue.shift();
      instance.postMessage(activeJob.payload);
    }
  }

  instance
    .on("online", () => {
      workers.push({ tryWork });
      tryWork();
    })
    .on("message", ({ ok, output, error }) => {
      if (ok) activeJob.resolve(output);
      else activeJob.reject(new Error(error));
      activeJob = null;
      tryWork();
    })
    .on("error", (err) => {
      workerError = err;
    })
    .on("exit", (code) => {
      // Drop the dead worker from the pool and fail its in-flight job
      workers = workers.filter((w) => w.tryWork !== tryWork);
      if (activeJob) {
        activeJob.reject(workerError || new Error("Worker terminated unexpectedly"));
        activeJob = null;
      }
      // Respawn if the worker crashed rather than exiting cleanly
      if (code !== 0) {
        spawn();
      }
    });
});
```
With this, your application gains:
• true parallel SQL execution
• a simple async API for running queries
• automatic worker recovery
• a queue that safely spreads work across multiple CPU cores
This feels asynchronous to the application even though better-sqlite3 itself is synchronous.
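As a usage sketch, assuming the pool module above is saved as master.js and the server happens to use Express (any framework would look similar; the level column on the logs table is also illustrative), a route handler can simply await the query while the event loop stays free for other requests:

```js
// Hypothetical Express route; Express itself is an assumption, not part of the setup above
const express = require("express");
const { runQuery } = require("./master");

const app = express();

app.get("/logs/:level", async (req, res) => {
  try {
    // Runs in a worker thread via the pool; parameters are passed through to stmt.all()
    const rows = await runQuery("SELECT * FROM logs WHERE level = ?", req.params.level);
    res.json(rows);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000);
```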
When This Approach Is Worth Using
Worker threads are helpful when your application regularly performs heavy database operations, such as:
• API endpoints under load
• SSR frameworks like Next.js or Astro
• dashboards or analytics queries
• complex joins or large data scans
• applications running on multicore environments
If your queries are tiny or you only run occasional scripts, worker threads may be unnecessary. But for real-world servers handling concurrent users, this setup prevents lockups and significantly improves responsiveness.
Final Notes
Using better-sqlite3 with worker threads lets you keep SQLite’s simplicity without sacrificing performance. You avoid event loop blocking, gain parallel processing, and achieve behavior similar to connection pooling in larger SQL systems.
It’s a clean technique that scales well, especially for modern Node.js applications that rely on server-rendered pages, microservices, or real-time APIs.
If you’re building anything that depends on SQLite but cannot afford pauses in server responsiveness, giving your queries their own workers is a practical and reliable solution.
Mashraf Aiman
CTO, Zuttle
Founder, COO, voteX
Co-founder, CTO, Ennovat