n8n Self-Hosting Requirements Guide (2026)

Published on May 7, 2026 Updated on May 8, 2026

n8n is an open-source workflow automation platform that connects APIs, databases, and internal tools through a visual node-based editor, available in a free Community edition and a paid Enterprise edition with added governance and security features. Its standout uses are automating repetitive work, syncing data between SaaS apps, and orchestrating AI agent pipelines that process thousands of events per hour.

You can use n8n Cloud to handle the hosting and infrastructure on your behalf. The platform manages updates, security patches, scaling, and uptime, leaving you to focus solely on building workflows. The trade-off is limited control over the underlying environment, along with execution caps on lower tiers, per-seat pricing that grows with team size, and an inability to reach internal databases or APIs sitting behind a corporate firewall.

Self-hosting removes those limitations and gives you full ownership of the stack. You decide the hardware specifications, the runtime, the database engine, and how traffic flows in and out of the instance. That control also extends to data residency, custom node installations, and direct connections to private internal services that n8n Cloud cannot reach.

The catch is that self-hosting can take several different forms depending on how you build the rig. The efficiency of n8n shifts based on where you allocate hardware resources, which database engine you select, and whether you containerize the deployment. Understanding what each component does and how they interact is the first step toward picking the right configuration for your workload.

#n8n Self-Hosting Requirements Overview

Self-hosting shifts n8n from a managed service into a stack you assemble yourself, which means its performance is defined by how you choose to build it. Your infrastructure consists of four major components, each with specifications shaped by how you intend n8n to operate during workflow execution.

The four components are:

  • A Linux machine (the host operating system),

  • A runtime environment (either Node.js installed directly or Docker),

  • A database to store workflows, credentials, and execution history,

  • A publicly reachable network address so external services can send webhook requests to your instance.

Choosing specifications for each component becomes much easier once you determine three workload factors for your n8n rig.

The first is workflow count, which primarily defines database storage and baseline memory, since every saved workflow occupies space, and active workflows hold runtime state in RAM. The second is trigger frequency, which mainly defines CPU and network bandwidth, since every webhook or scheduler fire invokes the runtime and processes inbound traffic.

A workflow that runs once per day places almost no load on the server, while one that fires on every incoming webhook from a high-traffic API places a far heavier load on it.

Finally, the third factor is the volume of data each execution processes, which is most directly defined by RAM, since payload size determines how much memory each workflow holds during processing. A team running 10 simple cron-based workflows has very different requirements from one running 100 webhook-triggered flows that each process multi-megabyte payloads.

One pattern holds true across almost every n8n deployment: memory matters more than raw CPU power. n8n is not a number-crunching application, and most execution time is spent waiting for HTTP responses from external APIs.

During that wait, the CPU sits idle while the workflow data remains parked in memory, holding the entire payload, intermediate transformations, and queued nodes. A faster CPU does not shorten the wait for an API response, but more RAM directly determines how many of those waiting workflows can sit in memory at the same time without crashing the process.

The Code node is the main reason RAM planning matters so much. While Code nodes do use CPU during the actual JavaScript or Python execution, their memory footprint is far more demanding than their compute footprint. When a Code node runs, it creates a copy of your workflow data before performing processing operations, then creates another copy afterwards.

So a single Code node handling a 100MB payload can temporarily hold around 300MB in memory. If several of these run concurrently during large-scale operation, RAM utilization can quickly max out, which is why memory should be sized for the heaviest workflow the build will run, rather than the expected average.

These considerations form the basis of our n8n stack and prepare us to plan out our four major components.

Here is an example full-stack at a glance:

  • Linux as the base OS (Ubuntu 22.04 or 24.04),

  • Node.js 20.19 (through 24.x) or Docker,

  • SQLite for throwaway tests, PostgreSQL 13-17 for production,

  • Nginx or Caddy as a reverse proxy with SSL,

  • A public IP or domain name for webhook access,

  • Redis 6+ for queue mode scaling.
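If you just want to see that stack in motion, the quickest path is the official Docker image. A minimal sketch, assuming Docker is already installed and using the image name from n8n's published documentation:

```shell
# Create a named volume so workflows and the encryption key survive restarts
docker volume create n8n_data

# Start a single-container instance; the editor becomes available on port 5678
docker run -it --rm --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```

This single-container form uses SQLite and no reverse proxy, so treat it as a development-tier setup rather than a production deployment.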

For a step-by-step walkthrough of getting n8n running on Ubuntu, see our guide on how to install n8n on Ubuntu.

Before settling on a specific server tier, it helps to understand how n8n actually uses each hardware resource during execution. The numbers on a spec sheet only become meaningful when you know which ones scale with your workload and which ones sit mostly idle.


#CPU, RAM, and Storage Requirements Explained

CPU, RAM, and storage each carry weight in an n8n build, but they do not carry equal weight across all workloads. The CPU determines how quickly the runtime can dispatch and process simultaneous executions. RAM dictates how many of those executions can hold their payloads in memory at once without crashing the process.

Storage governs both how much execution history the database can retain and how fast PostgreSQL can read from disk during queries. Each component has its own scaling curve, and getting the balance wrong on any one of them creates noticeable bottlenecks at production load.

#CPU

n8n workflows are I/O-bound. They fire HTTP requests, wait for responses, transform the returned data, and move on to the next node in the chain. Most node types involve very little local computation. The HTTP Request node sends a request and waits for a response. The database nodes issue a query and wait for a response.

Even data transformation nodes like Set or IF perform lightweight operations on already-loaded JSON.

Most of that time is spent waiting, not computing, which is why even modest CPUs are more than enough to handle a surprising amount of n8n workflow traffic. Modern entry-level processors like the Intel Xeon E-2300 series, AMD Ryzen 5, or a small 4-8 core ARM-based VPS are more than capable of running n8n at a small scale.

A small build that benefits from this efficiency might be a personal automation setup running 5-10 cron-based workflows that sync data between a CRM, a Google Sheet, and a Slack channel, a few times per hour. Two cores cover that loop comfortably for a small number of simple workflows like these.

Concurrency changes the equation. Twenty workflows that execute operations at the same moment, or Code nodes that parse large CSVs and run loops over thousands of records, push CPU into genuinely necessary territory. In these scenarios, a 4-8-core processor with higher single-thread performance, such as an Intel Xeon Gold or AMD EPYC chip, handles the workload more reliably than a higher-core-count budget CPU.

In queue mode, each worker claims its own CPU share, so the total core count scales with the number of workers you run.

A main instance plus four workers running on a single server typically needs 8-12 cores in total: two cores for the main process handling the editor and webhook reception, plus 1-2 cores per worker, depending on whether the workers run CPU-light HTTP-driven workflows or CPU-heavy Code node executions.

#RAM

Memory is where n8n surprises people. At idle, processes can sit around the 100MB mark, which certainly looks lightweight on any modern server. However, when a workflow pulls in a large JSON response, the Code node duplicates it twice, and that single execution suddenly consumes 800MB. Three such workflows running in parallel can easily blow past 2GB of RAM without warning.

In these circumstances, the Code node is the most common cause of production instances crashing from memory exhaustion, usually because the build underestimated its RAM utilization. As a rule of thumb, size RAM according to your heaviest possible workflow, then add a buffer for concurrent executions.

It’s recommended to budget for 8GB of RAM for standard API-to-API workflows that largely move small payloads. If your workflows touch files, process image data, or handle CSV exports with tens of thousands of rows, then an allocation of 16GB or more may be more suitable.
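Two knobs help keep memory behavior predictable on a containerized deployment. A hedged sketch, assuming a Docker setup on an 8 GB host (the exact values are illustrative, not prescriptive):

```yaml
# docker-compose.yml excerpt (illustrative values)
services:
  n8n:
    environment:
      # Cap the Node.js heap below host RAM so the OS and database keep headroom
      - NODE_OPTIONS=--max-old-space-size=6144
      # Store binary payloads (files, images) on disk instead of in memory
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
```

Moving binary data to the filesystem is especially effective for the file-heavy workflows described above, since payloads no longer sit in the Node.js heap between nodes.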

#Storage

Every workflow execution is logged to the database, and self-hosted n8n automatically prunes these logs by default, retaining the most recent 14 days or 10,000 executions, whichever limit is reached first. That keeps growth in check for most setups, but an instance running 500 executions per day with file-heavy payloads can still accumulate multiple gigabytes of log data per month before pruning catches up.

Both the volume of storage you provision and how you tune those retention thresholds matter equally for long-term performance.

Capacity planning starts with the workload tier. A development instance running a handful of workflows can get by with 20 GB of storage, since SQLite stays small and execution history rarely accumulates.

A production instance handling moderate webhook traffic should start at 50 GB and scale toward 100 GB if pruning is not aggressive. High-volume queue mode setups processing thousands of executions per day will benefit from 200 GB or more, with separate volumes for the database and application directories.

For example, if left unmanaged, the execution_entity table in PostgreSQL grows without bound and can slow down every query that touches it. Configuring pruning early keeps the database lean.

To adjust the defaults, set EXECUTIONS_DATA_MAX_AGE to a retention window in hours that matches how far back you need to debug (336 hours covers the default two weeks), and set EXECUTIONS_DATA_PRUNE_MAX_COUNT to cap the number of executions retained regardless of age.
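As a sketch, the defaults map to the following settings; adjust the numbers to match your own debugging window:

```shell
# n8n execution-pruning environment variables (values shown are the defaults)
EXECUTIONS_DATA_PRUNE=true             # pruning is on by default in self-hosted n8n
EXECUTIONS_DATA_MAX_AGE=336            # retention window in hours (14 days x 24)
EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000  # cap on retained executions regardless of age
```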

Storage speed matters just as much as capacity. NVMe SSDs are non-negotiable for production because PostgreSQL hammers random reads during query execution, and NVMe handles random reads 5-10x better than SATA SSDs.

Spinning disks will choke the database once the execution table crosses a few hundred thousand rows. The query latency becomes painfully long before storage actually fills up.

A few storage practices worth implementing from day one:

  • NVMe for both the PostgreSQL data directory and the n8n application directory

  • Default execution pruning thresholds adjusted with EXECUTIONS_DATA_MAX_AGE and EXECUTIONS_DATA_PRUNE_MAX_COUNT to match your debugging needs

  • Database and application volumes on separate partitions, so one filesystem issue does not take down both

  • Nightly pg_dump backups copied to remote storage

  • Separate backups of /home/node/.n8n for credential encryption keys

#Minimum vs Recommended Server Specifications

With the role of each hardware component in mind, the next question is: what does a real-world n8n build look like at different scales? There are three common tiers, and each runs n8n in a noticeably different way.

A low workload build, such as for prototyping integrations, testing community nodes, or demoing a workflow concept, runs n8n as a single Node.js process with SQLite as the database, usually on a laptop or small VPS. It boots in minutes and reliably handles a small number of workflows as it is not exposed to sustained traffic or concurrent executions.

These specs make building a workflow on your laptop perfectly viable. However, they would likely not hold up under webhook traffic, concurrent runs, or large payloads. Primarily, this is because SQLite locks the entire file during writes, so two workflows that finish at the same time would clash.

Minimum specs (dev and testing):

  • CPU: 2 cores

  • RAM: 2 GB

  • Storage: 20 GB SSD

  • Database: SQLite

  • Runtime: Node.js 20.19 (through 24.x) or Docker

A recommended production build runs n8n in Docker alongside PostgreSQL, with a reverse proxy fronting it and SSL. It can handle dozens of concurrent workflows and restart cleanly with no risk of data loss.

This setup most often serves small-to-mid SaaS companies running internal automations, marketing teams orchestrating multi-channel campaigns, or DevOps teams piping events between GitHub, PagerDuty, and Slack.

Its strength lies in stability and predictability: PostgreSQL handles concurrent writes cleanly, Docker isolates the runtime from host-level changes, and the reverse proxy provides a secure entry point for webhooks. The deficiency arises when a sustained load pushes a single Node.js process beyond what a single machine can handle.

As workflow execution times grow, the editor starts lagging during heavy runs, and webhook ingestion begins timing out. At that point, the build needs to graduate to queue mode rather than just adding more CPU and RAM.

Recommended specs (production):

  • CPU: 4+ cores

  • RAM: 8 GB minimum, 16 GB for concurrent or file-heavy workflows

  • Storage: 50-100 GB NVMe SSD

  • Database: PostgreSQL 13-17

  • Reverse proxy: Nginx or Caddy with SSL

  • Networking: Static public IP and domain name

A queue mode build splits execution across multiple containers, with Redis as the message broker. It scales horizontally by adding workers and remains responsive even under heavy load.

This tier is the right fit for high-volume operations such as e-commerce platforms processing webhooks from payment providers and shipping carriers, AI agent infrastructure running thousands of LLM calls per hour, or data pipelines syncing large volumes between warehouses and operational systems.

Its strength is horizontal scalability: doubling worker count effectively doubles execution throughput, and the editor remains usable even when workers are saturated. The main consideration is operational complexity.

You manage more containers, more environment variable coordination, and more failure points across the stack. Queue mode rewards investment in proper monitoring and infrastructure-as-code practices, making it a good fit for teams that already operate at that maturity level.

Queue mode specs (high-volume):

  • Main instance: 2-4 cores, 4 GB RAM

  • Each worker: 2-4 cores, 2-4 GB RAM

  • Redis: version 6+

  • Database: PostgreSQL (SQLite is incompatible with queue mode)
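Queue mode is switched on through environment variables shared by the main instance and every worker. A minimal sketch, assuming Redis is reachable under the hostname redis:

```shell
# Shared by the main instance and all workers
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379

# Workers are then started with a separate command, e.g.:
#   n8n worker --concurrency=10
```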

The table below puts all three tiers side by side to help you match your workload to the right build at a glance.

| Component     | Development | Production        | Queue Mode                    |
| ------------- | ----------- | ----------------- | ----------------------------- |
| CPU           | 2 cores     | 4+ cores          | 4-8+ cores across instances   |
| RAM           | 2 GB        | 8-16 GB           | 4 GB main + 2-4 GB per worker |
| Storage       | 20 GB SSD   | 50-100 GB NVMe    | 100+ GB NVMe                  |
| Database      | SQLite      | PostgreSQL 13-17  | PostgreSQL 13-17              |
| Redis         | No          | No                | Redis 6+                      |
| Reverse Proxy | Optional    | Required          | Required                      |

#Software and Runtime Dependencies

Along with hardware, your choice of runtime, whether plain Node.js or a containerized Docker stack, shapes everything from installation speed to how difficult upgrades may feel six months later.

Regardless of your other decisions, n8n requires Node.js 20.19 or later. The difference is whether you run the Node.js process directly on your host OS, or inside a Docker container that bundles all dependencies.

Direct Node.js installation is faster to get started with: there is no container layer to configure, no Compose file to write, and no Docker network to coordinate. However, it can become harder to maintain, since every additional service (PostgreSQL, Redis, Nginx) becomes a separately installed and managed system process with its own version, configuration files, and update cycle to track.

Running it through Docker adds a small learning curve up front but pays dividends with clean, simple-to-implement upgrades, reproducible environments, and native support for multi-service deployments, such as queue mode.

But which option is the right one for your build, and in which scenarios is each beneficial?

#Node.js and npm

Running n8n directly on Node.js is one of the most lightweight options available. Once Node.js is installed, a single npm install -g n8n command pulls down the application and everything it depends on.

This method particularly shines for quick local testing. You can get n8n up and running in under five minutes, click around the editor, and verify that an integration works before committing to a more permanent setup.

The npm registry also provides community nodes, third-party integrations that extend n8n beyond its built-in library of 400+ services. Common examples include nodes for niche SaaS tools that have not been added to the official catalog yet, like specific cryptocurrency exchanges, regional payment processors, or specialized AI providers.

Community nodes also cover internal use cases such as custom database drivers, message queue adapters, or wrappers around private internal APIs. The npm route is convenient for developers who want to install or test these integrations without waiting for them to be packaged into a Docker image.

It also suits developers who want to dig into n8n's source or experiment with unreleased features pulled directly from GitHub. Removing the Docker abstraction layer here means direct access to the Node.js process, which makes it easier to attach debuggers, modify n8n source files in place, or hot-reload changes during active development.

None of that is impossible inside a container, but it adds significantly more friction.

The limitations usually surface when the setup grows beyond a single-service deployment, because the operator must then manage several parallel host-level services. Production-grade n8n usually needs PostgreSQL running alongside it, plus Redis if you plan to use queue mode.

Direct Node.js installation does not block production use, but it puts the burden of orchestrating those services entirely on the operator. If your rig will need reboots, keeping n8n running will require process management with systemd or PM2.

Upgrades require manual npm commands, rollbacks require remembering the exact previous version, and either can run into dependency conflicts with other Node.js applications on the same host that have had updates of their own.

In summary, direct Node.js installation is ideal for local testing, contributing to n8n source, or running a tightly scoped single-instance setup where the operator wants direct control. For anything involving multiple services, production traffic, or queue mode, Docker is the better starting point.

#Docker and Docker Compose

For workloads that go beyond quick testing, Docker is widely considered to be the default choice. The container image includes n8n and all of the libraries and dependencies it needs, eliminating the version-mismatch problems that often plague direct installations.

Nothing on your host OS can break n8n, and n8n cannot break anything else on your host. That isolation proves invaluable when multiple applications share the same server.

Docker is usually the right call for any build where consistency, repeatable deployments, or scaling are likely to matter. That includes production single-node setups serving real webhook traffic, queue mode deployments distributing work across multiple workers, multi-environment workflows where staging and production need to behave identically, and any team setting where multiple operators need to deploy or maintain the same instance reliably.

Upgrades become a one-line operation. You simply pull the new image tag, restart the container, and the update is live across your stack.

If the update does break something, you can revert to the previous image tag and, if data changed, restore from a backup; these rollbacks take seconds rather than an afternoon of troubleshooting dependency conflicts.

Docker Compose extends those benefits to multi-service setups. A single docker-compose.yml file declares n8n, PostgreSQL, Redis, networking rules, volume mounts, and restart policies as one coordinated stack.

The n8n hosting repository on GitHub publishes ready-made Compose files for common deployment patterns, ranging from simple single-node n8n with PostgreSQL to full queue-mode setups with Redis and multiple worker containers. These provide a solid starting point that you can adapt to your environment.
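As a starting sketch of that pattern, a minimal single-node stack might look like the file below. The credentials are placeholders you must change; the image names follow n8n's and PostgreSQL's published tags:

```yaml
# docker-compose.yml -- minimal n8n + PostgreSQL sketch (placeholder credentials)
services:
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=change-me
      - POSTGRES_DB=n8n
    volumes:
      - pg_data:/var/lib/postgresql/data
    restart: unless-stopped

  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=change-me
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres
    restart: unless-stopped

volumes:
  pg_data:
  n8n_data:
```

`docker compose up -d` brings the pair up, and `docker compose pull && docker compose up -d` performs the one-line upgrade.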

The tradeoff is a slightly steeper initial learning curve if Docker is new to you. To get the most out of it, you need a working understanding of containers as isolated processes, image tags as version anchors, volumes for persistent data outside the container lifecycle, and Docker networking for service-to-service communication.

None of these are advanced topics, but they all need to click before queue mode operations feel routine.

Queue mode all but requires containerization because it depends on multiple services communicating over a Docker network. Running it without Compose is possible in theory, but painful in practice, because you would need to manually create the Docker network, start each container in the correct order with the appropriate environment variables, manage restart behavior individually, and handle inter-service DNS resolution by hand.

Compose declares everything in a single file and lets a single command bring the whole stack up or down.

#Database Requirements (SQLite vs PostgreSQL)

The database is where n8n keeps the state that makes the platform persist across restarts. The information stored here includes workflow definitions, encrypted credentials, and complete execution history for each run carried out to date.

It is also the component most directly tied to how far your deployment can scale. A database that handles five concurrent writes cleanly is fine for prototyping, but struggles the moment your team starts running multiple webhook-triggered workflows that finish at the same instant.

A database that cannot handle fifty concurrent writes will bottleneck your entire automation stack. Every webhook trigger, every scheduled workflow, and every API call that updates state ultimately touches the database.

In a queue mode deployment with four workers each processing 10 workflows per minute, that is 40 writes per minute hitting the database, and each one has to acquire a lock, complete its transaction, and release it.

A database that serializes those writes turns into the bottleneck for the entire system, regardless of how much CPU or RAM the workers have available.

n8n supports two databases that sit at opposite ends of the scalability spectrum. SQLite is a serverless file-based database built into n8n by default, ideal for testing and lightweight use. PostgreSQL is a full-featured relational database that runs as its own service, designed for production workloads with concurrent access.

The right choice depends less on raw performance benchmarks and more on where you see the deployment heading over the next six to twelve months.

#SQLite

SQLite is a serverless embedded database that stores everything in a single file on disk. There is no separate server process to install, configure, or manage.

n8n creates the file automatically on first boot, and the database lives alongside the application in the same directory. The setup is as simple as databases get, because there is nothing to install, no service to start, no user accounts to provision, and no network ports to manage. The entire database lives in a single file, typically named database.sqlite, that you can copy, move, or back up like any other file on disk.

That simplicity is SQLite's biggest strength, but can also be its biggest weakness. For local testing, prototyping workflows, or running n8n in a throwaway VM, it is frictionless because every piece of state lives in that one file.

You can spin up an instance, build a workflow, test it, and tear the whole thing down without touching any external service. For developers who want to evaluate n8n quickly or hand a self-contained instance to a colleague, SQLite is unmatched.

With SQLite, you don’t need to configure environment variables, network ports, or connection strings. Everything works out of the box.
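Because the whole database is one file, a backup really is just a file copy. The sketch below simulates that with a stand-in file and illustrative paths (in real use, stop n8n first so you never copy mid-write):

```shell
# Stand-in for ~/.n8n/database.sqlite -- the paths here are illustrative
mkdir -p /tmp/n8n-demo /tmp/n8n-demo-backup
printf 'workflow state' > /tmp/n8n-demo/database.sqlite

# The entire backup procedure: one copy command
cp /tmp/n8n-demo/database.sqlite /tmp/n8n-demo-backup/database.sqlite.bak
```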

The familiar story is that problems emerge under the strain of concurrent operations. SQLite locks the entire database file for every write operation, which means two workflows that finish at the same time will block each other.

That file-level locking also makes SQLite completely incompatible with queue mode, which assumes multiple processes write to the database simultaneously.

Migration is another sore spot for SQLite. If the deployment is intended to one day reach production levels, or even has the potential to do so, it’s recommended to skip SQLite from day one.

Migration to PostgreSQL is not entirely manual, but it is fragile. n8n provides built-in export/import tooling that dumps workflows and credentials to JSON files, which must then be imported into a fresh PostgreSQL-backed instance.

The fragile parts are credential decryption, execution history (which is not typically migrated), and any workflow timestamps or webhook IDs that downstream systems may depend on. For a small instance with five workflows, the process is tedious but manageable.

For an instance with hundreds of workflows, dozens of integrations, and active webhook URLs already configured in third-party services, migration runs the risk of downtime and broken integrations at varying levels of severity.

#PostgreSQL

PostgreSQL is a full-featured relational database that runs as its own service, independent of n8n. It handles concurrent reads and writes cleanly through row-level locking rather than file-level locking, which means two workflows can write to different rows in the same table at the same time without blocking each other.

It further supports ACID transactions that guarantee data consistency across complex operations, and has decades of battle-tested deployment history at every scale from hobby projects to global enterprises.

PostgreSQL is the right choice for any production deployment, any deployment that may grow into production, and any deployment that intends to use queue mode. It is overkill for a developer running a single throwaway test instance, where SQLite serves the same purpose with no setup overhead.

The main considerations when adopting PostgreSQL are: you now have a separate service to back up, monitor, and patch alongside n8n; you need to manage connection credentials and rotate them on a sensible schedule; and you need enough disk space for both the n8n application data and PostgreSQL's own data directory.

n8n supports PostgreSQL versions 13 through 17, though at the time of writing, version 16 is the current sweet spot for new installations because it balances mature stability with a long remaining support window.

In a Docker Compose setup, PostgreSQL runs as its own container. n8n connects to it over the internal Docker network using standard PostgreSQL connection parameters.

The connection is configured through six environment variables:

  • DB_TYPE: tells n8n which database engine to use (must be set to postgresdb)

  • DB_POSTGRESDB_HOST: the hostname or container name of the PostgreSQL service (commonly postgres in Docker Compose)

  • DB_POSTGRESDB_PORT: the port PostgreSQL listens on (defaults to 5432)

  • DB_POSTGRESDB_DATABASE: the name of the specific database within the PostgreSQL instance that n8n should use

  • DB_POSTGRESDB_USER: the database user account n8n authenticates as

  • DB_POSTGRESDB_PASSWORD: the password for that user account

Note: DB_TYPE must be set to postgresdb, not postgres. n8n does not support MySQL or MariaDB.

#Backup Considerations

When it comes to backups, there are actually two separate things that need to be accounted for, and it’s important that neither is overlooked or there can be serious consequences for your build.

First up is your PostgreSQL database. This is where n8n keeps everything that matters, including workflow definitions, encrypted credentials, and execution logs. Set up pg_dump to run daily and ensure those dump files are copied off the server.

If this becomes damaged or lost, you can end up manually re-entering every API key, every OAuth token, and every database password you ever configured. Depending on how many integrations you have built, that could take hours or even days.

The second item that is often forgotten is the n8n data directory. This can be found inside your Docker container at /home/node/.n8n and contains the encryption key n8n uses to unlock all the credentials stored in your PostgreSQL database.

If something happens and this key becomes lost, every single credential in your database would permanently become unreadable and unusable. Unfortunately, there is no recovery trick and no workaround, so it’s imperative to keep a backup handy.

Back this directory up on the exact same schedule as your database dumps. It’s ideal to store both of them in the same remote location to ensure that a single server failure does not destroy the encryption keys needed to decrypt the backup database.
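A nightly routine covering both items might look like the sketch below. The container names, volume name, paths, and rsync target are all placeholders for your own environment, and the script assumes a Docker deployment with a postgres container and an n8n_data volume:

```shell
#!/usr/bin/env bash
# Nightly backup sketch: dump PostgreSQL and archive the n8n data directory
# (which holds the credential encryption key). All names are placeholders.
set -euo pipefail
STAMP=$(date +%F)
BACKUP_DIR=/var/backups/n8n
mkdir -p "$BACKUP_DIR"

# 1. Database dump (assumes a local container named "postgres")
docker exec postgres pg_dump -U n8n n8n | gzip > "$BACKUP_DIR/n8n-db-$STAMP.sql.gz"

# 2. Data directory, including the encryption key, from the named volume
docker run --rm -v n8n_data:/data -v "$BACKUP_DIR":/backup alpine \
  tar czf "/backup/n8n-data-$STAMP.tar.gz" -C /data .

# 3. Copy both off the server (rsync target is a placeholder)
rsync -a "$BACKUP_DIR/" backup-user@backup-host:/srv/n8n-backups/
```

Run it from cron or a systemd timer on the same schedule, so database dumps and encryption keys always travel together.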

#Networking and Security Requirements

At this point, n8n is running, and the database is persisting data reliably. The next concern is what happens when the instance becomes reachable from the internet.

For most production deployments, exposure is unavoidable: webhook triggers are one of n8n's core features, and they require external services to reach your server over the public internet. The risk is significant, however, as an unprotected n8n instance gives a potential attacker access to every API key, OAuth token, and database credential your workflows use. A single compromise can cascade through every service n8n connects to.

To mitigate this, there are three distinct areas of server hardening that can be implemented: controlling who can reach the server at the network level; encrypting traffic in transit so intercepted packets are useless; and controlling who can log in once they do reach the application.

#Network Configuration

n8n listens on port 5678 for all platform operations: the web editor, the REST API, and all webhook endpoints. Because webhooks need to be reachable from services like GitHub, Stripe, or Slack, the port must be exposed to the internet.

Rather than exposing it directly, allowing any attacker scanning for open ports to see the entire attack surface at once, a more secure approach is to place a reverse proxy in front of n8n and have it listen on port 443, the standard HTTPS port.

This proxy accepts external traffic, terminates SSL, and forwards requests to n8n on the backend. Two servers you can consider for this role are Nginx, a high-performance web server known for its fine-grained configuration control, and Caddy, a modern web server that auto-provisions HTTPS certificates with minimal setup.

With the proxy handling all external traffic, you can configure your firewall to reject any connection attempt on port 5678 coming from outside the server. This means the only public doorway into your n8n instance would be through the proxy on 443.

Bear in mind that external services still need a consistent address to deliver webhook requests to, so this setup will require either a static public IP address or a domain name with DNS records pointing at your server.
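A sketch of that firewall policy, using ufw as an example (an assumption; translate the same rules to iptables, nftables, or your provider's firewall as needed):

```shell
# Write a firewall policy script: only SSH and HTTPS are reachable from
# outside; n8n's own port 5678 stays closed to the internet.
# Run the generated script with root privileges on the server itself.
cat > harden-firewall.sh <<'EOF'
#!/bin/sh
set -eu
ufw default deny incoming   # drop everything not explicitly allowed
ufw allow 22/tcp            # SSH for administration
ufw allow 443/tcp           # HTTPS terminated by the reverse proxy
ufw deny 5678/tcp           # no direct external access to n8n
ufw --force enable
EOF
chmod +x harden-firewall.sh
```

The reverse proxy then reaches n8n over localhost, which the firewall never blocks.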

#SSL/TLS and Reverse Proxy

SSL (Secure Sockets Layer) and its successor, TLS (Transport Layer Security), are cryptographic protocols that encrypt traffic between a client and a server over the internet. Without TLS, every byte sent over HTTP travels in plaintext, readable by anything sitting on the network path. With TLS, that traffic is encrypted in a way that makes interception worthless.

For an n8n instance, this matters more than for a typical web app. Webhook payloads often carry sensitive data such as customer records, payment information, and authentication tokens. Even login sessions to the n8n editor include credentials that unlock every connected service.

In theory, if both types of traffic travel over unencrypted HTTP, anyone on the network path can read them: that can be innocuous, like ISPs and network equipment operators, or more dangerous, such as individuals running a man-in-the-middle attack.

Serving every byte of traffic that crosses the internet over HTTPS is the baseline requirement to reduce this risk. Both Nginx and Caddy handle the reverse proxy role well, each with different strengths.

Caddy is the easier option because it automatically provisions Let's Encrypt certificates with almost no configuration, while Nginx has a more in-depth, longer setup, but gives finer control over rate limiting, caching, and header manipulation.
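To show how little configuration Caddy needs, here is a minimal Caddyfile for this role. The domain is a placeholder; Caddy obtains and renews the Let's Encrypt certificate automatically:

```shell
# Minimal Caddyfile: terminate TLS on 443 and forward to n8n on 5678.
# "your-domain.com" is a placeholder for your actual DNS name.
cat > Caddyfile <<'EOF'
your-domain.com {
    reverse_proxy localhost:5678
}
EOF
# On the server this file lives at /etc/caddy/Caddyfile;
# apply it with: systemctl reload caddy
```

An equivalent Nginx setup needs a server block, certificate paths, and a certbot renewal job, which is where its extra control comes at the cost of extra setup.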

Within n8n itself, you can also set three environment variables to ensure that the application builds correct webhook URLs and enforces secure cookies: N8N_PROTOCOL=https, N8N_HOST=your-domain.com, and WEBHOOK_URL=https://your-domain.com/.
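Those three variables can live in the .env file next to your Compose file, for example (the domain is a placeholder):

```shell
# n8n environment variables so the app builds correct webhook URLs
# and enforces secure cookies behind the HTTPS proxy.
# Replace your-domain.com with the DNS name pointing at your proxy.
cat >> .env <<'EOF'
N8N_PROTOCOL=https
N8N_HOST=your-domain.com
WEBHOOK_URL=https://your-domain.com/
EOF
```

Without these, n8n assumes plain HTTP on its own port and generates webhook URLs that external services cannot use.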

#Authentication and Access Control

Even with TLS in place, the editor still accepts logins from anyone who can reach it over the network. A single compromised owner account exposes every credential n8n has ever stored, and that blast radius extends to every service connected to n8n. This makes protecting the login surface just as important as protecting the network surface.

Layering multiple authentication controls is the safest approach. n8n walks you through creating an owner account on first launch; follow standard security practice by setting a long, random password and storing it in a password manager.

Enable basic authentication to place a username-and-password gate in front of the entire web UI. Enterprise license holders also get SAML-based SSO and two-factor authentication, a step up from the Community edition, which relies on basic authentication combined with reverse proxy controls (such as IP allow-listing) for access management.

If your n8n instance only needs to be accessible by a small team, you can lock down the editor by IP address in your network firewall or reverse proxy configuration. This single step alone eliminates a large chunk of your attack surface.
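With Nginx as the reverse proxy, IP allow-listing the editor takes only a few lines. The address range below is an example (a documentation range); note that webhook paths must stay open so external services can still deliver events:

```shell
# Nginx snippet: restrict the n8n editor UI to a trusted address range
# while leaving webhook endpoints publicly reachable.
# 203.0.113.0/24 is an example range -- substitute your office/VPN subnet.
cat > n8n-editor-acl.conf <<'EOF'
location / {
    allow 203.0.113.0/24;   # trusted team subnet
    deny  all;
    proxy_pass http://localhost:5678;
}

location ~ ^/(webhook|webhook-test)/ {
    proxy_pass http://localhost:5678;   # webhooks stay open to the internet
}
EOF
```

The `/webhook/` and `/webhook-test/` prefixes are n8n's default webhook paths; adjust the regex if you have changed them.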

An example production-ready security setup may have the following measures in place:

  • HTTPS on all traffic through the reverse proxy,

  • Port 5678 blocked from external access,

  • Basic auth or SSO protecting the web UI,

  • IP allow-listing on the editor, where team size makes it practical,

  • A regular schedule for OS patches and Docker image updates.

With the component composition and network specifications handled, the last remaining question is which kind of server should host your build, and the answer depends largely on your scaling needs.

#Deployment Options: Docker, VPS, or Dedicated Server?

Once the requirements are mapped, the next decision is which infrastructure should host the n8n stack. There are three common deployment paths, each with its own balance of cost, control, performance consistency, and operational overhead.

A VPS provides a slice of shared physical hardware, typically priced between $20 and $60 per month. It serves a wide range of small-to-mid-sized n8n deployments efficiently and is the most common starting point for self-hosters.

A dedicated server gives you the entire physical machine, with monthly costs starting around $60-100 and rising based on hardware specs. It suits production setups where consistent performance matters more than minimizing cost.

Direct npm installation skips Docker entirely and runs n8n as a system-level Node.js process. It is the leanest option, well-suited to local development, contributing to n8n source, or self-contained single-instance setups where the operator wants direct access to the runtime.

The sections below cover each path in detail, including the specific scenarios where each one is the strongest choice.


#Docker on a VPS

A Virtual Private Server (VPS) is a slice of a physical machine carved out through virtualization of underlying hardware. Multiple tenants share the hardware, but each VPS has its own isolated operating system and allocated resources.

It is the most affordable starting point for self-hosting n8n. Setup is straightforward for teams familiar with Linux administration.

For example, a standard n8n and PostgreSQL build can run on a VPS with 4-8 GB of RAM. Install Docker and Docker Compose, drop in a Compose file with n8n and PostgreSQL, and a working instance is live inside an hour, with monthly costs typically ranging from $20 to $60, depending on the provider and specs.
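A minimal Compose file for that n8n-plus-PostgreSQL build might look like the sketch below. Image tags, passwords, and volume names are placeholders, not recommendations:

```shell
# Minimal Docker Compose stack: n8n backed by PostgreSQL.
# Passwords and version tags are placeholders -- change before use.
cat > docker-compose.yml <<'EOF'
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: change-me
    volumes:
      - pg_data:/var/lib/postgresql/data

  n8n:
    image: n8nio/n8n
    ports:
      - "127.0.0.1:5678:5678"   # bound to localhost; expose via reverse proxy
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: change-me
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

volumes:
  pg_data:
  n8n_data:
EOF
# Bring the stack up with: docker compose up -d
```

Binding the port to 127.0.0.1 keeps n8n unreachable from outside except through the reverse proxy, matching the network hardening covered earlier.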

This setup serves a wide range of real workloads. Common examples include solo founders running personal automation across email, calendar, and CRM tools; small marketing teams orchestrating content distribution between Notion, Airtable, and Slack; small DevOps teams piping events between GitHub, Jira, and PagerDuty; and internal tools teams running scheduled data syncs between SaaS platforms.

For these use cases, a Docker-on-VPS setup hits the sweet spot of low cost, low operational overhead, and enough performance headroom to grow within reason.

That pricing puts it within reach for solo operators and small teams. The entry barrier is low enough that most n8n self-hosters start here before deciding whether to move up.

The main drawback is shared physical hardware. Other tenants on the same host can consume CPU and disk I/O during their peak hours, which can result in slower webhook response times or a sluggish editor.

For light and moderate workloads this is rarely noticeable, but it can become a serious problem for latency-sensitive use cases such as real-time trading webhooks or high-frequency data syncs.

#Docker on a Dedicated Server

A dedicated server gives you the entire physical machine to customize and run, with no other tenants sharing resources. CPU performance, memory bandwidth, and disk throughput stay flat and predictable regardless of what happens elsewhere on the network.

That consistency can be invaluable for queue mode deployments where multiple workers pull jobs from Redis simultaneously and expect uniform execution times. It’s also impactful for webhook-heavy setups that need guaranteed network throughput during traffic spikes.

Bare metal also opens up hardware-level controls that VPS platforms typically lock away: RAID configuration for storage redundancy, BIOS power management for performance tuning, and kernel parameter adjustments for specific workload profiles are all in your hands.

Monthly costs are higher than comparable VPS plans, typically starting at $60-100 and rising based on CPU, memory, and storage choices, but the tradeoff is worth it when workload consistency and scaling headroom are primary concerns.

#Direct NPM Installation

Skipping Docker entirely and installing n8n as a global npm package with npm install -g n8n is the fastest way to get a running instance. Process management through systemd or PM2 keeps n8n alive after reboots.
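A sketch of that setup with systemd; the service user, paths, and unit contents are assumptions to adapt:

```shell
# Install n8n globally, then keep it alive across reboots with systemd.
# npm install -g n8n   # run once on the server (requires Node.js 20+)
# The unit below is a sketch; user and paths are assumptions.
cat > n8n.service <<'EOF'
[Unit]
Description=n8n workflow automation
After=network.target

[Service]
Type=simple
User=n8n
ExecStart=/usr/bin/env n8n start
Restart=on-failure
Environment=N8N_PORT=5678

[Install]
WantedBy=multi-user.target
EOF
# Copy to /etc/systemd/system/, then:
# systemctl daemon-reload && systemctl enable --now n8n
```

PM2 users can achieve the same persistence with `pm2 start n8n` plus `pm2 startup`, trading the unit file for PM2's own process supervisor.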

The whole setup can be up in under five minutes, and that speed makes it genuinely useful for one scenario in particular.

Direct npm installation is the right call in three scenarios: local development, even on simple equipment like a laptop, where Docker would be unnecessary overhead; contributors actively working on the n8n source code who need direct access to the running process; and self-contained single-instance deployments where the operator prefers to manage every system-level component directly.

Some teams also prefer it for very lightweight production setups where a single n8n process serves a small workload, and additional services such as PostgreSQL and Redis are managed separately by other infrastructure.

Beyond those scenarios, direct npm installation can lead to maintenance overhead that compounds as deployments grow. Upgrades require manual npm commands and process restarts.

Rollbacks are clunkier than switching a Docker image tag, and orchestrating PostgreSQL, Redis, and multiple workers without Docker Compose networking requires manual configuration. Queue mode in particular does not work well outside a containerized environment.

The right deployment method comes down to workload profile:

  • A VPS with Docker is well-suited to light and moderate workloads, where keeping costs low is the priority. It works well for solo operators, small teams, and any setup that runs steadily but doesn't handle heavy automation traffic.

  • A dedicated server with Docker fits production queue-mode setups that need consistent, contention-free performance. It is the right choice when latency consistency, scaling headroom, or hardware-level control matter more than the cost premium over a VPS.

  • Direct npm installation fits laptops, development environments, and single-instance setups where the operator wants direct runtime access. It also suits contributors working on the n8n source code who need to attach debuggers or hot-reload changes.

#How to Choose the Right Server for n8n

Picking a server starts with honestly assessing the workload it will run. Overbuying is expensive when a lower-spec build would have sufficed, while underprovisioning leads to webhook timeouts, frozen editors, and emergency migrations at critical moments.

The goal is to match the hardware to concrete needs while leaving enough headroom to handle growth, without oversizing for a scale you may never reach. Four questions can guide that decision.

The first is: how much memory will your workflows actually consume under peak load? For a simple cron-based RSS reader, 2-4 GB may be enough. For a typical mix of API-to-API workflows handling JSON payloads, 8 GB is a practical baseline, while workflows that touch binary files, large CSVs, or high concurrency should treat 16 GB or more as a safer floor.

The second is: what kind of storage performance does PostgreSQL need to stay responsive as execution history grows? For development instances, a SATA SSD handles the load. For production handling steady webhook traffic, NVMe becomes important once the execution table crosses a few hundred thousand rows. For high-volume queue mode setups, NVMe is non-negotiable.

The third is: how will network performance and data center location affect your workflow latency? If most of your connected APIs live in US-East, hosting your n8n in Frankfurt adds 80-120ms to every API call. If your webhooks come from Europe-based services, hosting in Europe is the obvious choice. The right answer depends on where your traffic actually originates.

Finally, the fourth is: what billing model best fits the deployment lifecycle? If you spin up staging environments for short-lived testing, hourly billing offers flexibility without long-term cost commitment. If your production instance runs 24/7 for the foreseeable future, monthly or annual billing significantly reduces the per-hour cost.

#Match Server Tier to Workload Scale

A personal setup with 5-10 lightweight workflows runs comfortably on a VPS with 2-4 cores, 4-8GB RAM, and standard SSD storage. Cherry Servers' Cloud VPS plans cover these minimum requirements with high-performance shared resources, SSD storage, and deployment in under five minutes.

A team running 50+ workflows with steady webhook traffic and multiple API integrations needs a mid-range dedicated server with 4-8 cores, 16GB RAM, and NVMe drives. Cherry Servers' entry-level bare metal plans start with Intel Gold and AMD EPYC processors that fit this tier, with NVMe storage included by default and customizable RAM options.

Enterprise deployments running queue mode with multiple workers require at least 8 or more cores, 32GB of RAM or higher, and multiple NVMe disks. Cherry Servers' higher-end custom dedicated servers let you select the exact CPU, RAM, and storage configuration your queue-mode stack needs.

#Prioritize Memory over CPU

For most workloads, n8n's memory requirements outweigh its CPU requirements. In practice, a 4-core server with 16GB of RAM will handle higher volumes of n8n traffic than an 8-core server with only 4GB of RAM. A general rule is to size your RAM based on the largest payload your workflows will ever process, then add 30-50% on top as a buffer for concurrent executions.

On the CPU side, an entry-level processor like the Intel Xeon E-2300 series or AMD Ryzen 5 would cover small-to-mid n8n workloads without strain. For production deployments handling steady concurrent execution, mid-tier server CPUs like the Intel Xeon Gold 6230R or AMD EPYC 7313P provide the headroom required without overspending.

For high-volume queue mode setups distributing work across many workers, top-tier processors like the AMD EPYC 9354P or 9474F deliver enough cores and per-core performance to keep workers running cleanly in parallel.

#Choose the Right Storage Type

NVMe SSDs are highly recommended for every production setup, as PostgreSQL relies heavily on random read performance. As the execution history table grows past a few hundred thousand rows, the gap in query response times between NVMe and SATA SSDs becomes substantial. SATA drives are acceptable for development instances, but spinning hard drives should never be considered for an n8n deployment at any tier.

On the volume side, the right capacity depends on workload tier and pruning configuration. A development instance gets by on 20-50 GB. A production instance running steady webhook traffic should start at 50-100 GB to accommodate execution log growth and PostgreSQL's working data.

A queue mode build processing thousands of executions per day benefits from 200 GB or more, ideally split across separate volumes for the database, the n8n application directory, and backups. Whatever the starting capacity, configure execution pruning early so the database does not balloon past the point where queries slow down.
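n8n controls pruning through environment variables; the values below are examples keeping two weeks of history, not recommendations:

```shell
# Execution-log pruning settings (example values, tune to your retention needs).
# EXECUTIONS_DATA_MAX_AGE is in hours: 336 h = 14 days.
cat >> .env <<'EOF'
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=336
EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000
EOF
```

The max-count variable caps the table at a row count regardless of age, whichever limit is hit first.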

#Factor in Network and Location

Webhook-heavy setups need stable, reliable bandwidth to handle thousands of incoming HTTP requests without dropping connections, and the geographical distance between your server and the APIs your workflows call also plays a role.

Every API request in every execution incurs a round-trip latency cost based on that distance. Hosting your server in Frankfurt while most of your connected services run in US-East adds 80-120ms per call, and that will accumulate across every single workflow run.

The right move is to pick the data center closest to where most of your workflows' traffic actually originates and terminates. If your APIs are predominantly US-based, host the build in a US data center. If your incoming webhooks come from European services, host them in Europe.

For workflows spanning multiple regions, a centrally located data center, such as Germany or the Netherlands, often offers the best balance. Most providers offer multiple geographic options; pick the one whose latency profile most closely matches your traffic pattern.

#Consider Billing and Deployment Speed

Pre-configured servers from most providers ship in 15-30 minutes, while custom hardware builds can take 24-72 hours but let you specify the exact components. Consider how quickly you need your n8n build online before committing to a template or a custom configuration.

For staging and testing environments that you frequently spin up and tear down, hourly billing offers flexibility: you pay only for the time you use and can cancel at any time, at the cost of per-hour efficiency if the server ends up running long-term.

On the other hand, monthly or annual contracts bring the per-hour cost down for production servers that run around the clock. These fixed-term plans offer predictable costs and remove the risk of metered billing interruptions during crucial operations.

When comparing providers, evaluate them on:

  • NVMe storage options for the database and execution log volumes

  • Data center locations positioned near the APIs your workflows connect to

  • Provisioning speed that lines up with how quickly you need to go live

  • A mix of hourly, monthly, and annual billing to match each environment

  • API endpoints for automated provisioning and infrastructure-as-code workflows

Cherry Servers offers VPS and bare metal dedicated servers with NVMe storage, flexible billing, and API-driven provisioning across multiple data centers, with full control over your hardware and 24/7 technical support from human engineers to help maintain an optimal build.

#Scaling n8n in Production (Queue Mode, Workers)

Once the right server is in place, the next question is how n8n itself handles execution as workflow volume grows. The platform ships with two distinct execution architectures that suit very different scales.

Single-node mode runs everything in one process, which is simple but has clear limits. Queue mode distributes execution across multiple processes, adding operational complexity but removing the scaling ceiling entirely.

The architectures are not interchangeable. Each uses the database differently, each has different hardware requirements, and each suits a different point in a deployment's lifecycle.

Understanding how they differ prevents the common mistake of either over-engineering a small deployment or leaving a large one stuck on an architecture that cannot grow with it.

Switching between modes is possible but not effortless. Moving from single-node to queue mode requires adding Redis as a service, deploying additional worker containers, setting EXECUTIONS_MODE=queue across the entire stack, and validating that workflows execute as expected on the new architecture.

Existing data in PostgreSQL migrates cleanly, making the transition manageable. Downscaling from queue mode back to single-node is even easier: stop the workers, remove Redis from the environment, and unset the queue mode variable. The takeaway is that picking the right mode at the start saves operational effort, but neither choice locks you in permanently.

#Single-Node Mode

Single-node mode is the default. A single Node.js process runs the editor UI, the execution engine, the webhook listener, and the scheduler, all sharing one pool of CPU and memory.

It is the simplest possible deployment, and for 20-30 workflows triggering a few times per hour, it works flawlessly. Setup is minimal because everything runs in a single container.

The architecture starts breaking down once the execution load climbs. A single large workflow execution ties up the process that also serves the editor, so the UI freezes until the workflow finishes.

Webhook endpoints share that same process, which means incoming webhook requests from external services can time out while another workflow hogs the runtime. Scheduled triggers fire on schedule, but wait in line behind whatever else the process is currently running.

You can push single-node mode further with more CPU and RAM, but there is a hard ceiling as one Node.js process can only do so much, no matter how much hardware you throw at it.

#Queue Mode Architecture

Queue mode decouples execution from the main process entirely. The main container still handles the editor, the API, the scheduler, and the webhook listener.

Instead of running workflows locally, it writes a job ID to Redis whenever a trigger fires. Separate worker containers watch the Redis queue, pick up jobs, execute the corresponding workflows, and write results back to PostgreSQL.

The main process never touches execution, which means the editor stays responsive no matter what the workers are doing. Adding a second worker doubles execution capacity.

Adding a fourth quadruples it, and so on. So scaling effectively becomes a matter of spinning up more worker containers, not rebuilding the architecture from scratch.

Activating queue mode requires setting EXECUTIONS_MODE=queue on both the main process and every worker, along with the Redis connection environment variables. Workers can run on the same machine as the main process for small deployments or on separate machines for true horizontal scaling.
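In environment-file terms, the switch amounts to a shared set of variables plus worker containers running the worker command. The Redis hostname and worker count below are examples for a Compose setup:

```shell
# Queue-mode settings shared by the main process and every worker.
# Redis connection values assume a Compose service named "redis".
cat >> .env <<'EOF'
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379
EOF
# Workers run the same n8n image with a different command, e.g. in Compose:
#   command: worker
# and scale horizontally with: docker compose up -d --scale n8n-worker=4
```

Because every container reads the same variables, a mismatch (a worker missing EXECUTIONS_MODE, say) silently falls back to local execution, so keep the settings in one shared file.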

The tradeoff is operational complexity. Queue mode requires Redis as an additional service, PostgreSQL (SQLite does not work), and careful coordination of environment variables across all containers. Monitoring also becomes more complex because the execution state now spans multiple components.

#Webhook Processors

For high webhook volumes, consider adding a separate scaling layer on top of queue mode: webhook processor containers, which are specialized n8n instances that accept incoming HTTP requests and pass them into the Redis queue without running any workflow logic themselves.

Placing multiple webhook processors behind a load balancer spreads incoming traffic across them and lets the deployment absorb webhook spikes that would otherwise overwhelm a single listener.

The main process should stay out of the load balancer pool because mixing webhook traffic into it degrades the editor experience. Users who are building and testing workflows simultaneously will notice the slowdown immediately.
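A load-balancer sketch for that layer, with Nginx spreading webhook traffic across two processor instances. Ports, names, and certificate handling are illustrative assumptions:

```shell
# Nginx upstream spreading webhook traffic across two webhook processors
# (instances started with n8n's webhook command). Ports and the domain
# are placeholders; TLS certificate directives are omitted for brevity.
cat > webhook-lb.conf <<'EOF'
upstream n8n_webhooks {
    server 127.0.0.1:5679;
    server 127.0.0.1:5680;
}

server {
    listen 443 ssl;
    server_name your-domain.com;

    location ~ ^/(webhook|webhook-test)/ {
        proxy_pass http://n8n_webhooks;
    }
}
EOF
```

The editor and API keep their own route to the main process, so webhook bursts never touch it.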

#When to Switch to Queue Mode

As in all previous cases, whether to use queue mode depends on your particular use case. If your n8n instance only handles a handful of workflows that trigger a few times a day, you likely won't need queue mode, but if you notice any of the following signs, it may be time for a switch:

  • The editor lags or freezes during workflow execution,

  • Webhook endpoints return HTTP 504 timeouts,

  • Building workflows feels slow during peak execution hours,

  • Inbound webhook volume outgrows what a single process can handle.

#Final Thoughts

Getting the n8n infrastructure right on day one saves a painful migration later, and the most consequential decision is the database. Start with PostgreSQL instead of SQLite even for small deployments, because moving database engines after accumulating hundreds of saved workflows and credentials is tedious and risky.

From there, plan RAM allocation around your heaviest workflow rather than your average one, since the Code node's memory duplication makes peak load the deciding factor for stability.

Once the database and memory are set, the next layer to think ahead about is execution architecture. Keep queue mode in your back pocket from the start by using Docker Compose, so the jump from single-node to multi-worker becomes a configuration change rather than a rebuild.

Around all of that, the supporting decisions matter just as much: pick your data center based on where your workflow traffic actually originates, prefer NVMe storage for any production database, and front the entire stack with a reverse proxy handling SSL.

FAQs

What are the minimum system requirements for self-hosting n8n?

n8n requires at least 2 CPU cores, 2 GB of RAM, 20 GB of SSD storage, and Node.js 20.19 (through 24.x) or Docker on a Linux-based OS. These minimums apply only to development and testing. Production environments need 4+ cores, 8-16 GB RAM, and PostgreSQL.

Can I run n8n on a VPS, or do I need a dedicated server?

A VPS handles light to medium n8n workloads effectively. Dedicated servers become worthwhile for production queue-mode setups, high-concurrency execution, or workloads that need consistent performance without resource contention from other tenants.

Can I run n8n on Cherry Servers?

Yes. Cherry Servers offers both VPS and bare metal dedicated servers with NVMe storage, Docker support, and Linux OS options, including Ubuntu 22.04 and 24.04. Pre-configured servers deploy in 15-30 minutes, and API-driven provisioning lets you automate the setup as part of your infrastructure workflow.

Which database should I use for n8n in production?

PostgreSQL versions 13 through 17. n8n defaults to SQLite, but SQLite cannot handle concurrent writes well and does not support queue mode. PostgreSQL 16 offers the best balance of stability and performance for new installations.

How much storage does n8n need?

Start with a 50-100 GB NVMe SSD for production. Execution history accumulates over time, and active instances can generate several gigabytes of log data monthly. Configure automatic pruning with environment variables to control database growth.

Does n8n support Windows servers?

n8n runs on Windows through WSL2 or Docker Desktop. However, n8n recommends Linux-based operating systems for production deployments. Linux provides better performance, stability, and compatibility with Docker and PostgreSQL.

What is n8n queue mode, and when do I need it?

Queue mode distributes workflow execution across separate worker processes using Redis as a message broker. You need it when the n8n editor slows down under heavy load, webhook requests time out, or you require horizontal scaling for high-volume automation.
