How to Set Up a Solana RPC Node: Step-by-Step
Running your own RPC is usually about three things: better reliability, fewer rate-limit headaches, and more control over performance and configuration. This guide explains how to set up and run your own Solana RPC node, which gives you a private endpoint for querying blockchain data and submitting transactions.
Although we’ll use Devnet for this walkthrough, we’ll also show the exact flags and config changes needed to switch your RPC node to Testnet or Mainnet-Beta when you’re ready.
#What is a Solana RPC node?
A Solana RPC node is your own gateway into the Solana network. It exposes Solana’s JSON-RPC endpoints over HTTP and, if you enable it, WebSocket too. That means your apps and scripts can query on-chain data like account balances, transaction details, and block info, and also submit transactions, using an endpoint you control.
If you want a deeper explanation of RPC nodes before you dive into setup, here’s a good read: What is an RPC Node.
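In practice, every JSON-RPC call is just an HTTP POST with a small JSON body. A tiny helper (hypothetical name, shown here only to make the pattern concrete) builds the body you then send with curl:

```shell
# Build a JSON-RPC 2.0 request body (hypothetical helper; method names
# such as getHealth or getSlot come from Solana's JSON-RPC API)
rpc_body() {
  printf '{"jsonrpc":"2.0","id":1,"method":"%s"}' "$1"
}

# Send it to an endpoint you control, for example:
#   curl -s http://127.0.0.1:8899 -H "Content-Type: application/json" -d "$(rpc_body getHealth)"
rpc_body getSlot
```

Swap in any method from Solana's JSON-RPC API; the same shape works for getHealth, getSlot, and most parameterless calls.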
#Solana RPC node setup
Before deploying a Solana RPC node, ensure your server meets the network’s performance and reliability requirements. The sections below outline the recommended Solana RPC node setup: the hardware, software, and network configuration needed to run a stable node.
#Solana RPC node hardware requirements
CPU
• Minimum: 24 physical cores / 48 threads
• Production grade: 32 cores / 64 threads
• Base clock speed: 3.55 GHz or higher
• Must support SHA extensions and AVX2 instructions
• Recommended processors: AMD EPYC Gen 4 or newer
RAM
• 384 GB+ (ECC recommended)
• 512 GB+ if using account indexes (--account-index)
Storage
• NVMe SSD required (PCIe Gen3 x4 or better)
• Accounts: 1 TB+ NVMe, high endurance (high TBW)
• Ledger: 2 TB+ NVMe, high endurance (high TBW)
• Snapshots: separate disk recommended
• Use larger ledger storage if you need more historical data
Network
• 10 Gbps recommended
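You can sanity-check the CPU requirements from a shell before going further. This sketch reads the Linux feature flags (in /proc/cpuinfo, avx2 and sha_ni correspond to AVX2 and the SHA extensions):

```shell
# Quick CPU preflight (Linux): confirm the processor advertises the
# SHA and AVX2 instruction sets the validator depends on
check_cpu_flags() {
  for flag in avx2 sha_ni; do
    if grep -qw "$flag" /proc/cpuinfo; then
      echo "$flag: present"
    else
      echo "$flag: MISSING"
    fi
  done
}
check_cpu_flags
```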
#Software Requirements
• OS: Ubuntu 24.04 LTS or newer
• SSH access and a sudo user
• Basic tools: curl, tar, gzip, git
• Time sync enabled (chrony/ntp)
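A quick preflight loop (a sketch; adjust the tool list to taste) confirms the basics are installed before you start:

```shell
# Preflight: report which of the basic tools from the list above
# are missing, if any
check_tools() {
  local missing=""
  for tool in curl tar gzip git; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -z "$missing" ]; then
    echo "all present"
  else
    echo "missing:$missing"
  fi
}
check_tools
```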
#Ports
Solana P2P
• TCP and UDP ports in the 8000–10000 range (or a restricted dynamic port range)
RPC access
• HTTP JSON-RPC commonly uses 8899
• WebSocket commonly uses 8900
For this demonstration, we used a dedicated server with specs similar to the requirements above. You can pick a comparable server here.
Deploy a Solana RPC node in minutes
Dedicated server configuration optimized for Solana RPC workloads.
#Choose your environment
We’ll use Devnet in this guide so you can test the full setup without touching real assets.
• Devnet
Best for development and testing. It’s the safest place to validate your RPC node setup end to end.
• Testnet
Mostly used for network testing and validator-related experiments. It can be less predictable than Devnet for app testing, but it’s useful when you want to test in a more “network-like” environment.
• Mainnet-Beta
Production network. Use this when you’re ready to serve real users and real transactions.
For now, we’ll proceed with Devnet. Later in the guide, you’ll see exactly what to change to switch to Testnet or Mainnet-Beta.
#How to Set Up a Solana RPC Node: Step-by-Step
#Prepare the server
Step 1: Open the terminal program
To start this guide, you will be running commands on your trusted computer, not on the remote server that you plan to use for the RPC node. First, open the terminal program on your local machine.
If you are using Ubuntu, you can open the terminal with Ctrl + Alt + T.
Step 2: Install the Solana CLI on your local machine
Pick your OS below and run the install.
Mac and Linux
Run this in your terminal
sh -c "$(curl -sSfL https://release.anza.xyz/stable/install)"
When the install works, you should see something like this
downloading v3.1.9 installer
✨ 3.1.9 initialized
Adding
export PATH="/Users/mac/.local/share/solana/install/active_release/bin:$PATH" to /Users/mac/.zprofile
Adding
export PATH="/Users/mac/.local/share/solana/install/active_release/bin:$PATH" to /Users/mac/.bash_profile
Close and reopen your terminal to apply the PATH changes or run the following in your existing shell:
export PATH="/Users/mac/.local/share/solana/install/active_release/bin:$PATH"
PATH update
Depending on your system, the installer may also print a message telling you to update your PATH. If you see that, copy the export command it shows and run it. It usually looks like this
export PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"
Close and reopen your terminal, then confirm
solana --version
Windows
Open Command Prompt as Administrator
Download the installer
cmd /c "curl https://release.anza.xyz/v3.1.9/agave-install-init-x86_64-pc-windows-msvc.exe --output C:\agave-install-tmp\agave-install-init.exe --create-dirs"
Run it
C:\agave-install-tmp\agave-install-init.exe v3.1.9
Close the window, open a new Command Prompt, then confirm
solana --version
Output:
solana-cli 3.1.9 (src:765ee54a; feat:1620780344, client:Agave)
Version note
You can replace v3.1.9 with the release tag matching the software version you want, or use one of the channel names stable, beta, or edge.
Step 3: Set the Solana cluster to Devnet
solana config set --url https://api.devnet.solana.com
solana config get
Output:
Config File: /Users/mac/.config/solana/cli/config.yml
RPC URL: https://api.devnet.solana.com
WebSocket URL: wss://api.devnet.solana.com/ (computed)
Keypair Path: /Users/mac/.config/solana/id.json
Commitment: confirmed
Step 4: Generate the RPC node identity keypair
solana-keygen new --outfile rpc-identity.json
Output:
Generating a new keypair
For added security, enter a BIP39 passphrase
NOTE! This passphrase improves security of the recovery seed phrase NOT the
keypair file itself, which is stored as insecure plain text
BIP39 Passphrase (empty for none):
Wrote new keypair to rpc-identity.json
============================================================================
pubkey: 96cTDLu1NoxmcFafVjMnxFQjRoU7atpCsTn2KQBoeRKd
============================================================================
Save this seed phrase and your BIP39 passphrase to recover your new keypair:
butter travel weapon strong olympic now wrong void happy pet cheese marriage
============================================================================
Step 5: Provision your dedicated server
Deploy a dedicated server that matches the hardware and software requirements covered earlier. Once it’s ready, copy the public IP address.
Step 6: SSH into the server
Next, SSH into your server using the appropriate command for your provider. This gives you remote access to configure the node. The command follows this format:
ssh user@YOUR_SERVER_IP
Step 7: Update packages and install basic tools
This keeps your system clean and avoids dependency issues later.
sudo apt update -y && sudo apt upgrade -y
sudo apt install -y curl wget git jq tar gzip unzip ca-certificates ufw
Step 8: Enable the firewall
Do this early, but allow SSH first so you don’t lock yourself out.
Allow SSH:
sudo ufw allow "OpenSSH"
Output:
Rules updated
Rules updated (v6)
Allow Solana P2P ports:
sudo ufw allow 8000:10000/udp
sudo ufw allow 8000:10000/tcp
Output:
Rules updated
Rules updated (v6)
Rules updated
Rules updated (v6)
If you want RPC reachable publicly, allow these too:
sudo ufw allow 8899/tcp
sudo ufw allow 8900/tcp
Output:
Rules updated
Rules updated (v6)
Rules updated
Rules updated (v6)
Enable and confirm:
sudo ufw enable
sudo ufw status
Output:
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
8000:10000/udp ALLOW Anywhere
8000:10000/tcp ALLOW Anywhere
8899/tcp ALLOW Anywhere
8900/tcp ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
8000:10000/udp (v6) ALLOW Anywhere (v6)
8000:10000/tcp (v6) ALLOW Anywhere (v6)
8899/tcp (v6) ALLOW Anywhere (v6)
8900/tcp (v6) ALLOW Anywhere (v6)
Step 9: Add a new user
This user will run the RPC process.
sudo adduser sol
This will create a new user named “sol” on the server.
Output:
info: Adding user `sol' ...
info: Selecting UID/GID from range 1000 to 59999 ...
info: Adding new group `sol' (1001) ...
info: Adding new user `sol' (1001) with group `sol (1001)' ...
info: Creating home directory `/home/sol' ...
info: Copying files from `/etc/skel' ...
New password:
Retype new password:
passwd: password updated successfully
Changing the user information for sol
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y
info: Adding new user `sol' to supplemental / extra groups `users' ...
info: Adding user `sol' to group `users' ...
Step 10: Format and mount your NVMe drives
10.1 Identify disks
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
Output:
NAME SIZE TYPE MOUNTPOINT
nvme2n1 894.3G disk
nvme1n1 894.3G disk
nvme0n1 894.3G disk
├─nvme0n1p1 488M part /boot/efi
└─nvme0n1p2 893.8G part /
nvme3n1 894.3G disk
Do not format the disk that has / mounted on it; in this case it is nvme0n1.
10.2 Decide your disk mapping
Before formatting anything, decide which disk is for what. For this guide, we will use:
• Ledger disk: /dev/nvme1n1 mounted to /mnt/ledger
• Accounts disk: /dev/nvme2n1 mounted to /mnt/accounts
• Snapshots: stored inside the ledger mount at /mnt/ledger/snapshots
Your disk names may be different, so confirm with lsblk first.
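Formatting the wrong device is unrecoverable, so if you script this step, a small guard is cheap insurance. This sketch checks /proc/mounts and refuses to proceed when the device already has something mounted from it:

```shell
# Refuse to format a device that already has mounted filesystems
# (safety sketch; pass the whole-disk path, e.g. /dev/nvme1n1)
safe_to_format() {
  if grep -q "^$1" /proc/mounts 2>/dev/null; then
    echo "refusing: $1 is mounted"
    return 1
  fi
  echo "ok to format $1"
}
safe_to_format /dev/nvme1n1
```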
10.3 Format and mount Ledger
Format the ledger disk:
sudo mkfs -t ext4 /dev/nvme1n1
sudo lsblk -f
Output:
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 234423126 4k blocks and 58613760 inodes
Filesystem UUID: 52a7eb1c-243d-400e-87e5-f122576dd4a1
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
nvme2n1
nvme1n1
nvme0n1
├─nvme0n1p1 vfat FAT32 uefi-boot A1AE-6E23 480.9M 1% /boot/efi
└─nvme0n1p2 ext4 1.0 cbfae2fb-4940-4bca-bfbe-99cad4410b24 822.5G 1% /
nvme3n1
Mount it:
sudo mkdir -p /mnt/ledger
sudo chown -R sol:sol /mnt/ledger
sudo mount /dev/nvme1n1 /mnt/ledger
Create a snapshots folder on the ledger mount:
sudo mkdir -p /mnt/ledger/snapshots
sudo chown -R sol:sol /mnt/ledger/snapshots
10.4 Format and mount Accounts
Format the accounts disk:
sudo mkfs -t ext4 /dev/nvme2n1
sudo lsblk -f
Output:
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 234423126 4k blocks and 58613760 inodes
Filesystem UUID: 4af2a8fa-45a1-4c1b-916b-1b46f1bca188
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
nvme2n1
nvme1n1 ext4 1.0 52a7eb1c-243d-400e-87e5-f122576dd4a1 834.4G 0% /mnt/ledger
nvme0n1
├─nvme0n1p1 vfat FAT32 uefi-boot A1AE-6E23 480.9M 1% /boot/efi
└─nvme0n1p2 ext4 1.0 cbfae2fb-4940-4bca-bfbe-99cad4410b24 822.5G 1% /
nvme3n1
Mount it:
sudo mkdir -p /mnt/accounts
sudo chown -R sol:sol /mnt/accounts
sudo mount /dev/nvme2n1 /mnt/accounts
10.5 Verify mounts
df -h | grep -E "ledger|accounts"
You should see an output like this:
/dev/nvme1n1 880G 32K 835G 1% /mnt/ledger
/dev/nvme2n1 880G 28K 835G 1% /mnt/accounts
10.6 Persist mounts after reboot
Get the real UUIDs for your ledger and accounts drives
Run:
sudo blkid /dev/nvme1n1
sudo blkid /dev/nvme2n1
You will get output that looks like:
/dev/nvme1n1: UUID="52a7eb1c-243d-400e-87e5-f122576dd4a1" BLOCK_SIZE="4096" TYPE="ext4"
/dev/nvme2n1: UUID="4af2a8fa-45a1-4c1b-916b-1b46f1bca188" BLOCK_SIZE="4096" TYPE="ext4"
Edit fstab:
sudo nano /etc/fstab
Add entries like the following, substituting the real UUIDs from the blkid output above:
UUID=YOUR_LEDGER_UUID /mnt/ledger ext4 defaults,noatime 0 2
UUID=YOUR_ACCOUNTS_UUID /mnt/accounts ext4 defaults,noatime 0 2
Test the fstab entries
sudo mount -a
df -h | grep -E "ledger|accounts"
If mount -a runs with no errors and you can see /mnt/ledger and /mnt/accounts in the output, you’re good.
Output:
/dev/nvme1n1 880G 32K 835G 1% /mnt/ledger
/dev/nvme2n1 880G 28K 835G 1% /mnt/accounts
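Both fstab entries follow the same fixed pattern, so if you automate provisioning, a small helper (hypothetical) can generate them from the UUIDs blkid printed:

```shell
# Emit an /etc/fstab line for an ext4 data mount (hypothetical helper;
# keeps the defaults,noatime options used above)
fstab_line() {
  printf 'UUID=%s %s ext4 defaults,noatime 0 2\n' "$1" "$2"
}
fstab_line 52a7eb1c-243d-400e-87e5-f122576dd4a1 /mnt/ledger
fstab_line 4af2a8fa-45a1-4c1b-916b-1b46f1bca188 /mnt/accounts
```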
Step 11: Give the sol user access and create snapshots folder
After mounting your ledger and accounts drives, make sure the sol user owns both mount points. The RPC process runs as sol, so it must be able to create folders and write files inside these directories.
Give sol permission for ledger and accounts
sudo chown -R sol:sol /mnt/ledger
sudo chown -R sol:sol /mnt/accounts
Create snapshots on the ledger disk
We are keeping snapshots on the ledger disk, so create the folder inside /mnt/ledger (if you already created it in Step 10.3, these commands are safe to re-run).
sudo mkdir -p /mnt/ledger/snapshots
sudo chown -R sol:sol /mnt/ledger/snapshots
Step 12: System tuning
This step raises network buffer sizes, the memory-map count, and the open-file limit, which helps prevent crashes under load.
Create a sysctl config:
sudo bash -c "cat >/etc/sysctl.d/21-solana-rpc.conf <<EOF
net.core.rmem_default = 134217728
net.core.rmem_max = 134217728
net.core.wmem_default = 134217728
net.core.wmem_max = 134217728
vm.max_map_count = 1000000
fs.nr_open = 1000000
EOF"
Apply it:
sudo sysctl -p /etc/sysctl.d/21-solana-rpc.conf
Increase open files limit:
sudo bash -c "cat >/etc/security/limits.d/90-solana-nofiles.conf <<EOF
* - nofile 1000000
EOF"
It’s a good idea to log out and log back in after this so limits apply properly.
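After logging back in, you can confirm the new limit is active for your session:

```shell
# The soft open-files limit for the current session; it should report
# 1000000 once the limits.d change above is in effect
ulimit -n
```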
Step 13: Copy the RPC identity keypair to the server
From your local machine:
scp -C rpc-identity.json root@YOUR_SERVER_IP:/home/sol/
Now SSH into the server and lock it down:
sudo chown sol:sol /home/sol/rpc-identity.json
sudo chmod 600 /home/sol/rpc-identity.json
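To confirm the keypair copied over intact, compare checksums on both machines. sha256sum ships with coreutils; this small wrapper just isolates the digest:

```shell
# Print only the SHA-256 digest of a file; run this against the local
# copy and against /home/sol/rpc-identity.json on the server
file_digest() {
  sha256sum "$1" | awk '{print $1}'
}

# Example: file_digest rpc-identity.json
```

The two digests must match exactly; any difference means the transfer corrupted the file.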
#Install Agave on the server
Step 1: Switch to the sol user
su - sol
Step 2: Install the Agave CLI
sh -c "$(curl -sSfL https://release.anza.xyz/stable/install)"
Reload your shell:
exec $SHELL
Confirm it works:
solana --version
Step 3: Why we are building from source
As of Agave v3.0.0, Anza no longer ships the agave-validator binary as a prebuilt release artifact. You may install the CLI tools successfully and still not have the validator binary needed to run an RPC node. For that reason, operators must now build Agave from source to obtain agave-validator and run the exact version they want.
Step 4: Install build dependencies
These packages need admin access. If you are currently logged in as the sol user, switch to root, install the dependencies, then switch back to sol.
Switch to root:
exit
Install dependencies:
apt update -y
apt install -y git clang cmake make pkg-config libssl-dev llvm libudev-dev protobuf-compiler
Install libclang packages:
apt install -y libclang-dev libclang1 llvm-dev
Switch back to the sol user:
su - sol
Step 5: Set LIBCLANG_PATH
Find the installed libclang location:
export LIBCLANG_PATH="$(dirname "$(find /usr -name 'libclang.so*' 2>/dev/null | head -n 1)")"
echo $LIBCLANG_PATH
Optional, but recommended to persist it:
echo "export LIBCLANG_PATH=$LIBCLANG_PATH" >> ~/.bashrc
source ~/.bashrc
Step 6: Install Rust using rustup
curl https://sh.rustup.rs -sSf | sh
Then load Rust into your current shell:
. "$HOME/.cargo/env"
This adds cargo and rustup to your PATH.
Step 7: Confirm it worked
cargo --version
rustup --version
Step 8: Build Agave from source
cd ~
git clone https://github.com/anza-xyz/agave.git
cd agave
./scripts/cargo-install-all.sh .
Step 9: Add the Agave binaries to PATH
echo 'export PATH="$HOME/agave/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
Quick check:
which agave-validator || true
which solana-validator || true
ls -1 ~/agave/bin | grep -E "validator"
Output:
agave-validator
solana-test-validator
Step 10: Set the cluster to Devnet
solana config set --url https://api.devnet.solana.com
solana config get
Output:
Config File: /home/sol/.config/solana/cli/config.yml
RPC URL: https://api.devnet.solana.com
WebSocket URL: wss://api.devnet.solana.com/ (computed)
Keypair Path: /home/sol/.config/solana/id.json
Commitment: confirmed
#Configure and start the RPC node
Step 1: Confirm the validator binary is available
If you already added /home/sol/agave/bin to your PATH, this should work:
which agave-validator
If agave-validator is not found, confirm the binary exists and use its full path instead:
ls -lh /home/sol/agave/bin/agave-validator
Step 2: Create an RPC startup script
Create the script:
nano ~/rpc.sh
Paste this:
#!/bin/bash
set -e
exec /home/sol/agave/bin/agave-validator \
--identity /home/sol/rpc-identity.json \
--known-validator dv1ZAGvdsz5hHLwWXsVnM94hWf1pjbKVau1QVkaMJ92 \
--known-validator dv2eQHeP4RFrJZ6UeiZWoc3XTtmtZCUKxxCApCDcRNV \
--known-validator dv4ACNkpYPcE3aKmYDqZm9G5EB3J4MRoeE7WNDRBVJB \
--known-validator dv3qDFk1DTF36Z62bNvrCXe9sKATA6xvVy6A798xxAS \
--only-known-rpc \
--full-rpc-api \
--no-voting \
--ledger /mnt/ledger \
--accounts /mnt/accounts \
--snapshots /mnt/ledger/snapshots \
--log /home/sol/solana-rpc.log \
--rpc-port 8899 \
--rpc-bind-address 0.0.0.0 \
--dynamic-port-range 8000-8025 \
--entrypoint entrypoint.devnet.solana.com:8001 \
--wal-recovery-mode skip_any_corrupted_record \
--limit-ledger-size
Save and exit.
Make it executable:
chmod +x ~/rpc.sh
Step 3: Start the RPC node
Run it in the foreground first:
~/rpc.sh
Step 4: What to expect on first start
On the first run, the node will usually download and unpack a snapshot. This can take a while, and the RPC port will not open until the snapshot is in place and the node finishes bootstrapping.
Step 5: Watch logs
tail -f /home/sol/solana-rpc.log
Step 6: Verify RPC is listening
In another SSH session, check if the RPC port is open:
ss -lntp | grep 8899 || true
Output:
LISTEN 0 1024 0.0.0.0:8899 0.0.0.0:* users:(("agave-validator",pid=184054,fd=217238))
Once it appears, test locally on the server:
curl http://127.0.0.1:8899 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"getHealth"}'
Output:
{"jsonrpc":"2.0","result":"ok","id":1}
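If you script your deployment, a small poll loop saves you from rerunning the check by hand. This sketch uses bash's /dev/tcp feature, so run it with bash:

```shell
# Wait until a TCP port starts accepting connections, up to a retry
# limit; prints "up" or "down" (bash-specific /dev/tcp)
wait_for_rpc() {
  local host="$1" port="$2" tries="${3:-60}"
  local i=1
  while [ "$i" -le "$tries" ]; do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo up
      return 0
    fi
    sleep 2
    i=$((i + 1))
  done
  echo down
  return 1
}

# Example: wait_for_rpc 127.0.0.1 8899 && run the getHealth curl above
```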
Note about other networks
The --known-validator values are network specific. If you later switch to Testnet or Mainnet-Beta, you must update the --known-validator list to match that network.
#Run the RPC node with systemd
Note: stop the foreground RPC process (Ctrl+C) before continuing with this section.
Step 1: Make sure your startup script uses the full validator path
This avoids “command not found” issues when systemd runs the service.
Open the script:
nano /home/sol/rpc.sh
Confirm it uses the full path, like:
exec /home/sol/agave/bin/agave-validator \
Step 2: Switch to root
exit
Step 3: Create a systemd service file
nano /etc/systemd/system/solana-rpc.service
Paste this:
[Unit]
Description=Solana RPC Node
After=network-online.target
Wants=network-online.target
[Service]
User=sol
Group=sol
Type=simple
ExecStart=/home/sol/rpc.sh
Restart=always
RestartSec=3
LimitNOFILE=1000000
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
Step 4: Reload systemd and start the service
systemctl daemon-reload
systemctl enable --now solana-rpc
Output:
Created symlink /etc/systemd/system/multi-user.target.wants/solana-rpc.service → /etc/systemd/system/solana-rpc.service.
Step 5: Check status and logs
systemctl status solana-rpc --no-pager
journalctl -u solana-rpc -f
#Verify the RPC is working
Step 1: Watch the logs
On first start, the node will usually download a snapshot and rebuild local state. While this is happening, your RPC port may not be open yet.
tail -f /home/sol/solana-rpc.log
Step 2: Check when the RPC port is live
ss -lntp | grep 8899 || true
If it returns nothing, the node is still bootstrapping.
Step 3: Test RPC locally on the server
Once 8899 is listening:
curl http://127.0.0.1:8899 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"getHealth"}'
Slot check:
curl http://127.0.0.1:8899 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"getSlot"}'
Step 4: Test RPC from your local machine
If your RPC is public:
curl http://YOUR_SERVER_IP:8899 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"getHealth"}'
If you kept RPC private, use an SSH tunnel:
ssh -L 8899:127.0.0.1:8899 root@YOUR_SERVER_IP
Then test:
curl http://127.0.0.1:8899 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"getHealth"}'
Step 5: Create a Devnet wallet
solana-keygen new --outfile devnet-wallet.json
Get the wallet address:
solana-keygen pubkey devnet-wallet.json
Step 6: Airdrop SOL using the Devnet faucet
Open the Solana faucet and request Devnet SOL for the address you just generated.
Step 7: Check the wallet balance through your RPC
Point your CLI to your RPC:
solana config set --url http://YOUR_SERVER_IP:8899
solana config set --keypair devnet-wallet.json
solana balance
If you prefer using curl directly:
curl http://YOUR_SERVER_IP:8899 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc":"2.0",
"id":1,
"method":"getBalance",
"params":["YOUR_WALLET_ADDRESS"]
}'
If your node is still syncing, some requests may be slow or return stale data. Once it catches up, responses become consistent.
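Note that getBalance returns lamports, not SOL. Since 1 SOL equals 1,000,000,000 lamports, a one-line awk conversion makes the raw number readable:

```shell
# Convert lamports (what getBalance returns) to SOL
lamports_to_sol() {
  awk -v l="$1" 'BEGIN { printf "%.9f\n", l / 1000000000 }'
}
lamports_to_sol 2500000000   # 2.5 SOL
```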
#Switching environments
In this guide we used Devnet. Switching to Testnet or Mainnet-Beta is mostly a matter of changing a few cluster-specific settings.
Step 1: Stop the node
If you are using systemd:
sudo systemctl stop solana-rpc
Step 2: Use a fresh ledger folder per network
Do not reuse the same ledger data across networks; it can cause snapshot and genesis mismatches.
Create new folders:
sudo mkdir -p /solana/testnet/{ledger,accounts,snapshots,logs}
sudo mkdir -p /solana/mainnet/{ledger,accounts,snapshots,logs}
sudo chown -R sol:sol /solana/testnet /solana/mainnet
Step 3: Update your rpc.sh
Open the script:
nano ~/rpc.sh
Update these parts:
Environment
• Devnet uses entrypoint.devnet.solana.com:8001
• Testnet uses entrypoint.testnet.solana.com:8001
• Mainnet Beta uses entrypoint.mainnet-beta.solana.com:8001
Paths
• Point --ledger, --accounts, --snapshots, and --log to the new network folders you created
Known validators
• The --known-validator list is network specific
• Devnet validators will not be the same as Testnet or Mainnet Beta
• When switching networks, replace the known validators with the correct ones for that network
Optional but recommended
• If you use --expected-genesis-hash, update it for the target network too
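If you expect to switch clusters more than once, the per-network entrypoints above can live in a small case statement instead of being hand-edited each time (a sketch):

```shell
# Map a cluster name to its gossip entrypoint (values from the list above)
entrypoint_for() {
  case "$1" in
    devnet)       echo "entrypoint.devnet.solana.com:8001" ;;
    testnet)      echo "entrypoint.testnet.solana.com:8001" ;;
    mainnet-beta) echo "entrypoint.mainnet-beta.solana.com:8001" ;;
    *) echo "unknown cluster: $1" >&2; return 1 ;;
  esac
}
entrypoint_for devnet
```

The same pattern extends naturally to per-network paths and known-validator lists if you template the whole rpc.sh.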
Step 4: Update Solana CLI network on the server
As the sol user:
Devnet
solana config set --url https://api.devnet.solana.com
Testnet
solana config set --url https://api.testnet.solana.com
Mainnet Beta
solana config set --url https://api.mainnet-beta.solana.com
Step 5: Start the node again
sudo systemctl start solana-rpc
sudo systemctl status solana-rpc --no-pager
Step 6: Quick sanity check
Wait a bit, then watch the log file you configured in rpc.sh (for Testnet in this layout, /solana/testnet/logs/solana-rpc.log) and check when RPC starts listening again:
tail -f /solana/testnet/logs/solana-rpc.log
ss -lntp | grep 8899 || true
#Conclusion
You now have a Solana RPC node running on Devnet, with a setup that is easy to manage and restart like a normal service. At this point, you can query chain data through your own endpoint, test requests locally, and also connect from an external machine when you are ready.
The first startup can take time because the node may need to download a large snapshot and rebuild state. Once that completes, the RPC port comes up and your node becomes usable for reads and transactions.
When you are ready to move beyond Devnet, switching to Testnet or Mainnet Beta is mostly about updating cluster specific settings like entrypoints, known validators, and using a fresh ledger folder per network.
Set up your Solana server in minutes
Optimize cost and performance with a pre-configured or custom dedicated bare metal server for blockchain workloads. High uptime, instant 24/7 support, pay in crypto.
We accept Bitcoin and other popular cryptocurrencies.