Best VPS Providers for LibreChat in 2026
By Arnas Kazlaus · Software engineer and founder, 15 years shipping code
Tests run personally on rented VPSes
The best VPS for LibreChat isn't the one with the biggest CPU spec — it's the one with the shortest network path to OpenAI. We installed LibreChat on 7 VPS providers, sent 50 real gpt-4o-mini chat requests from each VM, and measured time-to-first-token and full-completion latency, plus idle footprint, UI load time, and register + login round trips. Every number below comes from real runs on 2026-04-22. Kamatera and OVHcloud lead on chat latency; DigitalOcean loses by a wide margin for the second time.
hostingselector.com rented one server per host, installed LibreChat + MongoDB + Meilisearch from the official docker-compose stack, and put each one through the same three measurements: (1) full idle footprint with the stack running but no conversations yet, (2) 20 back-to-back GETs against the LibreChat UI for TTFB distribution, (3) 50 real chat requests from the VM to OpenAI via gpt-4o-mini with a 50-token response cap, timing both TTFT and full-completion latency. Ubuntu 24.04 on all hosts. Fresh install per provider. The numbers below come from real runs we did on 2026-04-22. All scripts are open; you can replicate the tests from scripts/bench/tools/librechat/ in our repo.
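Measurement (2) above can be sketched with plain curl. This is a minimal stand-in for our harness, not the exact scripts (those live in scripts/bench/tools/librechat/); it assumes LibreChat is already serving on localhost:3080.

```shell
# Minimal sketch of the TTFB sampling loop (measurement 2). Assumes LibreChat
# is serving on localhost:3080; the real harness does more bookkeeping.
ttfb_samples() {
  for _ in $(seq 1 20); do
    # %{time_starttransfer} = seconds until the first response byte
    curl -s -o /dev/null -w '%{time_starttransfer}\n' "http://127.0.0.1:3080"
  done
}

# Percentile of newline-separated numbers on stdin, e.g. `ttfb_samples | pctl 95`
pctl() {
  sort -n | awk -v p="$1" \
    '{ a[NR] = $1 } END { i = int((NR * p + 99) / 100); if (i < 1) i = 1; print a[i] }'
}
```

Pipe the sample loop into `pctl 50` and `pctl 95` to get the two numbers we report per host.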
Some links on this page are affiliate links. If you sign up through one, we get a small commission — you pay the same price either way. This doesn’t change who wins below.
- Test window: 2026-04-22 (single-day campaign)
- Region: provider flagship region
Why LibreChat?
Why self-host LibreChat? If you want a ChatGPT-style interface but don't want to pay OpenAI's ChatGPT Plus $20/month per person or hand conversation history to a SaaS, LibreChat is the escape hatch. It's an open-source web UI that talks directly to OpenAI, Anthropic, Google, and local LLM APIs — you bring your own API keys, the chat UI runs on your VPS. Works for personal chat, team-shared workspaces, and multi-model workflows (compare GPT-4o vs Claude vs Gemini side-by-side). Trade-off: you pay ~$15-40/month for the VPS plus per-token API costs. For a team of 3+ users, breakeven vs ChatGPT Plus is usually month one.
What LibreChat needs from a VPS
Before you shop, know what LibreChat actually needs. The surprising part: CPU barely matters. Network latency to OpenAI matters a lot.
2 vCPU minimum
LibreChat's Node.js backend is very light. Hostinger's 2-vCPU plan beat Vultr's 4-vCPU plan on chat latency in our tests because the bottleneck is network, not CPU.
2 GB minimum, 4 GB recommended
The full stack (LibreChat + MongoDB + Meilisearch) uses ~1.1-1.3 GB idle. 2 GB VPS is workable solo; 4 GB+ for team use.
20 GB SSD minimum
The full install weighs ~3 GB. Mongo grows with chat history — budget 10 GB headroom for a year of heavy use.
Ubuntu 22.04 or 24.04 LTS
Debian 12 also works. LibreChat's docker-compose stack assumes modern Docker.
US-East or EU-West
Network hop to OpenAI dominates chat latency. Our Hetzner US-East dry run was ~230ms faster than Hetzner US-West. OVH Canada-East and Kamatera US-NJ won the chat-latency benchmark. Pick a datacenter close to OpenAI's edge, not close to you.
Must be installable
The stack is three containers: LibreChat, MongoDB, and Meilisearch. LXC-based 'container' VPS hosting often does not support this.
500 GB outbound minimum
LibreChat is light — most traffic is small JSON roundtrips. Heavy image/file attachments change the math.
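A quick way to check a freshly rented VM against these minimums, using only stock tools. The latency line needs outbound HTTPS from the VPS, so it's left as a comment.

```shell
# Sanity-check a fresh VPS against the minimums above.
nproc                                    # vCPUs: want >= 2
free -g | awk '/^Mem:/ { print $2 }'     # RAM in GB: want >= 2, ideally 4+
df -BG --output=avail / | tail -n 1      # free disk on /: want >= 20G

# Network path to OpenAI (uncomment on the VPS; needs outbound HTTPS):
# curl -s -o /dev/null \
#   -w 'connect %{time_connect}s  tls %{time_appconnect}s\n' \
#   https://api.openai.com/v1/models
```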
Our Top Picks
Kamatera
Best disk by 2×, fastest chat total, network routing that matters
$39.00/mo
Hetzner
Cheapest plan, region choice matters for chat latency
$15.00/mo
Hostinger
Fastest per-core CPU, friendly UI, predatory pricing model
$24.49/mo
OVHcloud
Fastest chat TTFT, DDoS protection, unmetered bandwidth
$78.68/mo
At-a-Glance Comparison
Every provider side-by-side. Lower is better for price and setup time; higher is better for the benchmark columns.
| Host | Price/mo | CPU (events/sec) | Disk IOPS | Setup time | Bandwidth |
|---|---|---|---|---|---|
| Hetzner: cheapest plan, region choice matters for chat latency | $15.00 | 3,593 | 5,652 | 47 s | 20 TB EU / 1 TB US |
| Hostinger: fastest per-core CPU, friendly UI, predatory pricing model | $24.49 | 4,249 | 13,422 | 54 s | 8 TB, throttle only |
| Vultr: fastest CPU at 4 vCPU, pricey, mid-pack network to OpenAI | $48.00 | 4,140 | 9,891 | 97 s | 6 TB, $0.01/GB over |
| OVHcloud: fastest chat TTFT, DDoS protection, unmetered bandwidth | $78.68 | 3,670 | 10,759 | 68 s | Unmetered |
| Contabo: cheapest plan, shared CPU hurts install + baseline, trans-Atlantic still competitive on chat | $11.21 | 1,511 | 3,013 | 191 s | Unmetered* |
| DigitalOcean: best dashboard, slowest chat, worst performance-per-dollar | $48.00 | 817 | 3,169 | 104 s | 5 TB, $0.01/GB over |
| Kamatera: best disk by 2×, fastest chat total, network routing that matters | $39.00 | 2,679 | 19,181 | 45 s | 5 TB, $0.01/GB over |
* Contabo says “unmetered” but can throttle heavy users at their discretion. See full review.
Cost vs performance at a glance
Upper-left is best (cheap and fast). Lower-right is worst (expensive and slow).
The cost that isn’t on the sticker: bandwidth
LibreChat's bandwidth needs are modest compared to web-app hosts — chat JSON is small. But at 30 TB/month (a team sharing heavy attachments, for example), Hetzner Europe costs about $27 total while DigitalOcean costs about $298. Bandwidth math is tool-agnostic; these numbers match our Coolify + Dokploy editions.
Scenario: 4-core VPS at each host + 30 TB/month outbound traffic (outbound is what providers meter; inbound is usually free)
| Host | Base price | Traffic included | Over-quota policy | Total / mo |
|---|---|---|---|---|
| Contabo | $11.21 | unlimited | none | $11 |
| Hostinger | $24.49 | 8 TB | no public overage rate | $24 |
| Hetzner | $15.00 | 20 TB (EU) | €1/TB billed over quota | $27 |
| OVHcloud | $78.68 | unmetered | none | $79 |
| Vultr | $48.00 | 6 TB (HP AMD) | $0.01/GB over quota | $288 |
| Kamatera | $39.00 | 5 TB | $0.01/GB over quota | $289 |
| DigitalOcean | $48.00 | 5 TB | $0.01/GB over quota | $298 |
27× difference between cheapest and priciest for the same traffic. Bandwidth policy is the biggest hidden variable in VPS pricing.
Prices current as of April 2026. Hetzner 20 TB / €1/TB applies to EU regions — the US regions we tested are $17.99/mo with only 1 TB included. Hostinger $24.49 is the 1-month rate; their 24-month promo drops it to $8.99/mo. For typical LibreChat usage (chat JSON, no heavy file sharing), you'll be well under 1 TB/month and bandwidth doesn't matter. If you're serving voice/image attachments to a team, check the overage policy carefully.
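The totals in the table reduce to one formula: total = base + max(0, used − included) × overage rate. Here's a tiny helper to redo the math for your own traffic level ($0.01/GB works out to $10 per decimal TB); the provider numbers plugged in below come from the table above.

```shell
# total = base + max(0, used_tb - included_tb) * rate_per_tb
# rate_per_tb: $0.01/GB == $10 per (decimal) TB
overage_total() { # usage: overage_total BASE INCLUDED_TB USED_TB RATE_PER_TB
  awk -v b="$1" -v inc="$2" -v use="$3" -v r="$4" \
    'BEGIN { o = use - inc; if (o < 0) o = 0; printf "%.0f\n", b + o * r }'
}

overage_total 48 5 30 10   # DigitalOcean at 30 TB: prints 298
overage_total 39 5 30 10   # Kamatera at 30 TB: prints 289
```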
Our picks
4 hosts we'd actually recommend — each wins on a specific axis.
Hetzner
Cheapest plan, region choice matters for chat latency
Min specs
$5.00/mo
Recommended
$15.00/mo
Benchmark measurements
- CPU (events/sec)
- 3593
- Disk IOPS
- 5652
- Disk throughput
- 22.1 MB/s
- Network
- 714 Mbps
- Tool setup
- 47 s
Setup & Ease of Use
- ✗1-Click Install Available
- ✗Docker Pre-Configured
- ✓Setup Under 10 Minutes
- ✓LibreChat-Specific Docs
- ✓Intuitive Control Panel
Performance
- ✓Strong CPU Benchmark
- ✓8GB RAM at Base Tier
- ✓NVMe SSD Storage
- ✓Low API Latency
- ✓99.9%+ Uptime
Pricing & Value
- ✓Affordable at Min Specs
- ✓Affordable at Rec. Specs
- ✓No Hidden Fees
- ✓Hourly Billing Available
- —Free Trial or Money-Back
Security
- ✓Easy Firewall Config
- ✓DDoS Protection Included
- ✓SSH Key Authentication
- ✓2FA on Hosting Panel
- ✓Automatic Backups
- —Tunnel/Reverse Proxy Guide
Support
- ✓24/7 Availability
- —Fast Response Time
- —LibreChat-Specific Knowledge
- ✓Community Forums
Pros
- Cheapest 4 vCPU / 8 GB plan we tested at $15/month (EU) / $17.99/month (US)
- LibreChat installs in 47 seconds on US-West (Hillsboro) — second only to Kamatera's 45s
- Dedicated vCPU, consistent TTFB on the LibreChat UI (2ms p50, 4ms p95)
- 20 TB of free outbound traffic in EU regions (1 TB in US — see cost math)
- 3,593 CPU events/sec baseline — competitive with the $48 tier providers
Cons
- Region choice matters: Ashburn (US-East) routed ~230ms faster to OpenAI than Hillsboro (US-West) in our tests. If you benchmark yours, try both.
- Ashburn was out of capacity when we re-provisioned for this test — Hetzner's US-East fills up. Pick a secondary region.
- Backups are opt-in, not on by default
- New accounts sometimes get held for identity verification
Best value. Half the price of Vultr/DigitalOcean for similar or better LibreChat performance. The catch is region choice: our Hillsboro test saw chat TTFT of 714 ms p50; an earlier Ashburn dry run got 483 ms. If low latency to OpenAI matters to you, pick Ashburn when it has capacity (and check `cloud.hetzner.com` for availability before you buy).
Hostinger
Fastest per-core CPU, friendly UI, predatory pricing model
Min specs
$9.99/mo
Recommended
$24.49/mo
Benchmark measurements
- CPU (events/sec)
- 4249
- Disk IOPS
- 13422
- Disk throughput
- 52.4 MB/s
- Network
- 1 Mbps
- Tool setup
- 54 s
Setup & Ease of Use
- ✗1-Click Install Available
- ✗Docker Pre-Configured
- ✓Setup Under 10 Minutes
- ✓LibreChat-Specific Docs
- ✓Intuitive Control Panel
Performance
- ✓Strong CPU Benchmark
- ✓8GB RAM at Base Tier
- ✓NVMe SSD Storage
- ✓Low API Latency
- ✓99.9%+ Uptime
Pricing & Value
- ✓Affordable at Min Specs
- ✓Affordable at Rec. Specs
- ✗No Hidden Fees
- ✗Hourly Billing Available
- ✓Free Trial or Money-Back
Security
- ✓Easy Firewall Config
- —DDoS Protection Included
- ✓SSH Key Authentication
- ✓2FA on Hosting Panel
- ✓Automatic Backups
- —Tunnel/Reverse Proxy Guide
Support
- ✓24/7 Availability
- —Fast Response Time
- —LibreChat-Specific Knowledge
- ✓Community Forums
Pros
- Highest per-core CPU we measured: 4,249 events/sec on just 2 vCPU — beats Vultr's 4-vCPU plan per-core
- Strong disk: 13,422 IOPS (2nd best overall, behind only Kamatera)
- Fastest TTFB on the LibreChat UI: 1ms p50, 2ms p95 (tied with OVHcloud)
- Chat TTFT p50 of 518 ms despite only 2 vCPU — network to OpenAI is what matters, not cores
- LibreChat installs in 54 seconds — 3rd fastest (behind Kamatera and Hetzner)
- Beginner-friendly hPanel UI with good documentation
- 8 TB of bundled bandwidth in all regions — no overage shock
Cons
- KVM 2 is locked at 2 vCPU — if you want more CPU headroom you jump to KVM 4 and the price roughly doubles
- Advertised price is $8.99/mo, which assumes a 24-month prepay. Pay monthly and it's $14.99 (about 1.7×); renew after the promo ends and it's $24.49/mo (about 2.7×).
- Marketing pushes 2-year plans hard — the advertised price assumes that commitment
- Idle CPU is 4.06% — highest in the group — a quirk of the 2-vCPU tier being more sensitive to MongoDB background activity
Best for beginners who want a self-hosted LibreChat without managing hardware choices. Rock-solid on UI speed (1ms TTFB) and chat latency (518ms TTFT), with the easiest dashboard in the group. Just go in knowing the 2-year prepaid plan is the only way to get the advertised price; monthly and renewal cost 2.7× more.
OVHcloud
Fastest chat TTFT, DDoS protection, unmetered bandwidth
Min specs
$11.00/mo
Recommended
$78.68/mo
Benchmark measurements
- CPU (events/sec)
- 3670
- Disk IOPS
- 10759
- Disk throughput
- 42.0 MB/s
- Network
- 1 Mbps
- Tool setup
- 68 s
Setup & Ease of Use
- ✗1-Click Install Available
- ✗Docker Pre-Configured
- ✓Setup Under 10 Minutes
- ✓LibreChat-Specific Docs
- ✗Intuitive Control Panel
Performance
- ✓Strong CPU Benchmark
- ✓8GB RAM at Base Tier
- ✓NVMe SSD Storage
- ✓Low API Latency
- ✓99.9%+ Uptime
Pricing & Value
- ✓Affordable at Min Specs
- ✓Affordable at Rec. Specs
- ✓No Hidden Fees
- ✓Hourly Billing Available
- —Free Trial or Money-Back
Security
- ✓Easy Firewall Config
- ✓DDoS Protection Included
- ✓SSH Key Authentication
- ✓2FA on Hosting Panel
- ✓Automatic Backups
- —Tunnel/Reverse Proxy Guide
Support
- ✓24/7 Availability
- —Fast Response Time
- —LibreChat-Specific Knowledge
- ✓Community Forums
Pros
- Fastest chat TTFT p50 in the group: 416 ms — OVH BHS5 (Canada) has excellent network routing to OpenAI's east-coast edge
- Tight chat TTFT p95: 859 ms — 2nd-tightest in the group behind Kamatera's 758 ms
- Lowest idle CPU of anyone: 0.33%
- Third-best disk IOPS: 10,759 (behind Kamatera's 19,181 and Hostinger's 13,422)
- Fastest TTFB on the UI: 1ms p50, 2ms p95 (tied with Hostinger)
- DDoS protection and unmetered bandwidth included
- Strong European data sovereignty story
Cons
- Most expensive plan in our group at $78.68/month — 5.25× Hetzner's EU price
- Default login is the `ubuntu` user, not root — you'll need `sudo` for the docker commands that run LibreChat, or enable root SSH first via cloud-init
- Your OVH account is locked to one region entity (EU, CA, or US) — you can't mix
Best chat TTFT in the group, by a meaningful margin. If latency to OpenAI is your north star, OVH's BHS5 is genuinely the winner — 67 ms faster p50 than Hetzner Ashburn, 102 ms faster than Hostinger. The price premium is steep ($78.68 vs Hetzner's $15), but it buys unmetered bandwidth, DDoS, and the lowest-latency path we measured.
Kamatera
Best disk by 2×, fastest chat total, network routing that matters
Min specs
$4.00/mo
Recommended
$39.00/mo
Benchmark measurements
- CPU (events/sec)
- 2679
- Disk IOPS
- 19181
- Disk throughput
- 74.9 MB/s
- Network
- 1 Mbps
- Tool setup
- 45 s
Setup & Ease of Use
- ✗1-Click Install Available
- ✗Docker Pre-Configured
- ✓Setup Under 10 Minutes
- ✓LibreChat-Specific Docs
- ✓Intuitive Control Panel
Performance
- ✓Strong CPU Benchmark
- ✓8GB RAM at Base Tier
- ✓NVMe SSD Storage
- ✓Low API Latency
- ✓99.9%+ Uptime
Pricing & Value
- ✓Affordable at Min Specs
- ✓Affordable at Rec. Specs
- ✓No Hidden Fees
- ✓Hourly Billing Available
- —Free Trial or Money-Back
Security
- ✓Easy Firewall Config
- —DDoS Protection Included
- ✓SSH Key Authentication
- ✓2FA on Hosting Panel
- ✓Automatic Backups
- —Tunnel/Reverse Proxy Guide
Support
- ✓24/7 Availability
- —Fast Response Time
- —LibreChat-Specific Knowledge
- ✓Community Forums
Pros
- Fastest chat total p50 in the group: 916 ms — 85 ms faster than OVH, 203 ms faster than Hetzner (US-West), 471 ms faster than DigitalOcean
- Tightest chat TTFT p95: 758 ms — stream latency under 1s even at the 95th percentile
- Fastest disk we measured: 19,181 IOPS — 1.4× Hostinger, 3.4× Hetzner, 6× DigitalOcean
- Fast LibreChat install: 45 seconds — fastest in the group
- Hourly billing, global datacenter choice, flexible spec picker (dial in exact CPU+RAM+disk)
Cons
- $39/month (4A tier) — about 2.6× Hetzner's EU price, 60% more than Hostinger KVM 2
- No LibreChat one-click — manual install via SSH
- Baseline CPU (2,679 events/sec) is mid-pack — you pay a premium for the disk and network, not the compute
Best LibreChat host if chat latency is what you care most about. Kamatera US-NJ delivered the fastest chat total (916 ms) and the tightest p95 (758 ms) of any provider we tested, plus 2× better disk IOPS than the next best. The premium vs Hetzner is real ($39 vs $15) but directly purchases a smoother chat UX every time you send a message.
Close calls
2 tested but not our top picks — each has a real edge for a specific use case.
Vultr
Fastest CPU at 4 vCPU, pricey, mid-pack network to OpenAI
Min specs
$6.00/mo
Recommended
$48.00/mo
Benchmark measurements
- CPU (events/sec)
- 4140
- Disk IOPS
- 9891
- Disk throughput
- 38.6 MB/s
- Network
- 388 Mbps
- Tool setup
- 97 s
Setup & Ease of Use
- ✗1-Click Install Available
- ✗Docker Pre-Configured
- ✓Setup Under 10 Minutes
- ✓LibreChat-Specific Docs
- ✓Intuitive Control Panel
Performance
- ✓Strong CPU Benchmark
- ✓8GB RAM at Base Tier
- ✓NVMe SSD Storage
- ✓Low API Latency
- ✓99.9%+ Uptime
Pricing & Value
- ✓Affordable at Min Specs
- ✓Affordable at Rec. Specs
- ✓No Hidden Fees
- ✓Hourly Billing Available
- —Free Trial or Money-Back
Security
- ✓Easy Firewall Config
- ✓DDoS Protection Included
- ✓SSH Key Authentication
- ✓2FA on Hosting Panel
- ✓Automatic Backups
- —Tunnel/Reverse Proxy Guide
Support
- ✓24/7 Availability
- —Fast Response Time
- —LibreChat-Specific Knowledge
- ✓Community Forums
Pros
- Highest baseline CPU among 4-vCPU plans: 4,140 events/sec
- Strong disk: 9,891 IOPS (4th overall behind Kamatera, Hostinger, and OVHcloud)
- Tight TTFB: 2ms p50, 3ms p95 — no noisy-neighbor outliers
- Chat TTFT p50 of 545 ms — mid-pack despite fast hardware (network hop to OpenAI isn't best-in-class)
- Hourly billing, 30+ global regions
Cons
- 3× more expensive than Hetzner ($48/mo vs $15/mo EU) for similar chat latency
- Only 6 TB of bandwidth included; $0.01/GB over that
- Chat total p50 (1,120 ms) is essentially tied with Hetzner US-West (1,119 ms) despite Vultr's stronger baseline hardware — OpenAI network hop from Vultr NJ isn't as tight as from OVHcloud Canada (1,001 ms) or Kamatera NJ (916 ms).
Best raw hardware at the matching 4 vCPU / 8 GB tier. But LibreChat's main user experience — chat response time — is dominated by the network hop to OpenAI, not CPU. You're paying a 3× hardware premium that barely shows up in chat latency. Worth the price only if you're also running CPU-heavy tools alongside LibreChat on the same box.
Contabo
Cheapest plan, shared CPU hurts install + baseline, trans-Atlantic still competitive on chat
Min specs
$5.99/mo
Recommended
$11.21/mo
Benchmark measurements
- CPU (events/sec)
- 1511
- Disk IOPS
- 3013
- Disk throughput
- 11.8 MB/s
- Network
- 67 Mbps
- Tool setup
- 191 s
Setup & Ease of Use
- ✗1-Click Install Available
- ✗Docker Pre-Configured
- ✓Setup Under 10 Minutes
- ✓LibreChat-Specific Docs
- ✗Intuitive Control Panel
Performance
- ✗Strong CPU Benchmark
- ✓8GB RAM at Base Tier
- ✗NVMe SSD Storage
- ✓Low API Latency
- ✓99.9%+ Uptime
Pricing & Value
- ✓Affordable at Min Specs
- ✓Affordable at Rec. Specs
- ✓No Hidden Fees
- ✗Hourly Billing Available
- —Free Trial or Money-Back
Security
- ✓Easy Firewall Config
- —DDoS Protection Included
- ✓SSH Key Authentication
- ✓2FA on Hosting Panel
- ✓Automatic Backups
- —Tunnel/Reverse Proxy Guide
Support
- ✓24/7 Availability
- —Fast Response Time
- —LibreChat-Specific Knowledge
- ✓Community Forums
Pros
- Cheapest plan in the group at $11.21/month
- Chat TTFT p50 of 550 ms — trans-Atlantic to OpenAI, still within 134 ms of OVH
- Unlimited bandwidth
- Chat TTFT p95 of 1,021 ms — middle of the pack, surprisingly good given the trans-Atlantic network distance
Cons
- LibreChat install took 191 seconds — 4× longer than Hetzner or Kamatera. Shared-CPU noise shows up clearly.
- Second-lowest CPU in our group: 1,511 events/sec — 2.4× slower than Hetzner's 3,593. Only DigitalOcean was slower (817).
- Lowest disk IOPS in the group: 3,013 (narrowly below DigitalOcean's 3,169) — nearly half of Hetzner's 5,652 on dedicated CPU
- TTFB p95 of 18 ms on the UI — ~4× worse than the dedicated-CPU hosts, a consequence of shared-CPU variance
- No hourly billing — minimum one-month commitment
- Cancellation runs to end of paid term — you keep paying for the month even if you stop using the server on day 2
Cheapest plan on paper, and for LibreChat specifically the chat-latency gap is smaller than you'd expect (550 ms vs 416 ms for OVH — the VPS isn't the bottleneck). But the install pain (191s vs Hetzner's 47s) and shared-CPU variance on every local request means you'll feel the slowness outside of chat. Fine for a solo hobby install; frustrating if you use it every day.
Skip this one
Tested and didn't earn a recommendation at this price tier.
DigitalOcean
Best dashboard, slowest chat, worst performance-per-dollar
Min specs
$12.00/mo
Recommended
$48.00/mo
Benchmark measurements
- CPU (events/sec)
- 817
- Disk IOPS
- 3169
- Disk throughput
- 12.4 MB/s
- Network
- 1 Mbps
- Tool setup
- 104 s
Setup & Ease of Use
- ✗1-Click Install Available
- ✗Docker Pre-Configured
- ✓Setup Under 10 Minutes
- ✓LibreChat-Specific Docs
- ✓Intuitive Control Panel
Performance
- ✗Strong CPU Benchmark
- ✓8GB RAM at Base Tier
- ✓NVMe SSD Storage
- ✓Low API Latency
- ✓99.9%+ Uptime
Pricing & Value
- ✓Affordable at Min Specs
- ✓Affordable at Rec. Specs
- ✓No Hidden Fees
- ✓Hourly Billing Available
- —Free Trial or Money-Back
Security
- ✓Easy Firewall Config
- —DDoS Protection Included
- ✓SSH Key Authentication
- ✓2FA on Hosting Panel
- ✓Automatic Backups
- —Tunnel/Reverse Proxy Guide
Support
- ✓24/7 Availability
- —Fast Response Time
- —LibreChat-Specific Knowledge
- ✓Community Forums
Pros
- Cleanest, most polished dashboard of any host
- Industry-leading documentation and tutorials for developers new to cloud infrastructure
- $200 free credit for new users — covers ~4 months of the $48/mo plan
Cons
- Slowest chat TTFT we measured: 786 ms p50 — 370 ms slower than OVH at ~60% of the price
- Slowest chat total: 1,387 ms p50 — would be noticeable typing latency on every message
- Second-slowest LibreChat install: 104 seconds — 2× slower than Hetzner Hillsboro's 47s. Only Contabo's shared-CPU 191s was slower.
- Slowest baseline CPU: 817 events/sec — 4-5× slower than every other 4-vCPU provider
- Near-slowest disk: 3,169 IOPS — 6× slower than Kamatera, barely ahead of Contabo's 3,013
- Only 5 TB of bundled bandwidth; $0.01/GB over that
- At $48/month for 4 vCPU / 8 GB it's 3.2× Hetzner's price for markedly slower performance
Don't, at this tier. At $48/month you can get Hetzner for $15 with similar or better LibreChat performance, or Kamatera's 4A for $39 with the fastest chat total in the group. DigitalOcean's premium pays for dashboard polish and docs — things you touch once during setup — while the slow CPU and network hurt you on every single chat message. Use it only if you need their managed database add-ons or are already locked into their ecosystem.
Deployment profile
Regions tested: Hetzner Hillsboro (US-West), DigitalOcean New York, Vultr New Jersey, OVHcloud BHS5 (Canada-East), Contabo Germany (EU), Hostinger Boston, Kamatera US-NJ. Tiers: Hetzner CPX31, DigitalOcean Basic Regular, Vultr High Performance AMD, OVH c3-8, Contabo Cloud VPS (4 vCPU / 8 GB), Kamatera 4A — all 4 vCPU / 8 GB. Hostinger KVM 2 is 2 vCPU / 8 GB (their pricing grid skips the 4/8 combo at this budget). Note: Hetzner Ashburn (US-East) was out of capacity during this campaign; our dry run on Ashburn showed ~230ms faster chat TTFT than Hillsboro, so the Hetzner numbers below represent US-West and are somewhat conservative. Contabo and Hostinger charge monthly; the rest charge hourly. All 7 runs completed 50/50 OpenAI chat samples successfully.
First-deployment checklist
- Rent a VPS with at least 2 vCPU and 4 GB of memory, running Ubuntu 24.04. Pick a US-East or EU-West region for the shortest network hop to OpenAI.
- Log in as root and install Docker: `curl -fsSL https://get.docker.com | sh`
- Create a directory and drop in LibreChat's official docker-compose.yml (or use our trimmed version from scripts/bench/tools/librechat/).
- Generate encryption and JWT secrets: `export LC_CREDS_KEY=$(openssl rand -hex 32) LC_CREDS_IV=$(openssl rand -hex 16) LC_JWT_SECRET=$(openssl rand -hex 32) LC_JWT_REFRESH_SECRET=$(openssl rand -hex 32)`
- Paste your OpenAI API key: `export OPENAI_API_KEY=sk-...` (optional: `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY` for multi-model)
- Bring it up: `docker compose up -d`, then wait ~40-90 seconds for MongoDB + Meilisearch + LibreChat to settle.
- Visit `http://your-ip:3080`, register the first admin account, send a test chat. Point a real domain via Caddy or Traefik when you're ready for HTTPS.
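The secret-generation step in the checklist can also write to an env file instead of exporting into the shell — a sketch, using the same variable names as the checklist above; verify them against the names your docker-compose.yml actually reads (LibreChat's .env.example is the source of truth).

```shell
# Write the four LibreChat secrets to an env file instead of exporting them.
# Variable names mirror the checklist above; verify them against your
# docker-compose.yml / LibreChat's .env.example before relying on them.
ENV_FILE=librechat.env
{
  echo "LC_CREDS_KEY=$(openssl rand -hex 32)"      # 32-byte encryption key, hex
  echo "LC_CREDS_IV=$(openssl rand -hex 16)"       # 16-byte IV, hex
  echo "LC_JWT_SECRET=$(openssl rand -hex 32)"
  echo "LC_JWT_REFRESH_SECRET=$(openssl rand -hex 32)"
} > "$ENV_FILE"
chmod 600 "$ENV_FILE"                              # secrets: owner-only access
```

An env file survives reboots and shell restarts, which shell exports don't.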
Common pitfalls
LibreChat defaults to ALLOW_REGISTRATION=false in recent versions. New-install users register once and then everyone else gets HTTP 403. Set ALLOW_REGISTRATION=true in your environment variables, or invite users manually from the admin panel after your first login.
https://www.librechat.ai/docs/configuration/authentication/email_authentication
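A minimal sketch of the fix, assuming your compose stack reads environment variables from the shell or an env file:

```shell
# Allow self-registration before bringing the stack up (or put this line in
# the env file your compose stack reads). Re-lock it once your team is in.
export ALLOW_REGISTRATION=true
# Then recreate the container so it picks up the new value:
#   docker compose up -d --force-recreate
```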
Region choice is the single biggest lever on chat latency — bigger than CPU cores, bigger than disk IOPS. In our tests, Hetzner Ashburn was ~230ms faster to first token than Hetzner Hillsboro. Pick a datacenter geographically close to OpenAI's eastern US edge.
https://platform.openai.com/docs/guides/latency-optimization
LibreChat + MongoDB + Meilisearch uses about 1.1-1.3 GB of memory idle. A 2 GB VPS leaves only ~700 MB for conversations and build cache. Plan for 4 GB+ if you have more than one active user.
https://www.librechat.ai/docs/remote/docker
OpenAI's streaming API sometimes stalls for 200-500ms before sending the first token — visible as TTFT outliers. We saw max TTFT values of 2-4 seconds on every provider despite p50s under 800ms. This is OpenAI, not your VPS. The VPS signal lives in the p50, not the worst-case.
https://platform.openai.com/docs/guides/streaming-responses
Frequently Asked Questions
Which VPS gives me the fastest LibreChat chat experience?
Kamatera's 4A tier (US-NJ) at $39/month — median chat total of 916 ms for a short gpt-4o-mini reply, fastest we measured. OVHcloud's c3-8 (Canada) has the fastest time-to-first-token (416 ms p50) but costs $78.68. Hetzner CPX31 Ashburn (US-East) at $15/month is also within 70 ms of the leader on chat TTFT when it has capacity — our retest had to use Hillsboro and got ~230 ms slower numbers.
Does CPU matter for LibreChat?
Surprisingly little. Hostinger's 2-vCPU plan delivered chat TTFT of 518 ms — faster than Vultr's 4-vCPU plan at 545 ms. The bottleneck for chat is network roundtrip time to OpenAI + streaming handshake, not local CPU. CPU matters for concurrent users (each active conversation keeps a Node.js worker busy) and for rendering long messages, but the single-user chat loop is network-bound.
What specs do I need for LibreChat?
2 vCPU and 4 GB RAM for personal or small-team use. LibreChat + MongoDB + Meilisearch use about 1.1-1.3 GB idle, leaving ~700 MB free on a 2 GB VPS (tight for multi-user). 4 GB is comfortable; 8 GB is where you stop worrying. Storage: 20 GB minimum; MongoDB grows with chat history, so budget 40 GB+ for a year of daily use by a team.
Is self-hosting LibreChat cheaper than ChatGPT Plus?
Depends on your usage. ChatGPT Plus is $20/month per person. Self-hosted LibreChat costs ~$15-40/month for the VPS, plus OpenAI API per-token pricing. If you chat lightly (gpt-4o-mini, ~1M tokens per person/month), your API bill stays under $5 per person, so a 3-person team runs $30-55/month all-in vs $60/month for three ChatGPT Plus seats. Heavier use or using gpt-4 shifts the math — the API model cost dominates once you're past 5M tokens per person per month.
What's the most important thing to know when picking a VPS for LibreChat?
Pick a datacenter close to OpenAI, not close to you. Our Hetzner Hillsboro (US-West) test showed ~230 ms slower first-token latency than Hetzner Ashburn (US-East) on identical hardware — because OpenAI's streaming edge is biased toward the east coast. For self-hosted LibreChat, a US-East or EU-West datacenter is almost always a better user experience than a datacenter near your physical location.
Why is DigitalOcean so much slower in your benchmarks?
Two reasons. (1) The Basic Regular tier uses shared CPU cores — our sysbench measured 817 events/sec vs Hetzner's 3,593 on the same spec. That slows LibreChat install, Mongo startup, and SSE stream handling. (2) DO's New York datacenter has slightly higher network latency to OpenAI's edge than OVH Canada or Hetzner Ashburn do. Combined, chat TTFT is 786 ms vs OVH's 416 — almost 2× slower for end-to-end chat UX.
Can I run LibreChat next to other apps on the same VPS?
Yes, easily. LibreChat's stack uses ports 3080 (UI), 27017 (MongoDB internal), and 7700 (Meilisearch internal) — all inside the docker-compose network except 3080 on the host. Install something like Caddy or Nginx as a reverse proxy and point different subdomains at different backend ports. For CPU, LibreChat is very light — you can comfortably run 3-5 small apps alongside it on a 4 vCPU / 8 GB box.
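For example, with Caddy the whole proxy layer is a few lines of Caddyfile (the domains below are placeholders, not from our test setup); Caddy provisions HTTPS certificates automatically, and you add one block per app:

```
chat.example.com {
    reverse_proxy localhost:3080   # LibreChat's host-exposed port
}

notes.example.com {
    reverse_proxy localhost:8080   # some other app on the same box
}
```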
Do I need GPU on the VPS?
No. LibreChat is a chat UI — it sends your message to OpenAI/Anthropic/Google and renders the response. The LLM runs on their side. If you want to run a local model (Llama, Mistral) via Ollama alongside LibreChat, that's a different conversation: you'll want 16+ GB RAM and ideally a GPU instance. None of the plans we tested fit that bill; you'd go to RunPod, Vast.ai, or a GPU tier on any of the major clouds.
Why did you skip chat-completion quality testing?
Because that's an OpenAI benchmark, not a VPS benchmark. The model replies the same thing from every VPS — we're measuring the pipes, not the brain. We did measure per-request success rate (50/50 on every provider) so you know the VPS isn't dropping connections or timing out mid-stream.
How do you run these benchmarks?
For each provider we provision a fresh VPS, install Docker, pull the LibreChat docker-compose stack, generate JWT secrets, wait for the UI to serve, then: (a) measure idle footprint, (b) 20 back-to-back TTFB samples, (c) register+login+authenticated /api/user round trip, (d) 50 OpenAI chat calls via gpt-4o-mini from the VM with 50-token cap on response. All scripts live in scripts/bench/tools/librechat/ in our public repo — you can rerun them yourself with your own API key.
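Step (d) can be approximated with curl alone. A sketch, not our exact harness: it assumes `OPENAI_API_KEY` is set, and uses curl's `%{time_starttransfer}` as a stand-in for TTFT and `%{time_total}` for full completion.

```shell
# One chat sample against OpenAI, timed with curl. Assumes $OPENAI_API_KEY.
# With stream:true, time_starttransfer approximates time-to-first-token and
# time_total the full completion: the article's two chat metrics.
chat_sample() {
  curl -sN https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H 'Content-Type: application/json' \
    -d '{"model":"gpt-4o-mini","stream":true,"max_tokens":50,
         "messages":[{"role":"user","content":"Say hello."}]}' \
    -o /dev/null -w 'ttft=%{time_starttransfer}s total=%{time_total}s\n'
}
# 50 samples, as in the benchmark: for _ in $(seq 1 50); do chat_sample; done
```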