Cloudflare Error 520: Unknown Error
Quick reference
| Code | 520 Unknown Error |
|---|---|
| Category | Cloudflare Edge Error |
| Standard HTTP? | No — Cloudflare proprietary |
| Vendor reference | Cloudflare documentation |
What 520 means
Cloudflare sits between the browser and your origin server. When Cloudflare forwards a request to your origin and the origin sends back something Cloudflare cannot interpret — an empty body, a truncated response, an invalid HTTP status line, or a TCP connection that resets before the response completes — Cloudflare returns a 520 to the browser with the message "Web server is returning an unknown error."
The key distinction from 502 Bad Gateway is scope: a Cloudflare 502 generally means the origin itself returned a bad-gateway response (for example, its own upstream failed) or Cloudflare hit an internal error, whereas 520 is the catch-all for origin responses that completed the TCP handshake but came back empty, truncated, or unparseable as HTTP.
Each Cloudflare 520 page includes a Ray ID in the format Ray ID: 7a1b2c3d4e5f6789. That Ray ID maps to a specific request in your Cloudflare dashboard logs and, more importantly, in your origin server's access log at the same timestamp.
Root causes
There are six common triggers for 520. The most frequent is an origin process crash or restart that happens after a TCP connection is established but before the response finishes — the connection drops mid-stream, Cloudflare receives a partial response or a TCP RST, and 520 is returned.
The second most common cause is oversized response headers. Cloudflare enforces a 32 KB limit on response headers. If your origin sets many cookies, long Set-Cookie values, verbose X-Debug-* headers, or a large Link preload header, the aggregate size can exceed the limit and cause the response to be rejected.
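One way to audit this is to measure the wire size of the response headers yourself. The sketch below is illustrative, not an official Cloudflare tool; the helper names and the hypothetical host are assumptions, and the 32 KB figure is the aggregate limit described above.

```python
import http.client

def header_block_size(headers):
    # Bytes the headers occupy on the wire as "Name: value\r\n" lines.
    return sum(len(f"{name}: {value}\r\n") for name, value in headers)

def origin_header_size(host, path="/"):
    # HEAD the origin directly (bypassing Cloudflare) and return the
    # aggregate size of its response headers.
    conn = http.client.HTTPSConnection(host, timeout=10)
    try:
        conn.request("HEAD", path)
        return header_block_size(conn.getresponse().getheaders())
    finally:
        conn.close()
```

If origin_header_size("your-origin.example.com") returns a number approaching 32,768, trim headers before Cloudflare starts rejecting responses.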
Third is a firewall or security group on the origin that intermittently drops connections from Cloudflare IP ranges. This shows up as sporadic 520s rather than consistent failures, because Cloudflare rotates which edge node forwards the request and not all of them may be blocked.
Fourth is HTTP/2 connection multiplexing bugs at the origin. When Cloudflare reuses an HTTP/2 connection to the origin and a frame arrives on a closed or reset stream, both sides may disagree on stream state and the connection fails. This is especially common in older versions of nginx, Apache, and Node.js HTTP/2 implementations.
Fifth is a script or application timeout that fires after the response has started. If your application writes the status line and some headers but then times out before writing the body, Cloudflare receives a partial response it cannot finalize.
Sixth is an invalid HTTP status line. If the first line of the response is not a well-formed status line such as HTTP/1.1 200 OK, parsing fails. This can happen when an upstream proxy, WAF, or load balancer rewrites the response and introduces a formatting error.
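For intuition, the check a parser applies to that first line can be approximated as below. Cloudflare's actual parser is not public, so the regex is a rough illustration, not its real validation logic.

```python
import re

# Rough approximation of a status-line well-formedness check:
# HTTP version, a 3-digit status code in 1xx-5xx, optional reason phrase.
STATUS_LINE = re.compile(r"^HTTP/\d(?:\.\d)? [1-5]\d{2}(?: .*)?$")

def looks_like_valid_status_line(line):
    return bool(STATUS_LINE.match(line))
```

A proxy that mangles the line into something like "HTP/1.1 200 OK" or injects text before it produces exactly the kind of response this check rejects.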
Step-by-step diagnosis
Step 1 — Capture the Ray ID. The 520 error page shows a Ray ID. Copy it.
Step 2 — Cross-reference origin logs. On your origin server, find the access log entry at the timestamp matching the Ray ID. A typical nginx log line looks like:
203.0.113.45 - - [25/Apr/2026:14:22:01 +0000] "GET /api/data HTTP/1.1" 200 0 "-" "..."
A response body size of 0 with a 200 status strongly indicates a partial or empty response. If there is no matching log entry at all, the origin never received the request, which points to a firewall block.
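Scanning for that signature can be scripted. This is a minimal sketch, assuming the nginx combined log format shown above; the function names are my own.

```python
import re

# Match the fields of an nginx combined-format access log line:
# ip - - [time] "request" status bytes ...
LOG_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def suspicious_entries(lines):
    # Return (timestamp, request) for every 200 response with a
    # zero-byte (or missing) body -- the partial-response signature.
    hits = []
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group("status") == "200" and m.group("bytes") in ("0", "-"):
            hits.append((m.group("time"), m.group("request")))
    return hits
```

Run it over the minute of log surrounding the Ray ID's timestamp and compare the flagged requests against the failing endpoint.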
Step 3 — Check header size. On the origin, instrument or log the total size of response headers for the failing endpoint:
# Measure total response-header bytes (status line plus headers) with curl:
curl -sI https://your-origin-ip/path | wc -c
If the output approaches Cloudflare's 32 KB limit, header trimming is required.
Step 4 — Verify Cloudflare IP allowlist. Cloudflare publishes its current IP ranges at https://www.cloudflare.com/ips/. Confirm every range is allowlisted in your origin firewall (iptables, security groups, Fail2Ban, ModSecurity, or cloud provider network ACLs).
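A quick way to check whether a given address falls inside those ranges is Python's ipaddress module. The two CIDRs below are from Cloudflare's published IPv4 list at the time of writing, but the list changes, so fetch https://www.cloudflare.com/ips-v4 at deploy time rather than hard-coding it.

```python
import ipaddress

def in_ranges(addr, cidrs):
    # True if addr falls inside any of the given CIDR blocks.
    ip = ipaddress.ip_address(addr)
    return any(ip in ipaddress.ip_network(cidr) for cidr in cidrs)

# Partial sample only -- refresh from https://www.cloudflare.com/ips-v4
CLOUDFLARE_V4 = ["173.245.48.0/20", "103.21.244.0/22"]
```

Feed it the source addresses your firewall is dropping: a Cloudflare edge IP showing up in the drop log confirms the misconfiguration.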
Step 5 — Test HTTP/2 isolation. In Cloudflare dashboard → Speed → Optimization, or via the API, temporarily disable HTTP/2 on the Cloudflare-to-origin connection. If 520s stop, the issue is HTTP/2 multiplexing on the origin. Update the origin's HTTP/2 implementation or revert to HTTP/1.1 between Cloudflare and origin.
Fixes by cause
Crash/restart mid-response: Add a process supervisor (systemd, PM2, Supervisor) with auto-restart. Set health check endpoints Cloudflare can probe. Review application error logs (journalctl -u nginx -n 200, pm2 logs) for stack traces or out-of-memory kills.
Oversized headers: Audit and remove debug headers in production. Consolidate Set-Cookie values. Move large Link preload hints to 103 Early Hints instead of the final response header. In nginx:
proxy_buffer_size 16k;
proxy_buffers 4 16k;
# Remove debug headers before forwarding:
proxy_hide_header X-Debug-Trace;
proxy_hide_header X-Internal-Id;
Firewall blocking Cloudflare IPs: Add all ranges from https://www.cloudflare.com/ips/ to your allowlist. Automate refreshes since the list changes:
# Example: fetch and apply to iptables
curl -s https://www.cloudflare.com/ips-v4 | while read ip; do
    iptables -A INPUT -s "$ip" -j ACCEPT
done
HTTP/2 multiplexing bugs: Upgrade nginx to 1.25.x or later, which reworked its HTTP/2 handling. For Apache, keep mod_http2 current. As a temporary workaround, force HTTP/1.1 between Cloudflare and the origin: disable HTTP/2 to Origin in the Cloudflare dashboard, or remove http2 from the origin's listen directive.
Script timeouts mid-response: Increase application timeouts for long-running endpoints. Move expensive work to async jobs (queues, background workers). Configure proxy_read_timeout in nginx to match the expected maximum response time.
520 vs related errors
| Code | What Cloudflare observed | Primary cause |
|---|---|---|
| 520 | Response unparseable or empty | Crash, oversized headers, HTTP/2 bug |
| 521 | TCP connection refused | Origin web server not running |
| 522 | TCP handshake timed out | Origin unreachable or overloaded |
| 524 | Connection established, no response | Long-running request, no timeout |
| 502 | Invalid response from upstream | Upstream proxy or load balancer error |
Frequently asked questions
Does a 520 error affect my Cloudflare plan?
No. 520s are logged in Cloudflare analytics and do not incur extra charges on any plan tier. They are visibility signals, not billing events.
Can I reproduce a 520 locally without Cloudflare?
Not directly, since 520 is generated by Cloudflare's edge. You can simulate the underlying trigger, for example by closing or resetting the TCP connection mid-response with a custom server, or by dropping packets with tc, but you will see a truncated-response or broken-pipe error on the client, not a 520 page.
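Such a custom server fits in a few lines. This sketch advertises a body it never sends and then drops the connection, which is the partial-response pattern described under root causes; the port and function name are arbitrary.

```python
import socket
import threading

def truncating_server(port):
    # Accept one connection, send headers promising a 100-byte body,
    # then close before sending any of it.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(1024)  # consume the request, ignore its contents
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 100\r\n\r\n")
    conn.close()     # drop the connection mid-response
    srv.close()
```

A plain http.client client reading this response raises IncompleteRead; behind Cloudflare, the same truncation would surface to visitors as a 520 page.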
Why does 520 happen intermittently but not consistently?
Intermittent 520s typically point to firewall rules that block a subset of Cloudflare IP ranges (since Cloudflare rotates edge nodes), occasional origin process crashes under load, or connection reuse hitting a stale HTTP/2 stream.
What is the Ray ID and how do I use it?
The Ray ID is a unique identifier Cloudflare assigns to each request. It appears on the 520 error page and in Cloudflare's dashboard under Analytics → HTTP Traffic. Cross-reference it with your origin access log by timestamp to find the exact failed request.
Related guides
Cloudflare 521 · Cloudflare 522 · Cloudflare 524 · Cloudflare 525 · Cloudflare 526 · HTTP 502 · HTTP 503 · HTTP 504 · Nginx 499