
Nginx 499: Client Closed Request

Quick reference

Code: 499
Name: Client Closed Request (nginx internal)
Standard HTTP?: No — nginx proprietary log code
Who generates it: nginx, upon detecting client disconnection
Client sees: Nothing — they already disconnected

What nginx 499 means

Nginx 499 "Client Closed Request" is an nginx-internal status code written to access logs when the client disconnected before nginx completed sending the response. It is never sent to the client — by definition, the client has already gone. 499 exists purely as a log marker so operators can identify and analyze premature client disconnections.

The 499 entry in the log tells you: "nginx was processing this request (or waiting for an upstream response), and at some point the client closed the TCP connection before the response was delivered." The upstream processing may still be running at that point — the backend server (PHP-FPM, Gunicorn, Node.js, etc.) does not automatically know the client disconnected until it tries to write the response and gets a broken pipe error.
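One way to surface this from the logs: a 499 paired with a large upstream response time means the backend was still working when the client gave up. A sketch, assuming log lines carry an urt=&lt;seconds&gt; field ($upstream_response_time) and otherwise follow the default combined layout; the 5-second threshold is an arbitrary example:

```shell
# List 499 requests where the upstream was still busy when the client left.
# Assumes an urt=<seconds> field appended to the combined log format.
awk '$9 == 499 {
    for (i = 1; i <= NF; i++)
        if ($i ~ /^urt=/) {
            t = substr($i, 5)            # value after "urt="
            if (t != "-" && t+0 > 5.0)   # upstream had already run > 5s
                print t, $7              # upstream time and request path
        }
}' /var/log/nginx/access.log
```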

Why clients disconnect

User navigated away or refreshed: The most benign cause. A user clicked a link, hit Refresh, or closed the tab before the page loaded. The browser closes the TCP connection. nginx logs 499 for the in-flight request. This is normal and expected at low rates.

Client-side timeout: The browser or API client has its own timeout shorter than the server's response time. JavaScript fetch() with a 10-second AbortController timeout, a mobile app with a 15-second network timeout, or a load balancer upstream of nginx — all can close the connection before nginx responds.

Load balancer idle timeout: An AWS ALB, HAProxy, or another nginx sitting in front of the application tier has an idle timeout shorter than the request takes to process. The load balancer closes the client-side connection, which nginx logs as 499.
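As an illustration of aligning that timeout chain behind an ALB, a hedged config sketch (the 120s ALB idle timeout, the 90s/180s values, and the app_backend name are all assumptions for the example, not recommended defaults):

```nginx
# Illustrative chain: browser -> ALB (idle timeout 120s, set in AWS) -> nginx -> backend.
http {
    # Keep client-side connections open longer than the ALB idle timeout,
    # so the ALB never reuses a connection nginx has already closed.
    keepalive_timeout 180s;

    upstream app_backend {            # placeholder upstream name
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        location /api/ {
            proxy_pass http://app_backend;
            # Shorter than the ALB's 120s: if the backend stalls, nginx
            # returns 504 instead of the ALB silently cutting the client off.
            proxy_read_timeout 90s;
        }
    }
}
```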

Slow origin / upstream exhaustion: If nginx is proxying to a slow backend (PHP-FPM, Gunicorn, microservice), and the backend is under heavy load, clients may wait a long time and then give up. A spike of 499s often correlates with a slow backend response time spike.

Reading 499s in access logs

# Typical 499 log entry:
192.0.2.33 - - [26/Apr/2026:11:05:22 +0000] "POST /api/checkout HTTP/1.1" 499 0 "https://shop.example.com/cart" "Mozilla/5.0..."
# Fields: IP, time, method+path, status=499, bytes=0, referrer, user-agent
# bytes=0 because nothing was sent to the client before they disconnected

# Count 499s by endpoint to find slow routes:
awk '$9==499 {print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Correlate 499s with upstream_response_time to find slow backends
# (requires upstream_response_time in the log format):
awk '$9==499 {print $NF}' /var/log/nginx/access.log | sort -n | tail -20

Configure nginx to log upstream response time for better 499 diagnosis:

log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent" '
                'rt=$request_time uct=$upstream_connect_time '
                'uht=$upstream_header_time urt=$upstream_response_time';

access_log /var/log/nginx/access.log main;

When 499s indicate a problem

A small number of 499s on long-running endpoints (file uploads, report generation) is normal — users cancel requests. A sudden spike of 499s on short endpoints that normally complete quickly is a signal worth investigating. Common patterns:

499 spike on one endpoint: That endpoint became slow. Check backend logs (PHP-FPM slow log, Gunicorn worker status, Node.js event loop lag) for the corresponding time period.

499s across all endpoints: An infrastructure-wide backend slowdown: database under load, a shared connection pool exhausted, or network congestion between nginx and the upstream workers.

499s only from certain IP ranges: A proxy or CDN upstream of nginx has a timeout shorter than the response time. Check the CDN or load balancer idle timeout settings.
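To tell a steady trickle from a spike, bucket 499s per minute. A sketch against the default combined log format, where $4 holds the bracketed timestamp and $9 the status:

```shell
# Count 499 responses per minute to make spikes visible.
# $4 looks like "[26/Apr/2026:11:05:22"; dropping the leading "["
# and the trailing ":SS" leaves a per-minute bucket key.
awk '$9 == 499 {
    minute = substr($4, 2, length($4) - 4)
    count[minute]++
}
END {
    for (m in count) print count[m], m
}' /var/log/nginx/access.log | sort -rn | head
```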

Reducing 499s

The primary fix is making the origin respond faster. Secondary fixes address the timeout chain:

# Increase proxy timeouts to match expected response time:
location /api/reports/ {
    proxy_pass http://backend;
    proxy_read_timeout 120s;     # wait longer for backend
    proxy_send_timeout 120s;
    proxy_connect_timeout 10s;
}

# For slow uploads, increase client body timeout:
client_body_timeout 120s;

For requests that are intentionally slow (file generation, batch jobs), switch to an async pattern: return 202 Accepted immediately, process in background, let client poll for completion. This eliminates the source of 499s entirely.

Frequently asked questions

Does 499 mean the backend processed the request?

Not necessarily. Nginx may have been waiting for the upstream response when the client disconnected. Unless the backend checks for client disconnection explicitly, it keeps running and may well complete the request. For non-idempotent operations (writes, payments), a 499 can therefore mean the operation succeeded on the backend even though the client never received confirmation.

Should I alert on 499 errors?

Alert on 499 rate, not individual 499s. A baseline 499 rate of 1-2% on long endpoints is normal. Alert when the rate spikes above baseline (e.g., more than 5% of requests to any endpoint over a 5-minute window) as this typically indicates a backend slowdown or upstream timeout issue.
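A quick offline version of that rate check, as a sketch (real alerting belongs in a metrics pipeline; the 5% threshold here is the example figure above, and the window is simply whatever the log slice covers):

```shell
# Per-endpoint 499 rate, flagging endpoints above a 5% threshold.
awk '{
    total[$7]++
    if ($9 == 499) bad[$7]++
}
END {
    for (p in bad) {
        rate = 100 * bad[p] / total[p]
        if (rate > 5) printf "%.1f%% %s (%d/%d)\n", rate, p, bad[p], total[p]
    }
}' /var/log/nginx/access.log
```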

Is 499 the same as Cloudflare 499?

Not exactly. When Cloudflare proxies to nginx, Cloudflare uses 499 in its own logs (visible in Cloudflare Analytics) to mean the client closed the connection before Cloudflare finished receiving the response from the origin. The semantic is the same as nginx 499, but it describes the browser-to-Cloudflare connection rather than the client-to-nginx connection.

How do I know if 499 correlates with user experience problems?

Cross-reference 499 timestamps with Real User Monitoring (RUM) or Core Web Vitals data. A 499 that fires after the user navigates away means the user did not experience an error. A 499 that fires while the user is still on the page (detected via client-side error tracking) indicates a genuine timeout UX problem.

Related guides

Nginx 444 · HTTP 408 · HTTP 504 · Cloudflare 524 · HTTP 202
