You deploy your application, configure Nginx as a reverse proxy, navigate to your domain, and see the dreaded "502 Bad Gateway" error. This is one of the most common and most frustrating errors in web deployment because the error message tells you almost nothing about the actual cause. All it means is that Nginx, acting as a gateway, received an invalid response (or no response at all) from the upstream server it tried to reach.
The causes range from trivially simple (your backend is not running) to maddeningly subtle (Nginx is resolving a DNS name to a cached IP address that no longer exists). This checklist covers every cause we have encountered in over a decade of production deployments, organized from most common to least common.
Step 1: Is Your Backend Actually Running?
This sounds obvious, but it is the cause of the 502 error more than 40 percent of the time. Before debugging Nginx configuration, verify that your backend application is running and accepting connections on the expected port or socket.
# Check if the process is running
systemctl status your-application
# or
ps aux | grep node
ps aux | grep gunicorn
ps aux | grep uvicorn
# Check if the port is being listened on
ss -tlnp | grep 3000
# or
netstat -tlnp | grep 3000
# Test direct connection to the backend
curl -v http://localhost:3000/health
# or for Unix socket
curl --unix-socket /run/your-app/app.sock http://localhost/health
If the backend process is not running, check its logs to find out why it crashed. Common causes include out-of-memory kills (check dmesg | grep -i kill), uncaught exceptions, port conflicts, missing environment variables, and failed database connections.
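A quick triage sketch for that first check, assuming the backend runs as a systemd service named your-application (matching the earlier example):
sudo journalctl -u your-application -n 100 --no-pager   # last 100 log lines from the service
sudo dmesg | grep -iE 'killed process|out of memory'    # evidence of an OOM kill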
If the process is running but the port is not being listened on, the application may still be starting up. Some frameworks take 10-30 seconds to initialize, especially if they are running database migrations or loading large models at startup.
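Rather than guessing when startup finishes, poll the backend until it answers. A minimal sketch, assuming the /health endpoint on port 3000 from the earlier example:
# Wait up to 30 seconds for the backend to come up
for i in $(seq 1 30); do
    curl -fsS http://localhost:3000/health >/dev/null 2>&1 && echo "backend is up" && break
    sleep 1
done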
Step 2: Check the Nginx Error Log
The Nginx error log is the single most valuable debugging tool for 502 errors. Every 502 response Nginx sends is accompanied by an error log entry that explains exactly why the upstream connection failed.
# Check the main error log
sudo tail -50 /var/log/nginx/error.log
# Check site-specific error log if configured
sudo tail -50 /var/log/nginx/yoursite-error.log
# Follow the log in real-time while making a request
sudo tail -f /var/log/nginx/error.log
The error message in the log tells you the specific cause. Here are the most common messages and what they mean:
connect() failed (111: Connection refused) — The backend is not listening on the configured port or socket. Either the application is not running, it is listening on a different port, or it is only listening on a specific interface (like 127.0.0.1) while Nginx is trying to connect to a different one.
connect() failed (113: No route to host) — The backend is on a different machine and either the machine is down, a firewall is blocking the connection, or the IP address is wrong.
upstream prematurely closed connection — The backend accepted the connection but then closed it before sending a response. This usually means the backend application crashed while processing the request, was killed by the OOM killer, or has a bug that causes it to close connections under certain conditions.
upstream timed out (110: Connection timed out) — Nginx waited for a response from the backend and gave up after the configured timeout period. The backend is either overloaded, stuck in a long operation, or experiencing a deadlock.
recv() failed (104: Connection reset by peer) — The backend forcibly closed the connection. This can happen when the backend's connection queue is full, when there is a TLS mismatch, or when a load balancer between Nginx and the backend is killing idle connections.
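When the log is large, a rough tally shows which failure mode dominates before you pick a fix. A sketch, assuming the default log path:
# Count occurrences of each upstream failure message
sudo grep -oE 'Connection refused|No route to host|prematurely closed|timed out|Connection reset by peer' \
    /var/log/nginx/error.log | sort | uniq -c | sort -rn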
Step 3: Verify the Upstream Configuration
A common mistake is a mismatch between the Nginx upstream configuration and where the backend is actually listening. Check your Nginx configuration carefully:
# If proxying to a port
upstream backend {
    server 127.0.0.1:3000;
}

# If proxying to a Unix socket
upstream backend {
    server unix:/run/your-app/app.sock;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
Common mismatches include: the backend listens on port 8000 but Nginx is configured for 3000; the backend listens on 0.0.0.0:3000 but Nginx is configured to connect to a Unix socket; the backend runs in a Docker container and listens on the container's localhost (not accessible from the host) while Nginx runs on the host.
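A quick way to surface these mismatches, assuming the ports from the examples above and a container named app-container (a placeholder):
ss -tlnp | grep -E ':3000|:8000'   # what the backend actually binds to, and on which interface
docker port app-container          # which container ports are published to the host, if any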
For Docker setups, the correct upstream address depends on where Nginx runs. If Nginx runs in a container on the same Docker network as the backend, use the backend container's name. If Nginx runs on the host, the container must publish its port (for example, -p 3000:3000), after which 127.0.0.1:3000 works; without a published port, 127.0.0.1 cannot reach the backend, because the container's ports exist only inside its own network namespace.
# Docker Compose example with correct upstream
upstream backend {
    server app-container:3000;    # Use the container name on a shared Docker network
}

# Or, when Nginx itself runs in a container and must reach a port published on the host
upstream backend {
    server host.docker.internal:3000;   # Docker Desktop
    # server 172.17.0.1:3000;           # Docker on Linux (default bridge gateway)
}
Step 4: Timeout Configuration
Nginx has several timeout directives that control how long it waits for the backend. If your backend needs more time than the defaults allow, Nginx gives up and returns an error: usually 504 Gateway Timeout when a timeout fires, or 502 when the connection drops before a response arrives. Either way, the same directives control the behavior:
location / {
    proxy_pass http://backend;

    # Time to establish a connection to the upstream (default: 60s)
    proxy_connect_timeout 30s;

    # Time allowed between successive reads of the upstream response (default: 60s)
    proxy_read_timeout 120s;

    # Time to wait for the upstream to accept data from Nginx (default: 60s)
    proxy_send_timeout 30s;

    # Keep-alive connections to the upstream
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
The proxy_read_timeout is the most commonly needed adjustment. If your application has endpoints that take longer than 60 seconds to respond (report generation, file processing, AI inference), increase this timeout. However, setting it extremely high (like 3600s) can mask real problems. A backend that routinely takes minutes to respond has a performance issue that should be fixed rather than accommodated with a longer timeout.
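Before raising a timeout, measure what the slow endpoint actually takes by hitting the backend directly. A sketch using curl's timing variables; /reports/export is a hypothetical slow endpoint:
curl -s -o /dev/null -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
    http://localhost:3000/reports/export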
Step 5: Buffer Size Issues
Nginx buffers responses from the upstream server before sending them to the client. If the response headers exceed the header buffer size, Nginx cannot parse the response and returns a 502. This commonly happens with applications that set very large cookies or return large headers; oversized bodies, by contrast, spill to temporary files on disk rather than triggering an error.
location / {
    proxy_pass http://backend;

    # Buffer configuration
    proxy_buffer_size 128k;          # Buffer for response headers (default: 4k/8k)
    proxy_buffers 4 256k;            # Number and size of buffers for the response body
    proxy_busy_buffers_size 256k;    # Limit on buffers busy sending to the client

    # For very large responses, allow buffering to disk
    proxy_max_temp_file_size 1024m;
}
The proxy_buffer_size directive is particularly important. If your backend sends response headers larger than this buffer (common with applications that set many cookies or large JWT tokens in headers), Nginx will log upstream sent too big header and return a 502.
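You can check how close the backend's headers come to that limit by dumping only the headers and counting bytes. A sketch against a hypothetical /login endpoint that sets session cookies:
# -D - writes response headers to stdout; the body is discarded
curl -s -D - -o /dev/null http://localhost:3000/login | wc -c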
Step 6: DNS Resolution Issues
If your upstream is defined using a hostname rather than an IP address, Nginx resolves that hostname to an IP address when the configuration is loaded and caches the result. If the IP address changes later (common with cloud services, Docker Swarm, and Kubernetes), Nginx continues using the old IP address until the configuration is reloaded.
# Problem: DNS is cached at config load time
upstream backend {
    server my-api.internal:3000;    # Resolved once at startup, reused until reload
}

# Solution: Use a variable to force re-resolution
location / {
    resolver 127.0.0.53 valid=30s;  # Use the system resolver, cache results for 30s
    set $backend "http://my-api.internal:3000";
    proxy_pass $backend;
}
When using a variable for proxy_pass, Nginx re-resolves the hostname for each request (subject to the valid parameter in the resolver directive). This is essential in dynamic environments where backend IP addresses change.
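To confirm stale DNS is the problem, compare what the name resolves to right now against the upstream IP shown in the Nginx error log entries. A sketch, assuming systemd-resolved on 127.0.0.53 as above:
dig +short my-api.internal @127.0.0.53   # current resolution
sudo tail -5 /var/log/nginx/error.log    # compare with the upstream IP in recent errors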
Step 7: SELinux and Permission Issues
On Red Hat-based systems (RHEL, CentOS, AlmaLinux, Rocky Linux), SELinux may prevent Nginx from connecting to upstream servers. Check if SELinux is enforcing and if it is blocking Nginx:
# Check SELinux status
getenforce
# Check for recent SELinux denials
sudo ausearch -m AVC -ts recent | grep nginx
# Allow Nginx to make network connections
sudo setsebool -P httpd_can_network_connect 1
# Allow Nginx to relay connections to additional remote ports
sudo setsebool -P httpd_can_network_relay 1
# If using Unix sockets, the socket file itself needs a context Nginx can read and write
sudo semanage fcontext -a -t httpd_var_run_t "/run/your-app(/.*)?"
sudo restorecon -Rv /run/your-app
SELinux-related 502 errors are particularly frustrating because everything appears correctly configured — the backend is running, the port is right, the configuration is correct — but Nginx is silently prevented from making the connection by the security policy.
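On a test system, the fastest way to confirm SELinux is responsible is to switch to permissive mode, retry the request, and revert:
sudo setenforce 0                 # permissive: denials are logged but not blocked (resets at reboot)
curl -I https://example.com/      # retry; if the 502 disappears, SELinux was the cause
sudo setenforce 1                 # re-enable enforcement, then fix the policy properly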
Step 8: Worker Connections and File Descriptor Limits
Under high traffic, Nginx or the backend may run out of available connections. Nginx's worker_connections setting caps the number of simultaneous connections per worker process, and each proxied request consumes two of them: one to the client and one to the upstream. When the limit is hit, Nginx logs "worker_connections are not enough" and cannot open new upstream connections.
# /etc/nginx/nginx.conf
worker_rlimit_nofile 65536;     # Raise the file descriptor limit for worker processes

events {
    worker_connections 4096;    # Compiled-in default is 512; distro packages often ship 768 or 1024
}
Note that /etc/security/limits.d does not apply to systemd-managed services, so raising limits there will not affect an Nginx started by systemd. Use the worker_rlimit_nofile directive shown above, or a systemd drop-in that sets LimitNOFILE=65536 for the nginx unit.
Similarly, check that your backend application can handle the number of connections Nginx is sending to it. A Node.js application with a single process can handle fewer concurrent connections than a Go application or a multi-process Python application behind Gunicorn.
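If the backend is the bottleneck, scale its worker count. A common sizing sketch for Gunicorn, using the 2 x CPUs + 1 rule of thumb from its documentation; app:application is a placeholder module path:
gunicorn --workers $((2 * $(nproc) + 1)) --bind 127.0.0.1:3000 app:application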
Complete Nginx Reverse Proxy Template
Here is a production-ready Nginx reverse proxy configuration that incorporates the fixes discussed above. One addition: the map block makes the Connection header conditional, so WebSocket upgrades work without breaking upstream keepalive (an unconditional Connection "upgrade" would disable the keepalive pool defined in the upstream block).
# Goes in the http context, alongside the upstream definition
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      "";
}

upstream backend {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 30s;
        proxy_read_timeout 120s;
        proxy_send_timeout 30s;

        # Buffers
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    # Static files (bypass proxy)
    location /_next/static/ {
        alias /var/www/app/.next/static/;
        expires 365d;
        access_log off;
    }
}
After making any configuration changes, always test before reloading: sudo nginx -t. If the test passes, reload with sudo systemctl reload nginx. Never restart Nginx in production — reload preserves existing connections while restart drops them all.
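Chaining the two ensures a failed test blocks the reload:
sudo nginx -t && sudo systemctl reload nginx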
ZeonEdge provides Nginx configuration, reverse proxy setup, and production deployment services. Learn more about our infrastructure services.
Marcus Rodriguez
Lead DevOps Engineer specializing in CI/CD pipelines, container orchestration, and infrastructure automation.