
Nginx Performance Tuning: Key Configuration Tips

Nginx is one of the most popular web servers and reverse proxies, known for its high performance and low resource usage. However, the default configuration is conservative and designed to work on any hardware. Tuning Nginx for your specific workload can significantly improve throughput, reduce latency, and handle more concurrent connections.

Worker Processes and Connections

The most fundamental Nginx tuning parameters control how many workers handle requests and how many connections each worker can manage:

# /etc/nginx/nginx.conf

# Set to number of CPU cores (auto detects)
worker_processes auto;

# Maximum open files per worker
worker_rlimit_nofile 65535;

events {
    # Max simultaneous connections per worker
    worker_connections 4096;

    # Accept multiple connections at once
    multi_accept on;

    # Use the most efficient event method for Linux
    use epoll;
}

The total number of simultaneous connections Nginx can handle is: worker_processes × worker_connections. With 4 workers and 4096 connections each, Nginx can handle 16,384 concurrent connections.

Calculating worker_connections

When Nginx proxies requests, each client connection typically consumes two file descriptors: one for the client socket and one for the upstream connection. A safe formula is therefore:

worker_connections = worker_rlimit_nofile / 2
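As a quick sanity check, the formula can be evaluated in the shell. The 65535 here matches the worker_rlimit_nofile setting above; on your own system, substitute the output of `ulimit -n`:

```shell
# Derive a safe worker_connections ceiling from the file-descriptor limit.
nofile=65535
worker_connections=$((nofile / 2))
echo "$worker_connections"   # 32767
```

The 4096 used earlier sits well below this ceiling, which leaves headroom for log files, cache entries, and upstream sockets.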

HTTP Optimizations

http {
    # Sendfile: use kernel-level file transfer (avoid userspace copy)
    sendfile on;

    # Optimize sendfile by sending headers and file in one packet
    tcp_nopush on;

    # Disable Nagle's algorithm for low-latency responses
    tcp_nodelay on;

    # Keep connections alive longer to reduce TCP handshake overhead
    keepalive_timeout 65;
    keepalive_requests 1000;

    # Reduce overhead of hash table lookups
    types_hash_max_size 2048;
    server_names_hash_bucket_size 64;

    # Hide Nginx version in headers (security)
    server_tokens off;
}

sendfile, tcp_nopush, and tcp_nodelay

These three directives work together to optimize how Nginx sends data:

| Directive | What It Does | When to Use |
| --- | --- | --- |
| sendfile | Transfers files directly in kernel space (zero-copy) | Always on for serving static files |
| tcp_nopush | Sends headers and beginning of file in one TCP packet | On with sendfile for static content |
| tcp_nodelay | Disables Nagle buffering, sends small packets immediately | On for keepalive connections and proxied content |

Gzip Compression

Compression reduces response size by 60-80%, dramatically improving page load times for text-based content:

http {
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 5;         # 1-9, sweet spot is 4-6
    gzip_min_length 256;       # Don't compress tiny responses
    gzip_http_version 1.1;

    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml
        application/rss+xml
        application/atom+xml
        application/vnd.ms-fontobject
        font/opentype
        image/svg+xml;
}

Compression level 5 provides about 90% of the compression ratio of level 9 with much less CPU overhead. For high-traffic sites, consider pre-compressing static files with gzip_static on.
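Pre-compression happens once at deploy time, so the maximum level costs nothing per request. A minimal sketch (the /tmp/assets path and app.css file are just for illustration):

```shell
# Pre-compress a text asset so gzip_static can serve the .gz file directly,
# skipping per-request compression entirely.
mkdir -p /tmp/assets
printf 'body { margin: 0; }\n' > /tmp/assets/app.css
gzip -9 -c /tmp/assets/app.css > /tmp/assets/app.css.gz
```

With gzip_static on in the matching location, Nginx serves app.css.gz automatically whenever the client sends Accept-Encoding: gzip, falling back to the uncompressed file otherwise.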

Brotli Compression

Brotli offers 15-25% better compression than gzip for text content. If the ngx_brotli module is installed:

brotli on;
brotli_comp_level 6;
brotli_types text/plain text/css application/json application/javascript text/xml application/xml image/svg+xml;
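As with gzip, ngx_brotli can serve files compressed ahead of time instead of compressing on the fly:

```nginx
# Serve a pre-compressed .br file when one exists next to the original
brotli_static on;
```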

Buffer Tuning

Buffers control how Nginx stores request and response data in memory. Properly sized buffers prevent disk I/O and improve performance:

http {
    # Client request buffers
    client_body_buffer_size 16k;      # POST body buffer
    client_header_buffer_size 1k;     # Request header buffer
    large_client_header_buffers 4 8k; # Large headers (cookies, etc.)
    client_max_body_size 16m;         # Max upload size

    # Proxy buffers (for reverse proxy mode)
    proxy_buffer_size 4k;             # First part of response (headers)
    proxy_buffers 8 16k;              # Response body buffers
    proxy_busy_buffers_size 32k;      # Send to client while still reading
}

location /api/ {
    # API-specific: larger buffers for JSON responses
    proxy_buffer_size 8k;
    proxy_buffers 16 32k;
    proxy_busy_buffers_size 64k;
    proxy_pass http://api_backend;
}
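Buffering helps ordinary request/response traffic, but for streaming endpoints (server-sent events, long polling) it adds latency, because Nginx holds chunks until a buffer fills. A sketch for such a case; the /events/ path and api_backend name follow the hypothetical examples above:

```nginx
location /events/ {
    # Stream the upstream response straight through instead of buffering it
    proxy_buffering off;
    proxy_pass http://api_backend;
}
```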

Static File Caching

Aggressive caching of static files reduces server load and improves user experience:

# Static assets with long cache
location ~* \.(jpg|jpeg|png|gif|ico|webp|avif)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
    access_log off;
}

location ~* \.(css|js)$ {
    expires 7d;
    add_header Cache-Control "public";
    access_log off;
}

location ~* \.(woff|woff2|ttf|eot)$ {
    expires 365d;
    add_header Cache-Control "public, immutable";
    access_log off;
}

# Open file cache — keeps file descriptors in memory
open_file_cache max=10000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

SSL/TLS Optimization

http {
    # SSL session caching — avoid renegotiation
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;  # More secure, slightly less performant

    # Modern TLS only
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # OCSP stapling — faster TLS handshake
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 1.1.1.1 8.8.8.8 valid=300s;
    resolver_timeout 5s;

    # Early hints (HTTP 103)
    # Requires Nginx 1.25.3+
    # add_header Link "</style.css>; rel=preload; as=style" early;
}
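One further knob worth knowing is ssl_buffer_size (default 16k). Smaller TLS records let the first decrypted bytes reach the client sooner, at a small throughput cost; a latency-oriented sketch:

```nginx
http {
    # Smaller TLS records improve time-to-first-byte for interactive traffic;
    # keep the 16k default if raw throughput matters more than latency.
    ssl_buffer_size 4k;
}
```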

Rate Limiting

Protect against abuse and DDoS with rate limiting:

http {
    # Define rate limit zones
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        location /api/ {
            limit_req zone=api burst=20 nodelay;
            limit_req_status 429;
            proxy_pass http://api_backend;
        }

        location /login {
            limit_req zone=login burst=5;
            limit_conn addr 5;
            proxy_pass http://app_backend;
        }
    }
}
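Internal clients (health checks, office networks) often need to bypass limits. A common pattern maps trusted addresses to an empty key, which limit_req_zone skips; the 10.0.0.0/8 range here is an assumption, substitute your own:

```nginx
# Requests whose zone key is an empty string are not rate-limited.
geo $limit {
    default      1;
    10.0.0.0/8   0;   # trusted internal range — exempt
}

map $limit $limit_key {
    0 "";                    # exempt: empty key, no limiting
    1 $binary_remote_addr;   # everyone else: limited per client IP
}

limit_req_zone $limit_key zone=api:10m rate=10r/s;
```

This redefines the api zone from the earlier example; the limit_req lines inside the server block stay unchanged.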

Upstream Optimization

upstream backend {
    least_conn;

    server 10.0.1.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.2:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.3:8080 backup;

    # Keepalive connections to upstream
    keepalive 32;
    keepalive_requests 1000;
    keepalive_timeout 60s;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Required for keepalive
    }
}

Monitoring Nginx Performance

# Enable stub_status for basic metrics
location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}

# Output:
# Active connections: 291
# server accepts handled requests
#  16630948 16630948 31070465
# Reading: 6 Writing: 179 Waiting: 106

Key metrics to monitor: active connections, requests per second, response times, error rates (4xx/5xx), and upstream response times.
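The stub_status text parses easily for quick scripting. A sketch against the sample output above; against a live server you would replace the here-string with `curl -s http://127.0.0.1/nginx_status`:

```shell
# Extract active and idle-keepalive connection counts from stub_status output.
status='Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106'

active=$(printf '%s\n' "$status" | awk '/^Active/ {print $3}')
waiting=$(printf '%s\n' "$status" | awk '/^Reading/ {print $6}')
echo "active=$active waiting=$waiting"   # active=291 waiting=106
```

A large gap between accepts and handled (columns one and two of the third line) indicates the server is hitting a connection limit and dropping connections.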

Summary

Nginx performance tuning involves optimizing worker processes for your CPU count, enabling efficient I/O with sendfile and epoll, compressing responses with gzip or Brotli, configuring proper buffer sizes, caching static files aggressively, and tuning SSL for minimal handshake overhead. The most impactful changes are usually enabling gzip compression, configuring keepalive connections, and setting proper cache headers for static assets. Always benchmark before and after changes to measure actual improvement.
