The Chronicle

Thoughts & Stories

February 9, 2026

Nginx Load Balancing: The Traffic Cop of the Web

By huseynovvusal

When your application grows beyond what a single server can handle, you need to distribute traffic across multiple servers. This is called load balancing. Nginx is one of the most popular tools for the job because it is extremely efficient, handles thousands of concurrent connections, and is easy to configure.


1. How It Works (The Architecture)

Imagine Nginx as a receptionist at a busy office.

  1. The Client (User) sends a request to your website (e.g., www.example.com).
  2. Nginx sits at the front door and receives the request.
  3. The Upstream (Backend) servers are the workers. Nginx looks at its list of workers and decides who is free or whose turn it is.
  4. Forwarding: Nginx passes the request to that specific server.
  5. Response: The server processes the request and sends the answer back to Nginx, which hands it to the user. The user never knows multiple servers exist.


2. Basic Configuration

To set this up, you use the upstream directive in your nginx.conf file. This defines a group of servers that Nginx can talk to.


http {
    # 1. Define the group of backend servers
    upstream my_app_servers {
        server 10.0.0.1;
        server 10.0.0.2;
        server 10.0.0.3;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            # 2. Pass traffic to the group
            proxy_pass http://my_app_servers;
        }
    }
}


3. Load Balancing Algorithms (The Strategies)

Nginx doesn't just randomly pick a server. It uses specific algorithms (methods) to decide where to send traffic.

A. Round Robin (The Default)

Nginx goes down the list one by one.

  • Request 1 -> Server A
  • Request 2 -> Server B
  • Request 3 -> Server C
  • Request 4 -> Server A (Loop starts over)

Best for: Servers with identical specs and stateless applications.
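The rotation is easy to picture in a few lines of Python. This is a toy sketch of the order of picks, not Nginx's actual C implementation; the IPs match the upstream block above.

```python
from itertools import cycle

# The upstream pool from the config above.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
rotation = cycle(servers)  # endlessly loops over the list

# Four requests: the fourth wraps back around to the first server.
picks = [next(rotation) for _ in range(4)]
print(picks)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```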

B. Least Connections

Nginx sends the request to the server that is currently busy with the fewest active connections.

  • Server A has 10 active users.
  • Server B has 2 active users.
  • Next Request -> Server B.

Best for: Requests that take varying amounts of time (e.g., one user is uploading a large file while another is just loading text).


upstream my_app_servers {
    least_conn;  # <--- The magic keyword
    server 10.0.0.1;
    server 10.0.0.2;
}
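The selection rule itself is just "take the minimum". A sketch in Python, using the made-up connection counts from the example above:

```python
# Active-connection counts per backend (the numbers from the example).
active = {"10.0.0.1": 10, "10.0.0.2": 2}

def least_connections(active):
    # Pick the server whose active-connection count is smallest.
    return min(active, key=active.get)

choice = least_connections(active)
print(choice)  # 10.0.0.2
```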

C. IP Hash (The Sticky Session)

Nginx takes the client's IP address, runs a hash function on it, and assigns the client to a specific server. That user will always go to the same server as long as their IP doesn't change.

  • User IP 192.168.1.5 -> Always Server A.

Best for: Apps that store session data locally on the server (though it's better to store sessions in a shared database like Redis).

upstream my_app_servers {
    ip_hash;  # <--- The magic keyword
    server 10.0.0.1;
    server 10.0.0.2;
}
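The core idea can be sketched like this: a stable hash of the IP, reduced modulo the pool size. (Note that real ip_hash uses only the first three octets of an IPv4 address, so a whole /24 network maps to one server; this toy version hashes the full address.)

```python
import hashlib

servers = ["10.0.0.1", "10.0.0.2"]

def pick_server(client_ip):
    # A deterministic hash of the IP, mapped onto the pool:
    # the same IP always lands on the same server.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

first = pick_server("192.168.1.5")
again = pick_server("192.168.1.5")
# first == again, on every call
```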

D. Weighted Load Balancing

You can tell Nginx that one server is more powerful than the others by assigning "weights".


  • Server A (Super Computer): weight=5
  • Server B (Old Laptop): weight=1

For every 6 requests, Server A gets 5, and Server B gets 1.


upstream my_app_servers {
    server 10.0.0.1 weight=5;
    server 10.0.0.2 weight=1;
}
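A naive way to picture the weights is to expand each server into a schedule weight-many times. (Nginx actually uses a "smooth" weighted round robin that interleaves the picks so Server B isn't starved for five requests in a row, but the per-cycle ratio is the same.)

```python
from collections import Counter

# The weights from the config above.
weights = {"10.0.0.1": 5, "10.0.0.2": 1}

# One full cycle: each server appears `weight` times.
schedule = [server for server, w in weights.items() for _ in range(w)]

counts = Counter(schedule)
# Out of every 6 requests: 5 go to 10.0.0.1, 1 goes to 10.0.0.2.
```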


4. Health Checks (Passive)

What happens if Server B crashes? You don't want Nginx to send users there.

Nginx Open Source performs passive health checks. If a server fails to respond to a request (e.g., connection timed out), Nginx marks it as "failed" and won't send traffic there for a short time.


upstream my_app_servers {
    # If a server fails 3 times in 30 seconds, mark it down for 30 seconds.
    server 10.0.0.1 max_fails=3 fail_timeout=30s;
    server 10.0.0.2 max_fails=3 fail_timeout=30s;
}
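The bookkeeping behind max_fails and fail_timeout can be modeled roughly like this (my own simplified sketch of the semantics, not Nginx's code): failures only count inside the rolling fail_timeout window, and a server that accumulates max_fails failures is skipped until the window expires.

```python
import time

class Backend:
    """Simplified model of passive health-check state for one server."""

    def __init__(self, addr, max_fails=3, fail_timeout=30):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = []  # timestamps of recent failures

    def record_failure(self, now=None):
        self.fails.append(time.time() if now is None else now)

    def is_down(self, now=None):
        now = time.time() if now is None else now
        # Only failures inside the fail_timeout window count.
        self.fails = [t for t in self.fails if now - t <= self.fail_timeout]
        return len(self.fails) >= self.max_fails

b = Backend("10.0.0.2")
for _ in range(3):
    b.record_failure(now=100)

down_now = b.is_down(now=110)   # 3 failures in the window -> skipped
up_later = b.is_down(now=200)   # window expired -> traffic resumes
```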

Summary Checklist for Production

If you are deploying this for your Django blog:

  1. Define Upstream: Point to your Docker containers (e.g., server django_container_1:8000;).
  2. Algorithm: Use Least Connections if your blog has heavy processing, or Round Robin if it's mostly static reads.
  3. Headers: Ensure you pass the Host header so Django knows the real domain (otherwise CSRF validation can fail behind the proxy).


location / {
    proxy_pass http://my_app_servers;
    proxy_set_header Host $host;           # Crucial for Django!
    proxy_set_header X-Real-IP $remote_addr;
}
