The ngx_http_upstream_module module is used to define groups of servers that can be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, and grpc_pass directives.
```
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;

    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
```
A dynamically configurable group with periodic health checks is available as part of our commercial subscription:
```
resolver 10.0.0.1;

upstream dynamic {
    zone upstream_dynamic 64k;

    server backend1.example.com weight=5;
    server backend2.example.com:8080 fail_timeout=5s slow_start=30s;
    server 192.0.2.1 max_fails=3;
    server backend3.example.com resolve;
    server backend4.example.com service=http resolve;

    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://dynamic;
        health_check;
    }
}
```
Syntax: | upstream name { ... } |
---|---|
Default: | — |
Context: | http |
Defines a group of servers. Servers can listen on different ports. In addition, servers listening on TCP and UNIX-domain sockets can be mixed.
Example:
```
upstream backend {
    server backend1.example.com weight=5;
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;

    server backup1.example.com backup;
}
```
By default, requests are distributed between the servers using a weighted round-robin balancing method. In the above example, every 7 requests will be distributed as follows: 5 requests go to backend1.example.com and one request to each of the second and third servers. If an error occurs during communication with a server, the request will be passed to the next server, and so on until all of the functioning servers have been tried. If a successful response could not be obtained from any of the servers, the client will receive the result of the communication with the last server.
Syntax: | server address [parameters]; |
---|---|
Default: | — |
Context: | upstream |
Defines the address and other parameters of a server. The address can be specified as a domain name or IP address, with an optional port, or as a UNIX-domain socket path specified after the “unix:” prefix. If a port is not specified, the port 80 is used. A domain name that resolves to several IP addresses defines multiple servers at once.
The following parameters can be defined:
weight=number
    sets the weight of the server, by default, 1.
max_conns=number
    limits the maximum number of simultaneous active connections to the proxied server (1.11.5). Default value is zero, meaning there is no limit. If the server group does not reside in the shared memory, the limitation works per each worker process. If idle keepalive connections, multiple workers, and the shared memory are enabled, the total number of active and idle connections to the proxied server may exceed the max_conns value.
    Since version 1.5.9 and prior to version 1.11.5, this parameter was available as part of our commercial subscription.
max_fails=number
    sets the number of unsuccessful attempts to communicate with the server that should happen in the duration set by the fail_timeout parameter to consider the server unavailable for a duration also set by the fail_timeout parameter. By default, the number of unsuccessful attempts is set to 1. The zero value disables the accounting of attempts. What is considered an unsuccessful attempt is defined by the proxy_next_upstream, fastcgi_next_upstream, uwsgi_next_upstream, scgi_next_upstream, memcached_next_upstream, and grpc_next_upstream directives.
fail_timeout=time
    sets the time during which the specified number of unsuccessful attempts to communicate with the server should happen to consider the server unavailable, and the period of time the server will be considered unavailable. By default, the parameter is set to 10 seconds.
backup
    marks the server as a backup server. It will be passed requests when the primary servers are unavailable. The parameter cannot be used along with the hash, ip_hash, and random load balancing methods.
down
    marks the server as permanently unavailable.
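For illustration, a minimal sketch combining several of these parameters (the hostnames and values below are placeholders, not part of the reference):
```
upstream backend {
    # 5x the weight of the others; considered unavailable for 30s
    # after 3 failed attempts within 30s
    server backend1.example.com weight=5 max_fails=3 fail_timeout=30s;
    server backend2.example.com;

    # receives requests only when the primary servers are unavailable
    server backup1.example.com backup;

    # temporarily taken out of rotation
    server old.example.com down;
}
```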
Additionally, the following parameters are available as part of our commercial subscription:
resolve
    monitors changes of the IP addresses that correspond to a domain name of the server, and automatically modifies the upstream configuration without the need of restarting nginx (1.5.12). The server group must reside in the shared memory. In order for this parameter to work, the resolver directive must be specified in the http block or in the corresponding upstream block.
route=string
    sets the server route name.
service=name
    enables resolving of DNS SRV records and sets the service name (1.9.13). In order for this parameter to work, it is necessary to specify the resolve parameter for the server and specify a hostname without a port number.
    If the service name does not contain a dot (“.”), then the RFC-compliant name is constructed and the TCP protocol is added to the service prefix. For example, to look up the _http._tcp.backend.example.com SRV record, it is necessary to specify the directive:
```
server backend.example.com service=http resolve;
```
If the service name contains one or more dots, then the name is constructed by joining the service prefix and the server name. For example, to look up the _http._tcp.backend.example.com and server1.backend.example.com SRV records, it is necessary to specify the directives:
```
server backend.example.com service=_http._tcp resolve;
server example.com service=server1.backend resolve;
```
Highest-priority SRV records (records with the same lowest-number priority value) are resolved as primary servers, the rest of SRV records are resolved as backup servers. If the backup parameter is specified for the server, high-priority SRV records are resolved as backup servers, the rest of SRV records are ignored.
slow_start=time
    sets the time during which the server will recover its weight from zero to a nominal value, when an unhealthy server becomes healthy, or when the server becomes available after a period of time it was considered unavailable. Default value is zero, i.e. slow start is disabled. The parameter cannot be used along with the hash, ip_hash, and random load balancing methods.
drain
    puts the server into the “draining” mode (1.13.6). In this mode, only requests bound to the server will be proxied to it.
    Prior to version 1.13.6, the parameter could be changed only with the API module.
If there is only a single server in a group, the max_fails, fail_timeout, and slow_start parameters are ignored, and such a server will never be considered unavailable.
Syntax: | zone name [size]; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in version 1.9.0.
Defines the name and size of the shared memory zone that keeps the group’s configuration and run-time state that are shared between worker processes. Several groups may share the same zone. In this case, it is enough to specify the size only once.
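For example, a minimal sketch of a group placed in shared memory (the zone name and size are illustrative):
```
upstream backend {
    zone backend_zone 64k;

    server backend1.example.com;
    server backend2.example.com;
}
```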
Additionally, as part of our commercial subscription, such groups allow changing the group membership or modifying the settings of a particular server without the need of restarting nginx. The configuration is accessible via the API module (1.13.3).
Prior to version 1.13.3, the configuration was accessible only via a special location handled by upstream_conf.
Syntax: | state file; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in version 1.9.7.
Specifies a file that keeps the state of the dynamically configurable group.
Examples:
```
state /var/lib/nginx/state/servers.conf; # path for Linux
state /var/db/nginx/state/servers.conf;  # path for FreeBSD
```
The state is currently limited to the list of servers with their parameters. The file is read when parsing the configuration and is updated each time the upstream configuration is changed. Changing the file content directly should be avoided. The directive cannot be used along with the server directive.
Changes made during configuration reload or binary upgrade can be lost.
This directive is available as part of our commercial subscription.
Syntax: | hash key [consistent]; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in version 1.7.2.
Specifies a load balancing method for a server group where the client-server mapping is based on the hashed key value. The key can contain text, variables, and their combinations. Note that adding or removing a server from the group may result in remapping most of the keys to different servers. The method is compatible with the Cache::Memcached Perl library.
If the consistent parameter is specified, the ketama consistent hashing method will be used instead. The method ensures that only a few keys will be remapped to different servers when a server is added to or removed from the group. This helps to achieve a higher cache hit ratio for caching servers. The method is compatible with the Cache::Memcached::Fast Perl library with the ketama_points parameter set to 160.
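For illustration, a sketch that maps requests to servers by the hashed request URI using consistent hashing (hostnames are placeholders):
```
upstream backend {
    hash $request_uri consistent;

    server backend1.example.com;
    server backend2.example.com;
}
```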
Syntax: | ip_hash; |
---|---|
Default: | — |
Context: | upstream |
Specifies that a group should use a load balancing method where requests are distributed between servers based on client IP addresses. The first three octets of the client IPv4 address, or the entire IPv6 address, are used as a hashing key. The method ensures that requests from the same client will always be passed to the same server except when this server is unavailable. In the latter case, client requests will be passed to another server; most probably, it will always be the same server as well.
IPv6 addresses are supported starting from versions 1.3.2 and 1.2.2.
If one of the servers needs to be temporarily removed, it should be marked with the down parameter in order to preserve the current hashing of client IP addresses.
Example:
```
upstream backend {
    ip_hash;

    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
    server backend4.example.com;
}
```
Until versions 1.3.1 and 1.2.2, it was not possible to specify a weight for servers using the ip_hash load balancing method.
Syntax: | keepalive connections; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in version 1.1.4.
Activates the cache for connections to upstream servers.
The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed.
It should be particularly noted that the keepalive directive does not limit the total number of connections to upstream servers that an nginx worker process can open. The connections parameter should be set to a number small enough to let upstream servers process new incoming connections as well.
When using load balancing methods other than the default round-robin method, it is necessary to activate them before the keepalive directive.
Example configuration of memcached upstream with keepalive connections:
```
upstream memcached_backend {
    server 127.0.0.1:11211;
    server 10.0.0.2:11211;

    keepalive 32;
}

server {
    ...

    location /memcached/ {
        set $memcached_key $uri;
        memcached_pass memcached_backend;
    }
}
```
For HTTP, the proxy_http_version directive should be set to “1.1” and the “Connection” header field should be cleared:
```
upstream http_backend {
    server 127.0.0.1:8080;

    keepalive 16;
}

server {
    ...

    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
```
Alternatively, HTTP/1.0 persistent connections can be used by passing the “Connection: Keep-Alive” header field to an upstream server, though this method is not recommended.
For FastCGI servers, it is required to set fastcgi_keep_conn for keepalive connections to work:
```
upstream fastcgi_backend {
    server 127.0.0.1:9000;

    keepalive 8;
}

server {
    ...

    location /fastcgi/ {
        fastcgi_pass fastcgi_backend;
        fastcgi_keep_conn on;
        ...
    }
}
```
SCGI and uwsgi protocols do not have a notion of keepalive connections.
Syntax: | keepalive_requests number; |
---|---|
Default: | keepalive_requests 100; |
Context: | upstream |
This directive appeared in version 1.15.3.
Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed.
Closing connections periodically is necessary to free per-connection memory allocations. Therefore, setting the maximum number of requests too high could result in excessive memory usage and is not recommended.
Syntax: | keepalive_timeout timeout; |
---|---|
Default: | keepalive_timeout 60s; |
Context: | upstream |
This directive appeared in version 1.15.3.
Sets a timeout during which an idle keepalive connection to an upstream server will stay open.
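As an illustration, a sketch combining the three keepalive directives in one group (the values are arbitrary, not recommendations):
```
upstream http_backend {
    server 127.0.0.1:8080;

    keepalive 16;            # up to 16 idle connections cached per worker
    keepalive_requests 1000; # recycle a connection after 1000 requests
    keepalive_timeout 30s;   # close idle connections after 30 seconds
}
```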
Syntax: | ntlm; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in version 1.9.2.
Allows proxying requests with NTLM Authentication. The upstream connection is bound to the client connection once the client sends a request with the “Authorization” header field value starting with “Negotiate” or “NTLM”. Further client requests will be proxied through the same upstream connection, keeping the authentication context.
In order for NTLM authentication to work, it is necessary to enable keepalive connections to upstream servers. The proxy_http_version directive should be set to “1.1” and the “Connection” header field should be cleared:
```
upstream http_backend {
    server 127.0.0.1:8080;

    ntlm;
}

server {
    ...

    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
```
When using load balancer methods other than the default round-robin method, it is necessary to activate them before the ntlm directive.
This directive is available as part of our commercial subscription.
Syntax: | least_conn; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in versions 1.3.1 and 1.2.2.
Specifies that a group should use a load balancing method where a request is passed to the server with the least number of active connections, taking into account weights of servers. If there are several such servers, they are tried in turn using a weighted round-robin balancing method.
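For example, a minimal sketch (hostnames are placeholders):
```
upstream backend {
    least_conn;

    server backend1.example.com;
    server backend2.example.com;
}
```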
Syntax: | least_time header | last_byte [inflight]; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in version 1.7.10.
Specifies that a group should use a load balancing method where a request is passed to the server with the least average response time and least number of active connections, taking into account weights of servers. If there are several such servers, they are tried in turn using a weighted round-robin balancing method.
If the header parameter is specified, time to receive the response header is used. If the last_byte parameter is specified, time to receive the full response is used. If the inflight parameter is specified (1.11.6), incomplete requests are also taken into account.
Prior to version 1.11.6, incomplete requests were taken into account by default.
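For illustration, a sketch that balances on the time to receive the full response and also counts incomplete requests (hostnames are placeholders):
```
upstream backend {
    least_time last_byte inflight;

    server backend1.example.com;
    server backend2.example.com;
}
```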
This directive is available as part of our commercial subscription.
Syntax: | queue number [timeout=time]; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in version 1.5.12.
If an upstream server cannot be selected immediately while processing a request, the request will be placed into the queue. The directive specifies the maximum number of requests that can be in the queue at the same time. If the queue is filled up, or the server to pass the request to cannot be selected within the time period specified in the timeout parameter, the 502 (Bad Gateway) error will be returned to the client.
The default value of the timeout parameter is 60 seconds.
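For illustration, a sketch that queues up to 100 requests for at most 30 seconds once both servers reach their connection limits (the values are arbitrary):
```
upstream backend {
    server backend1.example.com max_conns=256;
    server backend2.example.com max_conns=256;

    queue 100 timeout=30s;
}
```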
When using load balancer methods other than the default round-robin method, it is necessary to activate them before the queue directive.
This directive is available as part of our commercial subscription.
Syntax: | random [two [method]]; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in version 1.15.1.
Specifies that a group should use a load balancing method where a request is passed to a randomly selected server, taking into account weights of servers.
The optional two parameter instructs nginx to randomly select two servers and then choose a server using the specified method. The default method is least_conn, which passes a request to a server with the least number of active connections.
The least_time method passes a request to a server with the least average response time and least number of active connections. If least_time=header is specified, the time to receive the response header is used. If least_time=last_byte is specified, the time to receive the full response is used.
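For example, a sketch of the “power of two choices” variant (hostnames are placeholders):
```
upstream backend {
    random two least_time=last_byte;

    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
```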
The least_time method is available as a part of our commercial subscription.
Syntax: | resolver address ... [valid=time] [ipv6=on|off] [status_zone=zone]; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in version 1.17.5.
Configures name servers used to resolve names of upstream servers into addresses, for example:
```
resolver 127.0.0.1 [::1]:5353;
```
The address can be specified as a domain name or IP address, with an optional port. If a port is not specified, the port 53 is used. Name servers are queried in a round-robin fashion.
By default, nginx will look up both IPv4 and IPv6 addresses while resolving. If looking up of IPv6 addresses is not desired, the ipv6=off parameter can be specified.
By default, nginx caches answers using the TTL value of a response. An optional valid parameter allows overriding it:
```
resolver 127.0.0.1 [::1]:5353 valid=30s;
```
To prevent DNS spoofing, it is recommended to configure DNS servers in a properly secured trusted local network.
The optional status_zone parameter enables collection of DNS server statistics of requests and responses in the specified zone.
This directive is available as part of our commercial subscription.
Syntax: | resolver_timeout time; |
---|---|
Default: | resolver_timeout 30s; |
Context: | upstream |
This directive appeared in version 1.17.5.
Sets a timeout for name resolution, for example:
```
resolver_timeout 5s;
```
This directive is available as part of our commercial subscription.
Syntax: | sticky cookie name [expires=time] [domain=domain] [httponly] [secure] [path=path]; sticky route $variable ...; sticky learn create=$variable lookup=$variable zone=name:size [timeout=time] [header] [sync]; |
---|---|
Default: | — |
Context: | upstream |
This directive appeared in version 1.5.7.
Enables session affinity, which causes requests from the same client to be passed to the same server in a group of servers. Three methods are available:
When the cookie method is used, information about the designated server is passed in an HTTP cookie generated by nginx:
```
upstream backend {
    server backend1.example.com;
    server backend2.example.com;

    sticky cookie srv_id expires=1h domain=.example.com path=/;
}
```
A request that comes from a client not yet bound to a particular server is passed to the server selected by the configured balancing method. Further requests with this cookie will be passed to the designated server. If the designated server cannot process a request, the new server is selected as if the client has not been bound yet.
The first parameter sets the name of the cookie to be set or inspected. The cookie value is a hexadecimal representation of the MD5 hash of the IP address and port, or of the UNIX-domain socket path. However, if the “route” parameter of the server directive is specified, the cookie value will be the value of the “route” parameter:
```
upstream backend {
    server backend1.example.com route=a;
    server backend2.example.com route=b;

    sticky cookie srv_id expires=1h domain=.example.com path=/;
}
```
In this case, the value of the “srv_id” cookie will be either a or b.
Additional parameters may be as follows:
expires=time
    sets the time for which a browser should keep the cookie. The special value max will cause the cookie to expire on “31 Dec 2037 23:55:55 GMT”. If the parameter is not specified, it will cause the cookie to expire at the end of a browser session.
domain=domain
    defines the domain for which the cookie is set. Parameter value can contain variables (1.11.5).
httponly
    adds the HttpOnly attribute to the cookie (1.7.11).
secure
    adds the Secure attribute to the cookie (1.7.11).
path=path
    defines the path for which the cookie is set.
If any parameters are omitted, the corresponding cookie fields are not set.
When the route method is used, the proxied server assigns the client a route on receipt of the first request. All subsequent requests from this client will carry routing information in a cookie or URI. This information is compared with the “route” parameter of the server directive to identify the server to which the request should be proxied. If the “route” parameter is not specified, the route name will be a hexadecimal representation of the MD5 hash of the IP address and port, or of the UNIX-domain socket path. If the designated server cannot process a request, the new server is selected by the configured balancing method as if there is no routing information in the request.
The parameters of the route method specify variables that may contain routing information. The first non-empty variable is used to find the matching server.
Example:
```
map $cookie_jsessionid $route_cookie {
    ~.+\.(?P<route>\w+)$ $route;
}

map $request_uri $route_uri {
    ~jsessionid=.+\.(?P<route>\w+)$ $route;
}

upstream backend {
    server backend1.example.com route=a;
    server backend2.example.com route=b;

    sticky route $route_cookie $route_uri;
}
```
Here, the route is taken from the “JSESSIONID” cookie if present in a request. Otherwise, the route from the URI is used.
When the learn method (1.7.1) is used, nginx analyzes upstream server responses and learns server-initiated sessions usually passed in an HTTP cookie.
```
upstream backend {
    server backend1.example.com:8080;
    server backend2.example.com:8081;

    sticky learn
           create=$upstream_cookie_examplecookie
           lookup=$cookie_examplecookie
           zone=client_sessions:1m;
}
```
In the example, the upstream server creates a session by setting the cookie “EXAMPLECOOKIE” in the response. Further requests with this cookie will be passed to the same server. If the server cannot process the request, the new server is selected as if the client has not been bound yet.
The parameters create and lookup specify variables that indicate how new sessions are created and existing sessions are searched, respectively. Both parameters may be specified more than once, in which case the first non-empty variable is used.
Sessions are stored in a shared memory zone, whose name and size are configured by the zone parameter. One megabyte zone can store about 4000 sessions on the 64-bit platform. The sessions that are not accessed during the time specified by the timeout parameter get removed from the zone. By default, timeout is set to 10 minutes.
The header parameter (1.13.1) allows creating a session right after receiving response headers from the upstream server.
The sync parameter (1.13.8) enables synchronization of the shared memory zone.
This directive is available as part of our commercial subscription.
Syntax: | sticky_cookie_insert name [expires=time] [domain=domain] [path=path]; |
---|---|
Default: | — |
Context: | upstream |
This directive is obsolete since version 1.5.7. An equivalent sticky directive with a new syntax should be used instead:
```
sticky cookie name [expires=time] [domain=domain] [path=path];
```
The ngx_http_upstream_module module supports the following embedded variables:
$upstream_addr
    keeps the IP address and port, or the path to the UNIX-domain socket of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas, e.g. “192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock”. If an internal redirect from one server group to another happens, initiated by “X-Accel-Redirect” or error_page, then the server addresses from different groups are separated by colons, e.g. “192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock : 192.168.10.1:80, 192.168.10.2:80”. If a server cannot be selected, the variable keeps the name of the server group.
$upstream_bytes_received
    number of bytes received from an upstream server (1.11.4). Values from several connections are separated by commas and colons like addresses in the $upstream_addr variable.
$upstream_bytes_sent
    number of bytes sent to an upstream server (1.15.8). Values from several connections are separated by commas and colons like addresses in the $upstream_addr variable.
$upstream_cache_status
    keeps the status of accessing a response cache (0.8.3). The status can be either “MISS”, “BYPASS”, “EXPIRED”, “STALE”, “UPDATING”, “REVALIDATED”, or “HIT”.
$upstream_connect_time
    keeps time spent on establishing a connection with the upstream server (1.9.1); the time is kept in seconds with millisecond resolution. In case of SSL, includes time spent on handshake. Times of several connections are separated by commas and colons like addresses in the $upstream_addr variable.
$upstream_cookie_name
    cookie with the specified name sent by the upstream server in the “Set-Cookie” response header field (1.7.1). Only the cookies from the response of the last server are saved.
$upstream_header_time
    keeps time spent on receiving the response header from the upstream server (1.7.10); the time is kept in seconds with millisecond resolution. Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable.
$upstream_http_name
    keeps server response header fields. For example, the “Server” response header field is available through the $upstream_http_server variable. The rules of converting header field names to variable names are the same as for the variables that start with the “$http_” prefix. Only the header fields from the response of the last server are saved.
$upstream_queue_time
    keeps time the request spent in the upstream queue (1.13.9); the time is kept in seconds with millisecond resolution. Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable.
$upstream_response_length
    keeps the length of the response obtained from the upstream server (0.7.27); the length is kept in bytes. Lengths of several responses are separated by commas and colons like addresses in the $upstream_addr variable.
$upstream_response_time
    keeps time spent on receiving the response from the upstream server; the time is kept in seconds with millisecond resolution. Times of several responses are separated by commas and colons like addresses in the $upstream_addr variable.
$upstream_status
    keeps status code of the response obtained from the upstream server. Status codes of several responses are separated by commas and colons like addresses in the $upstream_addr variable. If a server cannot be selected, the variable keeps the 502 (Bad Gateway) status code.
$upstream_trailer_name
    keeps fields from the end of the response obtained from the upstream server (1.13.10).
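As an illustration (not part of the original reference), these variables are commonly used in access logs; a minimal sketch assuming the standard log module:
```
http {
    # record which upstream served each request, its status, and timings
    log_format upstream_log '$remote_addr > $upstream_addr '
                            'status=$upstream_status '
                            'connect=$upstream_connect_time '
                            'total=$upstream_response_time';

    server {
        ...
        access_log /var/log/nginx/upstream.log upstream_log;
    }
}
```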