
In this post we’ll talk about using NGINX and NGINX Plus with Node.js and Socket.IO. Our post about building real‑time web applications with WebSocket and NGINX has been quite popular, so in this post we’ll continue with documentation and best practices using Socket.IO.

Why Use NGINX with Node.js and Socket.IO?

Socket.IO is a WebSocket‑based API that has become quite popular with the rise of Node.js applications. It is well known because it makes building real‑time apps, such as online games or chat, simple. NGINX 1.3.13 and later and all NGINX Plus releases support proxying of WebSocket connections, so you can use them with Socket.IO. The WebSocket protocol provides full‑duplex, or bidirectional, communication over a single TCP connection.

Applications running in production usually need to run on port 80 (HTTP), port 443 (HTTPS), or both. This can be a challenge if several components of your application interact with the user, or if a web server on port 80 is already delivering other assets. That makes it necessary to proxy to the Socket.IO server, and NGINX is the best way to do that. And whether you have one instance of your backend application or hundreds, NGINX can load balance across them all.

Socket.IO Configuration

To install Node.js, download the appropriate distribution (or install with a package manager). Run the npm install socket.io command to install Socket.IO.

For this example, we assume that the Socket.IO server for your real‑time app is running on port 5000. The following is a template for a server.js Node application file; it's a basic program that starts the Socket.IO server on port 5000 and handles incoming client connections and events.

// server.js: a basic Socket.IO server listening on port 5000
var io = require('socket.io')(5000);

io.sockets.on('connection', function (socket) {
  // Store the nickname on the socket, then tell the client it can start sending
  socket.on('set nickname', function (name) {
    socket.nickname = name;
    socket.emit('ready');
  });

  // Log each incoming chat message along with the sender's nickname
  socket.on('msg', function () {
    console.log('Chat message by', socket.nickname);
  });
});

Add JavaScript code like the following to the file that is delivered to your client, for example index.html. It loads the Socket.IO client library and requests a connection to your application, which creates a WebSocket between your user's browser and the server.

<script src="/socket.io/socket.io.js"></script>
<script>
     var socket = io(); // your initialization code here.
</script>
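
The server.js example above listens for set nickname and msg events and emits ready, so a minimal client sketch that exercises those events might look like the following (the nickname and message values here are placeholders for illustration):

<script src="/socket.io/socket.io.js"></script>
<script>
    var socket = io();

    // Register a nickname; the server replies with 'ready' once it is stored
    socket.emit('set nickname', 'guest');

    // After the server confirms, start sending chat messages
    socket.on('ready', function () {
        socket.emit('msg', 'Hello from the browser');
    });
</script>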

NGINX Configuration

Upstream Declaration

NGINX and NGINX Plus can load balance and distribute user sessions to multiple nodes if your application has several instances. In the http context in your NGINX or NGINX Plus configuration, include an upstream block to define the nodes in an upstream group.

As shown in the following example, you can include the weight parameter on a server directive to set the proportion of traffic directed to it. Here srv1.app.com receives five times more sessions than the other servers. NGINX Plus extends the reverse proxy capabilities of NGINX with enhanced load balancing methods and by adding session persistence, health checks, extended status reports, and on‑the‑fly reconfiguration of load‑balanced server groups.

# in the http{} configuration block
upstream socket_nodes {
    ip_hash;
    server srv1.app.com:5000 weight=5;
    server srv2.app.com:5000;
    server srv3.app.com:5000;
    server srv4.app.com:5000;
}
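
As a sketch of the NGINX Plus features mentioned above, the same upstream group could add cookie‑based session persistence with the sticky directive (NGINX Plus only); the cookie name and expiry below are illustrative values, and because sticky takes over the affinity role of ip_hash, that directive is omitted here:

# NGINX Plus only -- an illustrative variant of the upstream group above
upstream socket_nodes {
    server srv1.app.com:5000 weight=5;
    server srv2.app.com:5000;
    server srv3.app.com:5000;
    server srv4.app.com:5000;
    sticky cookie srv_id expires=1h;   # cookie-based session persistence
}

Active health checks are enabled in a similar way, by adding the health_check directive (also NGINX Plus only) inside the location block that contains the proxy_pass directive in the virtual server below.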

Virtual Host Configuration

Now that the upstream group of servers is declared, a virtual server needs to be configured to direct traffic to it. At a minimum, include the proxy_pass directive and name the upstream group. Because the WebSocket protocol uses the Upgrade mechanism introduced in HTTP/1.1, we include the proxy_http_version directive and explicitly pass the Upgrade and Connection headers, which NGINX does not forward to the proxied server by default.

server {
    server_name app.domain.com;
    location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://socket_nodes;
    }
}
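
With this configuration the Connection header is set to upgrade for every proxied request, even plain HTTP ones. A common refinement, described in the NGINX WebSocket proxying documentation, is to set the header conditionally with a map block:

# in the http{} block, alongside the upstream group
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

The location block then uses proxy_set_header Connection $connection_upgrade; instead of the hard‑coded "upgrade" value, so that requests without an Upgrade header are proxied with Connection: close.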

What About Static Files?

To deliver static assets, you can have NGINX proxy requests to an upstream Node.js instance, but in most cases it’s more efficient to have NGINX serve them directly.

In combination with the server_name directive in the server block above, the following location block tells NGINX to respond to client requests for content in http://app.domain.com/assets/ by serving it from the local /path/to/assets directory. You can further optimize static file handling or set cache expiration settings that meet your needs.

location /assets {
    alias /path/to/assets;
    access_log off;
    expires max;
}
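
As one example of the further optimizations mentioned above, compression and file‑descriptor caching can be layered onto the same location; the sizes and timeouts below are illustrative starting points to tune for your own workload:

location /assets {
    alias /path/to/assets;
    access_log off;
    expires max;

    gzip on;                                 # compress text-based assets on the fly
    gzip_types text/css application/javascript image/svg+xml;
    open_file_cache max=1000 inactive=30s;   # cache open file descriptors and metadata
    open_file_cache_valid 60s;               # revalidate cached entries every 60 seconds
}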

Troubleshooting

If you receive the following error, you are probably running a version of NGINX prior to 1.3.13, or your proxy configuration is missing the Upgrade and Connection headers shown above. Use of WebSocket is supported in NGINX 1.3.13 and later.

WebSocket connection to '...' failed: Error during WebSocket handshake: 
'Connection' header value is not 'Upgrade': keep-alive socket.io.js:2371

Further Reading

To try NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.


About the Author

Patrick Nommensen

Demand Generation Manager

About F5 NGINX

F5, Inc. is the commercial company behind NGINX, the widely popular open source software. We provide a complete suite of technologies for developing and delivering modern applications. Our combined solutions bridge the gap between NetOps and DevOps, delivering multi-cloud application services from code to end user. Visit nginx-cn.net to learn more.