
When you are testing changes to an application, there are some factors you can measure only in a production environment rather than a development test bed. Examples include the effect of UI changes on user behavior and the impact on overall performance. A common testing method is A/B testing – also known as split testing – in which a (usually small) proportion of users is directed to the new version of an application while most users continue to use the current version.

In this blog post we’ll explore why it is important to perform A/B testing when deploying new versions of your web application and how to use NGINX and NGINX Plus to control which version of an application users see. Configuration examples illustrate how you can use NGINX and NGINX Plus directives, parameters, and variables to achieve accurate and measurable A/B testing.

Why Do A/B Testing?

As we mentioned, A/B testing enables you to measure the difference in application performance or effectiveness between two versions. Perhaps your development team wants to change the visual arrangement of buttons in the UI or overhaul the entire shopping cart process, but wants to compare the close rate of transactions to make sure the change has the desired business impact. Using A/B testing, you can send a defined percentage of traffic to the new version and the remainder to the old version, and measure the effectiveness of both versions of the application.

Or perhaps your concern is less the effect on user behavior and more related to the performance impact. Let’s say you plan to deploy a huge set of changes to your web application and don’t feel that testing within your quality assurance environment truly captures the possible effect on performance in production. In this case, an A/B deployment allows you to expose the new version to a small, defined percentage of visitors to measure the performance impact of the changes, and gradually increase the percentage until eventually you roll out the changed application to all users.

Using NGINX and NGINX Plus for A/B Testing

NGINX and NGINX Plus provide two methods for controlling where web application traffic is sent. The first method is available in both products, whereas the second is available only in NGINX Plus.

Both methods choose the destination of a request based on the values of one or more NGINX variables that capture characteristics of the client (such as its IP address) or of the request URI (such as a named argument), but the differences between them make them suitable for different A/B‑testing use cases:

  • The split_clients method chooses the destination of a request based on a hash of the variable values extracted from the request. The set of all possible hash values is divided up among the application versions, and you can assign a different proportion of the set to each application. The choice of destination ends up being randomized.
  • The sticky route method provides you much greater control over the destination of each request. The choice of application is based on the variable values themselves (not a hash), so you can set explicitly which application receives requests that have certain variable values. You can also use regular expressions to base the decision on just portions of a variable value, and can preferentially choose one variable over another as the basis for the decision.

Using the split_clients Method

In this method, the split_clients configuration block sets a variable for each request that determines which upstream group the proxy_pass directive sends the request to. In the sample configuration below, the value of the $appversion variable determines where the proxy_pass directive sends a request. The split_clients block uses a hash function to dynamically set the variable’s value to one of two upstream group names, either version_1a or version_1b.

http {
    # ...
    # application version 1a 
    upstream version_1a {
        server 10.0.0.100:3001;
        server 10.0.0.101:3001;
    }

    # application version 1b
    upstream version_1b {
        server 10.0.0.104:6002;
        server 10.0.0.105:6002;
    }

    split_clients "${arg_token}" $appversion {
        95%     version_1a;
        *       version_1b;
    }

    server {
        # ...
        listen 80;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://$appversion;
        }
    }
}

The first parameter to the split_clients directive is the string ("${arg_token}" in our example) that is hashed using a MurmurHash2 function during each request. URI arguments are available to NGINX as variables called $arg_name – in our example the $arg_token variable captures the URI argument called token. You can use any NGINX variable or string of variables as the string to be hashed. For example, you might hash the client’s IP address (the $remote_addr variable), port ($remote_port), or the combination of both. You want to use a variable that is generated before the request is processed by NGINX. Variables that contain information about the client’s initial request are ideal; examples include the client’s IP address/port as already mentioned, the request URI, or even HTTP request headers.
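For instance, a split keyed on the client's address and port rather than on a URI argument might look like this sketch (the upstream names match the sample configuration above):

```nginx
# Sketch: hash the client IP address and port instead of a URI argument.
split_clients "${remote_addr}${remote_port}" $appversion {
    95%     version_1a;
    *       version_1b;
}
```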

The second parameter to the split_clients directive ($appversion in our example) is the variable that gets set dynamically according to the hash of the first parameter. The statements inside the curly braces divide the hash table into “buckets”, each of which contains a percentage of the possible hashes. You can create any number of buckets, and they don’t all have to be the same size. Note that the percentage for the last bucket is always represented by the asterisk (*) rather than a specific number, because the number of hashes might not be evenly divisible into the specified percentages.

In our example, we put 95% of the hashes in a bucket associated with the version_1a upstream group, and the remainder in a second bucket associated with version_1b. The range of possible hash values is from 0 to 4,294,967,295, so the first bucket contains values from 0 to about 4,080,218,930 (95% of the total). The $appversion variable is set to the upstream associated with the bucket containing the hash of the $arg_token variable. As a specific example, the hash value 100,000,000 falls in the first bucket, so $appversion is dynamically set to version_1a.
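You can reproduce that bucket arithmetic outside of NGINX; this shell snippet just recomputes the numbers above (it does not run MurmurHash2 itself):

```shell
# The 32-bit hash space used by split_clients runs from 0 to 4294967295.
max=4294967295

# Upper bound of the first bucket: 95% of the hash space, truncated.
boundary=$(( max * 95 / 100 ))
echo "$boundary"    # prints 4080218930

# A hash value of 100000000 falls inside the first bucket...
hash=100000000
if [ "$hash" -le "$boundary" ]; then
    echo version_1a    # ...so the request goes to version_1a
else
    echo version_1b
fi
```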

Testing the split_clients Configuration

To verify that the split_clients configuration block works as intended, we created a test configuration that divides requests between two upstream groups in the same proportions as above (95% and the remainder). We configured the virtual servers in the groups to return a string indicating which group – version_1a or version_1b – handled the request (you can see the test configuration here). Then we used curl to generate 20 requests, with the value of the URI argument token set randomly by running the cat command on the urandom file. This is purely for demonstration and randomization purposes. As we intended, 1 in 20 requests (5%) was served from version_1b (for brevity we show only 10 of the requests).

# for x in {1..20}; do curl 127.0.0.1?token=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1); done
Token: p3Fsa86HfJDFwZ9ZnYz4QbaZLzcb70Ka		Served from site version_1a. 
Token: a7z7afUz7zQijXwsLGfj6mWyPU8sCRIZ		Served from site version_1a. 
Token: CjIr6W57NtPzChf4qbYMzD1Estif7jOH		Served from site version_1a. 
                 ... output for 10 requests omitted ...
Token: gXq8cbG3jhl1LuYmICPTfDQT855gaO5y		Served from site version_1a. 
Token: VVqGuNN3cRU2slsl5AWOR93aQX8nXOpk		Served from site version_1a. 
Token: z7KnewxTX5Sp6wscT0fgmTDohTsQuCmy		Served from site version_1b!!
Token: fWOgH9WRrR0kLJZcIaYchpLhceaQgPD1		Served from site version_1a. 
Token: mTADMXrVnwnr1cd5JE6QCSkgTwfWUnDk		Served from site version_1a.
Token: w7AzSNmNJtxWZaH6cXe2PWIFqst2o3oP		Served from site version_1a. 
Token: QR7ay0dA39MmVlXtzgOVsj6SBTPi8ECC		Served from site version_1a.

Using the sticky route Method

In some cases you might want to define a static route by making client routing decisions based on all or part of the value of an NGINX variable. You can do this with the sticky route directive, which is available only in NGINX Plus. The directive takes a list of one or more parameters and sets the route to the value of the first nonempty parameter in the list. We can use this feature to preferentially rank which variable from the request controls the choice of destination, and so accommodate more than one traffic‑splitting method in a single configuration.

There are two different approaches to using this method.

  • Using the client‑side approach you can choose the route based on NGINX variables that contain values which are initially sent directly from the client, such as the client’s IP address or browser‑specific HTTP request headers like the client’s User-Agent.
  • With the server‑side or application‑side approach, your application decides which test group a first‑time user is assigned to, and sends it a cookie or a redirect URI that includes a route indicator representing the chosen group. The next time the client sends a request, it presents the cookie or uses the redirect URI; the sticky route directive extracts the route indicator and forwards the request to the appropriate server.
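As a minimal sketch of the application‑side approach, a backend could assign a first‑time user to group a and emit a ROUTE cookie whose value ends in the route indicator. The header format below is illustrative, and the token pipeline is a bounded variant of the urandom command used in the curl tests in this post:

```shell
# Illustrative only: build a ROUTE cookie value ending in the indicator ".a".
# Reads a fixed amount of /dev/urandom and keeps the first 32 alphanumerics.
token=$(head -c 4096 /dev/urandom | tr -dc 'a-zA-Z0-9' | cut -c1-32)

# A first-time visitor assigned to group "a" would receive this header.
echo "Set-Cookie: ROUTE=${token}.a"
```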

We’re using the application‑side approach in our example: the sticky route directive in the upstream group preferentially sets the route to the value specified in the cookie provided by a server (captured in $route_from_cookie). If the client doesn’t have a cookie, the route is set to a value from an argument to the request URI ($route_from_uri). The route value then determines which server in the upstream group gets the request – the first server if the route is a, the second server if the route is b (the two servers correspond to the two versions of our application).

upstream backend {
    zone backend 64k;
    server 10.0.0.200:8089 route=a;
    server 10.0.0.201:8099 route=b;

    sticky route $route_from_cookie $route_from_uri;
}

But in the actual cookie or URI, the a or b is embedded in a much longer character string. To extract just the letter, we configure a map configuration block for each of the cookie and the URI:

map $cookie_route $route_from_cookie {
    ~\.(?P<route>\w+)$ $route;
}

map $arg_route $route_from_uri {
    ~\.(?P<route>\w+)$ $route;
}

In the first map block, the $cookie_route variable represents the value of a cookie named ROUTE. The regular expression on the second line, which uses Perl Compatible Regular Expression (PCRE) syntax, extracts part of the value – in this case, the character string matched by \w+ after the period – into the named capture group route and assigns it to the internal variable with that name. The value is also assigned to the $route_from_cookie variable on the first line, which makes it available for passing to the sticky route directive.

As an example, the first map block extracts the value “a” from this cookie and assigns it to $route_from_cookie:

ROUTE=iDmDe26BdBDS28FuVJlWc1FH4b13x4fn.a

In the second map block, the $arg_route variable represents an argument named route in the request URI. As with the cookie, the regular expression on the second line extracts part of the URI – in this case, the character string matched by \w+ after the period in the route argument. The value is read into the named capture group, assigned to an internal variable, and also assigned to the $route_from_uri variable.

As an example, the second map block extracts the value b from this URI and assigns it to $route_from_uri:

www.example.com/shopping/my-cart?route=iLbLr35AeAET39GvWK2Xd2GI5c24y5go.b
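You can sanity‑check the extraction logic outside of NGINX. This sed expression is an approximate equivalent of the map regex (with \w replaced by an alphanumeric‑plus‑underscore class), applied to the example values above:

```shell
# Extract the route indicator after the last period, as the map blocks do.
extract_route() {
    printf '%s\n' "$1" | sed -E 's/.*\.([[:alnum:]_]+)$/\1/'
}

extract_route 'iDmDe26BdBDS28FuVJlWc1FH4b13x4fn.a'    # prints a
extract_route 'iLbLr35AeAET39GvWK2Xd2GI5c24y5go.b'    # prints b
```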

Here’s the complete sample configuration.

http {
    # ...
    map $cookie_route $route_from_cookie {
        ~\.(?P<route>\w+)$ $route;
    }

    map $arg_route $route_from_uri {
        ~\.(?P<route>\w+)$ $route;
    }

    upstream backend {
        zone backend 64k;
        server 10.0.0.200:8089 route=a;
        server 10.0.0.201:8099 route=b;

        sticky route $route_from_cookie $route_from_uri;
    }

    server {
        listen 80;

        location / {
            # ...
            proxy_pass http://backend;
        }
    }
}

Testing the sticky route Configuration

As with the split_clients method, we created a test configuration, which you can access here. We used curl either to send a cookie named ROUTE or to include a route argument in the URI. The value of the cookie or argument is a random string generated by running the cat command on the urandom file, with .a or .b appended.

First, we test with a cookie that ends in .a:

# curl --cookie "ROUTE=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1).a" 127.0.0.1
Cookie Value: R0TdyJOJvxBkLC3f75Coa29I1pPySOeQ.a
Request URI: /
Results: Site A - Running on port 8089

Then we test with a cookie that ends in .b:

# curl --cookie "ROUTE=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1).b" 127.0.0.1
Cookie Value: JhdJZrScTnPBLhqmzK3podNRcJAIc8ST.b
Request URI: /
Results: Site B - Running on port 8099

Finally we test without a cookie and instead with a route argument in the request URI that ends in .a. The output confirms that when there’s no cookie (the Cookie Value field is empty) NGINX Plus uses the route value derived from the URI.

# curl 127.0.0.1?route=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1).a
Cookie Value:
Request URI: /?route=yNp8pHskvukXK6XqbWefhVUcOBjbJv4v.a
Results: Site A - Running on port 8089

Logging and Analyzing Your Results

The kind of testing we describe here is sufficient to verify that a configuration distributes requests as intended, but interpreting the results of actual A/B testing requires much more detailed logging and analysis of how requests are processed. The right way to do logging and analysis depends on many factors and is beyond the scope of this post, but NGINX and NGINX Plus provide sophisticated, built‑in logging and monitoring of request processing.

You can use the log_format directive to define a custom log format that includes any NGINX variable. The variable values recorded in the NGINX logs are then available for analysis at a later time. For details on custom logging and runtime monitoring, see the NGINX Plus Admin Guide.
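As a sketch, a custom format that records which version served each request might look like this; the format name and field selection are ours, and $appversion is the variable set by split_clients earlier in this post:

```nginx
# Hypothetical log format that records the A/B decision for each request.
log_format abtest '$remote_addr - [$time_local] "$request" '
                  '$status appversion=$appversion';

access_log /var/log/nginx/abtest.log abtest;
```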

Some Last Things to Consider

When designing an experiment or A/B testing plan, make sure that the way you distribute requests between your application versions doesn’t predetermine the results. If you want a completely random experiment, using the split_clients method and hashing a combination of multiple variables provides the best results. For example, generating a unique experimental token based on the combination of the cookie and the user ID from a request provides a more randomized testing pattern than hashing just the client’s browser type and version, because there is a good chance that many users have the same type of browser and version and so will all be directed to the same version of the application.
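As a sketch of that idea, you might hash a session cookie together with a user ID from the request URI; the cookie and argument names here (session, userid) are illustrative:

```nginx
# Sketch: combine a session cookie and a user ID into one hash key.
split_clients "${cookie_session}${arg_userid}" $appversion {
    50%     version_1a;
    *       version_1b;
}
```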

You also need to take into account that many users belong to what is called the mixed group. They access web applications from multiple devices – perhaps both their work and home computers and also a mobile device such as a tablet or smartphone. Such users have multiple client IP addresses, so if you use client IP address as the basis for choosing the application version, they might see both versions of your application, corrupting your experimental results.

Probably the easiest solution is to require users to log in so you can track their session cookie as in our example of the sticky route method. This way you can track them and always send them to the same version they saw the first time they accessed the test. If you cannot do this, sometimes it makes sense to put users in groups that are not likely to change during the course of the testing, for example using geolocation to show one version to users in Los Angeles and another to users in San Francisco.

Conclusion

A/B testing is an effective way to analyze and track changes to your application and monitor application performance by splitting different amounts of traffic between alternate servers. Both NGINX and NGINX Plus provide directives, parameters, and variables that can be used to build a solid framework for A/B testing. They also enable you to log valuable details about each request. Enjoy testing!

Try out NGINX Plus and the sticky route method for yourself – start your free 30-day trial today or contact us to discuss your use cases.


About the Author

Kevin Jones

Sr. Product Manager

Kevin Jones is a sales engineer at NGINX, where he specializes in the integration and implementation of NGINX for various accounts around the world. He has a strong background in infrastructure management, monitoring, and troubleshooting. Previously, Kevin was a lead site reliability engineer on the Production Operations team at YellowPages.com.

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source software. We offer a complete suite of technologies for developing and delivering modern applications. Our combined solutions bridge the gap between NetOps and DevOps, providing multi-cloud application services from code to user. Visit nginx-cn.net to learn more.