
How to Resolve Rate Limited Requests (429 Too Many Requests)

Created by: Jesper Verkade

Modified on: Mon, 22 Jun, 2020 at 1:28 PM


To protect your Hypernode from all kinds of attacks, bots, brute force attempts and script kiddies causing downtime, we've implemented several layers of rate limiting.

Most of these rate limiting methods only apply to bots, but to avoid FPM worker depletion we also implemented a per-IP rate limiting mechanism that prevents a single IP address from exhausting the available FPM workers.

This article explains the differences between the rate limiting methods, shows you how to find out which method was applied and, if needed, how to override them.

Rate Limiting Methods

On Hypernode we currently differentiate between two rate limiting methods, each with its own so-called zone:

  • Rate limiting based on User Agents and requests per second (zone bots)
  • Rate limiting based on requests per IP address (zone zoneperip)

Both methods are implemented using Nginx's built-in rate limiting modules (limit_req for the bots zone and limit_conn for the zoneperip zone).

Determining the Applied Rate Limiting Method

You can easily determine which rate limiting method caused a request to be 429'd, since each time one of the rate limiting methods is hit, a message will be logged in the Nginx error log.

First, look up the request in the access logs, which can be done using the hypernode-parse-nginx-logs (pnl) command:

pnl --today --fields time,status,remote_addr,request --filter status=429

Copy the IP address from the output generated by this command and look up the corresponding log entry in the aforementioned Nginx error log:

grep "1.2.3.4" /var/log/nginx/error.log
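
If you want a quick overview of which IP addresses are being rate limited most often, you can combine pnl with standard shell tools. A minimal sketch, assuming pnl also accepts a single remote_addr value for --fields:

# Count today's 429 responses per IP address, most frequent first
pnl --today --fields remote_addr --filter status=429 | sort | uniq -c | sort -rn | head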

The entries in the Nginx error log look as follows:

A log entry where the rate limit is applied based on User Agent and requests per second (the bots zone):

2016/08/15 18:25:54 [error] 11372#11372: *45252 limiting requests, excess: 0.586 by zone "bots", client: 1.2.3.4, server: , request: "GET /azie/flip.html HTTP/1.1", host: "www.kamelen-online.nl"

A log entry where the rate limit is applied per IP address (based on the zoneperip zone):

2016/08/12 10:23:39 [error] 25118#25118: *24362 limiting connections by zone "zoneperip", client: 1.2.3.4, server: , request: "GET /index.php/admin/abcdef/ HTTP/1.1", host: "www.kamelen-online.nl", referrer: "http://kamelen-online.nl/index.php/admin/abcdef/"

Note: Per IP rate limiting only applies to requests handled by PHP and not to the static content.

Rate Limiting for Bots and Crawlers

Every day, your webshop is visited by many different bots and crawlers. While some, like Google, are important, many only have a negative impact on your site, especially if they don’t follow your robots.txt. To protect your Hypernode against the performance impact of misbehaving bots, it utilizes an advanced rate limiting mechanism. This slows down the hit rate for unimportant bots, leaving more performance for the bots you do care about and, more importantly, your actual visitors.

Rejecting with 429 Too Many Requests

Since our goal is not to block bots but to rate limit them nicely, we have to be careful about how we reject them. The best way to reject them is with the 429 Too Many Requests response. This tells the visiting bot that the site is there, but that the server is currently unavailable. This is a temporary state, so the bot can retry at a later time. It does not negatively influence your ranking in any search engine, as the site is available again when the bot connects at a later time.
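
If you want to see this behaviour in action, you can trigger the bot rate limiter from the command line. A rough sketch, using www.example.com as a placeholder for your own shop and a made-up User Agent that matches the blacklist shown in the next section:

# Send five requests in quick succession with a User Agent containing 'bot';
# once the limit of one request per second is exceeded, the remaining
# requests should return status 429 instead of 200.
for i in 1 2 3 4 5; do
  curl -s -o /dev/null -w "%{http_code}\n" -A "ExampleTestBot" "https://www.example.com/"
done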

How to Configure the Bot Rate Limiter

By default, some bots are exempt from rate limiting, such as Google, Bing and several monitoring systems. These bots never get rate limited, since they usually abide by the robots.txt. However, there are also bots that don't follow the instructions given in a robots.txt or are simply used by abusive crawlers. These bots are rate limited to one request per second; any requests over this limit will return a 429 error. If you want, you can override the system-wide configuration of which bots get rate limited and which do not. To get started, place the following in a config file called /data/web/nginx/http.ratelimit:

map $http_user_agent $limit_bots {
default '';
~*(google|bing|heartbeat|uptimerobot|shoppimon|facebookexternal|monitis.com|Zend_Http_Client|magereport.com|SendCloud/|Adyen|contentkingapp) '';
~*(http|crawler|spider|bot|search|Wget/|Python-urllib|PHPCrawl|bGenius|MauiBot) 'bot';
}

Note: do not remove the heartbeat entry, as this will break the monitoring of your Hypernode.

As you can see, this sorts all visitors into two groups:

  • On the first (whitelist) line you find the keywords that are exempt from rate limiting, such as ‘google’, ‘bing’, ‘heartbeat’ and ‘monitis.com’.
  • On the second (blacklist) line you find the keywords for generic and abusive bots and crawlers, which will always be rate limited, such as ‘crawler’, ‘spider’ and ‘bot’.

The keywords are separated with | characters, since each line is a regular expression.

Whitelisting Additional User Agents

To extend the whitelist, first determine which user agent you wish to add. Use the log files to see which bots get rate limited and which user agent identification they use. Say the bot we want to add has the User Agent SpecialSnowflakeCrawler 3.1.4. This contains the word ‘crawler’, so it matches the second regular expression and is labeled as a bot. Since the whitelist line overrules the blacklist line, the best way to allow this bot is to add its user agent to the whitelist, instead of removing ‘crawler’ from the blacklist:

map $http_user_agent $limit_bots {
default '';
~*(specialsnowflakecrawler|google|bing|heartbeat|uptimerobot|shoppimon|facebookexternal|monitis.com|Zend_Http_Client|magereport.com|SendCloud/|Adyen|contentkingapp) '';
~*(http|crawler|spider|bot|search|Wget/|Python-urllib|PHPCrawl|bGenius|MauiBot) 'bot';
}

Instead of adding the complete User Agent to the regex, it’s often better to limit it to just an identifying keyword, as shown above. The reason is that the string is evaluated as a regular expression, which means extra care needs to be taken when adding anything other than alphanumeric characters.
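
For example, to whitelist a hypothetical monitoring service whose User Agent contains the keyword examplemonitor.io (a made-up name), escape the dot so it matches a literal dot rather than any character:

map $http_user_agent $limit_bots {
default '';
~*(examplemonitor\.io|google|bing|heartbeat|uptimerobot|shoppimon|facebookexternal|monitis.com|Zend_Http_Client|magereport.com|SendCloud/|Adyen|contentkingapp) '';
~*(http|crawler|spider|bot|search|Wget/|Python-urllib|PHPCrawl|bGenius|MauiBot) 'bot';
}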

Known Rate Limited Plugins and Service Providers

There are a couple of plugins and service providers that tend to hit a blacklisted keyword in the http.ratelimit snippet and therefore may need to be excluded individually. Below we have listed them and their User Agents for your convenience, followed by an example of how to whitelist one of them:

  • Adyen - Jakarta Commons-HttpClient/3.0.1
  • Adyen - Apache-HttpClient/4.4.1 (Java/1.8.0_74)
  • Adyen - Adyen HttpClient 1.0
  • MailPlus - Jersey/2.23.1
  • Mollie - Mollie.nl HTTP client/1.0
  • Screaming - Screaming Frog SEO Spider
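
For instance, the Mollie User Agent contains the word ‘http’, so it matches the blacklist line. A possible way to exempt it (a sketch; check your own access logs for the exact User Agent your plugin sends) is to add an identifying keyword to the whitelist line:

map $http_user_agent $limit_bots {
default '';
~*(mollie|google|bing|heartbeat|uptimerobot|shoppimon|facebookexternal|monitis.com|Zend_Http_Client|magereport.com|SendCloud/|Adyen|contentkingapp) '';
~*(http|crawler|spider|bot|search|Wget/|Python-urllib|PHPCrawl|bGenius|MauiBot) 'bot';
}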

Rate Limiting per IP Address

To prevent a single IP address from using all the available FPM workers at the same time, leaving no workers available for other visitors, we implemented a per-IP rate limiting mechanism. This mechanism limits the number of PHP-FPM workers that can be used by one IP address to 20. This way a single IP address cannot deplete all the available FPM workers, leaving other visitors with an error page or a non-responding site.

Exclude IP Addresses from the per IP Rate Limiting

In some cases it might be necessary to exclude specific IP addresses from the per IP rate limiting. If you wish to exclude an IP address you can do so by creating a config file called /data/web/nginx/http.conn_ratelimit with the following content:

geo $conn_limit_map {
default $remote_addr;
1.2.3.4 '';
}

In this example we have excluded the IP address 1.2.3.4 by setting an empty value in the form of ''.

In addition to whitelisting one single IP address, it is also possible to whitelist a whole range of IP addresses. You can do this by using the so-called CIDR notation (e.g. 10.0.0.0/24 to whitelist all IP addresses within the range 10.0.0.0 to 10.0.0.255). In that case you can use the following snippet in /data/web/nginx/http.conn_ratelimit instead:

geo $conn_limit_map {
default $remote_addr;
10.0.0.0/24 '';
}
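
Single IP addresses and CIDR ranges can also be combined in the same geo block. For example (1.2.3.4 and 10.0.0.0/24 are placeholders for your own addresses):

geo $conn_limit_map {
default $remote_addr;
1.2.3.4 '';
10.0.0.0/24 '';
}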

Disable per IP Rate Limiting

When your shop’s performance is very poor, it’s possible that all your FPM workers are busy just serving regular traffic: handling a request takes so much time that all workers are continuously depleted by a small number of visitors. If this situation occurs, we highly recommend optimizing your shop for speed and temporarily upgrading to a bigger node while doing so. Disabling the rate limit will not fix this problem, but only changes the error message from a Too Many Requests error to a timeout error.

For debugging purposes, however, it can be useful to disable the per-IP connection limit for all IPs. With the following snippet in /data/web/nginx/http.conn_ratelimit it is possible to completely disable IP based rate limiting:

geo $conn_limit_map {
default '';
}

Warning: Only use this setting for debugging purposes! It is highly discouraged to use this setting on production Hypernodes, as your shop can easily be taken offline by a single IP using slow and/or flood attacks.

Exclude Specific URLs from the per IP Rate Limiting Mechanism

To exclude specific URLs from being rate limited you can create a file /data/web/nginx/before_redir.ratelimit_exclude with the following content (this could also be done in a http.* file):

set $ratelimit_request_url "$remote_addr";
if ($request_uri ~ ^\/(.*)\/rest\/V1\/example-call\/(.*) ) {
  set $ratelimit_request_url '';
}
 
if ($request_uri ~ ^\/elasticsearch.php$ ) {
  set $ratelimit_request_url '';
}

In the above example the URLs */rest/V1/example-call/* and /elasticsearch.php are the ones being excluded. You can now use the $ratelimit_request_url variable in the file /data/web/nginx/http.conn_ratelimit (see the example below) to exclude these URLs from the rate limiter, while making sure that bots and crawlers will still be rate limited based on their User Agent.

geo $conn_limit_map {
  default $ratelimit_request_url;
}
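
If you also want to keep whitelisting specific IP addresses, both mechanisms can be combined in the same geo block. A sketch, with 1.2.3.4 as a placeholder:

geo $conn_limit_map {
  default $ratelimit_request_url;
  1.2.3.4 '';
}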

How to Serve a Custom Static Error Page to Rate Limited IP Addresses

If you would like to, you can serve a custom error page to IP addresses that are rate limited. Simply create a static HTML file in /data/web/public with any content you wish to show to these rate limited IP addresses. In addition, create an Nginx configuration file called /data/web/nginx/server.custom_429 with the following content:

error_page 429 /ratelimited.html;
location = /ratelimited.html {
root /data/web/public;
internal;
}

This snippet will serve a custom static file called ratelimited.html to IP addresses that are using too many PHP workers.
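
A minimal /data/web/public/ratelimited.html could look like this (the content is entirely up to you):

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Too many requests</title>
  </head>
  <body>
    <!-- Shown to visitors that exceed the per IP rate limit -->
    <h1>Too many requests</h1>
    <p>You are sending more requests than we can currently handle. Please try again in a few minutes.</p>
  </body>
</html>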

Warning: Only use a static (HTML) page, as a PHP script rendering the error page would be rate limited as well, causing an endless loop.
