Re: [PATCH] Dynamic rate limiting for limit_req module

Maxim Dounin
January 01, 2023 03:14PM
Hello!

On Fri, Dec 30, 2022 at 10:23:12PM +0000, J Carter wrote:

> Please find below a patch to enable dynamic rate limiting for
> limit_req module.

Thanks for the patch, and Happy New Year.

The fundamental problem with dynamic rates in limit_req, which is
a leaky bucket limiter, is that the rate is a property which
applies to multiple requests as accumulated in the bucket (and
this is basically why it is configured in the limit_req_zone, and
not in limit_req), while the dynamically calculated rate is a
property of the last request. This can easily result in
counter-intuitive behaviour.

For example, consider 5 requests arriving at 1-second intervals,
assuming burst=2 and a rate of either 1r/m or 100r/s depending on
some request properties:

- 1st request, rate 1r/m, request is accepted
- 2nd request, rate 1r/m, request is accepted
- 3rd request, rate 1r/m, request is rejected, since there is no
room in the bucket
- 4th request, rate 100r/s, request is accepted
- 5th request, rate 1r/m, request is accepted (unexpectedly)

Note that the 5th request is accepted, while it is mostly
equivalent to the 3rd request, and one could expect it to be
rejected. But it happens to be accepted because the 4th
request "cleared" the bucket.

Could you please clarify how this problem is addressed in your
patch?

Note well that the same limits can now be configured separately,
in distinct shared memory zones. This makes it possible to limit
requests consistently, without any unexpected deviations from the
configured behaviour when requests with different rates interfere.

> /* ----------------------------EXAMPLE---------------------------------*/
>
> geo $traffic_tier {
> default free;
> 127.0.1.0/24 basic;
> 127.0.2.0/24 premium;
> }
>
> map $traffic_tier $rate {
> free 1r/m;
> basic 2r/m;
> premium 1r/s;
> }
>
> limit_req_zone $binary_remote_addr zone=one:10m rate=$rate;
>
> server {
> limit_req zone=one;
> listen 80;
> server_name localhost;
> return 200;
> }
>
> curl --interface 127.0.X.X localhost

Just in case, the very same behaviour can be implemented with
multiple limit_req_zone directives: requests with an empty key
value are not accounted, so each zone only applies to its own
tier. Something like this:

map $traffic_tier $limit_free {
    free     $binary_remote_addr;
}

map $traffic_tier $limit_basic {
    basic    $binary_remote_addr;
}

map $traffic_tier $limit_premium {
    premium  $binary_remote_addr;
}

limit_req_zone $limit_free zone=free:10m rate=1r/m;
limit_req_zone $limit_basic zone=basic:10m rate=2r/m;
limit_req_zone $limit_premium zone=premium:10m rate=1r/s;

limit_req zone=free;
limit_req zone=basic;
limit_req zone=premium;

From the example it is not very clear how the suggested change is
beneficial.

See below for some comments about the code.

[...]

> # HG changeset patch
> # User jordanc.carter@outlook.com
> # Date 1672437935 0
> # Fri Dec 30 22:05:35 2022 +0000
> # Branch dynamic-rate-limiting
> # Node ID b2bd50efa81e5aeeb9b8f84ee0af34463add07fa
> # Parent 07b0bee87f32be91a33210bc06973e07c4c1dac9
> Changed 'rate=' to complex value and added limits to the rate value to prevent integer overflow/underflow
>
> diff -r 07b0bee87f32 -r b2bd50efa81e src/http/modules/ngx_http_limit_req_module.c
> --- a/src/http/modules/ngx_http_limit_req_module.c Wed Dec 21 14:53:27 2022 +0300
> +++ b/src/http/modules/ngx_http_limit_req_module.c Fri Dec 30 22:05:35 2022 +0000
> @@ -26,6 +26,7 @@
> /* integer value, 1 corresponds to 0.001 r/s */
> ngx_uint_t excess;
> ngx_uint_t count;
> + ngx_uint_t rate;
> u_char data[1];
> } ngx_http_limit_req_node_t;
>
> @@ -41,7 +42,7 @@
> ngx_http_limit_req_shctx_t *sh;
> ngx_slab_pool_t *shpool;
> /* integer value, 1 corresponds to 0.001 r/s */
> - ngx_uint_t rate;
> + ngx_http_complex_value_t rate;
> ngx_http_complex_value_t key;
> ngx_http_limit_req_node_t *node;
> } ngx_http_limit_req_ctx_t;
> @@ -66,9 +67,9 @@
>
> static void ngx_http_limit_req_delay(ngx_http_request_t *r);
> static ngx_int_t ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit,
> - ngx_uint_t hash, ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account);
> + ngx_uint_t hash, ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account, ngx_uint_t rate);
> static ngx_msec_t ngx_http_limit_req_account(ngx_http_limit_req_limit_t *limits,
> - ngx_uint_t n, ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit);
> + ngx_uint_t n, ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit, ngx_uint_t rate);
> static void ngx_http_limit_req_unlock(ngx_http_limit_req_limit_t *limits,
> ngx_uint_t n);
> static void ngx_http_limit_req_expire(ngx_http_limit_req_ctx_t *ctx,
> @@ -195,10 +196,13 @@
> ngx_http_limit_req_handler(ngx_http_request_t *r)
> {
> uint32_t hash;
> - ngx_str_t key;
> + ngx_str_t key, rate_s;
> ngx_int_t rc;
> ngx_uint_t n, excess;
> + ngx_uint_t scale;
> + ngx_uint_t rate;
> ngx_msec_t delay;
> + u_char *p;
> ngx_http_limit_req_ctx_t *ctx;
> ngx_http_limit_req_conf_t *lrcf;
> ngx_http_limit_req_limit_t *limit, *limits;
> @@ -243,10 +247,34 @@
>
> hash = ngx_crc32_short(key.data, key.len);
>
> + if (ngx_http_complex_value(r, &ctx->rate, &rate_s) != NGX_OK) {
> + ngx_http_limit_req_unlock(limits, n);
> + return NGX_HTTP_INTERNAL_SERVER_ERROR;
> + }
> +
> + scale = 1;
> + rate = NGX_ERROR;
> +
> + if (rate_s.len > 8) {
> +
> + rate = (ngx_uint_t) ngx_atoi(rate_s.data + 5, rate_s.len - 8);
> +
> + p = rate_s.data + rate_s.len - 3;
> + if (ngx_strncmp(p, "r/m", 3) == 0) {
> + scale = 60;
> + } else if (ngx_strncmp(p, "r/s", 3) != 0){
> + rate = NGX_ERROR;
> + }
> + }

Note that this approach implies parsing the rate on each request.
This is something we usually try to avoid when the configured
value is a static string known in advance; see these commits for
an example:

changeset: 7503:b82162b8496a
user: Ruslan Ermilov <ru@nginx.com>
date: Wed Apr 24 16:38:51 2019 +0300
summary: Added ngx_http_set_complex_value_size_slot().

changeset: 7504:c19ca381b2e6
user: Ruslan Ermilov <ru@nginx.com>
date: Wed Apr 24 16:38:54 2019 +0300
summary: Variables support in limit_rate and limit_rate_after (ticket #293).

changeset: 7505:16a1adadf437
user: Ruslan Ermilov <ru@nginx.com>
date: Wed Apr 24 16:38:56 2019 +0300
summary: Variables support in proxy_upload_rate and proxy_download_rate.

A more self-contained solution can be seen in the image filter
(src/http/modules/ngx_http_image_filter.c), which uses values
parsed during configuration parsing if there are no variables, and
complex values if there are variables (e.g., "imcf->angle" and
"imcf->acv").

> +
> + rate = (rate != 0 && rate < NGX_MAX_INT_T_VALUE / 60000000 - 1001) ?
> + rate * 1000 / scale :
> + NGX_MAX_INT_T_VALUE / 60000000 - 1001;
> +
> ngx_shmtx_lock(&ctx->shpool->mutex);
>
> rc = ngx_http_limit_req_lookup(limit, hash, &key, &excess,
> - (n == lrcf->limits.nelts - 1));
> + (n == lrcf->limits.nelts - 1), rate);
>
> ngx_shmtx_unlock(&ctx->shpool->mutex);
>
> @@ -291,7 +319,7 @@
> excess = 0;
> }
>
> - delay = ngx_http_limit_req_account(limits, n, &excess, &limit);
> + delay = ngx_http_limit_req_account(limits, n, &excess, &limit, rate);
>
> if (!delay) {
> r->main->limit_req_status = NGX_HTTP_LIMIT_REQ_PASSED;
> @@ -403,7 +431,7 @@
>
> static ngx_int_t
> ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit, ngx_uint_t hash,
> - ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account)
> + ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account, ngx_uint_t rate)
> {
> size_t size;
> ngx_int_t rc, excess;
> @@ -412,7 +440,6 @@
> ngx_rbtree_node_t *node, *sentinel;
> ngx_http_limit_req_ctx_t *ctx;
> ngx_http_limit_req_node_t *lr;
> -
> now = ngx_current_msec;
>
> ctx = limit->shm_zone->data;

Note that this is a style change, and it is wrong.

> @@ -446,12 +473,14 @@
>
> if (ms < -60000) {
> ms = 1;
> -

The same here.

> } else if (ms < 0) {
> ms = 0;
> + } else if (ms > 60000) {
> + ms = 60000;
> }

This seems to be a separate change, unrelated to what the patch
does. If submitted at all, it should be a separate patch; see
http://nginx.org/en/docs/contributing_changes.html.

Note though that this change looks wrong to me, as it will break
request accounting over large time intervals with a large burst,
such as rate=1r/m burst=100.
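
For example (a standalone sketch of the arithmetic, not nginx
code, with illustrative values in the internal 0.001 r/s units):
with rate=1r/m and burst=100, a client that went silent for an
hour should have most of its excess drained, but with ms clamped
to 60000 it is credited with at most one minute per lookup:

#include <stdio.h>

static long next_excess(long excess, long rate, long ms, int clamp)
{
    if (clamp && ms > 60000) {
        ms = 60000;
    }

    excess = excess - rate * ms / 1000 + 1000;
    return excess < 0 ? 0 : excess;
}

int main(void)
{
    long rate = 16;            /* 1r/m */
    long burst = 100000;       /* burst=100 */
    long full = 100000;        /* bucket was full when the client went idle */
    long idle_ms = 3600000;    /* the client returns after an hour */

    long e1 = next_excess(full, rate, idle_ms, 0);   /* 43400: accepted */
    long e2 = next_excess(full, rate, idle_ms, 1);   /* 100040: rejected */

    printf("no clamp:  excess %ld -> %s\n", e1,
           e1 > burst ? "rejected" : "accepted");
    printf("ms<=60000: excess %ld -> %s\n", e2,
           e2 > burst ? "rejected" : "accepted");

    return 0;
}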

Note well that it fails to use an empty line before "} else ",
which is a style bug.

>
> - excess = lr->excess - ctx->rate * ms / 1000 + 1000;
> + lr->rate = rate;
> + excess = lr->excess - lr->rate * ms / 1000 + 1000;
>
> if (excess < 0) {
> excess = 0;
> @@ -510,6 +539,7 @@
>
> lr->len = (u_short) key->len;
> lr->excess = 0;
> + lr->rate = rate;

The "lr->rate", which is saved in the shared memory, seems to be
never used except during processing of the same request. Any
reasons to keep it in the shared memory, reducing the number of
states which can be stored there?

>
> ngx_memcpy(lr->data, key->data, key->len);
>
> @@ -534,7 +564,7 @@
>
> static ngx_msec_t
> ngx_http_limit_req_account(ngx_http_limit_req_limit_t *limits, ngx_uint_t n,
> - ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit)
> + ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit, ngx_uint_t rate)
> {
> ngx_int_t excess;
> ngx_msec_t now, delay, max_delay;
> @@ -543,13 +573,13 @@
> ngx_http_limit_req_node_t *lr;
>
> excess = *ep;
> + max_delay = 0;
>

Note that this is a style change, since max_delay is set in all
possible code paths and there is no reason to additionally
initialize it.

> if ((ngx_uint_t) excess <= (*limit)->delay) {
> max_delay = 0;
>
> } else {
> - ctx = (*limit)->shm_zone->data;
> - max_delay = (excess - (*limit)->delay) * 1000 / ctx->rate;
> + max_delay = (excess - (*limit)->delay) * 1000 / rate;
> }
>
> while (n--) {
> @@ -570,9 +600,11 @@
>
> } else if (ms < 0) {
> ms = 0;
> + } else if (ms > 60000) {
> + ms = 60000;
> }

See above.

[...]

--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel