Hello,
Happy new year to you too, and thank you for the detailed feedback.
> "The fundamental problem with dynamic rates in limit_req, which is
> a leaky bucket limiter, is that the rate is a property which
> applies to multiple requests as accumulated in the bucket (and
> this is basically why it is configured in the limit_req_zone, and
> not in limit_req), while the dynamically calculated rate is a
> property of the last request.
>
> This can easily result in counter-intuitive behaviour.
> For example, consider 5 requests with 1 second interval, assuming
> burst 2, and rate being 1r/m or 100r/s depending on some request
> properties:
>
> - 1st request, rate 1r/m, request is accepted
> - 2nd request, rate 1r/m, request is accepted
> - 3rd request, rate 1r/m, request is rejected, since there is no
> room in the bucket
> - 4th request, rate 100r/s, request is accepted
> - 5th request, rate 1r/m, request is accepted (unexpectedly)
>
> Note that the 5th request is accepted, while it is mostly
> equivalent to the 3rd request, and one could expect it to be
> rejected. But it happens to be accepted because the 4th
> request "cleared" the bucket."
This is true - however, I do wonder whether the behavior (clearing the
bucket) is truly counter-intuitive in that scenario.
Shouldn't a user/key/state that has been assigned 100r/s (even if only
for a single request) be expected to have its outstanding excess
requests recalculated - and in this case absolved - by that rate
increase? You have, after all, assigned it more capacity.
I'm not sure how to avoid this elegantly if it is considered an issue,
since there is no 'standard' rate to reference (the rate could be
interpolated over time/requests, but that might be even more
confusing).
> "Note well that the same limits now can be configured separately,
> in distinct shared memory zones. This makes it possible to limit
> the requests consistently, without any unexpected deviations from
> the configured behaviour if requests with different rates
> interfere.
>
> Just in case, the very same behaviour can be implemented with
> multiple limit_req_zones, with something like this:
>
> map $traffic_tier $limit_free {
> free $binary_remote_addr;
> }
>
> map $traffic_tier $limit_basic {
> basic $binary_remote_addr;
> }
>
> map $traffic_tier $limit_premium {
> premium $binary_remote_addr;
> }
>
> limit_req_zone $limit_free zone=free:10m rate=1r/m;
> limit_req_zone $limit_basic zone=basic:10m rate=2r/m;
> limit_req_zone $limit_premium zone=premium:10m rate=1r/s;
>
> limit_req zone=free;
> limit_req zone=basic;
> limit_req zone=premium;
>
> From the example it is not very clear how the suggested change is
> beneficial."
The issues with this approach (which is similar to the set-up that
prompted this patch) are scalability - one zone is required per unique
rate - and the inability to assign arbitrary rates at runtime.
Perhaps a better example than my first, one that illustrates the full
utility, is the following scenario, in which a user sends a JWT that
determines the rate limit applied:
The limit_req_zone is configured with $jwt_claim_sub as the key and
$jwt_claim_rate as the rate.
A user presents the following JWT with every request; his requests now
have a rate limit of 100r/s.
{
...
"sub": "user01",
"rate": "100r/s"
}
Mid-session, that user then acquires a new JWT to get increased speed -
the limit associated with this state/node is now 1000r/s with the same
key (sub). Note that these rate values do not need to be preset in the
nginx configuration, and it can all be done with a single limit_req_zone.
{
...
"sub": "user01",
"rate": "1000r/s"
}
Something similar could also be done with a keyval zone (i.e. adding
and modifying rate strings at runtime within the keyval zone, and
looking the value up in that zone with a user identifier or group as
the key).
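To make that concrete, a hypothetical configuration (assuming this
patch is applied, and that $jwt_claim_sub / $jwt_claim_rate are made
available by a JWT module - they are not part of stock nginx):

```nginx
# Hypothetical sketch: the key and the rate both come from the
# client's JWT, so no rate values are preset in nginx.conf.

limit_req_zone $jwt_claim_sub zone=per_user:10m rate=$jwt_claim_rate;

server {
    listen       80;
    server_name  localhost;

    limit_req zone=per_user;

    return 200;
}
```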
> "Note that this approach implies parsing rate on each request.
> This is something we usually try to avoid as long as a static
> string is configured in advance, see these commits for an example:
>
> changeset: 7503:b82162b8496a
> user: Ruslan Ermilov <ru@nginx.com>
> date: Wed Apr 24 16:38:51 2019 +0300
> summary: Added ngx_http_set_complex_value_size_slot().
>
> changeset: 7504:c19ca381b2e6
> user: Ruslan Ermilov <ru@nginx.com>
> date: Wed Apr 24 16:38:54 2019 +0300
> summary: Variables support in limit_rate and limit_rate_after (ticket #293).
>
> changeset: 7505:16a1adadf437
> user: Ruslan Ermilov <ru@nginx.com>
> date: Wed Apr 24 16:38:56 2019 +0300
> summary: Variables support in proxy_upload_rate and proxy_download_rate.
>
> A more self-contained solution can be seen in the image filter
> (src/http/modules/ngx_http_image_filter.c), which uses values
> parsed during configuration parsing if there are no variables, and
> complex values if there are variables (e.g., "imcf->angle" and
> "imcf->acv")."
Thanks, that makes perfect sense. I'll update accordingly.
> Note that this is a style change, and it is wrong.
>
> The same here.
>
> Note that this is a style change, since the max_delay is set in
> all possible code paths, and there are no reasons to additionally
> initialize it.
>
> Note that this is a style change, since the max_delay is set in
> all possible code paths, and there are no reasons to additionally
> initialize it.
>
> See above.
Understood, I'll work to fix the style of my changes and I will keep it
more consistent with the guidelines in future.
> This seems to be separate change, irrelevant to what the patch
> does. If at all, it should be submitted as a separate patch, see
> http://nginx.org/en/docs/contributing_changes.html.
>
> Note though that this changes looks wrong to me, as it will break
> request accounting on large time intervals with large burst, such
> as with rate=1r/m burst=100.
>
> Note well that it fails to use an empty line before "} else ",
> which is a style bug.
You are correct, it will break for that scenario. I will rework this
and the related portions into more generic overflow detection and
handling.
I included it in this patch because there is a danger (which is
already present) of rate * ms overflowing: at present, rate is bounded
only by ngx_atoi's limits (and it is multiplied by 1000 after that).
This is more of a worry when arbitrary values (including values taken
from user requests) are used in place of static values from the nginx
configuration.
I will split those changes out into a separate patch, as advised.
> The "lr->rate", which is saved in the shared memory, seems to be
> never used except during processing of the same request. Any
> reasons to keep it in the shared memory, reducing the number of
> states which can be stored there?
A persistent record of the rate last assigned to a given node/state is
needed exclusively for expiring records (in the expire function). I
don't believe this can be avoided, as the rate of the current request
is unrelated to that of the 'to be removed' record.
Thanks again.
On 01/01/2023 17:35, Maxim Dounin wrote:
> Hello!
>
> On Fri, Dec 30, 2022 at 10:23:12PM +0000, J Carter wrote:
>
>> Please find below a patch to enable dynamic rate limiting for
>> limit_req module.
> Thanks for the patch, and Happy New Year.
>
> The fundamental problem with dynamic rates in limit_req, which is
> a leaky bucket limiter, is that the rate is a property which
> applies to multiple requests as accumulated in the bucket (and
> this is basically why it is configured in the limit_req_zone, and
> not in limit_req), while the dynamically calculated rate is a
> property of the last request. This can easily result in
> counter-intuitive behaviour.
>
> For example, consider 5 requests with 1 second interval, assuming
> burst 2, and rate being 1r/m or 100r/s depending on some request
> properties:
>
> - 1st request, rate 1r/m, request is accepted
> - 2nd request, rate 1r/m, request is accepted
> - 3rd request, rate 1r/m, request is rejected, since there is no
> room in the bucket
> - 4th request, rate 100r/s, request is accepted
> - 5th request, rate 1r/m, request is accepted (unexpectedly)
>
> Note that the 5th request is accepted, while it is mostly
> equivalent to the 3rd request, and one could expect it to be
> rejected. But it happens to be accepted because the 4th
> request "cleared" the bucket.
>
> Could you please clarify how this problem is addressed in your
> patch?
>
> Note well that the same limits now can be configured separately,
> in distinct shared memory zones. This makes it possible to limit
> the requests consistently, without any unexpected deviations from
> the configured behaviour if requests with different rates
> interfere.
>
>> /* ----------------------------EXAMPLE---------------------------------*/
>>
>> geo $traffic_tier {
>> default free;
>> 127.0.1.0/24 basic;
>> 127.0.2.0/24 premium;
>> }
>>
>> map $traffic_tier $rate {
>> free 1r/m;
>> basic 2r/m;
>> premium 1r/s;
>> }
>>
>> limit_req_zone $binary_remote_addr zone=one:10m rate=$rate;
>>
>> server {
>> limit_req zone=one;
>> listen 80;
>> server_name localhost;
>> return 200;
>> }
>>
>> curl --interface 127.0.X.X localhost
> Just in case, the very same behaviour can be implemented with
> multiple limit_req_zones, with something like this:
>
> map $traffic_tier $limit_free {
> free $binary_remote_addr;
> }
>
> map $traffic_tier $limit_basic {
> basic $binary_remote_addr;
> }
>
> map $traffic_tier $limit_premium {
> premium $binary_remote_addr;
> }
>
> limit_req_zone $limit_free zone=free:10m rate=1r/m;
> limit_req_zone $limit_basic zone=basic:10m rate=2r/m;
> limit_req_zone $limit_premium zone=premium:10m rate=1r/s;
>
> limit_req zone=free;
> limit_req zone=basic;
> limit_req zone=premium;
>
> From the example it is not very clear how the suggested change is
> beneficial.
>
> See below for some comments about the code.
>
> [...]
>
>> # HG changeset patch
>> # User jordanc.carter@outlook.com
>> # Date 1672437935 0
>> # Fri Dec 30 22:05:35 2022 +0000
>> # Branch dynamic-rate-limiting
>> # Node ID b2bd50efa81e5aeeb9b8f84ee0af34463add07fa
>> # Parent 07b0bee87f32be91a33210bc06973e07c4c1dac9
>> Changed 'rate=' to complex value and added limits to the rate value to prevent integer overflow/underflow
>>
>> diff -r 07b0bee87f32 -r b2bd50efa81e src/http/modules/ngx_http_limit_req_module.c
>> --- a/src/http/modules/ngx_http_limit_req_module.c Wed Dec 21 14:53:27 2022 +0300
>> +++ b/src/http/modules/ngx_http_limit_req_module.c Fri Dec 30 22:05:35 2022 +0000
>> @@ -26,6 +26,7 @@
>> /* integer value, 1 corresponds to 0.001 r/s */
>> ngx_uint_t excess;
>> ngx_uint_t count;
>> + ngx_uint_t rate;
>> u_char data[1];
>> } ngx_http_limit_req_node_t;
>>
>> @@ -41,7 +42,7 @@
>> ngx_http_limit_req_shctx_t *sh;
>> ngx_slab_pool_t *shpool;
>> /* integer value, 1 corresponds to 0.001 r/s */
>> - ngx_uint_t rate;
>> + ngx_http_complex_value_t rate;
>> ngx_http_complex_value_t key;
>> ngx_http_limit_req_node_t *node;
>> } ngx_http_limit_req_ctx_t;
>> @@ -66,9 +67,9 @@
>>
>> static void ngx_http_limit_req_delay(ngx_http_request_t *r);
>> static ngx_int_t ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit,
>> - ngx_uint_t hash, ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account);
>> + ngx_uint_t hash, ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account, ngx_uint_t rate);
>> static ngx_msec_t ngx_http_limit_req_account(ngx_http_limit_req_limit_t *limits,
>> - ngx_uint_t n, ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit);
>> + ngx_uint_t n, ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit, ngx_uint_t rate);
>> static void ngx_http_limit_req_unlock(ngx_http_limit_req_limit_t *limits,
>> ngx_uint_t n);
>> static void ngx_http_limit_req_expire(ngx_http_limit_req_ctx_t *ctx,
>> @@ -195,10 +196,13 @@
>> ngx_http_limit_req_handler(ngx_http_request_t *r)
>> {
>> uint32_t hash;
>> - ngx_str_t key;
>> + ngx_str_t key, rate_s;
>> ngx_int_t rc;
>> ngx_uint_t n, excess;
>> + ngx_uint_t scale;
>> + ngx_uint_t rate;
>> ngx_msec_t delay;
>> + u_char *p;
>> ngx_http_limit_req_ctx_t *ctx;
>> ngx_http_limit_req_conf_t *lrcf;
>> ngx_http_limit_req_limit_t *limit, *limits;
>> @@ -243,10 +247,34 @@
>>
>> hash = ngx_crc32_short(key.data, key.len);
>>
>> + if (ngx_http_complex_value(r, &ctx->rate, &rate_s) != NGX_OK) {
>> + ngx_http_limit_req_unlock(limits, n);
>> + return NGX_HTTP_INTERNAL_SERVER_ERROR;
>> + }
>> +
>> + scale = 1;
>> + rate = NGX_ERROR;
>> +
>> + if (rate_s.len > 8) {
>> +
>> + rate = (ngx_uint_t) ngx_atoi(rate_s.data + 5, rate_s.len - 8);
>> +
>> + p = rate_s.data + rate_s.len - 3;
>> + if (ngx_strncmp(p, "r/m", 3) == 0) {
>> + scale = 60;
>> + } else if (ngx_strncmp(p, "r/s", 3) != 0){
>> + rate = NGX_ERROR;
>> + }
>> + }
> Note that this approach implies parsing rate on each request.
> This is something we usually try to avoid as long as a static
> string is configured in advance, see these commits for an example:
>
> changeset: 7503:b82162b8496a
> user: Ruslan Ermilov <ru@nginx.com>
> date: Wed Apr 24 16:38:51 2019 +0300
> summary: Added ngx_http_set_complex_value_size_slot().
>
> changeset: 7504:c19ca381b2e6
> user: Ruslan Ermilov <ru@nginx.com>
> date: Wed Apr 24 16:38:54 2019 +0300
> summary: Variables support in limit_rate and limit_rate_after (ticket #293).
>
> changeset: 7505:16a1adadf437
> user: Ruslan Ermilov <ru@nginx.com>
> date: Wed Apr 24 16:38:56 2019 +0300
> summary: Variables support in proxy_upload_rate and proxy_download_rate.
>
> A more self-contained solution can be seen in the image filter
> (src/http/modules/ngx_http_image_filter.c), which uses values
> parsed during configuration parsing if there are no variables, and
> complex values if there are variables (e.g., "imcf->angle" and
> "imcf->acv").
>
>> +
>> + rate = (rate != 0 && rate < NGX_MAX_INT_T_VALUE / 60000000 - 1001) ?
>> + rate * 1000 / scale :
>> + NGX_MAX_INT_T_VALUE / 60000000 - 1001;
>> +
>> ngx_shmtx_lock(&ctx->shpool->mutex);
>>
>> rc = ngx_http_limit_req_lookup(limit, hash, &key, &excess,
>> - (n == lrcf->limits.nelts - 1));
>> + (n == lrcf->limits.nelts - 1), rate);
>>
>> ngx_shmtx_unlock(&ctx->shpool->mutex);
>>
>> @@ -291,7 +319,7 @@
>> excess = 0;
>> }
>>
>> - delay = ngx_http_limit_req_account(limits, n, &excess, &limit);
>> + delay = ngx_http_limit_req_account(limits, n, &excess, &limit, rate);
>>
>> if (!delay) {
>> r->main->limit_req_status = NGX_HTTP_LIMIT_REQ_PASSED;
>> @@ -403,7 +431,7 @@
>>
>> static ngx_int_t
>> ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit, ngx_uint_t hash,
>> - ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account)
>> + ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account, ngx_uint_t rate)
>> {
>> size_t size;
>> ngx_int_t rc, excess;
>> @@ -412,7 +440,6 @@
>> ngx_rbtree_node_t *node, *sentinel;
>> ngx_http_limit_req_ctx_t *ctx;
>> ngx_http_limit_req_node_t *lr;
>> -
>> now = ngx_current_msec;
>>
>> ctx = limit->shm_zone->data;
> Note that this is a style change, and it is wrong.
>
>> @@ -446,12 +473,14 @@
>>
>> if (ms < -60000) {
>> ms = 1;
>> -
> The same here.
>
>> } else if (ms < 0) {
>> ms = 0;
>> + } else if (ms > 60000) {
>> + ms = 60000;
>> }
> This seems to be separate change, irrelevant to what the patch
> does. If at all, it should be submitted as a separate patch, see
> http://nginx.org/en/docs/contributing_changes.html.
>
> Note though that this changes looks wrong to me, as it will break
> request accounting on large time intervals with large burst, such
> as with rate=1r/m burst=100.
>
> Note well that it fails to use an empty line before "} else ",
> which is a style bug.
>
>> - excess = lr->excess - ctx->rate * ms / 1000 + 1000;
>> + lr->rate = rate;
>> + excess = lr->excess - lr->rate * ms / 1000 + 1000;
>>
>> if (excess < 0) {
>> excess = 0;
>> @@ -510,6 +539,7 @@
>>
>> lr->len = (u_short) key->len;
>> lr->excess = 0;
>> + lr->rate = rate;
> The "lr->rate", which is saved in the shared memory, seems to be
> never used except during processing of the same request. Any
> reasons to keep it in the shared memory, reducing the number of
> states which can be stored there?
>
>> ngx_memcpy(lr->data, key->data, key->len);
>>
>> @@ -534,7 +564,7 @@
>>
>> static ngx_msec_t
>> ngx_http_limit_req_account(ngx_http_limit_req_limit_t *limits, ngx_uint_t n,
>> - ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit)
>> + ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit, ngx_uint_t rate)
>> {
>> ngx_int_t excess;
>> ngx_msec_t now, delay, max_delay;
>> @@ -543,13 +573,13 @@
>> ngx_http_limit_req_node_t *lr;
>>
>> excess = *ep;
>> + max_delay = 0;
>>
> Note that this is a style change, since the max_delay is set in
> all possible code paths, and there are no reasons to additionally
> initialize it.
>
>> if ((ngx_uint_t) excess <= (*limit)->delay) {
>> max_delay = 0;
>>
>> } else {
>> - ctx = (*limit)->shm_zone->data;
>> - max_delay = (excess - (*limit)->delay) * 1000 / ctx->rate;
>> + max_delay = (excess - (*limit)->delay) * 1000 / rate;
>> }
>>
>> while (n--) {
>> @@ -570,9 +600,11 @@
>>
>> } else if (ms < 0) {
>> ms = 0;
>> + } else if (ms > 60000) {
>> + ms = 60000;
>> }
> See above.
>
> [...]
>
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel