Mikhail Isachenkov
February 26, 2021 03:42AM
Hi Zhao Ping,

I created an i3en.6xlarge instance with 2 SSDs of 250k IOPS each (in RAID 0)
and reproduced your results. Please find the dstat output below.

1 worker, io_uring:
usr sys idl wai stl| read writ| recv send| in out | int csw
4 4 92 0 0|1412M 0 |5742k 1484M| 0 0 | 244k 87k
1 worker, libaio:
usr sys idl wai stl| read writ| recv send| in out | int csw
1 1 95 3 0| 276M 0 |1386k 289M| 0 0 | 50k 8961
4 workers, io_uring:
usr sys idl wai stl| read writ| recv send| in out | int csw
6 6 18 70 0|1349M 0 |7240k 1445M| 0 0 | 296k 120k
4 workers, libaio:
usr sys idl wai stl| read writ| recv send| in out | int csw
3 2 82 13 0| 890M 0 |3570k 931M| 0 0 | 139k 31k

I ran the test on an i3.metal instance too (with 8 SSDs in RAID 0); the
results are different, but io_uring is still faster:

1 worker, io_uring:
usr sys idl wai stl| read writ| recv send| in out | int csw
1 2 97 0 0|1372M 0 |5845k 1442M| 0 0 | 168k 51k
1 worker, libaio:
usr sys idl wai stl| read writ| recv send| in out | int csw
1 1 98 0 0| 972M 20k|4083k 1023M| 0 0 | 126k 2014
4 workers, io_uring:
usr sys idl wai stl| read writ| recv send| in out | int csw
3 4 92 0 0|2211M 0 |9314k 2346M| 0 0 | 251k 198k
4 workers, libaio:
usr sys idl wai stl| read writ| recv send| in out | int csw
3 3 94 0 0|1867M 0 | 10M 1995M| 0 0 | 220k 38k

There is no difference between libaio and io_uring with a large number
of worker processes (>=12).
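
For reference, these column groups (CPU, disk, network, paging, system) are the default dstat layout; something like

  dstat -cdngy 1 60

run on the server during the wrk pass produces this output.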

On 26.02.2021 04:22, Zhao, Ping wrote:
> Hi Mikhail,
>
> My NVMe SSD reports 562k IOPS with 4k block size and queue depth 128, using io_uring.
>
> ./fio -name=fiotest -filename=/dev/nvme2n1 -iodepth=128 -thread -rw=randread -ioengine=io_uring -sqthread_poll=1 -direct=1 -bs=4k -size=10G -numjobs=1 -runtime=600 -group_reporting
>
> Jobs: 1 (f=1): [r(1)][100.0%][r=2173MiB/s][r=556k IOPS][eta 00m:00s]
> fiotest: (groupid=0, jobs=1): err= 0: pid=23828: Fri Feb 26 03:55:40 2021
> read: IOPS=562k, BW=2196MiB/s (2303MB/s)(10.0GiB/4663msec)
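>
> A minimal libaio counterpart of the same job (the command above with the engine swapped and the io_uring-only sqthread_poll option dropped) gives the raw-device baseline for comparison:
>
> ./fio -name=fiotest -filename=/dev/nvme2n1 -iodepth=128 -thread -rw=randread -ioengine=libaio -direct=1 -bs=4k -size=10G -numjobs=1 -runtime=600 -group_reporting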
>
> BR,
> Ping
>
> -----Original Message-----
> From: nginx-devel <nginx-devel-bounces@nginx.org> On Behalf Of Mikhail Isachenkov
> Sent: Thursday, February 25, 2021 7:01 PM
> To: nginx-devel@nginx.org
> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
>
> Hi Zhao Ping,
>
> Looks like general-purpose AWS EC2 instances are not optimized for high random I/O, even with NVMe SSDs; I'll try to test it again on a bare-metal, storage-optimized instance.
>
> How many 4k random IOPS can your storage handle? (I'd like to run the test on an instance with the same storage performance, according to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html#i2-instances-diskperf).
>
> Thanks in advance!
>
> On 25.02.2021 09:59, Zhao, Ping wrote:
>> Hi Mikhail,
>>
>> I tried with CentOS 8.1.1911 + fedora-kernel-5.10.10-200.fc33, and with your test steps & scripts:
>>
>> 1. Created 90k files on the NVMe SSD, each 100KB in size.
>> 2. Created separate cgroup 'nginx': mkdir /sys/fs/cgroup/memory/nginx
>> 3. Limited Nginx memory to 2GB: echo 2G > /sys/fs/cgroup/memory/nginx/memory.limit_in_bytes
>> 4. Cleared the page cache: echo 3 > /proc/sys/vm/drop_caches
>> 5. Started nginx: cgexec -g memory:nginx
>> 6. Tested with wrk on the client: ./wrk -d 30 -t 100 -c 1000 -s add_random.lua http://... (the full sequence is consolidated below)
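>>
>> Put together, steps 2-6 above are roughly the following sequence (a sketch; the nginx binary path /usr/local/sbin/nginx is assumed from your steps, and the wrk URL is left elided as above):
>>
>>     mkdir /sys/fs/cgroup/memory/nginx
>>     echo 2G > /sys/fs/cgroup/memory/nginx/memory.limit_in_bytes
>>     echo 3 > /proc/sys/vm/drop_caches
>>     cgexec -g memory:nginx /usr/local/sbin/nginx
>>     # on the client:
>>     ./wrk -d 30 -t 100 -c 1000 -s add_random.lua http://...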
>>
>> io_uring achieved 1.9+ GB/s:
>> usr sys idl wai stl| read writ| recv send| in out | int csw
>> 1 2 97 0 0|1923M 0 |7369k 2136M| 0 0 | 263k 119k
>> 1 2 97 0 0|1911M 0 |7329k 2123M| 0 0 | 262k 119k
>> 1 2 97 0 0|1910M 3592k|7318k 2118M| 0 0 | 264k 117k
>> 1 2 97 0 0|1923M 200k|7353k 2138M| 0 0 | 265k 118k
>> 1 2 97 0 0|1929M 0 |7376k 2142M| 0 0 | 264k 118k
>> 1 2 97 0 0|1924M 32k|7352k 2140M| 0 0 | 265k 118k
>> 1 2 97 0 0|1913M 0 |7320k 2122M| 0 0 | 263k 117k
>> 1 2 97 0 0|1921M 3544k|7336k 2132M| 0 0 | 264k 118k
>> 1 2 97 0 0|1933M 0 |7353k 2146M| 0 0 | 264k 120k
>>
>> libaio achieved 260+ MB/s:
>> usr sys idl wai stl| read writ| recv send| in out | int csw
>> 0 0 98 1 0| 267M 0 |1022k 293M| 0 0 | 41k 12k
>> 0 0 98 1 0| 279M 740k|1093k 314M| 0 0 | 42k 12k
>> 1 0 98 1 0| 268M 0 |1013k 294M| 0 0 | 41k 12k
>> 0 0 98 1 0| 261M 0 |1002k 294M| 0 0 | 41k 12k
>> 1 0 98 1 0| 285M 0 |1057k 310M| 0 0 | 43k 13k
>> 0 0 98 1 0| 276M 4096B|1038k 307M| 0 0 | 42k 13k
>> 0 0 98 1 0| 273M 780k|1019k 303M| 0 0 | 42k 12k
>> 1 0 98 1 0| 275M 0 |1016k 305M| 0 0 | 42k 12k
>> 0 0 98 1 0| 254M 0 | 976k 294M| 0 0 | 40k 12k
>> 0 0 98 1 0| 265M 0 | 975k 293M| 0 0 | 41k 12k
>> 0 0 98 1 0| 269M 0 | 976k 295M| 0 0 | 41k 12k
>>
>> Compared with your dstat data, where disk reads stay at about 250MB/s, is there anything limiting the disk I/O bandwidth on your side?
>>
>> Regards,
>> Ping
>>
>> -----Original Message-----
>> From: nginx-devel <nginx-devel-bounces@nginx.org> On Behalf Of Mikhail
>> Isachenkov
>> Sent: Tuesday, February 9, 2021 9:31 PM
>> To: nginx-devel@nginx.org
>> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
>>
>> Hi Zhao Ping,
>>
>> Unfortunately, I still couldn't reproduce these results. Maybe you could point out where I'm wrong? Please find my steps below; the configuration and Lua script for wrk are attached.
>>
>> 1. Create 90k files on the SSD of the Amazon EC2 instance. I created files of 1k, 100k and 1M (a sketch for generating them follows this list).
>> 2. Create separate cgroup 'nginx': mkdir /sys/fs/cgroup/memory/nginx
>> 3. Limit memory to 80 Mb, for example: echo 80M > /sys/fs/cgroup/memory/nginx/memory.limit_in_bytes
>> 4. Disable the limit for locked memory: ulimit -l unlimited
>> 5. Start nginx: cgexec -g memory:nginx /usr/local/sbin/nginx
>> 6. Run wrk on the client: ./wrk -d 30 -t 100 -c 1000 -s add_random.lua http://...
>>
>> I tried different values for limit_in_bytes (from 80M to 2G) and different file sizes -- 1k, 100k, 1M. In fact, maximum bandwidth is the same with libaio and io_uring.
>>
>> For example, with 100kb files and 1 worker process:
>>
>> free -lh
>>               total        used        free      shared  buff/cache   available
>> Mem:           15Gi       212Mi        14Gi        13Mi       318Mi        14Gi
>>
>> dstat/libaio
>> 5 6 73 17 0| 251M 0 |1253k 265M| 0 0 | 33k 1721
>> 4 4 73 17 0| 250M 0 |1267k 264M| 0 0 | 33k 1739
>> 6 5 72 16 0| 250M 924k|1308k 270M| 0 0 | 34k 2017
>> 5 5 72 17 0| 250M 0 |1277k 258M| 0 0 | 34k 1945
>> 5 5 73 17 0| 250M 0 |1215k 263M| 0 0 | 33k 1720
>> 5 5 72 16 0| 250M 0 |1311k 267M| 0 0 | 34k 1721
>> 5 5 73 16 0| 250M 0 |1280k 264M| 0 0 | 34k 1718
>> 6 6 72 16 0| 250M 24k|1362k 268M| 0 0 | 35k 1825
>> 5 5 73 17 0| 250M 0 |1342k 262M| 0 0 | 34k 1726
>> dstat/io_uring
>> 5 6 60 29 0| 250M 0 |1079k 226M| 0 0 | 36k 10k
>> 5 6 64 25 0| 251M 0 | 906k 204M| 0 0 | 32k 8607
>> 4 6 62 27 0| 250M 0 |1034k 221M| 0 0 | 35k 10k
>> 5 6 63 26 0| 250M 20k| 909k 209M| 0 0 | 32k 8595
>> 4 6 62 27 0| 250M 0 |1003k 217M| 0 0 | 35k 10k
>> 4 5 61 28 0| 250M 0 |1019k 226M| 0 0 | 35k 9700
>> 4 5 62 27 0| 250M 0 | 948k 210M| 0 0 | 32k 8433
>> 4 6 61 28 0| 250M 0 |1094k 216M| 0 0 | 35k 9811
>> 5 6 62 26 0| 250M 0 |1083k 226M| 0 0 | 35k 9479
>>
>> As you can see, libaio is even a bit faster.
>>
>> On 09.02.2021 11:36, Zhao, Ping wrote:
>>> Hi Mikhail,
>>>
>>> The performance improvement of io_uring over libaio is at the disk I/O interface, so other factors, such as the memory cache, which is much faster than disk I/O, need to be excluded from the test.
>>>
>>> If I don't use a memory limit, the network bandwidth of libaio and io_uring is very close, because both of them serve cache files from memory, so we can't see the disk I/O difference. In the following data, as an example, 17G of memory is used as cache; the network speed is the same for io_uring and libaio, and both have very little disk I/O load, which means io_uring/libaio is barely used.
>>>
>>> memory
>>> free -lh
>>> total used free shared buff/cache available
>>> Mem: 376Gi 3.2Gi 356Gi 209Mi 17Gi 370Gi
>>>
>>> libaio:
>>> ----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
>>> usr sys idl wai stl| read writ| recv send| in out | int csw
>>> 1 1 99 0 0|4097B 80k|4554k 104M| 0 0 | 77k 1344
>>> 1 1 98 0 0|8192B 104k|9955k 236M| 0 0 | 151k 1449
>>> 1 1 97 0 0| 56k 32k| 10M 241M| 0 0 | 148k 1652
>>> 2 1 97 0 0| 16k 16k|9552k 223M| 0 0 | 142k 1366
>>> 1 1 97 0 0| 16k 24k|9959k 234M| 0 0 | 146k 1570
>>> 1 1 97 0 0| 0 1064k| 10M 237M| 0 0 | 150k 1472
>>> 2 1 97 0 0| 16k 48k|9650k 227M| 0 0 | 143k 1555
>>> 2 1 97 0 0| 12k 16k|9185k 216M| 0 0 | 139k 1304
>>>
>>> Io_uring:
>>> ----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
>>> usr sys idl wai stl| read writ| recv send| in out | int csw
>>> 2 1 97 0 0| 0 0 |9866k 232M| 0 0 | 148k 1286
>>> 2 1 97 0 0| 0 0 |9388k 220M| 0 0 | 144k 1345
>>> 2 1 97 0 0| 0 0 |9080k 213M| 0 0 | 137k 1388
>>> 2 1 97 0 0| 0 0 |9611k 226M| 0 0 | 144k 1615
>>> 1 1 97 0 0| 0 232k|9830k 231M| 0 0 | 147k 1524
>>>
>>> I used an Intel Xeon Platinum 8280L server CPU @ 2.70GHz, with 376G of memory and a 50G network. If I limit nginx memory to 2GB, the cache memory stays at about 2.6G and doesn't increase during the test, and the disk I/O speed is close to the network speed, so the test shows the disk I/O difference between libaio and io_uring. This shows the io_uring performance improvement. My previous data is based on this configuration.
>>>
>>> Memory:
>>> free -lh
>>> total used free shared buff/cache available
>>> Mem: 376Gi 3.2Gi 370Gi 141Mi 2.6Gi 370Gi
>>>
>>> Libaio:
>>> ----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
>>> usr sys idl wai stl| read writ| recv send| in out | int csw
>>> 1 0 98 1 0| 60M 0 |2925k 68M| 0 0 | 50k 16k
>>> 1 0 98 1 0| 60M 8192B|2923k 68M| 0 0 | 50k 16k
>>> 1 0 98 1 0| 61M 0 |2923k 68M| 0 0 | 50k 16k
>>> 0 0 98 1 0| 60M 0 |2929k 68M| 0 0 | 50k 16k
>>> 1 0 98 1 0| 60M 264k|2984k 69M| 0 0 | 51k 16k
>>>
>>> Io_uring:
>>> ----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
>>> usr sys idl wai stl| read writ| recv send| in out | int csw
>>> 1 2 93 4 0| 192M 8192B|7951k 187M| 0 0 | 146k 90k
>>> 1 2 93 4 0| 196M 0 |7953k 187M| 0 0 | 144k 89k
>>> 1 2 93 4 0| 191M 300k|7854k 185M| 0 0 | 145k 87k
>>> 1 2 94 3 0| 186M 8192B|7861k 185M| 0 0 | 143k 86k
>>> 1 2 94 3 0| 180M 16k|7995k 188M| 0 0 | 146k 86k
>>> 2 1 94 3 0| 163M 16k|7273k 171M| 0 0 | 133k 80k
>>> 1 1 94 3 0| 173M 1308k|7995k 188M| 0 0 | 144k 83k
>>>
>>> Considering that server memory won't always be enough for cache storage as traffic increases, Nginx will then use disk as cache storage. In this case, io_uring shows a big performance improvement over libaio at the disk I/O interface. This is the value of this patch.
>>>
>>> BR,
>>> Ping
>>>
>>> -----Original Message-----
>>> From: nginx-devel <nginx-devel-bounces@nginx.org> On Behalf Of
>>> Mikhail Isachenkov
>>> Sent: Tuesday, February 9, 2021 1:17 AM
>>> To: nginx-devel@nginx.org
>>> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
>>>
>>> Hi Zhao Ping,
>>>
>>> First of all, thank you for pointing me to the AWS patch -- on Fedora 33 with the 5.10 kernel I don't see any errors now.
>>>
>>> I've tested the patch on an Amazon EC2 NVMe SSD (and found this drive pretty fast!). The server is an i3en.xlarge and the client a c5n.2xlarge instance, with up to 25 Gigabit networking.
>>>
>>> As in the previous test, I created a number of 100kb files, but tried to reach them via proxy_cache, as in your setup. After warming up the disk cache, I got the following results:
>>>
>>> a) with 4 worker processes, I've got 3Gb/sec in all tests regardless of sendfile/libaio/io_uring.
>>>
>>> b) with 1 worker process, sendfile is faster (up to 1.9 Gb/sec) than libaio (1.40 Gb/sec) and io_uring (up to 1.45 Gb/sec).
>>>
>>> I didn't use any memory limitations, but I ran 'echo 3 > /proc/sys/vm/drop_caches' before each pass. When I tried to limit memory to 2G with cgroups, the results were generally the same. Maybe 2G is not enough?
>>>
>>> Could you please run the test for ~60 seconds and run 'dstat' on another console? I'd like to check disk and network bandwidth at the same timestamps and compare them to mine.
>>>
>>> Thanks in advance!
>>>
>>> On 07.02.2021 05:16, Zhao, Ping wrote:
>>>> Hi Mikhail,
>>>>
>>>> I reproduced your problem with kernel 5.8.0-1010-aws, and I tried
>>>> kernel 5.8.0, which doesn't have this problem. I can confirm there's a
>>>> regression in the AWS patch (linux-aws_5.8.0-1010.10.diff).
>>>>
>>>> Updated the 'sendfile on' & 'aio off' test result with 4KB data, which is almost the same as libaio:
>>>>
>>>> Nginx worker_processes 1:
>>>>               4k        100k       1M
>>>> io_uring      220MB/s   1GB/s      1.3GB/s
>>>> libaio        70MB/s    250MB/s    600MB/s (with -c 200, 1.0GB/s)
>>>> sendfile      70MB/s    260MB/s    700MB/s
>>>>
>>>>
>>>> Nginx worker_processes 4:
>>>>               4k        100k       1M
>>>> io_uring      800MB/s   2.5GB/s    2.6GB/s (my nvme disk io maximum bw)
>>>> libaio        250MB/s   900MB/s    2.0GB/s
>>>> sendfile      250MB/s   900MB/s    1.6GB/s
>>>>
>>>> BR,
>>>> Ping
>>>>
>>>> -----Original Message-----
>>>> From: Zhao, Ping
>>>> Sent: Friday, February 5, 2021 2:43 PM
>>>> To: nginx-devel@nginx.org
>>>> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
>>>>
>>>> Hi Mikhail,
>>>>
>>>> Added the 'sendfile on' & 'aio off' test results to the previous table:
>>>>
>>>> Following are the test results with 100KB and 1MB (4KB still to be tested):
>>>>
>>>> Nginx worker_processes 1:
>>>>               4k        100k       1M
>>>> io_uring      220MB/s   1GB/s      1.3GB/s
>>>> libaio        70MB/s    250MB/s    600MB/s (with -c 200, 1.0GB/s)
>>>> sendfile      tbt       260MB/s    700MB/s
>>>>
>>>>
>>>> Nginx worker_processes 4:
>>>>               4k        100k       1M
>>>> io_uring      800MB/s   2.5GB/s    2.6GB/s (my nvme disk io maximum bw)
>>>> libaio        250MB/s   900MB/s    2.0GB/s
>>>> sendfile      tbt       900MB/s    1.6GB/s
>>>>
>>>> Regards,
>>>> Ping
>>>>
>>>> -----Original Message-----
>>>> From: nginx-devel <nginx-devel-bounces@nginx.org> On Behalf Of
>>>> Mikhail Isachenkov
>>>> Sent: Thursday, February 4, 2021 4:55 PM
>>>> To: nginx-devel@nginx.org
>>>> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
>>>>
>>>> Hi Zhao Ping,
>>>>
>>>> My test is much simpler than yours. I created
>>>> /usr/local/html/(11111...99999) files on the SSD (100 kb each) and wrote a small Lua script for wrk that appends 5 random digits to the request. There are no such errors without the patch with aio enabled.
>>>> These files do not change during the test.
>>>>
>>>> I'll try to reproduce this on CentOS 8 -- which repository did you use to install the 5.x kernel?
>>>>
>>>> Also, could you please run the test with 'sendfile on' and 'aio off' to get reference numbers for sendfile too?
>>>>
>>>> Thanks in advance!
>>>>
>>>> On 04.02.2021 10:08, Zhao, Ping wrote:
>>>>> Another possible cause is that "/usr/local/html/64746" was changed/removed while another user tried to read it.
>>>>>
>>>>> -----Original Message-----
>>>>> From: Zhao, Ping
>>>>> Sent: Thursday, February 4, 2021 10:33 AM
>>>>> To: nginx-devel@nginx.org
>>>>> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
>>>>>
>>>>> Hi Mikhail,
>>>>>
>>>>> I didn't see this error in my log. Following is my OS/Kernel:
>>>>> CentOS: 8.1.1911
>>>>> Kernel: 5.7.19
>>>>> Liburing: liburing-1.0.7-3.el8.x86_64,
>>>>> liburing-devel-1.0.7-3.el8.x86_64 (from yum repo)
>>>>>
>>>>> Regarding the error "11: Resource temporarily unavailable": it's probably that too many requests read "/usr/local/html/64746" at one time while it is still locked by a previous read. I tried to reproduce this error with a single file, but it seems nginx automatically keeps that single file in memory and I don't see the error. How do you perform the test? I want to reproduce this if possible.
>>>>>
>>>>> My nginx reported this error before:
>>>>> 2021/01/04 05:04:29 [alert] 50769#50769: *11498 pread() read only 7101 of 15530 from "/mnt/cache1/17/68aae9d816ec02340ee617b7ee52a117", client: 11.11.11.3, server: _, request: "GET /_100kobject?version=cdn003191&thread=64 HTTP/1.1", host: "11.11.11.1:8080"
>>>>> That was already fixed by my 2nd patch (Jan 25).
>>>>>
>>>>> BR,
>>>>> Ping
>>>>>
>>>>> -----Original Message-----
>>>>> From: nginx-devel <nginx-devel-bounces@nginx.org> On Behalf Of
>>>>> Mikhail Isachenkov
>>>>> Sent: Wednesday, February 3, 2021 10:11 PM
>>>>> To: nginx-devel@nginx.org
>>>>> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
>>>>>
>>>>> Hi Ping Zhao,
>>>>>
>>>>> When I tried to repeat this test, I got a huge number of these errors:
>>>>>
>>>>> 2021/02/03 10:22:48 [crit] 30018#30018: *2 aio read
>>>>> "/usr/local/html/64746" failed (11: Resource temporarily
>>>>> unavailable) while sending response to client, client: 127.0.0.1, server:
>>>>> localhost,
>>>>> request: "GET /64746 HTTP/1.1", host: "localhost"
>>>>>
>>>>> I tested this patch on Ubuntu 20.10 (5.8.0-1010-aws kernel version) and Fedora 33 (5.10.11-200.fc33.x86_64) with the same result.
>>>>>
>>>>> Did you get any errors in the error log with the patch applied? Which OS/kernel did you use for testing? Did you perform any specific tuning before running?
>>>>>
>>>>> On 25.01.2021 11:24, Zhao, Ping wrote:
>>>>>> Hello, adding a small update to correct the length when part of the request was already received previously.
>>>>>> This case may happen when using io_uring and throughput increases.
>>>>>>
>>>>>> # HG changeset patch
>>>>>> # User Ping Zhao <ping.zhao@intel.com> # Date 1611566408 18000
>>>>>> # Mon Jan 25 04:20:08 2021 -0500
>>>>>> # Node ID f2c91860b7ac4b374fff4353a830cd9427e1d027
>>>>>> # Parent 1372f9ee2e829b5de5d12c05713c307e325e0369
>>>>>> Correct length calculation when part of request received.
>>>>>>
>>>>>> diff -r 1372f9ee2e82 -r f2c91860b7ac src/core/ngx_output_chain.c
>>>>>> --- a/src/core/ngx_output_chain.c Wed Jan 13 11:10:05 2021 -0500
>>>>>> +++ b/src/core/ngx_output_chain.c Mon Jan 25 04:20:08 2021 -0500
>>>>>> @@ -531,6 +531,14 @@
>>>>>>
>>>>>>      size = ngx_buf_size(src);
>>>>>>      size = ngx_min(size, dst->end - dst->pos);
>>>>>> +#if (NGX_HAVE_FILE_IOURING)
>>>>>> +    /*
>>>>>> +     * check if already received part of the request in previous,
>>>>>> +     * calculate the remain length
>>>>>> +     */
>>>>>> +    if (dst->last > dst->pos && size > (dst->last - dst->pos))
>>>>>> +        size = size - (dst->last - dst->pos);
>>>>>> +#endif
>>>>>>
>>>>>>      sendfile = ctx->sendfile && !ctx->directio;
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: nginx-devel <nginx-devel-bounces@nginx.org> On Behalf Of
>>>>>> Zhao, Ping
>>>>>> Sent: Thursday, January 21, 2021 9:44 AM
>>>>>> To: nginx-devel@nginx.org
>>>>>> Subject: RE: [PATCH] Add io_uring support in AIO(async io) module
>>>>>>
>>>>>> Hi Vladimir,
>>>>>>
>>>>>> No special/extra configuration is needed, but check that 'aio on' and 'sendfile off' are correctly set. This is my Nginx config for reference:
>>>>>>
>>>>>> user nobody;
>>>>>> daemon off;
>>>>>> worker_processes 1;
>>>>>> error_log error.log ;
>>>>>> events {
>>>>>> worker_connections 65535;
>>>>>> use epoll;
>>>>>> }
>>>>>>
>>>>>> http {
>>>>>> include mime.types;
>>>>>> default_type application/octet-stream;
>>>>>> access_log on;
>>>>>> aio on;
>>>>>> sendfile off;
>>>>>> directio 2k;
>>>>>>
>>>>>> # Cache Configurations
>>>>>> proxy_cache_path /mnt/cache0 levels=2 keys_zone=nginx-cache0:400m max_size=1400g inactive=4d use_temp_path=off; ......
>>>>>>
>>>>>>
>>>>>> To better measure the disk io performance data, I do the following steps:
>>>>>> 1. To exclude other impacts and focus on the disk I/O part (this patch only affects the disk AIO read path), use a cgroup to limit Nginx memory usage. Otherwise Nginx may also use memory as cache storage, which makes the test result less clear-cut (most cache hits are served from memory and disk I/O bandwidth stays low, as in my previous mail, which didn't exclude the memory cache impact).
>>>>>> echo 2G > memory.limit_in_bytes
>>>>>> use ' cgexec -g memory:nginx' to start Nginx.
>>>>>>
>>>>>> 2. Use wrk -t 100 -c 1000, with 25000 random http requests.
>>>>>> My previous test used -c 200 connections; compared with -c 1000, libaio performance drops more when the number of connections increases from 200 to 1000, but io_uring's doesn't. That is another advantage of io_uring.
>>>>>>
>>>>>> 3. First clean the cache disk and run the test for 30 minutes to let Nginx store the cache files to nvme disk as much as possible.
>>>>>>
>>>>>> 4. Rerun the test; this time Nginx will use ngx_file_aio_read to
>>>>>> fetch the cache files from the NVMe cache disk. Use iostat to track
>>>>>> the I/O data (see the iostat example below). The numbers should align
>>>>>> with the NIC bandwidth, since all data should come from the cache
>>>>>> disk (the memory-as-cache impact needs to be excluded).
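>>>>>>
>>>>>> A simple way to watch the disk side during the run is an extended iostat on the cache device, for example (the device name here is only an example):
>>>>>>
>>>>>>     iostat -xm nvme0n1 1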
>>>>>>
>>>>>> Following is the test result:
>>>>>>
>>>>>> Nginx worker_processes 1:
>>>>>>               4k        100k       1M
>>>>>> io_uring      220MB/s   1GB/s      1.3GB/s
>>>>>> libaio        70MB/s    250MB/s    600MB/s (with -c 200, 1.0GB/s)
>>>>>>
>>>>>>
>>>>>> Nginx worker_processes 4:
>>>>>>               4k        100k       1M
>>>>>> io_uring      800MB/s   2.5GB/s    2.6GB/s (my nvme disk io maximum bw)
>>>>>> libaio        250MB/s   900MB/s    2.0GB/s
>>>>>>
>>>>>> So for small requests, io_uring shows a huge improvement over libaio. In the previous mail, because I didn't exclude the memory cache storage impact, most cache files were served from memory and very few came from disk in the 4k/100k cases, so that data is not correct (for 1M, the cache is too big to fit in memory, so it was on disk). I also enabled the directio option "directio 2k" this time to avoid this.
>>>>>>
>>>>>> Regards,
>>>>>> Ping
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: nginx-devel <nginx-devel-bounces@nginx.org> On Behalf Of
>>>>>> Vladimir Homutov
>>>>>> Sent: Wednesday, January 20, 2021 12:43 AM
>>>>>> To: nginx-devel@nginx.org
>>>>>> Subject: Re: [PATCH] Add io_uring support in AIO(async io) module
>>>>>>
>>>>>> On Tue, Jan 19, 2021 at 03:32:30AM +0000, Zhao, Ping wrote:
>>>>>>> It depends on whether disk I/O is the performance hot spot or not. If
>>>>>>> yes, io_uring shows an improvement over libaio. With 4KB/100KB objects
>>>>>>> and 1 Nginx worker process it's hard to see a performance difference
>>>>>>> because iostat only shows around ~10MB/100MB per second; disk I/O is
>>>>>>> not the performance bottleneck, and libaio and io_uring have the same
>>>>>>> performance. If you increase the request size or the number of Nginx
>>>>>>> worker processes, for example 1MB objects or 4 worker processes, disk
>>>>>>> I/O becomes the performance bottleneck and you will see the io_uring performance improvement.
>>>>>>
>>>>>> Can you please provide full test results with specific nginx configuration?
>>>>>>
>>>>>> _______________________________________________
>>>>>> nginx-devel mailing list
>>>>>> nginx-devel@nginx.org
>>>>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>>>>>
>>>>>
>>>>> --
>>>>> Best regards,
>>>>> Mikhail Isachenkov
>>>>> NGINX Professional Services
>>>>> _______________________________________________
>>>>> nginx-devel mailing list
>>>>> nginx-devel@nginx.org
>>>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>>>>
>>>>
>>>> --
>>>> Best regards,
>>>> Mikhail Isachenkov
>>>> NGINX Professional Services
>>>> _______________________________________________
>>>> nginx-devel mailing list
>>>> nginx-devel@nginx.org
>>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>>>
>>>
>>> --
>>> Best regards,
>>> Mikhail Isachenkov
>>> NGINX Professional Services
>>> _______________________________________________
>>> nginx-devel mailing list
>>> nginx-devel@nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>>
>>
>> --
>> Best regards,
>> Mikhail Isachenkov
>> NGINX Professional Services
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>
>
> --
> Best regards,
> Mikhail Isachenkov
> NGINX Professional Services
> _______________________________________________
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>

--
Best regards,
Mikhail Isachenkov
NGINX Professional Services
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel