HTTP/3 QUIC - Very poor throughput with high latency, high bandwidth connections

Hello,

I have been experimenting with the new HTTP/3 (QUIC) feature in nginx.

I have noticed that as latency increases, the download throughput over HTTP/3 drops off dramatically, far more than the added round-trip time alone would explain. H2 traffic over TCP does not show the same effect.

At a ping of around 270 ms, I see nginx's QUIC implementation crawl to a download speed of about 200 KB per second, which works out to only ~54 KB of data in flight per round trip.

Here is my example: Two servers in data centers, both with 10 gigabit internet connections. One in Australia, the other in the UK. iperf3 tests over UDP yield 700 Mbit per second between them in both directions.
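For reference, the UDP baseline test was of roughly this form (the hostname is a placeholder and the 1 Gbit/s target rate is just an example; swap client and server roles to measure the other direction):

server side (UK):
# iperf3 -s

client side (Australia), UDP with a 1 Gbit/s target rate for 30 seconds:
# iperf3 -c uk.example.com -u -b 1G -t 30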

H2 over TCP = 42 MBytes per second. Downloads 1G in 27 seconds.
# curl https://www.afamsterdam.nl/1GB.iso > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1024M  100 1024M    0     0  36.8M      0  0:00:27  0:00:27 --:--:-- 42.5M

H3 over UDP = 240KB per second. Did not wait for test to finish (1.5 hours remaining).
# curl --http3 https://www.afamsterdam.nl/1GB.iso > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 1024M    0 7460k    0     0   226k      0  1:16:59  0:00:32  1:16:27  244k^C

When I run an HTTP/3 test from a low-latency connection, the speed is similar, though still a bit slower than over TCP.

This slowness shows up in Chrome too (1 MB images take 9 seconds to download, versus about 1 second over H2). Nothing looked unusual in a Chrome QUIC JSON export.

Any ideas?

nginx version: nginx/1.25.2
built by gcc 11.4.0 (Ubuntu 11.4.0-1ubuntu1~22.04)
built with OpenSSL 3.0.10+quic 1 Aug 2023
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_v3_module --with-http_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-compat --add-dynamic-module=../ngx_brotli --add-module=/tmp/modsecurity_nginx/modsecurity-nginx-v1.0.3 --with-openssl=../openssl-openssl-3.0.10-quic1 --with-cc-opt='-g -O2 -ffile-prefix-map=/data/builder/debuild/nginx-1.25.2/debian/debuild-base/nginx-1.25.2=. -flto=auto -ffat-lto-objects -flto=auto -ffat-lto-objects -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -flto=auto -ffat-lto-objects -flto=auto -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'
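For anyone trying to reproduce this, a minimal HTTP/3 server block for a build like the above looks roughly like the following; server_name, certificate paths and the document root are placeholders, not my real config:

server {
    # TCP listener for HTTP/1.1 and HTTP/2, plus a QUIC listener for HTTP/3
    listen 443 ssl;
    listen 443 quic reuseport;
    http2 on;

    server_name example.com;                                 # placeholder
    ssl_certificate     /etc/ssl/example.com/fullchain.pem;  # placeholder
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;    # placeholder
    ssl_protocols TLSv1.3;                                   # QUIC requires TLS 1.3

    # advertise HTTP/3 so browsers switch over from H2
    add_header Alt-Svc 'h3=":443"; ma=86400';

    root /var/www/html;                                      # location of the test file
}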
I can confirm this issue. Tested between a server and a client, both connected via 10 Gbps.

nginx version:

# nginx -V
nginx version: nginx/1.27.0
built by gcc 13.2.1 20231014 (Alpine 13.2.1_git20231014)
built with OpenSSL 3.1.4 24 Oct 2023 (running with OpenSSL 3.1.5 30 Jan 2024)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-perl_modules_path=/usr/lib/perl5/vendor_perl --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_v3_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-Os -fstack-clash-protection -Wformat -Werror=format-security -fno-plt -g' --with-ld-opt='-Wl,--as-needed,-O1,--sort-common -Wl,-z,pack-relative-relocs'


normal latency:

# ping 192.168.2.87 -c 3
PING 192.168.2.87 (192.168.2.87) 56(84) bytes of data.
64 bytes from 192.168.2.87: icmp_seq=1 ttl=64 time=0.188 ms
64 bytes from 192.168.2.87: icmp_seq=2 ttl=64 time=0.181 ms
64 bytes from 192.168.2.87: icmp_seq=3 ttl=64 time=0.174 ms

--- 192.168.2.87 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.174/0.181/0.188/0.005 ms




To reproduce the issue I added artificial latency with netem:

tc qdisc add dev enp3s0f0 root netem delay 100ms
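For completeness, the qdisc can be verified and removed again after testing with:

# tc qdisc show dev enp3s0f0
# tc qdisc del dev enp3s0f0 root netem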


latency now:

# ping 192.168.2.87 -c 3
PING 192.168.2.87 (192.168.2.87) 56(84) bytes of data.
64 bytes from 192.168.2.87: icmp_seq=1 ttl=64 time=100 ms
64 bytes from 192.168.2.87: icmp_seq=2 ttl=64 time=100 ms
64 bytes from 192.168.2.87: icmp_seq=3 ttl=64 time=100 ms

--- 192.168.2.87 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 100.202/100.213/100.232/0.013 ms




throughput with high latency:

TCP

# curl --http2 -o /dev/null "https://mydomain/1000mb.bin"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1000M  100 1000M    0     0  27.5M      0  0:00:36  0:00:36 --:--:-- 28.7M

UDP

# curl --http3 -o /dev/null "https://mydomain/1000mb.bin"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 10 1000M   10  100M    0     0   631k      0  0:27:00  0:02:42  0:24:18  631k^C




As you can see, switching from HTTP/2 to HTTP/3 reduces the throughput from over 230 Mbps to roughly 5 Mbps. The issue is persistent no matter which client I test with: curl, Firefox and Chrome all report the same slow throughput via HTTP/3 against nginx.
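One thing that might be worth ruling out (just a guess on my part, not a confirmed cause) is the kernel's UDP socket buffer limits on both ends; they can be checked and raised like this, where 26214400 (25 MB) is only an example value:

# sysctl net.core.rmem_max net.core.wmem_max
# sysctl -w net.core.rmem_max=26214400
# sysctl -w net.core.wmem_max=26214400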
Hello, after some more testing against the Cloudflare implementation I can confirm that this issue happens for me only with nginx, while Cloudflare shows even slightly higher throughput via HTTP/3 than via HTTP/2.

# ping -c 3 speed.cloudflare.com
PING speed.cloudflare.com(2606:4700::6810:3c08 (2606:4700::6810:3c08)) 56 data bytes
64 bytes from 2606:4700::6810:3c08 (2606:4700::6810:3c08): icmp_seq=1 ttl=58 time=118 ms
64 bytes from 2606:4700::6810:3c08 (2606:4700::6810:3c08): icmp_seq=2 ttl=58 time=117 ms
64 bytes from 2606:4700::6810:3c08 (2606:4700::6810:3c08): icmp_seq=3 ttl=58 time=116 ms

--- speed.cloudflare.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 116.079/117.160/118.031/0.810 ms


# curl --http3 -o /dev/null "https://speed.cloudflare.com/__down?measId=$id&bytes=1000000000"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  953M    0  953M    0     0  22.6M      0 --:--:--  0:00:42 --:--:-- 23.9M

# curl --http2 -o /dev/null "https://speed.cloudflare.com/__down?measId=$id&bytes=1000000000"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  953M    0  953M    0     0  20.5M      0 --:--:--  0:00:46 --:--:-- 22.6M


It would be really cool if somebody from the nginx folks could take a look at this, or maybe give a tip in case it is a configuration issue on our side and something needs to be set differently.

Thank you
I am also facing a similar issue. The nginx-to-client download speed over HTTP/3 is very slow compared to HTTP/2.
We have a similar issue; details are mentioned on the nginx-devel mailing list [1]. We are trying to solve this, but without success so far.

[1] https://mailman.nginx.org/pipermail/nginx-devel/2024-August/EASUSHOAO4233XN5GNYALAECJTTP6B34.html