Again, more information:
When my benchmarks aren't running, I can receive a single file
via wget at 50 MB/s from one of my benchmark clients.
Note that this is not part of my nginx benchmark; I am just showing
that the nginx server has no problem receiving 50 MB/s of throughput.
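For reference, that download test is just a plain wget, something
like the following (the hostname and file name here are placeholders,
not the real ones):
# wget -O /dev/null http://client1.example.com/testfile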
Now, when my benchmark clients are running httperf against
the server, throughput maxes out at about 8 MB/s with
roughly 1,500 concurrent connections open:
# cat /proc/net/sockstat
TCP: inuse 1466 orphan 26 tw 0 alloc 1467 mem 12544
UDP: inuse 3
RAW: inuse 0
FRAG: inuse 0 memory 0
This is when nginx (or the OS) becomes slow to start
serving new connections.
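For reference, the clients are running httperf along these lines
(the host, rate, and connection counts below are illustrative, not
my exact parameters):
# httperf --server 192.0.2.10 --port 80 --uri /test.html \
      --rate 150 --num-conns 5000 --num-calls 10 --timeout 5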
I just tried downloading the test file to my server while the
benchmark was running; this is the file I can usually receive at
50 MB/s. I straced wget: it first hangs in poll() trying to
look up the hostname, retries a few times, and after about
5-10 seconds it starts downloading, but only at 13 KB/s.
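Roughly what I ran to observe that, from memory (the URL is a
placeholder):
# strace -tt -f wget -O /dev/null http://client1.example.com/testfile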
So it's not a problem of throughput saturation. I don't believe
it's caused by any limits such as file descriptors, somaxconn, or
rmem/wmem, as I have maxed those out (and tried them at
smaller values too). The CPU is 97-98% idle the entire time and
there is no I/O wait on disk.
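To be clear about which limits I mean, these are the kinds of knobs
I have already raised (the commands below only print the current
values; my exact settings aren't shown here):
# ulimit -n
# sysctl net.core.somaxconn net.core.rmem_max net.core.wmem_max
# sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem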
I honestly have no idea what could be causing this! Any advice
would be very much appreciated.
Steve