Hello,
I was getting a bunch of 413 statuses in the access log, along with explicit error-log messages about a client (logstash in my case, apparently trying to send bodies of around 100 megabytes) posting a body larger than client_max_body_size. After I raised this setting to 128m, the error-log messages stopped, but the 413 statuses in the access log did not:
10.3.51.214 - - [05/Sep/2019:15:21:27 +0500] elasticsearch.dev.alamics.ru "POST /_bulk HTTP/1.1" 413 0 "-" "Java/1.8.0_212" "-" "-" 82.609 192.168.57.23:9200 413 -
10.3.51.214 - - [05/Sep/2019:15:23:00 +0500] elasticsearch.dev.alamics.ru "POST /_bulk HTTP/1.1" 413 0 "-" "Java/1.8.0_212" "-" "-" 91.931 192.168.57.23:9200 413 -
10.3.51.214 - - [05/Sep/2019:15:24:24 +0500] elasticsearch.dev.alamics.ru "POST /_bulk HTTP/1.1" 413 0 "-" "Java/1.8.0_212" "-" "-" 83.679 192.168.57.23:9200 413 -
10.3.51.214 - - [05/Sep/2019:15:25:35 +0500] elasticsearch.dev.alamics.ru "POST /_bulk HTTP/1.1" 413 0 "-" "Java/1.8.0_212" "-" "-" 69.195 192.168.57.23:9200 413 -
10.3.51.214 - - [05/Sep/2019:15:27:01 +0500] elasticsearch.dev.alamics.ru "POST /_bulk HTTP/1.1" 413 0 "-" "Java/1.8.0_212" "-" "-" 85.953 192.168.57.23:9200 413 -
I've even tried setting client_max_body_size to 0 (which disables the body size check entirely), but I'm still getting these 413s about once per minute. As you can see, the request times are around 1.5 minutes, so these aren't leftover requests that started before the setting was changed.
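For reference, here is roughly where the directive sits in my config (a simplified sketch, not the full file; only the server_name and upstream address from the logs above are real):

```nginx
# Simplified sketch of the relevant vhost (details omitted).
# client_max_body_size went from the 1m default, to 128m, to 0 (no limit).
server {
    listen 80;
    server_name elasticsearch.dev.alamics.ru;

    client_max_body_size 0;

    location / {
        proxy_pass http://192.168.57.23:9200;
    }
}
```

I also double-checked that the directive isn't redefined in a more specific server or location block, since a value set in a closer context would override the one above for those requests.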
I'm pretty much stuck at this point.
nginx/1.16.0 on FreeBSD 12-STABLE amd64 from ports.
Thanks.