Hello, I have an old config, proven to work (literally, for a dozen years): server { listen 8085 default; listen [::]:8085 default; root /usr/local/public/storage; location ~ ^/resize/([\d\-]+)x([\d\-]+)/(.+) { set $width $1; set $height $2; rewrite ^/resize/([\d\-]+)x([\d\-]+)/(.+) /$3 br… — by drookie - Php-fpm Mailing List - English
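A resize location of this shape is commonly completed with ngx_http_image_filter_module; a minimal sketch of how such a block typically looks (the image_filter lines are an assumption, since the original snippet is truncated):

```nginx
server {
    listen 8085 default;
    listen [::]:8085 default;
    root /usr/local/public/storage;

    # Capture width/height from /resize/WxH/path and map the URI
    # back to the real file under the document root.
    location ~ ^/resize/([\d\-]+)x([\d\-]+)/(.+) {
        set $width  $1;
        set $height $2;
        rewrite ^/resize/([\d\-]+)x([\d\-]+)/(.+) /$3 break;

        # Assumed continuation: resize on the fly via
        # ngx_http_image_filter_module (variables are supported here).
        image_filter        resize $width $height;
        image_filter_buffer 10m;
    }
}
```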
Hello, It seems that when using gRPC on nginx 1.16.x the client body is _always_ buffered to disk. Yeah, it seems absolutely weird, but: 1) here are the $request_length values (not $body_bytes_sent), top 25 unique entries: # cat /var/log/nginx/balancer/foo.bar.tld-access.log | awk -F\" '{print $3}' | awk '{print $3}' | sort -urn | head -n 25 1272 1080 308 307 306 305 304 303 302 301… — by drookie - Php-fpm Mailing List - English
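For reference, a log_format along these lines (an assumption, since the original config isn't shown) would place $request_length where the awk pipeline above extracts it, i.e. as the third whitespace-separated token of the third double-quote-delimited field:

```nginx
# Hypothetical log_format matching the awk pipeline above:
# splitting on '"' yields ' $status $body_bytes_sent $request_length '
# as field 3, whose 3rd token is $request_length.
log_format balancer '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent $request_length '
                    '"$http_referer" "$http_user_agent"';
```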
Oh, sorry. It's clear that the upstream is sending the 413 errors, not nginx itself. I should have read the log more carefully. Sorry again. — by drookie - Nginx Mailing List - English
Hello, I was getting a bunch of 413 statuses in the access log, along with explicit error messages about a client (logstash in my case; it seems it was trying to send bodies of around 100 megabytes) posting a body larger than client_max_body_size. After I raised this setting to 128m, the messages stopped appearing in the error log, but the 413 statuses remained in the access log: 10.3.51.21… — by drookie - Nginx Mailing List - English
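As the follow-up above notes, the remaining 413s came from the upstream, not from nginx. A sketch of why raising the limit on the balancer alone is not enough (values and names are hypothetical):

```nginx
http {
    # Balancer-side limit: stops nginx's own 413s and the
    # "client intended to send too large body" error-log entries.
    client_max_body_size 128m;

    server {
        location / {
            proxy_pass http://backends;
            # The backend servers enforce their own limit; if they
            # still cap bodies at a smaller size, they answer 413
            # themselves, and that status lands in the access log
            # with no corresponding balancer error-log message.
        }
    }
}
```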
Hello, I didn't find the answer in the documentation, but am I right in assuming, from my observations, that when proxy_cache is enabled for a location and a client requests a file that isn't in the cache yet, nginx starts transmitting the file only after it has been fully received from the upstream? Because I'm seeing lags equal to the upstream request_time. If I'm right, is there… — by drookie - Nginx Mailing List - English
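A minimal sketch of the kind of setup being described (cache path, zone name, and upstream are assumptions):

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=files:10m
                 max_size=10g inactive=60m;

server {
    location /files/ {
        proxy_pass  http://upstream_files;
        proxy_cache files;
        proxy_cache_valid 200 10m;

        # Serialize concurrent cache misses for the same key, so
        # only one request per key goes to the upstream to populate
        # the cache while the others wait.
        proxy_cache_lock on;
    }
}
```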
Oh, solved. The upstreams do respond with 500. — by drookie - Nginx Mailing List - English
(Yup, it's still the author of the original post; my other browser just remembers a different set of credentials.) If I increase the verbosity of the error_log, I see additional messages in the log, like: upstream server temporarily disabled while reading response header from <backend IP> but this message doesn't explain why the upstream server was disabled. I understand that the error… — by drookie - Nginx Mailing List - English
Is there someone besides Captain Obvious who knows the answer? This is actually the problem of the modern internet: half of the decent questions are drowned out by people who not only think they know the answer, but are arrogant enough to insist on it, so from the point of view of an outside observer the topic looks "answered". — by drookie - Nginx Mailing List - English
I'm asking about the balancer's behaviour, not the backends'. — by drookie - Nginx Mailing List - English
What is the scope of upstream member liveness: is it per upstream group, or per vhost? If the question is unclear, consider that I have 3 nginx instances (one balancer and two backends) and the following config fragment on the nginx balancer: upstream backends { server 192.168.0.1; server 192.168.0.2; } And on both 192.168.0.1 and 192.168.0.2 the following configs: server { server… — by drookie - Nginx Mailing List - English
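For context, the liveness state the question refers to is controlled by the per-server parameters inside the upstream block; a sketch with the defaults spelled out explicitly (the values shown are nginx's documented defaults, the addresses are from the post):

```nginx
upstream backends {
    # max_fails/fail_timeout govern when a peer is marked
    # unavailable and for how long: after max_fails failed
    # attempts within fail_timeout, the server is skipped for
    # the fail_timeout period.
    server 192.168.0.1 max_fails=1 fail_timeout=10s;
    server 192.168.0.2 max_fails=1 fail_timeout=10s;
}
```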
Hi. Just in case someone else steps on this too: perl -le 'print crypt("password", "salt")' is the root cause of a situation where you can type any random sequence after entering the valid password. Like, imagine you have the password "mys3cr3t" and you generated a hash using the perl one-liner above. This way, any password of the following ones (and similar in general) will… — by drookie - How to...
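The likely underlying cause: crypt() with a traditional two-character salt selects the old DES-based scheme, which uses only the first 8 characters of the password, so anything appended after the 8th character is silently ignored. A quick demonstration (assuming a libc whose crypt() still supports DES; the salt "sa" is arbitrary):

```shell
# DES crypt (selected by a 2-char salt) truncates the password to
# 8 characters, so these two hashes come out identical:
perl -le 'print crypt("mys3cr3t", "sa")'
perl -le 'print crypt("mys3cr3tANYTHINGELSE", "sa")'
```

A modern scheme (e.g. a "$6$..." SHA-512 salt, where supported by the libc) does not have this truncation.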
Hi. I'm trying to get nginx 1.6.2 to authenticate users using their client certificates. I'm using this configuration (besides the usual SSL settings, which are proven to work): ssl_stapling on; ssl_client_certificate /etc/nginx/certs/trusted.pem; ssl_verify_client optional_no_ca; trusted.pem contains 3 CA certificates: a test CA and 2 production CAs (main and intermediate). To pass verifi… — by drookie - Nginx Mailing List - English
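One thing worth noting with this setup: with ssl_verify_client optional_no_ca, nginx requests a certificate but does not reject the connection on verification failure, so the result has to be checked explicitly via $ssl_client_verify. A sketch of how that check is commonly written (the location and return code are assumptions):

```nginx
server {
    ssl_client_certificate /etc/nginx/certs/trusted.pem;
    ssl_verify_client optional_no_ca;

    location / {
        # $ssl_client_verify is "SUCCESS" only when the presented
        # certificate verified against trusted.pem; otherwise it is
        # "NONE" or "FAILED:..."
        if ($ssl_client_verify != SUCCESS) {
            return 403;
        }
    }
}
```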