Thanks for your response, Zhang. I included the content length in log_format to see:

y.y.y.y - [08/Jun/2017:22:15:46 +0000] "GET /image.jpg HTTP/2.0" 200 466 HIT "Mozilla/5.0 (Linux; Android 5.0.1; GT-I9515L Build/LRX22C) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.83 Mobile Safari/537.36" 44 466 2.384 "image/jpeg" 21221 x.x.x.x - [08/Jun/2017:22:15:46 +0000…

by geberhart - Nginx Mailing List - English
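A minimal sketch of a log_format along these lines — the format name and the exact variable order are my assumptions, not the poster's actual configuration:

```nginx
# Hypothetical log_format capturing both the bytes nginx actually sent
# to the client ($body_bytes_sent) and the Content-Length header of the
# response it served ($sent_http_content_length), so the two can be
# compared per request.
log_format cache_debug '$remote_addr - [$time_local] "$request" '
                       '$status $body_bytes_sent $upstream_cache_status '
                       '"$http_user_agent" $request_time '
                       '"$sent_http_content_type" $sent_http_content_length';

access_log /var/log/nginx/access.log cache_debug;
```

A mismatch between $body_bytes_sent and $sent_http_content_length on a 200 response is what makes the truncation visible in the log.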
@itpp2012: I can't replicate the problem using curl from 2 different locations. Isn't it supposed to return 206 for range requests?

@zhang_chao: I'm not sure about this, but isn't it supposed to return 499 in this case?

Thanks,
Guilherme

On Fri, Jun 2, 2017 at 3:45 AM, Zhang Chao <zchao1995@gmail.com> wrote:
> Hi!
>
> Are you sure the client didn't close the connection when the bod…
I identified strange behavior in my nginx/1.11.2: the same cached objects are returning different content lengths. In the logs below, body_bytes_sent alternates intermittently between 215 and 3782 bytes; the correct length is 3782. (These objects are not being updated in this interval.)

xxxxxxxxxx - - [02/Jun/2017:01:29:06 +0000] "GET /img/app/bt_google_play.png HTTP/2.0" 200 *215* "xxxxx…
Maxim,

In these cases, when Vary is present in the response headers and multiple cache files are generated for the same key, how does nginx determine the cache file names for the variants?

Thanks,
Guilherme

On Thu, Nov 19, 2015 at 11:26 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> Hello!
>
> On Wed, Nov 18, 2015 at 09:40:45PM -0500, semseoymas wrote:
>
> > First, the specs:
> > ngi…
Hello!

Before nginx 1.7.7, the file name in the cache was the result of applying the MD5 function to the cache key. Now, when a Vary header is present in the response to the proxied request, the file name is no longer the MD5 of the cache key. The requests below generate two different cache files (the response headers include *Vary: Accept-Encoding*):

curl -H 'Accept-Encoding: gzip' http://example.com/sc…
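For context, a minimal caching block that would exhibit this behavior — the cache path, zone name, and upstream are hypothetical:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m;

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_cache mycache;
        # The primary key below is still hashed with MD5. Since 1.7.7,
        # when the upstream response carries a Vary header, nginx also
        # takes the request header values named in Vary into account,
        # so each variant (e.g. gzip vs. identity for
        # Vary: Accept-Encoding) is stored as its own cache file rather
        # than overwriting a single file named md5(cache key).
        proxy_cache_key $scheme$proxy_host$request_uri;
    }
}
```

This is why two curl requests differing only in Accept-Encoding produce two files on disk.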
Anton,

I have had the same issue logging to NFS, but I'm curious why nginx hangs on some NFS failures. The log phase is the last one (when there is no post_action), so why does nginx stop responding during some NFS failures? Do you think I can ease the situation by tuning the NFS client config, such as timeo and retrans?

Thanks

On Tue, Feb 11, 2014 at 3:33 PM, David Birdsong <david.birdsong@gmail.com…
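One mitigation sometimes used for this situation — my suggestion, not something from the thread, and the path is hypothetical — is buffering access-log writes so that a briefly unresponsive NFS mount blocks worker processes less often:

```nginx
# Buffer log lines in memory (64 KB per log) and flush at most every
# 5 seconds, instead of performing a synchronous write() to the NFS
# mount on every single request.
access_log /mnt/nfs/logs/access.log combined buffer=64k flush=5s;
```

Buffering reduces how often workers touch the mount, but a hard NFS hang can still block the flush itself, so it eases rather than eliminates the problem.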
Happy birthday from Brazil!

Sent from my iPhone

On 28/09/2012, at 14:14, Mike Dupont <jamesmikedupont@googlemail.com> wrote:
> Happy Birthday from Germany,
> from Usa
> and from Kosovo!
>
> :D
> mike
>
> On Fri, Sep 28, 2012 at 1:08 PM, António P. P. Almeida <appa@perusio.net> wrote:
>> Just seen pass on twitter the mentioning of your birthday.
>…
Sorry... wrong mail.

On 11/07/2012, at 16:54, Jader Henrique da Silva <cad_jsilva@uolinc.com> wrote:
> Hello
>
> I was checking HttpLuaModule docs and saw "ngx.re.split" implementation in the TODO section.
>
> Is it already implemented?
> Are there any details about this implementation (e.g. parameters, returned data)?
>
> Jader H. Silva
>
> …
Nice!! The mailing lists help us a lot. Just one recommendation: try to use your personal email on discussion lists. We don't use our corporate addresses so as not to give hints about what we're working on at UOL.

Regards

Sent from my iPhone

On 11/07/2012, at 16:54, Jader Henrique da Silva <cad_jsilva@uolinc.com> wrote:
> Hello
>
> I was checking HttpLuaModule docs and saw "ngx.re.split"…
Hello,

I'm using HttpLuaModule + Redis to build a dynamic proxy for a mass vhost environment. I need to limit requests/s for specific http_host values. I tried to do something like this:

-------------------------------------------------------------------------------------
limit_req_zone $http_host zone=one:10m rate=1r/s;

upstream redisbackend {
    server 127.0.0.1:6379;
}

server {…
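A completed sketch of that idea — the listen port, location, and burst value are my assumptions, since the original config is truncated:

```nginx
# Key the shared zone on the Host header, so each virtual host gets its
# own 1 request/second budget out of the 10 MB zone.
limit_req_zone $http_host zone=one:10m rate=1r/s;

upstream redisbackend {
    server 127.0.0.1:6379;
}

server {
    listen 80;

    location / {
        # Apply the per-host limit; allow a short burst of 5 requests
        # before returning 503 to excess clients.
        limit_req zone=one burst=5 nodelay;

        # ... dynamic proxying via HttpLuaModule lookups against
        # redisbackend would go here ...
    }
}
```

Because the zone key is $http_host rather than $binary_remote_addr, the limit is shared by all clients of a given vhost rather than applied per client IP.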
With proxy_buffering enabled, disk I/O can increase significantly: responses that do not fit in the in-memory buffers are written to temporary files. If you're running nginx and PHP on the same server, enabling it may not be a good idea.

On Wed, Mar 14, 2012 at 6:41 PM, Alexandr Gomoliako <zzz@zzz.org.ua> wrote:
> > I'm new to nginx and have a question about the proxy_buffering feature.
> >
> > Our topology is very easy:
> > Just nginx - No apache be…
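The relevant directives, sketched with nginx's default buffer sizes for illustration (not a tuning recommendation):

```nginx
location / {
    proxy_pass http://backend;
    proxy_buffering on;      # read the whole upstream response eagerly
    proxy_buffer_size 4k;    # buffer for the response header
    proxy_buffers 8 4k;      # in-memory buffers per connection
    # Responses larger than the in-memory buffers spill into temporary
    # files (up to this size) — this spillover is the source of the
    # extra disk I/O mentioned above.
    proxy_max_temp_file_size 1024m;
}
```

Setting proxy_max_temp_file_size to 0 disables the temp-file spillover entirely, at the cost of tying the upstream to the client's read speed once the buffers are full.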
I'll take a look at Lua and the auth_request module. Thanks for the suggestions, they were helpful!

On Tue, Feb 14, 2012 at 12:52 PM, António P. P. Almeida <appa@perusio.net> wrote:
> On 14 Fev 2012 07h48 WET, nginxyz@mail.ru wrote:
>
> > Have you ever actually used the auth_request module? Or have you at
> > least read the part of the auth_request module README file where
> …
On Fri, Feb 10, 2012 at 6:08 PM, Max <nginxyz@mail.ru> wrote:
>
> On February 10, 2012 at 23:40, Guilherme <guilherme.e@gmail.com> wrote:
> > This would fix the problem, but I don't know the directories that have a
> > .htaccess file with allow/deny.
> >
> > Example:
> >
> > Scenario: nginx (cache/proxy) + back-end apache
> >
> > root…
On Fri, Feb 10, 2012 at 5:58 PM, António P. P. Almeida <appa@perusio.net> wrote:
> On 10 Fev 2012 19h40 WET, guilherme.e@gmail.com wrote:
>
> > Adrián,
> >
> > This would fix the problem, but I don't know the directories that
> > has a .htaccess file with allow/deny.
> >
> > Example:
> >
> > Scenario: nginx (cache/proxy) + back-end apac…