Ah, let me guess - is the keepalive number "per worker"?

On 03/09/13 13:42, Richard Kearsley wrote:
> Hi
> I seem to have an issue where the upstream keepalives aren't being re-used
> It shouldn't ever need more than 500 connections to the upstream, but it keeps making more, and doesn't stick to the 1024 limit. What's going on?

by dbradfield - Nginx Mailing List - English
Hi
I seem to have an issue where the upstream keepalives aren't being re-used:

proxy_http_version 1.1;

upstream dev1 {
    server 10.0.0.11 max_fails=0;
    keepalive 1024;
}

location / {
    proxy_pass http://dev1;
    proxy_set_header Connection "";
}

On a separate server I run 'ab -n 500 -c 500 http://10.0.0.10/test/blah.txt' a few times, waiting say 10 seconds between runs
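A minimal sketch of the setup under discussion, with the per-worker arithmetic from the reply made explicit (the worker count and limits here are illustrative assumptions, not from the original post):

```nginx
# keepalive caps *idle* upstream connections per worker process,
# so with 4 workers and keepalive 256, up to 4 x 256 = 1024 idle
# connections can be cached in total across the server.
worker_processes 4;

http {
    upstream dev1 {
        server 10.0.0.11 max_fails=0;
        keepalive 256;   # per worker, not a global cap
    }

    server {
        listen 80;
        location / {
            proxy_http_version 1.1;          # keepalive needs HTTP/1.1 upstream
            proxy_set_header Connection "";  # and the Connection header cleared
            proxy_pass http://dev1;
        }
    }
}
```

Note the directive limits idle (cached) connections only; it does not cap the total number of connections a worker may open under load.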
Hi
I'm using the upstream module with the sole purpose of enabling keepalives to my backend. I don't want to use any of the other features, and I only have 1 server in the upstream {} block. Does that mean max_fails is still being used (defaults to 1)? And fail_timeout etc.? They both have default values. What happens if they are "all" marked as down? If 10.100.0.11 is down, I would like it to
On 06/08/13 04:02, Dennis Jacobfeuerborn wrote:
> Since I determine the reason for the denied access in lua a way to do it there would also help. I already tried "nginx.status = 403" followed by a "nginx.exec('/reason1')" but while the right page is displayed the status code returned gets reset to 200.

Hi
You can do it in lua.. you need to do it in the he
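The reply above is cut off, but a hedged sketch of the usual workaround with lua-nginx-module (location name and message are illustrative): set ngx.status inside the target location rather than before the internal redirect, since ngx.exec() starts a fresh handler for the new location and the earlier status is lost.

```nginx
location /reason1 {
    internal;
    content_by_lua_block {
        -- set the status *after* the internal redirect, inside the
        -- target location; setting it before ngx.exec() gets reset to 200
        ngx.status = 403
        ngx.say("access denied: reason 1")
        return ngx.exit(ngx.HTTP_OK)  -- finalize the response as emitted
    }
}
```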
On 05/08/13 21:13, Rangel, Raul wrote:
> The filesystem is AUFS. It's mounted inside of a docker container. So my assumption is that AUFS does not support writev? So I need to somehow mount a different filesystem?

Hi
I can't comment about AUFS, but you can change where those temp files are stored if you wanted to make a small partition dedicated as a temp directory: http://ngin
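The directive being referred to is proxy_temp_path; a one-line sketch (the mount point here is an illustrative assumption):

```nginx
# store proxy buffering temp files on a dedicated partition;
# the trailing numbers create a two-level subdirectory hierarchy
proxy_temp_path /mnt/nginx-tmp 1 2;
```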
Hi
There's no size limit; it will keep getting bigger until your disk is full.
Here's a script I use to rotate the log; run it from cron every hour. Hope it helps.

#!/bin/sh
# move the log aside, tell nginx to reopen it, then compress the old copy
PID=`cat /usr/local/nginx/logs/nginx.pid`
LOG="/usr/local/nginx/logs/access.log"
NOW=$(date +"%Y-%m-%d-%H-%M")
NEWLOG="${LOG}.${NOW}"
mv ${LOG} ${NEWLOG}
kill -USR1 ${PID}
gzip ${NEWLOG}

On 25/07/13 09:
The port in proxy_pass is not for listening/accepting incoming connections - it is for connecting outwards to another server/service. You must have something else (another httpd, probably not nginx) listening on 8009?

On 23/07/13 17:39, imran_kh wrote:
> Hello,
> I am using Nginx web server and getting error "502 bad gateway" while accessing some sites.
> I have obs
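To make the distinction concrete, a hedged sketch (the addresses are illustrative): nginx accepts clients on the port in listen and connects out to the port in proxy_pass, so a backend must already be accepting connections on the latter or a 502 results.

```nginx
server {
    listen 80;                             # nginx accepts clients here
    location / {
        proxy_pass http://127.0.0.1:8009;  # nginx connects out here;
                                           # something else must listen on 8009
    }
}
```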
Thanks all
I think I will just open another port (looks like 6121 is registered for spdy?) because I'm not using hostnames (only IPs) and I don't like redirects, so:

server {
    listen 80;
    listen 443 ssl;
    listen 6121 ssl spdy; # it will still fall back to https if the client doesn't support spdy
    location / {
        blah;
    }
}

Cheers

On 08/07/13 17:40, Sajan Parik
Hi
I'm trying to set up spdy so that I can choose whether or not to use it based on the server location that's accessed. As I understand it, the underlying protocol (http/https/spdy) is established first, before any request can be sent (i.e. before we know which location it will match). I know this example is totally impossible, but I would like to know if there is a real way of doing it:

server {
Hi
I already checked there; I'm getting a different error ("mp4 atom too large" != "mp4 moov atom is too large"). My error message seems to have been added in this patch: http://nginx.org/download/patch.2012.mp4.txt
In any case, the example given there is reasonable, as '12583268' is around 12MB, so increasing the limit by 2MB would not be an issue. However in my case, th
nginx version: nginx/1.4.1
built by gcc 4.2.1 20070831 patched
TLS SNI support enabled
configure arguments: --with-debug --with-http_ssl_module --with-http_stub_status_module --with-file-aio --with-http_flv_module --with-http_mp4_module --with-http_geoip_module --add-module=../../../lua-nginx-module

before you ask :)

On 28/06/13 17:11, Richard Kearsley wrote:
> Hi
> I use ngx_http_m
Hi
I use ngx_http_mp4_module quite heavily, and very occasionally I see this error for a few files:

mp4 atom too large:723640794

with the number differing. Is the number the size of the atom in bytes? If so, 723640794 is around 690MB, but the mp4 file is only around 150MB. The same file works with the "other" mp4 module. What can we do to find the problem?
Many thanks
Hi
I'm pretty sure I have found the cause: all the videos I see it happening on have short audio (the audio stops before the video does).

On 22/06/13 16:06, Richard Kearsley wrote:
> Hi
> I've been able to test a few videos myself and can see it happening
> Just to be clear, 99%+ seem to be fine and can seek right up to the end
> But on very few, seeking is only possible up to X seconds (X cou
Hi
I've been able to test a few videos myself and can see it happening. Just to be clear, 99%+ seem to be fine and can seek right up to the end. But on very few, seeking is only possible up to X seconds (X could be at any point in the video); seeking past X always triggers the error. However if I watch the video from start to end, it downloads the full thing and can be watched to the end (the file
Hi
nginx version: nginx/1.4.1
built by gcc 4.2.1 20070831 patched
TLS SNI support enabled
configure arguments: --with-debug --with-http_ssl_module --with-http_stub_status_module --with-file-aio --with-http_flv_module --with-http_mp4_module --with-http_geoip_module --add-module=../../../lua-nginx-module

Upgraded version, still have mp4 errors at the same frequency. What do the stts and stsc e
Hi
I'm using the mp4 module quite heavily, and very occasionally (once every minute or so on a busy website) there is an error written to error.log and status 500 returned in the access log:

42078#0: *5510811 start time is out mp4 stts samples in ... (mostly this error)
42072#0: *5524976 start time is out mp4 stsc chunks in ... (sometimes this error)

It happens on different videos and onl
On 06/05/13 18:47, Gee wrote:
> The frustrating thing here is that /tmp/fastcgi.socket does actually exist. I tried 'touch' and making sure 'wheel' has the appropriate permissions. The result of 'ls -la /tmp/fastcgi.socket' revealed nothing awry.
> Does anyone have any ideas/hints?
> To try and save time, here is my config:

See if you can conn
Hi
I read here that keepalives to the backend can be enabled with the upstream module (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive). But can they be used without defining an upstream block? Just a simple proxy_pass, as the backend is a variable in my case: 'proxy_pass $proxy_to;'
Many thanks

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
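As far as I know, the keepalive directive only exists inside an upstream {} block, so the usual approach is a one-server upstream per backend; a hedged sketch (upstream name and address are illustrative). Whether a variable proxy_pass can hit the connection cache depends on the variable resolving to a defined upstream name, so treat that part as an assumption to verify:

```nginx
upstream backend_one {
    server 10.0.0.11;
    keepalive 16;                        # idle connections cached per worker
}

server {
    listen 80;
    location / {
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # strip "Connection: close"
        proxy_pass http://backend_one;
    }
}
```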
Hi
Are you sure it's not the linux file/buffer cache that's using all your ram? (Does ps/top show nginx or the worker processes using it directly?) Linux and most other unix variants will fill up unused ram with cached copies of the most recently used files so they don't have to be read from disk each time. It's completely normal and expected behaviour :)

On 22/04/13 19:30, Joseph Cab
Hi
Is the max value specified in `open_file_cache` on a per-worker basis? E.g. if I set it to 20,000, will it cache 80,000 open fds with 4 workers?
Thanks
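For reference, a typical open_file_cache block (the values are illustrative assumptions, not a recommendation); if the cache is per worker, the worst case with 4 workers would be 4 x max descriptors, so worker_rlimit_nofile needs matching headroom:

```nginx
# cache open fds, sizes and mtimes for frequently served files;
# entries idle longer than 'inactive' are evicted
open_file_cache          max=20000 inactive=60s;
open_file_cache_valid    120s;  # re-check cached entries every 2 minutes
open_file_cache_min_uses 2;     # only cache files hit at least twice
open_file_cache_errors   on;    # also cache lookup errors (e.g. 404s)
```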
Any hacker would need to be inside your server, or to have some administrative access to the network, to find those ips.

On 06/04/13 15:01, Larry wrote:
> My concern is that a hacker is able to know my other ips over europe. My host is not a problem. The real deal is the outgoing packets I don't want external people to know where they are going to. It would defeat the whole purpo
If you run wireshark on your main box, you will be able to see the ips it connects to (but not the urls, because of https). However they would need to be logged into your box to run wireshark, and at that point they could just run a netstat command to find the ips it is connected to. If you mean, can the network operator find these ips? They can use tools like netflow/sflow on their switches and
Hi
That's a good idea, but I think it's not possible. The cache key is set before the request is sent to the backend, but the content length can only be known after the backend has responded (catch-22).

On 04/04/13 12:59, ntib1 wrote:
> Hi,
> I'd like to put $content_length in proxy_cache_key in order for nginx to check if the file has changed, and send the new file instead of the old one if that's the case.
> B
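For comparison, the usual kind of cache key is built only from values known before the upstream request is made, which is why a response header like the content length can't participate; a sketch:

```nginx
# all three variables are known at request time, before the
# backend is contacted - unlike the response's Content-Length
proxy_cache_key "$scheme$host$request_uri";
```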
Hi
I actually did some quite in-depth comparison of the splice() syscall (only available on linux, btw) between nginx and haproxy, and even wrote a small standalone proxy server that uses it. There was some improvement, but not on a scale that would make it a deciding factor. The thing that makes the most difference to forwarding is your network card, and whether it supports LRO (large receive offload)
Hi
I'm trying to tune 'kern.maxbcache' in the hope of increasing 'vfs.maxbufspace' so that more files can be stored in buffer memory on freebsd 9.1. It's suggested to tune this value here: http://serverfault.com/questions/64356/freebsd-performance-tuning-sysctls-loader-conf-kernel and here: http://wiki.nginx.org/FreeBSDOptimizations
However, I can't get the value of 'vfs.maxbufspace' to increase:
Hi
Many (MANY) people use php-fpm and it's fine. If you really need extra performance you should test it yourself on your own application (not hard to do) and see if proxying to apache actually gives any benefit.
What was the article that you read? You should probably do your own tests to work out the fastest way to do it, if you really need as many dynamic requests as possible. My thought at this point (after using nginx for 3+ years) is that I would avoid using apache - KISS!

On 17/02/13 13:42, mottwsc wrote:
> Thanks for the suggestion, Steve. I was working from that angle before based on adv
You should give a better example (a code test case) of what you want to do. If you are using lua then I'm sure there will be a solution.

On 11/02/13 18:21, amodpandey wrote:
> It should not and it does not! If the client does not send a cookie with name "abc" or "def" I do not assume nginx to have any variable with that name. Am I missing anything?
> Posted a
Yeah, it's a different (correct) definition here: http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format - should probably update the wiki to reflect it. I think $http_content_length would give the length of the response also (not the request). As Maxim says, they added it to the trunk a few days ago and I don't mind waiting for the next release :)
Thanks all

On 28/01/13 14:24, Jonathan Matthew
Hi,
It's not the same; $request_length is the length of what the client (browser) sent in its request for a file, i.e. the request headers plus the request body. I suppose I could loop through the headers and body and count their lengths.. but that's not ideal.

On 28/01/13 14:05, Jonathan Matthews wrote:
> I would think you could get something equivalent and useful out of $upstream_http_content_length, as per
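Assuming a build where $request_length is available, a hedged log_format sketch that captures both directions of the exchange (the format name and path are illustrative): $request_length for what the client sent, $bytes_sent for what nginx returned, and the upstream's Content-Length for comparison.

```nginx
# one line per request: client address, request line,
# bytes received from client, bytes sent to client,
# and the backend's advertised body length
log_format sizes '$remote_addr "$request" '
                 'req_len=$request_length sent=$bytes_sent '
                 'upstream_len=$upstream_http_content_length';

access_log /usr/local/nginx/logs/access.log sizes;
```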