B.R. Wrote:
> BREACH attacks the fact that compressed HTTP content encrypted with SSL
> makes it easy to guess a known existing header field from the request
> that is repeated in the (encrypted) answer by looking at the size of the
> body. The BREACH conclusion is: don't use HTTP compression underneath
> SSL encryption.
No, the conclusion is: don't echo back values supp…
by rmalayter - Nginx Mailing List - English
Maxim Dounin Wrote:
> Percentage values are stored in fixed point with 2 digits after the
> point. Configuration parsing will complain if you try to specify more
> digits after the point.
>
> > How many "buckets" does the hash table for split_clients have (it
> > doesn't seem to be configurable)?
>
> The split_clients algorithm doesn't…
by rmalayter - Nginx Mailing List - English
I'm looking for a way to do consistent hashing without any third-party modules or perl/lua. I came up with the idea of generating a split_clients block and a list of upstreams via script, so we can add or remove backends without blowing out the cache on every upstream when a backend server is added, removed, or otherwise offline. What I have looks like the config below. The example only includes 16 upstreams…
by rmalayter - Nginx Mailing List - English
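The generator script described above (but not included in the snippet) could be sketched roughly like this in Python. All names here are illustrative, not from the original config; percentages are kept to two decimals (split_clients stores no more than that), and the consistency comes from a plain md5 hash ring, so removing one backend only remaps the buckets that backend owned:

```python
import hashlib
from bisect import bisect

def ring(backends, replicas=160):
    """Place `replicas` virtual points per backend on a hash ring."""
    points = []
    for b in backends:
        for i in range(replicas):
            h = int(hashlib.md5(f"{b}#{i}".encode()).hexdigest(), 16)
            points.append((h, b))
    return sorted(points)

def split_clients_block(backends, buckets=100):
    """Emit an nginx split_clients block: the hash space is cut into
    equal percentage slices, and each slice goes to the backend owning
    the nearest ring point at or after the slice's midpoint."""
    r = ring(backends)
    maxhash = 2 ** 128  # size of the md5 output space
    share = 100.0 / buckets
    lines = ['split_clients "${request_uri}" $backend_pool {']
    for i in range(buckets):
        # representative hash value for this slice of the space
        h = int((i + 0.5) * maxhash / buckets)
        idx = bisect(r, (h,)) % len(r)  # first ring point >= h, wrapping
        lines.append(f"    {share:.2f}%    {r[idx][1]};")
    lines.append("}")
    return "\n".join(lines)

print(split_clients_block(["back1", "back2", "back3"], buckets=10))
```

The emitted block would be pasted (or included) into nginx.conf, with $backend_pool then used in proxy_pass; regenerating after removing a backend leaves every line that named a surviving backend unchanged.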
Jérôme Loyet Wrote:
> We were in the same situation and didn't want to take the risk of using
> a third-party module, so we switched to HTTP and it's just perfect.
>
> my 2 cents ;)
We did the same here. Now that nginx does keep-alives to the back-ends, there's really no advantage to using AJP between the web server and T…
by rmalayter - Nginx Mailing List - English
I forgot to mention using a smaller RSA key size. Use at most 2048 bits: 1024-bit RSA keys are no longer considered to have enough of a "security margin", and 4096 bits is super-overkill, though a lot of people choose that thinking "more bits is better" when generating a key.
by rmalayter - Nginx Mailing List - English
Almost all of this time in the SSL handshake is probably spent waiting for the network. But a factor of 10x seems unreasonable; I usually see 3x-4x latency increases for HTTPS compared with HTTP. Things to test:
1) Disable ephemeral Diffie-Hellman cipher suites (which real browsers don't use, but OpenSSL testing tools will, skewing your results).
2) Use RSA+SHA where you can. Theoretic…
by rmalayter - Nginx Mailing List - English
Alexandr Gomoliako Wrote:
> Here's a simple approach on consistent hashing with embedded perl:
> https://gist.github.com/2124034
Interesting. Clearly one could generate the upstream blocks via script. The only potential issues I foresee are: 1) performance, as this perl will be called for 1000+ requests per second, and there are going to be potentially many upstrea…
by rmalayter - Nginx Mailing List - English
Has anybody used ngx_http_upstream_consistent_hash with newer nginx releases (>1.0)? If so, is it possible to use ngx_http_upstream_consistent_hash with HTTP-based back-ends, or does it only work with memcached backends? The documentation isn't at all clear. We want to load-balance multiple static file servers behind nginx, and basing the upstream chosen on a consistent hash will drastically…
by rmalayter - Nginx Mailing List - English
bard Wrote:
> Thanks for the pointers. As I wrote, I'd rather avoid gzipping in the
> backend, but if that's the only option so be it.
There's no reason the "backend" for your caching layer cannot be another nginx server block running on a high port bound to localhost. This high-port server block could do gz…
by rmalayter - Nginx Mailing List - English
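The two-tier layout suggested above might look roughly like this. Port, cache-zone, and upstream names are illustrative, not from the original post; this is a sketch of the idea, not the poster's actual config:

```nginx
# Front tier: faces clients, caches whatever the back tier produced
# (including its Content-Encoding), does no gzipping of its own.
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:20080;
        proxy_cache my_cache;   # assumes an existing proxy_cache_path zone
    }
}

# Back tier on a high localhost port: this is where gzip happens.
server {
    listen 127.0.0.1:20080;
    gzip on;
    gzip_types text/css application/javascript application/json;
    gzip_proxied any;           # compress even when the response is proxied
    location / {
        proxy_pass http://real_backend;   # hypothetical upstream
    }
}
```

The point of the split is that the cache then stores already-compressed bodies, so the front tier never re-compresses on a cache hit.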
Just make sure the "Accept-Encoding: gzip" header is being passed to your back-end, and let the back end do the compression. We actually normalize the Accept-Encoding header as well, with an if statement. Also use the value of the Accept-Encoding header in your proxy_cache_key. This allows non-cached responses for those clients that don't support gzip (usually coming through an old, weird proxy)…
by rmalayter - Nginx Mailing List - English
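The post mentions doing the normalization with an if statement; a map block achieves the same thing and is a sketch of one way to set it up (cache zone, upstream, and variable names are made up for illustration):

```nginx
# Collapse every Accept-Encoding value down to two variants,
# so the cache holds at most one gzip and one identity copy per URL.
map $http_accept_encoding $ae {
    default    "";
    "~*gzip"   gzip;
}

server {
    listen 80;
    location / {
        # forward the normalized header so the back end compresses (or not)
        proxy_set_header Accept-Encoding $ae;
        proxy_pass http://backend;        # hypothetical upstream
        proxy_cache my_cache;             # assumes an existing cache zone
        # include the normalized encoding in the key: gzip-capable and
        # non-gzip clients get separate cache entries
        proxy_cache_key "$scheme$host$request_uri $ae";
    }
}
```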
sarim Wrote:
> Ryan Malayter Wrote:
> > nginx server on port 80/443 listening on public-facing IP
> > proxy_cache enabled
> > gzip enabled
> >   |
> >   V
> > nginx server on localhost port 20080
> > gzip enabl…
by rmalayter - Nginx Mailing List - English
agentzh Wrote:
> > http://wiki.nginx.org/NginxHttpHeadersMoreModule
>
> Thanks Cliff Wells for providing such an excellent facility that makes
> me much more motivated to write documentation :)
Great work. This will be invaluable for conditionally adding keywords like "public" to Cache-Cont…
by rmalayter - Nginx Mailing List - English
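A sketch of the kind of thing being described, using the module's more_set_headers directive (the location, status list, and header value here are made up for illustration):

```nginx
location /static/ {
    # headers-more replaces the header outright, so "public" can be
    # added regardless of what the upstream sent; the -s flag limits
    # the rewrite to the listed response status codes.
    more_set_headers -s "200 301" "Cache-Control: public, max-age=3600";
}
```

Unlike the stock add_header directive, this overwrites any existing Cache-Control value rather than appending a second header.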
Maxim Dounin Wrote:
> BTW, could you please explain why you need time in this particular
> format?
Many log parsing and event correlation tools need the time with milliseconds, which $time_local doesn't produce. However, many of these tools also cannot handle the "number of milliseconds since start of UNIX time"…
by rmalayter - Nginx Mailing List - English
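One way to bridge that gap in a log post-processor, sketched in Python: take an epoch-seconds value with a fractional part (the shape of nginx's $msec variable) and render it as a $time_local-style local timestamp with the milliseconds appended. The function name is mine, not from the post:

```python
from datetime import datetime

def fmt_msec(msec: str) -> str:
    """Turn an epoch value like "1257206409.123" into a local-time
    string in nginx's day/Mon/year:H:M:S shape, plus .milliseconds."""
    secs, _, frac = msec.partition(".")
    ms = (frac + "000")[:3]               # pad/truncate fraction to 3 digits
    t = datetime.fromtimestamp(int(secs)) # local time, like $time_local
    return t.strftime("%d/%b/%Y:%H:%M:%S") + "." + ms

print(fmt_msec("1257206409.123"))
```

The output depends on the machine's timezone, which matches how $time_local behaves.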
I was testing nginx 0.8.21 on Windows 2003 R2 x64 edition in a reverse-proxy configuration. Twice during testing, nginx went completely nuts on me, using 100% CPU and dumping many GB to its error log. There were only about 20 or so testers using the system when this happened. The error logs looked like this:
2009/11/15 00:00:09 304#3764: signal process started
2009/11/15 00:00:1…
by rmalayter - Nginx Mailing List - English
I think I've run into a Windows-specific bug with access logging and multiple worker processes with file locking. Whenever I have access logging enabled (at the http or server level) and I increase the number of worker processes to more than 1, nginx 0.8.19 will not respond to any requests (although it does accept connections on port 80). I have nginx set up as a proxy to a legacy web appl…
by rmalayter - Nginx Mailing List - English