Tony Curwen
November 14, 2012 07:02PM
unsubscribe

On Nov 14, 2012, at 7:26 PM, nginx-request@nginx.org wrote:

> Send nginx mailing list submissions to
> nginx@nginx.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://mailman.nginx.org/mailman/listinfo/nginx
> or, via email, send a message with subject or body 'help' to
> nginx-request@nginx.org
>
> You can reach the person managing the list at
> nginx-owner@nginx.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of nginx digest..."
>
>
> Today's Topics:
>
> 1. Can NGinx replace Varnish (gt420hp)
> 2. Re: Caucho Resin: faster than nginx? (Liu Lantao)
> 3. Chunked transfer encoding problem (Piotr Bartosiewicz)
> 4. Re: Chunked transfer encoding problem (Maxim Dounin)
> 5. Re: Can NGinx replace Varnish (António P. P. Almeida)
> 6. Re: Can NGinx replace Varnish (Francis Daly)
> 7. proxy_cache_valid for zero seconds (shmapty)
> 8. Connection reset by peer on first request (Cancer)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 14 Nov 2012 09:31:28 -0500
> From: "gt420hp" <nginx-forum@nginx.us>
> To: nginx@nginx.org
> Subject: Can NGinx replace Varnish
> Message-ID:
> <9531e96577bd46912b012ef518b4c69c.NginxMailingListEnglish@forum.nginx.org>
>
> Content-Type: text/plain; charset=UTF-8
>
> We are using Varnish in front of 3 load-balanced web servers running Apache.
> We had migrated from a hosting platform where we had 1 app server and 1
> database server using Varnish (Drupal 6.x) and had no issues. Now that we
> are running in a load-balanced environment (3 load-balanced Apache web
> servers, a Varnish server, and 1 database server) we are seeing multiple
> examples of caching issues (pages not displaying correctly, style issues,
> data input staying cached and used on another page, etc.).
>
> We think we can just replace the Varnish server and use an NGinx server. I
> don't necessarily want to remove all the Apache servers, but we have to get
> this caching issue corrected....
>
> any thoughts...?
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232796,232796#msg-232796
>
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 14 Nov 2012 23:06:05 +0800
> From: Liu Lantao <liulantao@gmail.com>
> To: nginx@nginx.org
> Subject: Re: Caucho Resin: faster than nginx?
> Message-ID:
> <CAO5q5ULUN9Qpd1Q+yyzCM22EMvXKdgYrXSJCBmVt=VzN5mek=g@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> We are running an nginx benchmark on a 10GbE network. For an empty page we
> get about 700k rps from nginx, compared with about 100k rps from Resin Pro.
>
> In Caucho's test they used an i7 (4 cores / 8 HT, 2.8 GHz, 8 MB cache, 8 GB
> RAM), while I use dual Intel E5645s. I think the result can be improved
> through some tuning.
>
> We tuned the server configuration and the nginx configuration, but didn't
> tune Resin much. We couldn't find the configuration used in Caucho's
> testing, for either nginx or Resin, so I wonder how to get Resin's rps
> above 100k.
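>
> For reference, here is a rough sketch of the kind of nginx tuning commonly
> applied for this sort of benchmark (the directive values below are
> illustrative assumptions, not the exact settings used here):
>
> worker_processes 24;           # one worker per hardware thread
> worker_rlimit_nofile 200000;
>
> events {
>     worker_connections 65536;
>     accept_mutex off;
> }
>
> http {
>     sendfile on;
>     tcp_nopush on;
>     tcp_nodelay on;
>     keepalive_requests 1000;
>     access_log off;
> }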
>
> On Sat, Aug 18, 2012 at 3:26 PM, Mike Dupont <jamesmikedupont@googlemail.com
>> wrote:
>
>> Resin Pro 4.0.29, so what's the point? We are talking about open source
>> software here, no?
>> mike
>>
>> On Sat, Aug 18, 2012 at 6:39 AM, Adam Zell <zellster@gmail.com> wrote:
>>> More details:
>>>
>> http://blog.caucho.com/2012/07/05/nginx-120-versus-resin-4029-performance-tests/
>>> .
>>>
>>> On Fri, Aug 17, 2012 at 10:14 PM, Mike Dupont
>>> <jamesmikedupont@googlemail.com> wrote:
>>>>
>>>> which version of resin did they use, the open source or pro version?
>>>> mike
>>>>
>>>> On Fri, Aug 17, 2012 at 11:18 PM, Adam Zell <zellster@gmail.com> wrote:
>>>>> FYI:
>>>>>
>>>>>
>> http://www.caucho.com/resin-application-server/press/resin-java-web-server-outperforms-nginx/
>>>>>
>>>>> " Using industry standard tool and methodology, Resin Pro web server
>> was
>>>>> put
>>>>> to the test versus Nginx, a popular web server with a reputation for
>>>>> efficiency and performance. Nginx is known to be faster and more
>>>>> reliable
>>>>> under load than the popular Apache HTTPD. Benchmark tests between
>> Resin
>>>>> and
>>>>> Nginx yielded competitive figures, with Resin leading with fewer
>> errors
>>>>> and
>>>>> faster response times. In numerous and varying tests, Resin handled
>> 20%
>>>>> to
>>>>> 25% more load while still outperforming Nginx. In particular, Resin
>> was
>>>>> able
>>>>> to sustain fast response times under extremely heavy load while Nginx
>>>>> performance degraded. "
>>>>>
>>>>> --
>>>>> Adam
>>>>> zellster@gmail.com
>>>>>
>>>>> _______________________________________________
>>>>> nginx mailing list
>>>>> nginx@nginx.org
>>>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>>
>>>>
>>>>
>>>> --
>>>> James Michael DuPont
>>>> Member of Free Libre Open Source Software Kosova http://flossk.org
>>>> Saving wikipedia(tm) articles from deletion
>>>> http://SpeedyDeletion.wikia.com
>>>> Contributor FOSM, the CC-BY-SA map of the world http://fosm.org
>>>> Mozilla Rep https://reps.mozilla.org/u/h4ck3rm1k3
>>>>
>>>> _______________________________________________
>>>> nginx mailing list
>>>> nginx@nginx.org
>>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>
>>>
>>>
>>>
>>> --
>>> Adam
>>> zellster@gmail.com
>>>
>>> _______________________________________________
>>> nginx mailing list
>>> nginx@nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>>
>>
>> --
>> James Michael DuPont
>> Member of Free Libre Open Source Software Kosova http://flossk.org
>> Saving wikipedia(tm) articles from deletion
>> http://SpeedyDeletion.wikia.com
>> Contributor FOSM, the CC-BY-SA map of the world http://fosm.org
>> Mozilla Rep https://reps.mozilla.org/u/h4ck3rm1k3
>>
>> _______________________________________________
>> nginx mailing list
>> nginx@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
>
>
> --
> Liu Lantao
> EMAIL: liulantao ( at ) gmail ( dot ) com ;
> WEBSITE: http://www.liulantao.com/portal .
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: http://mailman.nginx.org/pipermail/nginx/attachments/20121114/2268cd2a/attachment-0001.html
>
> ------------------------------
>
> Message: 3
> Date: Wed, 14 Nov 2012 17:42:49 +0100
> From: Piotr Bartosiewicz <piotr.bartosiewicz@firma.gg.pl>
> To: nginx@nginx.org
> Subject: Chunked transfer encoding problem
> Message-ID: <50A3CA09.1080709@firma.gg.pl>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> Hi,
>
> My nginx (1.2.4) config looks like this (relevant part):
>
> server {
>     listen 8888;
>
>     location / {
>         proxy_http_version 1.1;
>         proxy_pass http://localhost:8080;
>     }
> }
>
> The backend server handles GET requests and responds with a large body.
> The response is generated and sent on the fly, so the content length is
> not known at the beginning.
> In the normal case everything works fine.
>
> But sometimes the server catches an exception after the response headers
> have been sent.
> I've found that there is a commonly used way to inform a client about an
> incomplete response:
> use chunked Transfer-Encoding and close the socket without sending the
> last (0-length) chunk.
> Unfortunately nginx appends the termination chunk even when the backend
> server does not
> (both the client and backend connections are HTTP/1.1 and use chunked encoding).
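>
> To illustrate, on the wire a truncated chunked response would look roughly
> like this (payload sizes are hypothetical):
>
> HTTP/1.1 200 OK
> Transfer-Encoding: chunked
>
> 400
> ...1024 bytes of body data...
> 400
> ...1024 bytes of body data...
> [connection closed here, without the terminating "0" chunk]
>
> A client that sees the connection close before the final "0" chunk and its
> trailing CRLF can tell the response was truncated; if the proxy appends the
> terminator itself, that signal is lost.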
>
> Is this expected behavior, a bug, or is there an option to turn this off?
>
> Regards
> Piotr Bartosiewicz
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 14 Nov 2012 21:07:35 +0400
> From: Maxim Dounin <mdounin@mdounin.ru>
> To: nginx@nginx.org
> Subject: Re: Chunked transfer encoding problem
> Message-ID: <20121114170735.GZ40452@mdounin.ru>
> Content-Type: text/plain; charset=us-ascii
>
> Hello!
>
> On Wed, Nov 14, 2012 at 05:42:49PM +0100, Piotr Bartosiewicz wrote:
>
>> Hi,
>>
>> My nginx (1.2.4) config looks like this (relevant part):
>>
>> server {
>>     listen 8888;
>>
>>     location / {
>>         proxy_http_version 1.1;
>>         proxy_pass http://localhost:8080;
>>     }
>> }
>>
>> The backend server handles GET requests and responds with a large body.
>> The response is generated and sent on the fly, so the content length is
>> not known at the beginning.
>> In the normal case everything works fine.
>>
>> But sometimes the server catches an exception after the response headers
>> have been sent.
>> I've found that there is a commonly used way to inform a client about an
>> incomplete response:
>> use chunked Transfer-Encoding and close the socket without sending the
>> last (0-length) chunk.
>> Unfortunately nginx appends the termination chunk even when the backend
>> server does not
>> (both the client and backend connections are HTTP/1.1 and use chunked
>> encoding).
>>
>> Is this expected behavior, a bug, or is there an option to turn this off?
>
> This is a known bug of sorts. Fixing it would require a relatively large
> cleanup of the upstream module.
>
> --
> Maxim Dounin
> http://nginx.com/support.html
>
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 14 Nov 2012 18:39:41 +0100
> From: António P. P. Almeida <appa@perusio.net>
> To: nginx@nginx.org
> Subject: Re: Can NGinx replace Varnish
> Message-ID: <87bof0f5ia.wl%appa@perusio.net>
> Content-Type: text/plain; charset=US-ASCII
>
> On 14 Nov 2012 15h31 CET, nginx-forum@nginx.us wrote:
>
>> We are using Varnish in front of 3 load-balanced web servers running
>> Apache. We had migrated from a hosting platform where we had 1
>> app server and 1 database server using Varnish (Drupal 6.x) and had
>> no issues. Now that we are running in a load-balanced environment
>> (3 load-balanced Apache web servers, a Varnish server, and 1
>> database server) we are seeing multiple examples of caching
>> issues (pages not displaying correctly, style issues, data input
>> staying cached and used on another page, etc.).
>
> You can drop Varnish from the picture if something like microcaching
> suits you, or if you use ngx_cache_purge with the purge module. It depends
> on whether you have an active invalidation strategy or not. Either way,
> Nginx can replace Varnish and also work as the load balancer, so you'll
> have a simpler stack.
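>
> A rough microcaching sketch for a Drupal setup like this (the upstream
> addresses, the zone name and the session-cookie check are illustrative
> assumptions, not a drop-in config):
>
> proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m
>                  max_size=1g inactive=10m;
>
> # Bypass the cache for logged-in Drupal users (SESS* session cookie).
> map $http_cookie $no_micro {
>     default 0;
>     ~SESS   1;
> }
>
> upstream drupal_backends {
>     server 10.0.0.11;
>     server 10.0.0.12;
>     server 10.0.0.13;
> }
>
> server {
>     listen 80;
>
>     location / {
>         proxy_cache        microcache;
>         proxy_cache_key    $scheme$host$request_uri;
>         proxy_cache_valid  200 301 302 1s;
>         proxy_cache_use_stale updating error timeout;
>         proxy_no_cache     $no_micro;
>         proxy_cache_bypass $no_micro;
>         proxy_pass         http://drupal_backends;
>     }
> }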
>
>> We think we can just replace the Varnish server and use an NGinx
>> server. I don't necessarily want to remove all the Apache servers,
>> but we have to get this caching issue corrected....
>>
>> any thoughts...?
>
> Yep, see above. For Drupal-related Nginx issues there's a GDO group:
>
> http://groups.drupal.org/nginx
>
> if you want to delve deeper into the issue.
>
> --- appa
>
>
>
> ------------------------------
>
> Message: 6
> Date: Wed, 14 Nov 2012 18:32:53 +0000
> From: Francis Daly <francis@daoine.org>
> To: nginx@nginx.org
> Subject: Re: Can NGinx replace Varnish
> Message-ID: <20121114183253.GI24351@craic.sysops.org>
> Content-Type: text/plain; charset=us-ascii
>
> On Wed, Nov 14, 2012 at 09:31:28AM -0500, gt420hp wrote:
>
> Hi there,
>
>> we are seeing multiple
>> examples of caching issues (pages not displaying correctly, style
>> issues, data input staying cached and used on another page, etc.).
>>
>> We think we can just replace the Varnish server and use an NGinx server. I
>> don't necessarily want to remove all the Apache servers, but we have to get
>> this caching issue corrected....
>
> If the caching issues are because your backend servers are configured
> incorrectly, merely replacing Varnish with nginx is unlikely to fix
> everything.
>
> If they are because your Varnish is configured incorrectly, then
> replacing an incorrectly-configured Varnish with a correctly-configured
> nginx probably will help. But replacing it with a correctly-configured
> Varnish would probably also help.
>
> Good luck with it,
>
> f
> --
> Francis Daly francis@daoine.org
>
>
>
> ------------------------------
>
> Message: 7
> Date: Wed, 14 Nov 2012 16:09:44 -0500
> From: "shmapty" <nginx-forum@nginx.us>
> To: nginx@nginx.org
> Subject: proxy_cache_valid for zero seconds
> Message-ID:
> <f96c9d4c0fb2a6721186694a5c42c5f4.NginxMailingListEnglish@forum.nginx.org>
>
> Content-Type: text/plain; charset=UTF-8
>
> Greetings,
>
> I am trying to configure nginx proxy_cache so that it stores a cached copy
> of an HTTP response, but serves from cache *only* under the conditions
> defined by proxy_cache_use_stale.
>
> I have tried something like this without success:
>
> proxy_cache_valid 200 204 301 302 0s;
> proxy_cache_use_stale error timeout updating invalid_header
> http_500 http_502 http_504;
>
> "0s" appears to avoid caching completely. "1s" stores a cached copy, but
> presumably serves from cache for one second. I am trying to serve from
> cache only when the upstream errs.
>
> Thank you
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232815,232815#msg-232815
>
>
>
> ------------------------------
>
> Message: 8
> Date: Wed, 14 Nov 2012 18:26:55 -0500
> From: "Cancer" <nginx-forum@nginx.us>
> To: nginx@nginx.org
> Subject: Connection reset by peer on first request
> Message-ID:
> <f10a884927959154a88372ff01c2598f.NginxMailingListEnglish@forum.nginx.org>
>
> Content-Type: text/plain; charset=UTF-8
>
> Hi,
>
> I'm using Nginx with php-cgi. A problem arose recently where if you have
> not used my site for a few minutes and then go to it, the first request is
> always 'connection reset by peer'. If you refresh, everything functions
> normally until you leave for a few minutes and go to another link. It
> happens 100% of the time. Does anyone know what could be the problem?
>
> All I get in the error log with debug on are messages like this:
> 2012/11/14 17:25:46 [info] 3454#0: *2516 client prematurely closed
> connection while reading client request line, client: *, server: domain.com
>
> Also, these coincide with the 400 Bad Request errors in access.log. I have
> tried restarting the DNS server, nginx, php-cgi, etc., but to no avail.
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232817,232817#msg-232817
>
>
>
> ------------------------------
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> End of nginx Digest, Vol 37, Issue 28
> *************************************

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx