On Thu, Oct 29, 2009 at 01:38:24PM +0300, Maxim Dounin wrote:
> Hello!
>
> On Thu, Oct 29, 2009 at 09:50:25AM +0300, Igor Sysoev wrote:
>
> > On Thu, Oct 29, 2009 at 11:38:17AM +0900, Zev Blut wrote:
> >
> > > Hello,
> > >
> > > On 10/10/2009 01:42 AM, Igor Sysoev wrote:
> > > > On Fri, Oct 09, 2009 at 08:26:32PM +0400, Igor Sysoev wrote:
> > > >
> > > >> I have got these results via localhost:
> > > >>
> > > >> ab -n 30000 -c 10 ~8200 r/s
> > > >> ab -n 30000 -c 10 -k ~20000 r/s
> > > >>
> > > >> This means that this microbenchmark tests mostly TCP connection
> > > >> establishment via localhost: keepalive is 2.4 times faster.
> > > >
> > > > BTW, using embedded perl:
> > > >
> > > > server {
> > > >     listen      8010;
> > > >     access_log  off;
> > > >
> > > >     location = /test {
> > > >         perl 'sub {
> > > >             my $r = shift;
> > > >             $r->send_http_header("text/html");
> > > >             $r->print("<h1>Hello ", $r->variable("arg_name"), "</h1>");
> > > >             return OK;
> > > >         }';
> > > >     }
> > > > }
> > > >
> > > > "ab -n 30000 -c 10 -k" has got ~7800 r/s.
> > >
> > > In case you are curious, John has posted an update
> > > comparing teepeedee2 vs the above perl module on his laptop.
> > > Here is the link:
> > >
> > > http://john.freml.in/teepeedee2-vs-nginx
> >
> > For some reason, he ran "ab -c1" instead of "ab -c10", while nginx can
> > run perl in 2 workers on a Core2 Duo (if worker_processes is 2). I believe
> > this would double the benchmark result. Second, he still mostly tests TCP
> > connection establishment via localhost instead of server speed. Why can he
> > not run the benchmark with keepalive?
>
> Well, it's the "useless benchmarks about nothing" game as
> presented by Alex Kapranoff at the last Highload++ conference.
> It's not about server speed, it's about lots of useless numbers
> and fun. The key thing is to keep the benchmarks as equal as
> possible, so using keepalive here is not an option, since he
> didn't use it in the previous benchmarks.
>
> Using "-c1" instead of "-c10" (as used in original post) looks
> like a bug which rendered new results completely irrelevant. So
> nothing to talk about.

BTW, the benchmark is really strange: first he mentions the C10K problem
(10,000 simultaneous connections), but then talks about a record of 10,000
requests per second via just 10 simultaneous connections. These are very,
very different things. I believe only varnish and nginx in this set are
able to handle C10K at all. As to varnish, I do not understand what it does
in the benchmark at all. As I understand it, varnish is only a caching
proxy server and it cannot generate dynamic responses (except error pages).
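
For reference, a minimal configuration sketch for accepting on the order of
10,000 simultaneous keepalive connections could look roughly like the one
below; the numbers are illustrative assumptions, not settings taken from the
benchmark above.

worker_processes  2;

events {
    # each worker must be able to hold its share of the simultaneous
    # connections; typical defaults are far lower than this
    worker_connections  16384;
}

http {
    # keep idle client connections open, so the test measures server
    # speed rather than TCP connection establishment via localhost
    keepalive_timeout  65;

    server {
        listen      8010;
        access_log  off;

        # dynamic responses would go here, e.g. the embedded perl
        # location from the earlier message
    }
}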
--
Igor Sysoev
http://sysoev.ru/en/