Re: Fw: Use Test::Nginx with etcproxy and/or valgrind (Was Re: Test::Nginx::LWP vs. Test::Nginx::Socket)

Antoine Bonavita (personal)
March 15, 2011 11:14AM
Hi agentzh,

I managed to migrate all my tests from my original python approach to
using Test::Nginx. I guess this is good news. However, I must say some
of it was a bit painful.

The main thing is probably a lack of documentation/examples on the
data sections accepted by Test::Nginx, especially for people who are
not familiar with Test::Base (like me). I understand you don't want to
duplicate the work done by the Test::Base guys, but at least pointers
to some useful tricks like filters and --- ONLY would help beginners.
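
For instance, as far as I understand it, dropping a "--- ONLY" line into a
block makes Test::Base run only that block, which is exactly the kind of
trick you want while debugging a single test. A minimal sketch (assuming
the echo module is available for the sample location):

    === TEST 7: the block I am currently debugging
    --- ONLY
    --- config
    location /debug { echo "debugging"; }
    --- request
    GET /debug
    --- response_body
    debugging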

As a side note to this, I don't see any benefit in having the
"request_eval" section. To me (at least in the tests I wrote),
"request_eval" can be replaced by "request eval" (that is, applying
eval to the data). Maybe you should get rid of the _eval versions, or
maybe I'm missing something...
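
For example, a block like the following seems to behave the same whether
it is written with "request eval" or "request_eval" (a sketch; the long
query string is only there to show why eval is handy, and the echo module
is assumed for the sample location):

    === TEST 2: request built with eval
    --- config
    location /foo { echo ok; }
    --- request eval
    "GET /foo?padding=" . ("x" x 512)
    --- response_body
    ok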

I actually wrote a few posts on the migration to Test::Nginx:
* http://www.nginx-discovery.com/2011/03/day-32-moving-to-testnginx.html
* http://www.nginx-discovery.com/2011/03/day-33-testnginx-pipelinedrequests.html

Another thing that annoyed me is that shuffle is "on" by default. I
find it more misleading than anything else (especially on your first
runs).
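
Disabling it per test file seems possible with something like this (a
sketch, assuming the no_shuffle() helper is available in the installed
Test::Nginx version):

    use Test::Nginx::Socket;

    no_shuffle();   # run the blocks in the order they are written in
    plan tests => repeat_each() * 2 * blocks();
    run_tests();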

After going through this exercise (and learning quite a few things in
the process), the things that I really think should be improved are:
* Being able to share one config amongst multiple tests.
* Being able to run multiple requests in one test. The
pipelined_requests section uses the same connection, which might not
be what I want. I was thinking of something more natural like: send
request 1, wait for response 1, check response 1, send request 2,
wait for response 2, check response 2, etc. (see the sketch below).
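
Right now the closest I found is pipelined_requests, which keeps
everything on one connection. A sketch (again assuming the echo module
for the sample locations):

    === TEST 3: two requests on the same connection
    --- config
    location /a { echo a; }
    location /b { echo b; }
    --- pipelined_requests eval
    ["GET /a", "GET /b"]
    --- response_body eval
    ["a\n", "b\n"]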

Of course, I am willing to help with these improvements, but I do not
want to start running all over the place without discussing it with
you first, as I'm likely to miss something really big.

Antoine.
--
Antoine Bonavita.
Follow my progress with nginx at: http://www.nginx-discovery.com

On Tue, Mar 15, 2011 at 3:43 PM, Antoine BONAVITA
<antoine_bonavita@yahoo.com> wrote:
>> From: Antoine BONAVITA <antoine_bonavita@yahoo.com>
>> To: agentzh <agentzh@gmail.com>
>> Cc: nginx-devel@nginx.org
>> Sent: Thu, March 3, 2011 2:57:44 PM
>> Subject: Re: Use Test::Nginx with etcproxy and/or valgrind (Was Re:
>> Test::Nginx::LWP vs. Test::Nginx::Socket)
>>
>> Agentzh,
>>
>> Thanks a lot, again. I'm going on a ski trip for a week or so.  I'll try that
>> when I come back.
>>
>> Antoine.
>>
>>
>>
>>
>> ----- Original Message -----
>> > From: agentzh <agentzh@gmail.com>
>> > To: Antoine BONAVITA <antoine_bonavita@yahoo.com>
>> > Cc: nginx-devel@nginx.org
>> > Sent: Thu, March 3, 2011 4:49:17 AM
>> > Subject: Use Test::Nginx with etcproxy and/or valgrind (Was Re:
>> > Test::Nginx::LWP vs. Test::Nginx::Socket)
>> >
>> > On Thu, Mar 3, 2011 at 12:37 AM, Antoine BONAVITA
>> > <antoine_bonavita@yahoo.com> wrote:
>> > > Following agentzh's tips, I'm moving the test cases for my module to
>> > > Test::Nginx (instead of using my python unit tests).
>> > >
>> >
>> > There are a lot of undocumented features in Test::Nginx::Socket, sorry.
>> > I'd like to document here a bit how to integrate it with etcproxy
>> > and/or valgrind because it's so useful ;)
>> >
>> > Use Test::Nginx::Socket with etcproxy
>> > =====================================
>> >
>> > Test::Nginx automatically starts an nginx instance (found via the PATH
>> > env) rooted at t/servroot/, and the default config template makes this
>> > nginx instance listen on port 1984 by default.
>> >
>> > The default settings in etcproxy [1] make this small TCP proxy split
>> > the TCP packets into individual bytes and introduce a 1 ms delay
>> > between them.
>> >
>> > There are various places in the TCP chain where we can put etcproxy,
>> > for example:
>> >
>> > Test::Nginx <=> nginx
>> > ---------------------
>> >
>> >    $ ./etcproxy 1234 1984
>> >
>> > Here we tell etcproxy to listen on port 1234 and to forward all the
>> > TCP traffic to port 1984, the default port that Test::Nginx makes
>> > nginx listen on.
>> >
>> > And then we tell Test::Nginx to run the tests against port 1234, where
>> > etcproxy is listening, rather than port 1984, which nginx listens on
>> > directly:
>> >
>> >    $ TEST_NGINX_CLIENT_PORT=1234 prove -r t/
>> >
>> > The TCP chain now looks like this:
>> >
>> >    Test::Nginx <=> etcproxy (1234) <=> nginx (1984)
>> >
>> > So etcproxy can effectively emulate extreme network conditions and
>> > exercise "unusual" code paths in your nginx server through your tests.
>> >
>> > In practice, *tons* of weird bugs can be caught with this setup.
>> > Even we didn't expect this simple approach to be so effective.
>> >
>> > nginx <=> memcached
>> > -------------------
>> >
>> > We first start the memcached server daemon on port 11211:
>> >
>> >    $ memcached -p 11211 -vv
>> >
>> > and then we start another etcproxy instance listening on port 11984:
>> >
>> >    $ ./etcproxy 11984 11211
>> >
>> > Then we tell our t/foo.t test script to connect to port 11984 rather
>> > than 11211:
>> >
>> >    # foo.t
>> >    use Test::Nginx::Socket;
>> >
>> >    repeat_each(1);
>> >    plan tests => 2 * repeat_each() * blocks();
>> >    $ENV{TEST_NGINX_MEMCACHED_PORT} ||= 11211;  # make this env take a default value
>> >    run_tests();
>> >
>> >    __DATA__
>> >
>> >    === TEST 1: sanity
>> >    --- config
>> >    location /foo {
>> >        set $memc_cmd set;
>> >        set $memc_key foo;
>> >        set $memc_value bar;
>> >        memc_pass 127.0.0.1:$TEST_NGINX_MEMCACHED_PORT;
>> >    }
>> >    --- request
>> >        GET /foo
>> >    --- response_body_like: STORED
>> >
>> > The Test::Nginx library will automatically expand the special macro
>> > "$TEST_NGINX_MEMCACHED_PORT" to the value of the environment variable
>> > with the same name. You can define your own $TEST_NGINX_BLAH_BLAH_PORT
>> > macros as long as their names start with the TEST_NGINX_ prefix and
>> > are all in upper-case letters.
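
For example, a made-up TEST_NGINX_BACKEND_PORT macro (the name is purely
illustrative) could be wired up the same way. Test::Nginx substitutes the
value textually before nginx ever reads the generated config, so any
directive that takes a port, such as the stock proxy_pass, works here:

    # in the test script, before run_tests():
    $ENV{TEST_NGINX_BACKEND_PORT} ||= 8080;   # fall back to 8080 when unset

    # and in a --- config section:
    location /backend {
        proxy_pass http://127.0.0.1:$TEST_NGINX_BACKEND_PORT;
    }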
>> >
>> > And now we can run the test script against the etcproxy port 11984:
>> >
>> >    $ TEST_NGINX_MEMCACHED_PORT=11984 prove t/foo.t
>> >
>> > Then the TCP chain looks like this:
>> >
>> >    Test::Nginx <=> nginx (1984) <=> etcproxy (11984) <=> memcached (11211)
>> >
>> > If TEST_NGINX_MEMCACHED_PORT is not set, then it will take the default
>> > value 11211, which is what we want when there's no etcproxy configured:
>> >
>> >    Test::Nginx <=> nginx (1984) <=> memcached (11211)
>> >
>> > This approach also works for proxied mysql and postgres traffic.
>> > Please see the live test suites of ngx_drizzle and ngx_postgres for
>> > more details.
>> >
>> > Usually we set both TEST_NGINX_CLIENT_PORT and TEST_NGINX_MEMCACHED_PORT
>> > (etc.) at the same time, effectively yielding the following chain:
>> >
>> >    Test::Nginx <=> etcproxy (1234) <=> nginx (1984) <=> etcproxy
>> >    (11984) <=> memcached (11211)
>> >
>> > as long as you run two separate etcproxy instances in two separate
>> > terminals.
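
Putting the pieces together, that setup can be reproduced with three
terminals (these are the same commands and environment variables shown
above):

    # terminal 1: etcproxy in front of nginx
    $ ./etcproxy 1234 1984

    # terminal 2: etcproxy in front of memcached
    $ ./etcproxy 11984 11211

    # terminal 3: run the tests through both proxies
    $ TEST_NGINX_CLIENT_PORT=1234 TEST_NGINX_MEMCACHED_PORT=11984 prove -r t/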
>> >
>> > It's easy to verify whether the traffic actually goes through your
>> > etcproxy server: just check whether the terminal running etcproxy
>> > emits any output. By default, etcproxy always dumps the incoming and
>> > outgoing data to stdout/stderr.
>> >
>> > Use Test::Nginx::Socket with valgrind memcheck
>> > ==============================================
>> >
>> > Test::Nginx has integrated support for valgrind [2], even though by
>> > default it does not bother running it with the tests because valgrind
>> > will significantly slow down the test suite.
>> >
>> > First ensure that your valgrind executable is visible in your PATH
>> > env, and then run your test suite with the TEST_NGINX_USE_VALGRIND env
>> > set to true:
>> >
>> >    $ TEST_NGINX_USE_VALGRIND=1 prove -r t
>> >
>> > If you see false alarms, you have a chance to skip them by defining
>> > a ./valgrind.suppress file at the root of your module source tree,
>> > as in
>> >
>> > https://github.com/chaoslawful/drizzle-nginx-module/blob/master/valgrind.suppress
>> >
>> > This is the suppression file for ngx_drizzle. Test::Nginx will
>> > automatically use it to start nginx with valgrind memcheck if this
>> > file exists at the expected location.
>> >
>> > If you do see a lot of "Connection refused" errors while running the
>> > tests this way, then you probably have a slow (or very busy) machine
>> > for which the default waiting time is not sufficient for valgrind to
>> > start. You can set the sleep time to a larger value via the
>> > TEST_NGINX_SLEEP env:
>> >
>> >    $ TEST_NGINX_SLEEP=1 prove -r t
>> >
>> > The time unit used here is seconds. The default sleep setting just
>> > fits my ThinkPad (Core2Duo T9600).
>> >
>> > Applying the no-pool patch to your nginx core is recommended while
>> > running nginx with valgrind:
>> >
>> >    https://github.com/shrimp/no-pool-nginx
>> >
>> > The nginx memory pool can prevent valgrind from spotting lots of
>> > invalid memory reads/writes as well as certain double-free errors. We
>> > did find a lot more memory issues in many of our modules when we first
>> > introduced the no-pool patch in practice ;)
>> >
>> > There are also more advanced features in Test::Nginx that have never
>> > been documented. I'd like to write more about them in the near future ;)
>> >
>> >  Cheers,
>> > -agentzh
>> >
>> > References
>> >
>> > [1] etcproxy: https://github.com/chaoslawful/etcproxy
>> > [2] valgrind: http://valgrind.org/
>> >

_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://nginx.org/mailman/listinfo/nginx-devel