Hello, I got it. Here is the error log entry, the system information, and the backtrace from the core dump:
2011/08/26 18:59:27 [alert] 6564#0: worker process 6565 exited on signal 11 (core dumped)
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=11.04
DISTRIB_CODENAME=natty
DISTRIB_DESCRIPTION="Ubuntu 11.04"
=====================================================
uname -a
Linux sunlight 2.6.38-10-server #46-Ubuntu SMP Tue Jun 28 16:31:00 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
=====================================================
nginx -v
nginx: nginx version: nginx/1.1.0
=====================================================
gdb `which nginx` core
GNU gdb (Ubuntu/Linaro 7.2-1ubuntu11) 7.2
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /usr/sbin/nginx...done.
BFD: Warning: /var/www/ngx_coredump/core is truncated: expected core file size >= 2724024320, found: 2103255040.
[New Thread 6565]
Cannot access memory at address 0x7f9558f332c8
Cannot access memory at address 0x7f9558f332c8
Cannot access memory at address 0x7f9558f332c8
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Failed to read a valid object file image from memory.
Core was generated by `nginx:'.
Program terminated with signal 11, Segmentation fault.
#0 0x000000000040cff5 in ngx_vslprintf (buf=Cannot access memory at address 0x7ffffd348fc8
) at src/core/ngx_string.c:253
253 while (*p && buf < last) {
(gdb) bt
#0 0x000000000040cff5 in ngx_vslprintf (buf=Cannot access memory at address 0x7ffffd348fc8
) at src/core/ngx_string.c:253
Cannot access memory at address 0x7ffffd349068
(gdb) backtrace full
#0 0x000000000040cff5 in ngx_vslprintf (buf=Cannot access memory at address 0x7ffffd348fc8
) at src/core/ngx_string.c:253
p = <error reading variable p (Cannot access memory at address 0x7ffffd349050)>
zero = <error reading variable zero (Cannot access memory at address 0x7ffffd34905f)>
d = <error reading variable d (Cannot access memory at address 0x7ffffd349058)>
f = <error reading variable f (Cannot access memory at address 0x7ffffd349048)>
scale = <error reading variable scale (Cannot access memory at address 0x7ffffd349040)>
len = <error reading variable len (Cannot access memory at address 0x7ffffd348fe8)>
slen = <error reading variable slen (Cannot access memory at address 0x7ffffd349038)>
i64 = <error reading variable i64 (Cannot access memory at address 0x7ffffd349030)>
ui64 = <error reading variable ui64 (Cannot access memory at address 0x7ffffd349028)>
ms = <error reading variable ms (Cannot access memory at address 0x7ffffd348fd8)>
width = <error reading variable width (Cannot access memory at address 0x7ffffd349020)>
sign = <error reading variable sign (Cannot access memory at address 0x7ffffd349018)>
hex = <error reading variable hex (Cannot access memory at address 0x7ffffd349010)>
max_width = <error reading variable max_width (Cannot access memory at address 0x7ffffd349008)>
frac_width = <error reading variable frac_width (Cannot access memory at address 0x7ffffd349000)>
n = <error reading variable n (Cannot access memory at address 0x7ffffd348ff8)>
v = <error reading variable v (Cannot access memory at address 0x7ffffd348ff0)>
vv = <error reading variable vv (Cannot access memory at address 0x7ffffd348fe0)>
Cannot access memory at address 0x7ffffd349068
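=====================================================
The BFD warning above says the core file is truncated (gdb expected >= 2724024320 bytes but only 2103255040 were written), which is why nearly every frame reports "Cannot access memory". Below is a minimal sketch of the core-dump settings from the Debugging wiki ([1] in the quoted message below), assuming the truncation comes from the worker core-size limit; the 4000M value is only a placeholder picked to be larger than the core gdb expected:

# main (top-level) context of nginx.conf
worker_rlimit_core  4000M;                   # raise RLIMIT_CORE for worker processes
working_directory   /var/www/ngx_coredump/;  # workers' cwd, where core files are written

The directory also needs enough free disk space to hold the full ~2.7 GB core.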
=====================================================
Maxim Dounin Wrote:
-------------------------------------------------------
> Hello!
>
> On Fri, Aug 26, 2011 at 03:08:10AM -0400, magicbear wrote:
>
> > Hello, I found a bug: the stub_status module shows that the connection
> > and waiting counters only increase and never decrease, but in fact the
> > system does not have that many connections.
> > Here is the trend chart: http://m-b.cc/tmp/bugs.png
>
> What's in the error log?
>
> The graph suggests there are worker processes dying for some reason;
> there should be messages like "[alert] ... worker process ... exited
> on signal ..." in the global error log.
>
> If there are such messages, please follow [1] to obtain a core dump
> and provide a backtrace. Some more details, as outlined in [2], may
> also be helpful.
>
> [1] http://wiki.nginx.org/Debugging#Core_dump
> [2] http://wiki.nginx.org/Debugging#Asking_for_help
>
> Maxim Dounin
>
> >
> > Maxim Dounin Wrote:
> > -------------------------------------------------------
> > > Hello!
> > >
> > > On Wed, Aug 24, 2011 at 01:11:43PM -0400, magicbear wrote:
> > >
> > > > Thanks for your hard work. I have found that when using an https
> > > > backend it won't work; the server directly closes the connection.
> > > >
> > > > curl --head 'http://localhost/track.js'
> > > > curl: (52) Empty reply from server
> > >
> > > Yes, thank you for the report.
> > >
> > > Keeping https connections alive will require additional support both
> > > in the nginx core and in the upstream keepalive module. You may try
> > > the attached patches, also available here:
> > >
> > > http://mdounin.ru/files/patch-nginx-keepalive-https.txt
> > > http://mdounin.ru/files/patch-nginx-keepalive-https-module.txt
> > >
> > > The first one is for nginx itself and should be applied after the
> > > keepalive patch.
> > >
> > > The second one (actually, two patches grouped in one file) is for the
> > > upstream keepalive module.
> > >
> > > Maxim Dounin
> > > # HG changeset patch
> > > # User Maxim Dounin <mdounin@mdounin.ru>
> > > # Date 1314229425 -14400
> > > # Node ID ac0a7fd4de491e64d42f218691b681f7b3fa931b
> > > # Parent  e865cb2cc06a88c01a439bfdd0d0d7dec54713f0
> > > Upstream: create separate pool for peer connections.
> > >
> > > This is required to support persistent https connections as various ssl
> > > structures are allocated from connection's pool.
> > >
> > > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
> > > --- a/src/http/ngx_http_upstream.c
> > > +++ b/src/http/ngx_http_upstream.c
> > > @@ -1146,8 +1146,17 @@ ngx_http_upstream_connect(ngx_http_reque
> > >      c->sendfile &= r->connection->sendfile;
> > >      u->output.sendfile = c->sendfile;
> > >
> > > -    c->pool = r->pool;
> > > +    if (c->pool == NULL) {
> > > +        c->pool = ngx_create_pool(128, r->connection->log);
> > > +        if (c->pool == NULL) {
> > > +            ngx_http_upstream_finalize_request(r, u,
> > > +                                               NGX_HTTP_INTERNAL_SERVER_ERROR);
> > > +            return;
> > > +        }
> > > +    }
> > > +
> > >      c->log = r->connection->log;
> > > +    c->pool->log = c->log;
> > >      c->read->log = c->log;
> > >      c->write->log = c->log;
> > >
> > > @@ -2912,6 +2921,7 @@ ngx_http_upstream_next(ngx_http_request_
> > >      }
> > >  #endif
> > >
> > > +        ngx_destroy_pool(u->peer.connection->pool);
> > >          ngx_close_connection(u->peer.connection);
> > >      }
> > >
> > > @@ -3006,6 +3016,7 @@ ngx_http_upstream_finalize_request(ngx_h
> > >                         "close http upstream connection: %d",
> > >                         u->peer.connection->fd);
> > >
> > > +        ngx_destroy_pool(u->peer.connection->pool);
> > >          ngx_close_connection(u->peer.connection);
> > >      }
> > >
> > > # HG changeset patch
> > > # User Maxim Dounin <mdounin@mdounin.ru>
> > > # Date 1314229646 -14400
> > > # Node ID 67b12141506c6be2115b6b0aa151068188b97975
> > > # Parent  f3b50effc1d476b040908700bb772197d31fbd80
> > > Keepalive: set_session and save_session callbacks.
> > >
> > > diff --git a/ngx_http_upstream_keepalive_module.c b/ngx_http_upstream_keepalive_module.c
> > > --- a/ngx_http_upstream_keepalive_module.c
> > > +++ b/ngx_http_upstream_keepalive_module.c
> > > @@ -32,6 +32,11 @@ typedef struct {
> > >      ngx_event_get_peer_pt              original_get_peer;
> > >      ngx_event_free_peer_pt             original_free_peer;
> > >
> > > +#if (NGX_HTTP_SSL)
> > > +    ngx_event_set_peer_session_pt      original_set_session;
> > > +    ngx_event_save_peer_session_pt     original_save_session;
> > > +#endif
> > > +
> > >      ngx_uint_t                         failed;   /* unsigned:1 */
> > >
> > >  } ngx_http_upstream_keepalive_peer_data_t;
> > > @@ -59,6 +64,13 @@ static void ngx_http_upstream_free_keepa
> > >  static void ngx_http_upstream_keepalive_dummy_handler(ngx_event_t *ev);
> > >  static void ngx_http_upstream_keepalive_close_handler(ngx_event_t *ev);
> > >
> > > +#if (NGX_HTTP_SSL)
> > > +static ngx_int_t ngx_http_upstream_keepalive_set_session(
> > > +    ngx_peer_connection_t *pc, void *data);
> > > +static void ngx_http_upstream_keepalive_save_session(ngx_peer_connection_t *pc,
> > > +    void *data);
> > > +#endif
> > > +
> > >  static void *ngx_http_upstream_keepalive_create_conf(ngx_conf_t *cf);
> > >  static char *ngx_http_upstream_keepalive(ngx_conf_t *cf, ngx_command_t *cmd,
> > >      void *conf);
> > > @@ -182,6 +194,13 @@ ngx_http_upstream_init_keepalive_peer(ng
> > >      r->upstream->peer.get = ngx_http_upstream_get_keepalive_peer;
> > >      r->upstream->peer.free = ngx_http_upstream_free_keepalive_peer;
> > >
> > > +#if (NGX_HTTP_SSL)
> > > +    kp->original_set_session = r->upstream->peer.set_session;
> > > +    kp->original_save_session = r->upstream->peer.save_session;
> > > +    r->upstream->peer.set_session = ngx_http_upstream_keepalive_set_session;
> > > +    r->upstream->peer.save_session = ngx_http_upstream_keepalive_save_session;
> > > +#endif
> > > +
> > >      return NGX_OK;
> > >  }
> > >
> > > @@ -423,6 +442,29 @@ close:
> > >  }
> > >
> > >
> > > +#if (NGX_HTTP_SSL)
> > > +
> > > +static ngx_int_t
> > > +ngx_http_upstream_keepalive_set_session(ngx_peer_connection_t *pc, void *data)
> > > +{
> > > +    ngx_http_upstream_keepalive_peer_data_t  *kp = data;
> > > +
> > > +    return kp->original_set_session(pc, kp->data);
> > > +}
> > > +
> > > +
> > > +static void
> > > +ngx_http_upstream_keepalive_save_session(ngx_peer_connection_t *pc, void *data)
> > > +{
> > > +    ngx_http_upstream_keepalive_peer_data_t  *kp = data;
> > > +
> > > +    kp->original_save_session(pc, kp->data);
> > > +    return;
> > > +}
> > > +
> > > +#endif
> > > +
> > > +
> > >  static void *
> > >  ngx_http_upstream_keepalive_create_conf(ngx_conf_t *cf)
> > >  {
> > > # HG changeset patch
> > > # User Maxim Dounin <mdounin@mdounin.ru>
> > > # Date 1314229663 -14400
> > > # Node ID 3affea1af30649ac1934b01ab30d175abd4fb3be
> > > # Parent  67b12141506c6be2115b6b0aa151068188b97975
> > > Keepalive: destroy connection pool.
> > >
> > > diff --git a/ngx_http_upstream_keepalive_module.c b/ngx_http_upstream_keepalive_module.c
> > > --- a/ngx_http_upstream_keepalive_module.c
> > > +++ b/ngx_http_upstream_keepalive_module.c
> > > @@ -353,6 +353,7 @@ ngx_http_upstream_free_keepalive_peer(ng
> > >          item = ngx_queue_data(q, ngx_http_upstream_keepalive_cache_t,
> > >                                queue);
> > >
> > > +        ngx_destroy_pool(item->connection->pool);
> > >          ngx_close_connection(item->connection);
> > >
> > >      } else {
> > > @@ -437,6 +438,7 @@ close:
> > >      conf = item->conf;
> > >
> > >      ngx_queue_remove(&item->queue);
> > > +    ngx_destroy_pool(item->connection->pool);
> > >      ngx_close_connection(item->connection);
> > >      ngx_queue_insert_head(&conf->free, &item->queue);
> > >  }