I am sorry, I made the patch file in the wrong direction; this is the correct version.

--- a/ngx_cache_purge_module.c 2012-06-28 05:08:44.000000000 +0800
+++ b/ngx_cache_purge_module.c 2011-12-20 20:36:20.000000000 +0800
@@ -41,7 +41,7 @@ char *ngx_http_fastcgi_cache_purge
 ngx_int_t ngx_http_fastcgi_cache_purge_handler(ngx_http_request_t *r);
 # endif /* NGX_HTTP_FASTCGI */
-# if (NGX_HTTP_PROXY)
+# if (NGX_HTTP_PROXY || nginx_version >= 1003002)

by magicbear - Nginx Development
--- a/ngx_cache_purge_module.c 2012-06-28 05:08:44.000000000 +0800
+++ b/ngx_cache_purge_module.c 2011-12-20 20:36:20.000000000 +0800
@@ -41,7 +41,7 @@ char *ngx_http_fastcgi_cache_purge
 ngx_int_t ngx_http_fastcgi_cache_purge_handler(ngx_http_request_t *r);
 # endif /* NGX_HTTP_FASTCGI */
-# if (NGX_HTTP_PROXY || nginx_version >= 1003002)
+# if (NGX_HTTP_PROXY)
 char *ngx_http

by magicbear - Nginx Development
Here is the crash bt.

GNU gdb (Ubuntu/Linaro 7.3-0ubuntu2) 7.3-2011.08
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GD

by magicbear - Nginx Development
Maybe try: set $var "\$uri=200k"

2011/11/18 bigplum <nginx-forum@nginx.us>:
> nginx.conf:
> location / {
>     ....
>     set $var "$uri=200k"
>     ....
>
> and run "curl localhost/"
>
> The module uses ngx_http_get_indexed_variable() for $var and gets the
> string "/=200k", but I want the value to be "$uri=200k

by magicbear - Nginx Mailing List - English
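For nginx versions where a backslash does not reliably escape "$" inside a set value, a commonly cited workaround is to define a variable that holds a literal dollar sign via geo. This is a sketch, not from the original thread; the variable names $dollar and $var are illustrative assumptions:

```nginx
# Sketch of the widely used geo trick for a literal "$" in a value.
# Values inside a geo block are not variable-expanded, so $dollar
# ends up holding the single character "$".
geo $dollar {
    default "$";
}

server {
    listen 8080;
    location / {
        # $var now holds the literal string "$uri=200k",
        # not the expanded request URI.
        set $var "${dollar}uri=200k";
        return 200 "$var\n";
    }
}
```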
Because nginx is not thread-safe, this version will cause a segmentation fault. Bad news; it needs further improvement.

2011/11/17 MagicBear <magicbearmo@gmail.com>:
> Download
> =====
> http://m-b.cc/share/ngx_batch_purge-0.2.tar.gz
>
> About
> =====
> nginx_cache_batchpurge is an nginx module which adds regex batch purging of content
> from 'proxy' caches, and is base

by magicbear - Nginx Mailing List - English
Download
=====
http://m-b.cc/share/ngx_batch_purge-0.2.tar.gz

About
=====
nginx_cache_batchpurge is an nginx module which adds regex batch purging of content from 'proxy' caches, and is based on `ngx_cache_purge`.

Notice
=====
This module is only for batch purging; if you want to purge a single object, please install the ngx_cache_purge module. This module can coexist with that module.

Install
==

by magicbear - Nginx Mailing List - English
I have tried this, but after it works the first time, subsequent runs produce errors. I am thinking of spawning a process at startup, or of using the nginx cache manager process, to do this.

Piotr Sikora <piotr.sikora@frickle.com> wrote on 2011-11-17 1:32:
> Hey,
>
>> P.S: This plugin currently blocks web requests; for now it is only for testing.
>
> I know you've got good intentio

by magicbear - Nginx Mailing List - English
This plugin is based on ngx_cache_purge-1.4.

P.S.: This plugin currently blocks web requests; for now it is only for testing. I cannot find a way to make nginx run a new thread.

Usage:

location ~ ^/regex_purge(/.*) {
    proxy_cache_batchpurge cache_zone $1$is_args$args;
}

download: http://m-b.cc/share/ngx_batch_purge-0.1.tgz

--
MagicBear

by magicbear - Nginx Mailing List - English
Hello, I think it is also not related to the crash: when I increased the zone key size the crash stopped happening (but my upstreams were also back up), so I still do not know where the crash happens. I have just updated all of my servers to 1.1.8. I want to test your config, because some of my servers run dynamic pages and those do not return a Content-Length.

2011/11/15 Maxim Dounin <mdouni

by magicbear - Nginx Development
2011/11/13 Maxim Dounin <mdounin@mdounin.ru>:
> Hello!
>
> On Sat, Nov 12, 2011 at 10:07:22PM +0800, MagicBear wrote:
>
>> It happens when one of the main upstream servers is dead.
>>
>> Here is the config
>>
>> proxy_cache_path /dev/shm/cdn_cache_comment levels=1:2
>> keys_zone=cache_comment_mem:32m max_size=128m;
>
> [...]
>
>> prox

by magicbear - Nginx Development
It happens when one of the main upstream servers is dead.

Here is the config:

proxy_cache_path /dev/shm/cdn_cache_comment levels=1:2 keys_zone=cache_comment_mem:32m max_size=128m;

limit_req_zone $binary_remote_addr zone=limit_comment:16m rate=50r/s;

upstream backend_comment {
    server 10.0.0.1 weight=10 fail_timeout=30s;
    server 10.0.0.2 backup weight=5 fail_timeout=30s;
    keepalive 30;
}

map $http_accept_en

by magicbear - Nginx Development
2011/11/12 19:00:16 7552#0: ignore long locked inactive cache entry 26b0312d67bd41ef132ce5b8a4445ffa, count:1
2011/11/12 19:02:17 7552#0: ignore long locked inactive cache entry ac307ce9b33a01a04f4f17c187d9b11a, count:1
2011/11/12 19:02:45 7552#0: ignore long locked inactive cache entry e5fa15e3f856238feb5e0b7128120e20, count:1
2011/11/12 19:03:59 7552#0: ignore long locked inactive cache entr

by magicbear - Nginx Development
I increased the worker count to 12 and got the results below; I think that may be the maximum.

cat logger | sed 's/||/ /g' | awk '{print $3}' | sed 's/\.[0-9]\+//g' | sort | uniq -c
  58423 1317950330
  85703 1317950331
 116036 1317950332
 115995 1317950333
 116070 1317950334
 120604 1317950335
 119080 1317950336
 118695 1317950337
 118231 1317950338
 114383 1317950339
 104594 1317950340
 103047 13179

by magicbear - Nginx Mailing List - English
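The pipeline above buckets a "||"-delimited log by the integer part of the epoch timestamp in its third field, which gives a requests-per-second count. A self-contained sketch of the same idea, run on invented sample lines (the real log format is an assumption):

```shell
# Requests-per-second from a "||"-delimited log whose 3rd field is a
# fractional epoch timestamp. The three sample lines are made up.
printf '%s\n' \
  'a||b||1317950330.12||x' \
  'c||d||1317950330.98||y' \
  'e||f||1317950331.50||z' \
| sed 's/||/ /g' \
| awk '{print int($3)}' \
| sort | uniq -c
```

This prints a count of 2 for second 1317950330 and 1 for second 1317950331; `int($3)` replaces the original's second sed pass that stripped the fractional part.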
And here is my sysctl:

# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1
# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1
# Turn on and log spoofed, source routed, and redirect packets
#net.ipv4.conf.all.log_martians = 1
#net.ipv4.conf.default.log_martians = 1
# No source routed packets here
net.ipv4.conf.all.accept_source_route = 0
n

by magicbear - Nginx Mailing List - English
Here are my server results, using 3 instances of ab, each opening 10000 concurrent connections.

cat logger | sed 's/||/ /g' | awk '{print $3}' | sed 's/\.[0-9]\+//g' | sort | uniq -c
  66776 1317949624
  91383 1317949625
  92828 1317949626
  93364 1317949627
  91456 1317949628
  93498 1317949629
  92916 1317949630
  91795 1317949631
  91921 1317949632
  92935 1317949633
  93000 1317949634
  89737 1317949635

by magicbear - Nginx Mailing List - English
You may need to use:

rewrite ^/人体穴位图.html /w/新页面 permanent;

Note: you need to save the file in ASCII mode so the pattern matches the GBK charset bytes. But I am not sure whether it will work.

2011/9/25 ohjyops <nginx-forum@nginx.us>:
> I want to rewrite an old page to a new location, I use this code:
>
> rewrite ^/%C8%CB%CC%E5%D1%A8%CE%BB%CD%BC.html /w/新页面 permanent;
>
> But it does n

by magicbear - Nginx Mailing List - English
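The point being made above is that nginx matches rewrite patterns against the percent-decoded URI bytes, so a pattern written in %XX form never matches. A hedged sketch (assuming the request URI is /%C8%CB%CC%E5%D1%A8%CE%BB%CD%BC.html, i.e. the GBK bytes for "人体穴位图"):

```nginx
# nginx unescapes the URI before matching, so this pattern never matches:
#   rewrite ^/%C8%CB%CC%E5%D1%A8%CE%BB%CD%BC\.html /w/新页面 permanent;

# Writing the characters raw works only if this config file is saved in
# GBK, so the pattern bytes equal the decoded URI bytes:
rewrite ^/人体穴位图\.html /w/新页面 permanent;
```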
Fix: patch 3 caused nginx to directly return 403 when it lacked write access to a static file. This adds a new flag "is_rw" to ngx_open_file_info_t to record whether the file was obtained with write access.

Here is the full patch: http://m-b.cc/share/patch-nginx-proxy-304-4.txt

diff -ruNp a/src/core/ngx_open_file_cache.c b/src/core/ngx_open_file_cache.c
--- a/src/core/ngx_open_file_cache.c 2011-09-14 22:28:55.000000

by magicbear - Nginx Development
Maybe this should be a config option, so the user can choose the behavior and remain compliant with RFC 2616?

2011/9/23 Woon Wai Keen <doubleukay@doubleukay.com>:
> On 2011-09-19 6:47 PM, Maxim Dounin wrote:
>>
>> Additional question to consider: what should happen if original
>> 200 reply comes with "Cache-Control: max-age=<seconds>" (or
>> "Expires: <time>")

by magicbear - Nginx Development
2011/9/22 Maxim Dounin <mdounin@mdounin.ru>:
> Hello!
>
> On Tue, Sep 20, 2011 at 12:47:56AM +0800, MagicBear wrote:
> [...]
>
> I don't think that using validity time from 200 is correct either.
> What happens if response wasn't 200 (e.g. 206)? What happens if
> response was cached up to time specified in Cache-Control
> header?
>
> You may want to actual

by magicbear - Nginx Development
2011/9/19 Maxim Dounin <mdounin@mdounin.ru>:
> Hello!
>
> On Sat, Sep 17, 2011 at 04:34:39AM +0800, MagicBear wrote:
>
>> Hello Maxim!
>>
>> 2011/9/16 Maxim Dounin <mdounin@mdounin.ru>
>>>
>>> Hello!
>>>
>>> On Fri, Sep 16, 2011 at 01:54:04AM +0800, ビリビリⅤ wrote:
>>>
>>>> Hello guys,

by magicbear - Nginx Development
Hello Maxim!

2011/9/16 Maxim Dounin <mdounin@mdounin.ru>
>
> Hello!
>
> On Fri, Sep 16, 2011 at 01:54:04AM +0800, ビリビリⅤ wrote:
>
>> Hello guys,
>> I have written a module to make nginx support 304 to decrease bandwidth usage.
>> note: I am a newbie at nginx module development, so the above module may
>> have some problems. Welcome to test

by magicbear - Nginx Development
Hello,

I have written a module to make nginx support 304 responses to decrease bandwidth usage.

Note: I am a newbie at nginx module development, so this module may have some problems. You are welcome to test it and report any problems back to me.

You can download the full patch file from here: http://m-b.cc/share/proxy_304.txt

# User MagicBear <magicbearmo@gmail.com>
Upstream: add $upstream_last_modifie

by magicbear - Nginx Mailing List - English
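For context only, and not taken from the patch itself: later nginx versions gained a built-in directive, proxy_cache_revalidate (since nginx 1.5.7), that achieves the same bandwidth saving by revalidating expired cache entries with conditional requests. A minimal sketch, with an assumed backend address:

```nginx
# Sketch: revalidate stale cache entries with If-Modified-Since /
# If-None-Match instead of re-fetching the full body.
proxy_cache_path /var/cache/nginx keys_zone=demo:10m;

server {
    listen 8080;
    location / {
        proxy_pass http://127.0.0.1:9000;   # assumed backend
        proxy_cache demo;
        proxy_cache_revalidate on;          # send conditional requests upstream
    }
}
```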
I have run nginx 1.1.2 with this patch for 7 days. Except for one day with a large DDoS, when I restarted nginx for several seconds, it has been very stable, handling about 70 million requests without problems. I think the last problem may have been memory corruption; you are right. I will check that server when I have time.

Thanks for your hard work.
MagicBear

Maxim Dounin Wrote:
---

by magicbear - Nginx Mailing List - English
If you are running nginx, you may use "nginx -s reload" to reload the config, or "killall -9 nginx; nginx" to restart nginx.

2011/9/10 ynasser <nginx-forum@nginx.us>
> I reinstalled nginx and re-added that directive to the new configuration
> file. Now this happens:
>
>>> sudo nginx
>>> nginx: bind() to 10.0.1.187:2525 failed (49: Can't assign
> req

by magicbear - Nginx Mailing List - English
server {
    listen 80;
    server_name domain.com www.domain.com;
    root /var/www/domain.com;
    include /etc/nginx/fastcgi_php;
    index index.php index.html;
    rewrite ^/admin https://$http_host$request_uri permanent;
}

server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/csr.csr;
    ssl_certificate_key /etc/nginx/ssl/csr.key;
    keepalive_timeout 60;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL

by magicbear - Nginx Mailing List - English
proxy:

server {
    listen some_ip:some_port default;
    location / {
        add_header X-h1 h1;
        add_header X-h2 "h2 h2";
        add_header X-h3 h3;
        add_header X-h4 h4;
        add_header X-h5 h5;
        add_header X-h6 "h6 h6";
        add_header X-h7 h7;
        proxy_pass http://backend;
    }
}

space: add double quotes
add_header X-h2 "h2 h2";

MagicBear

by magicbear - Nginx Mailing List - English
Check the "Content-Type" header your PHP outputs: if it is not text/html or one of the gzip_types, the response won't be compressed. I have tested your config on my server, and it compresses normally.

MagicBear

by magicbear - Nginx Mailing List - English
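To illustrate the point above (a sketch with assumed MIME types, not the poster's actual config): gzip only compresses responses whose Content-Type is text/html or appears in gzip_types:

```nginx
# Responses are compressed only when their Content-Type is text/html
# (always included by default) or is listed in gzip_types.
gzip on;
gzip_types text/css application/json application/javascript text/plain;

# e.g. a PHP script sending "Content-Type: application/json" would be
# compressed here; one sending "Content-Type: image/png" would not.
```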
OK, I will try that on another dedicated server. Thanks for your help.

MagicBear

Maxim Dounin Wrote:
-------------------------------------------------------
> Hello!
>
> On Mon, Sep 05, 2011 at 11:42:31PM +0800, ビリビリⅤ wrote:
>
>> (gdb) fr 0
>> #0 ngx_http_upstream_handler (ev=0x7fc45735f8a8)
>> at src/http/ngx_http_upstre

by magicbear - Nginx Mailing List - English
You can check iowait. When Discuz has a lot of data, I suggest you use MySQL Cluster to handle it.

MagicBear

by magicbear - Nginx Mailing List - English
(gdb) fr 0
#0 ngx_http_upstream_handler (ev=0x7fc45735f8a8) at src/http/ngx_http_upstream.c:915
915         ctx->current_request = r;
(gdb) p *ev
$1 = {data = 0x7fc4576aa750, write = 1, accept = 0, instance = 1, active = 1, disabled = 0, ready = 1, oneshot = 0, complete = 0, eof = 0, error = 0, timedout = 1, timer_set = 0, delayed = 0, read_discarded = 0, unexpected_eof = 0, deferred_acce

by magicbear - Nginx Mailing List - English