Thanks. Yeah, I did see the other thread; I was wondering if I should move to 1.9 or if stable is coming soon.

On Tue, Dec 22, 2015 at 8:19 PM, Maxim Konovalov <maxim@nginx.com> wrote:
> Hi,
>
> On 12/22/15 5:21 PM, Fasih wrote:
> > Hello!
> >
> > I currently use 1.8 (stable) nginx. Is there an expected timeline to
> > have HTTP/2 available as nginx stable? Or ba...

by faskiri.devel - Nginx Development
Hello!

I currently use 1.8 (stable) nginx. Is there an expected timeline to have HTTP/2 available as nginx stable? Or backporting HTTP/2 to 1.8.x?

Thanks and Regards
+Fasih

by faskiri.devel - Nginx Development
Hi

I see that SPDY is enabled per IP and not per server. I hacked up the code to use the SNI information to find the virtual server and negotiate SPDY only if it is enabled for that server block. This seems to work, but I was wondering if there was any reason to do it per IP.

Regards,
+Fasih

by faskiri.devel - Nginx Development
Hi

I was looking at ngx_event_openssl.c when I saw this:

    if (SSL_CTX_set_ex_data(ssl->ctx, ngx_ssl_certificate_index, x509) == 0) {
        ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0,
                      "SSL_CTX_set_ex_data() failed");
        X509_free(x509);
        BIO_free(bio);
        return NGX_ERROR;
    }

    X509_free(x509);

We j...

by faskiri.devel - Nginx Development
Thanks for the reply. I am also doing this: basically, have a way to see if the body was read synchronously or asynchronously. If synchronous, let nginx handle it; else we have to do that ourselves. But I thought there might be a better way to do this.

Btw, I think you have to set write_event_handler to empty. Basically, if you don't set it, and there is a write event (while the body is not read), ng...

by faskiri.devel - Nginx Development
Hi Guys

I am trying to read the request body in the preaccess phase. This seems like a regular requirement, but I don't seem to find a good way to do it. Since the request body is read asynchronously, I have to do phases++ and core_run_phases myself in the read-completion callback. I also have to set r->write_event_handler to empty, because otherwise a write event calls run_phases, which I obvi...

by faskiri.devel - Nginx Development
I see, thanks for the explanation.

On Tue, Jan 14, 2014 at 8:27 PM, Ruslan Ermilov <ru@nginx.com> wrote:
> On Tue, Jan 14, 2014 at 06:54:44PM +0530, Fasih wrote:
> > Thanks! Could you please explain why this is done?
>
> Modules register their handlers (at different phases
> of request processing) one by one, by adding an element
> into the corresponding array of hand...

by faskiri.devel - Nginx Development
Thanks! Could you please explain why this is done?

On Tue, Jan 14, 2014 at 4:41 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> Hello!
>
> On Tue, Jan 14, 2014 at 04:15:32PM +0530, Fasih wrote:
> > Hi
> >
> > I have a custom plugin that handles rewrite (NGX_HTTP_REWRITE_PHASE). There
> > is another plugin compiled before my plugin that also handle...

by faskiri.devel - Nginx Development
Hi

I have a custom plugin that handles rewrite (NGX_HTTP_REWRITE_PHASE). There is another plugin compiled before my plugin that also handles rewrite (HttpLuaModule). I was expecting my module to rewrite after lua is done; however, that is not the case. Some debugging showed that whereas my module pushed into cmcf->phases.handlers after lua, the cmcf.phase_engine.handlers had...

by faskiri.devel - Nginx Development
Created http://trac.nginx.org/nginx/ticket/485#ticket to track this. Thanks!

On Mon, Jan 13, 2014 at 9:08 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> Hello!
>
> On Sat, Jan 11, 2014 at 10:28:52PM +0530, Fasih wrote:
> > Yes, that's how I noticed it. I am using nginx as a reverse proxy. The
> > upstream sends two WWW-Authenticate headers with different realms...

by faskiri.devel - Nginx Development
Yes, that's how I noticed it. I am using nginx as a reverse proxy. The upstream sends two WWW-Authenticate headers with different realms. I was processing the www_authenticate header and hadn't realized that it was legal to send multiple WWW-Authenticate headers.

On Fri, Jan 10, 2014 at 7:19 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> Hello!
>
> On Fri, Jan 10, 2014 at 05:42:23PM +...

by faskiri.devel - Nginx Development
Hi

The RFC allows a server to respond with multiple WWW-Authenticate headers (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.47):

"User agents are advised to take special care in parsing the WWW-Authenticate field value as it might contain more than one challenge, or if more than one WWW-Authenticate header field is provided, the contents of a challenge itself can contain a comma...

by faskiri.devel - Nginx Development
Hi guys

Nginx keepalive seems to retry automatically on failure. However, for non-idempotent requests it is incorrect per the RFC to retry automatically, because the server could have changed its state before nginx detected the error. Is this a bug that will be fixed, or did I not get it right?

Relevant RFC section: "A client, server, or proxy MAY close the transport connection at any time...

by faskiri.devel - Nginx Development
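For reference, later nginx releases addressed exactly this concern: since 1.9.13, requests with non-idempotent methods (POST, LOCK, PATCH) are no longer retried after an error by default, and the old behavior is opt-in. A hedged config sketch; the upstream name is illustrative:

```nginx
# Since nginx 1.9.13, non-idempotent requests are not passed to the
# next upstream server by default.  To restore the old retry behavior
# explicitly, the non_idempotent flag must be given:
location / {
    proxy_pass http://backend;
    # proxy_next_upstream error timeout non_idempotent;
}
```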
Hi

I want to have a header/body filter that makes an asynchronous call. On success, a completion handler is called. The result of this completion handler decides the output of the header/body filter. I understand a subrequest can be used to do this, but are there alternatives? Let's say I want to filter the body of the response to uppercase it after 10 secs; how do I do that? This is wh...

by faskiri.devel - Nginx Development
Hi

I see this crash very, very infrequently in nginx. Notice the len parameter = 3734714755:

    #12 0x00007f40b8b45975 in sha1_update (c=0x808bdfe3, data=<optimized out>,
        len=3734714755) at e_aes_cbc_hmac_sha1.c:156

Walking through the openssl source code didn't help. There are two possibilities:
1. Bug in nginx which corrupts some data that openssl crashes on
2. Bug in openssl
I will probably p...

by faskiri.devel - Nginx Development
Hi Maxim

I found the root cause. This was a problem with my plugin. Your explanation of posted_requests helped a lot in debugging the problem. The issue was: my plugin, for some unavoidable reasons, holds a reference to the ngx_http_request_t and calls finalize once it is done or it sees some error. I didn't call ngx_http_run_posted_requests() like ngx_http_request_handler does. The actual call to writev hap...

by faskiri.devel - Nginx Development
Hello

Thanks for the really quick reply. The ngx_http_run_posted_requests explanation totally made sense and explained the bit that I was missing. I get the bug when writev, called in the context of a request handler, gets an error. The repro I had was basically nginx running on a server and the client on my laptop over wireless at work. I am not at work now, and from my home connection I am unable to repro this...

by faskiri.devel - Nginx Development
Sorry. Attached the wrong file.

On Fri, May 24, 2013 at 7:09 PM, Fasih <faskiri.devel@gmail.com> wrote:
> Hi all
>
> I have been seeing a slow but steady socket leak in nginx ever since I
> upgraded from 1.0.5 to 1.2.6. I have my custom module in nginx, which I was
> sure was the leak. This is how I went about investigating:
> 1. Configure nginx with one worker
> 2...

by faskiri.devel - Nginx Development
Hi all

I have been seeing a slow but steady socket leak in nginx ever since I upgraded from 1.0.5 to 1.2.6. I have my custom module in nginx, which I was sure was the leak. This is how I went about investigating:
1. Configure nginx with one worker
2. strace the worker process, tracing read/readv/write/writev/close/shutdown calls
3. Every now and then, for all the open fds (from ls -l /proc/...

by faskiri.devel - Nginx Development
Check your fastcgi code. Ensure that it is running properly. Have you checked the logs of that process?

On Thu, Jan 24, 2013 at 10:23 AM, coolbhushans@gmail.com <nginx-forum@nginx.us> wrote:
> Hi everyone
> I am new to nginx so really need u r help
>
> My problem is that i set up niginx server on windows and it runs very well.
> But i want to run c++ programs i.e (...

by faskiri.devel - Nginx Mailing List - English
Thank you again. Makes perfect sense.

On Fri, Jan 4, 2013 at 8:57 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> are normally done in nginx, busy workers are
> less likely to wait for kernel events and therefore less likely to
> get new connections (both with accept mutex enabled or accept
> mutex disabled), thus ensuring balancing between workers.

by faskiri.devel - Nginx Development
Thank you Maxim. Will try to see if I can get to the single-threaded model.

Basically, when I was talking about accept_mutex, what I meant was: if the worker thread is busy with my CPU-intensive work, I would obviously want the other workers to take over. IIUC, if I have the accept mutex, I am the one who will take the requests; is it possible that my worker plugin is caught up doing the work *and*...

by faskiri.devel - Nginx Development
Hi guys

I need to write a plugin which does some CPU-intensive work. I want to be able to create a thread which does the work and, once done, post an event into the nginx thread via ngx_add_timer. I tried compiling my plugin with NGX_THREADS=pthread, but configure fails, saying that the threading support is broken. I can obviously switch to a single-threaded mode, but that would be significantly d...

by faskiri.devel - Nginx Development
Bump

On Sun, Jun 24, 2012 at 12:38 PM, Fasih <faskiri.devel@gmail.com> wrote:
> Hi
>
> Thank you for your suggestion; as I said, I can work around with a
> different configuration. But I need to configure the system with many
> servers because each of the servers is a different virtual host, each
> with its own configuration. I was trying to understand if this is a
> bug/...

by faskiri.devel - Nginx Development
Hi

Thank you for your suggestion; as I said, I can work around with a different configuration. But I need to configure the system with many servers because each of the servers is a different virtual host, each with its own configuration. I was trying to understand if this is a bug/limitation in the code or something more basic.

Best Regards

On Sat, Jun 23, 2012 at 5:29 PM, Valentin V. Bartenev <...

by faskiri.devel - Nginx Development
Hello all

I have a use case for a server rewrite. What I essentially want to do is have a common domain like common.faskiri.com serve some contents for specific domains like zone1.com, zone2.com, etc. for some specific URL patterns. For instance, common.faskiri.com/zone1/asset should basically be rewritten to zone1/asset. Now zone1 has its own server section with /asset configured. I tried using r...

by faskiri.devel - Nginx Development
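One hedged workaround sketch, assuming the zone servers are reachable as local upstreams: rewrites cannot jump between server blocks, but the common server can proxy to itself with an explicit Host header so the request is matched against zone1's virtual host. Every directive value here is illustrative, not taken from the thread:

```nginx
# Hypothetical sketch: route common.faskiri.com/zone1/... through
# the zone1.com server block by re-entering nginx with Host rewritten.
server {
    server_name common.faskiri.com;

    location ~ ^/zone1/(.*)$ {
        proxy_set_header Host zone1.com;   # select zone1's virtual host
        proxy_pass http://127.0.0.1/$1;
    }
}
```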
Thanks a lot for the response.

> Just a side note: you may want to avoid setting keepalive bigger
> than your backend is able to handle, keeping in mind that it's
> not a hard limit on connections established, but rather size of
> connection cache kept by each worker.

I didn't realize that. I did run the test with keepalive 16, but the results are similar.

> Just a side note: pl...

by faskiri.devel - Nginx Development
Hi All,

A few days back I was trying to evaluate the performance of the upstream keepalive feature for a website when I noticed rather unexpected behaviour. It would help me to understand what's going on in the test. Here's what I did:
1. Set up httperf to run a session load. This basically means that a text file with different urls is supplied to httperf. httperf sends all the requests in bursts spa...

by faskiri.devel - Nginx Development
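For context, a minimal upstream keepalive configuration of the kind such a test would exercise; as the follow-up in this thread notes, `keepalive 16` sizes the idle-connection cache kept by each worker rather than capping total backend connections. Addresses and values are illustrative:

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;                     # per-worker idle-connection cache
}

server {
    listen 8000;

    location / {
        proxy_http_version 1.1;       # keepalive needs HTTP/1.1 ...
        proxy_set_header Connection ""; # ... and a cleared Connection header
        proxy_pass http://backend;
    }
}
```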
No problem at all; helped me understand the software a little more :)

On Thu, Nov 24, 2011 at 1:48 PM, Igor Sysoev <igor@sysoev.ru> wrote:
> On Thu, Nov 24, 2011 at 10:36:11AM +0530, Fasih wrote:
> > Hi Igor
> >
> > Really thankful for your patience with me. I think I now understood what
> > you are saying :).
> >
> > To summarize, header.hash == 0 is...

by faskiri.devel - Nginx Development
Hi Igor

Really thankful for your patience with me. I think I now understood what you are saying :).

To summarize, header.hash == 0 is used as a flag in ngx_http_header_filter_module.c:http_header_filter to test whether to send the header downstream or not. Setting it to 1 (or anything non-zero) ensures that the header is sent downstream. Since headers_out is not used by nginx core to do has...

by faskiri.devel - Nginx Development