Hello! On Wed, Jul 31, 2013 at 12:28:02AM -0300, Wandenberg Peixoto wrote:
> Hello!
> Thanks for your help. I hope the patch is OK now.
> I don't know whether the function and variable names follow the nginx pattern.
> Feel free to change the patch.
> If you have any other point before accepting it, it will be a pleasure to fix it.
Sorry for the long delay. As promised, I've looked into…
by Maxim Dounin - Nginx Development
Hello guys, I have some doubts, and I will appreciate it if someone helps me :-) I posted something about this some days ago [1]. Basically, in the node tree that keeps the objects in the cache, I inserted a list that keeps all listeners [2] and an FD that points to the temp file. In ngx_http_upstream.c, after the call to ngx_event_pipe(), I try to send data to all listeners. The list and FD are in shar…
by Alex Garzão - Nginx Development
Hello! On Mon, Sep 02, 2013 at 11:24:33AM -0300, Wandenberg Peixoto wrote:
> did you have an opportunity to take a look at this patch?
Not yet, sorry. It's in my TODO and I'll try to look at it this week. Overall it seems good enough, but it certainly needs style/cosmetic cleanup before it can be committed. [...]
-- Maxim Dounin http://nginx.org/en/donation.html
by Maxim Dounin - Nginx Development
Hi Maxim, did you have an opportunity to take a look at this patch? Regards, Wandenberg
On Wed, Jul 31, 2013 at 12:28 AM, Wandenberg Peixoto <wandenberg@gmail.com> wrote:
> Hello!
> Thanks for your help. I hope the patch is OK now.
> I don't know whether the function and variable names follow the nginx pattern.
> Feel free to change the patch.
> If you have any other point b…
by wandenberg - Nginx Development
The patch is not in the mailing list. We just spoke about the same problem before in the list with other developers. Unfortunately I cannot share the patch because it was made for a commercial project. However, I am going to ask for permission to share it.
On Fri, Aug 30, 2013 at 12:04 PM, SplitIce <mat999@gmail.com> wrote:
> Is the patch on this mailing list (forgive me, I can't see it…
by Anatoli Marinov - Nginx Development
Hello Anatoli,
> I think this is asynchronous, and if the upstream is faster than the
> downstream it saves the data to the cached file faster, and the downstream
> gets the data from the file instead of the mem buffers.
In this case, I don't need to worry about upstream/downstream speed. Very good!
> I have the same but in an ordered array (simple implementation). Anyway the
> rbtree will…
by Alex Garzão - Nginx Development
Is the patch on this mailing list (forgive me, I can't see it)? I'll happily test it for you, although for me to get any personal benefit there would need to be a size restriction, since 99.9% of requests are just small HTML documents and would not benefit. Also, the standard caching behaviour (headers that result in a cache miss, e.g. cookies, Cache-Control) would have to be correct. At the very least I'll read…
by splitice - Nginx Development
I discussed the idea years ago here in the mailing list, but nobody from the main developers liked it. However, I developed a patch and we have had this in production for more than 1 year, and it works fine. Just think of the following case: you have a new file which is 1 GB and is located far from the cache. Even so, you can download it at 5 MBps through the cache upstream, so you need 200 seconds to get i…
by Anatoli Marinov - Nginx Development
This is an interesting idea. While I don't see it being all that useful for most applications, there are some that could really benefit (large-file proxying first comes to mind). If it could be achieved without introducing too much CPU overhead in keeping track of the requests & available parts, it would be quite interesting. I would like to see an option to supply a minimum size to restri…
by splitice - Nginx Development
Hello, On Wed, Aug 28, 2013 at 7:56 PM, Alex Garzão <alex.garzao@azion.com> wrote:
> Hello Anatoli,
> Thanks for your reply. I will appreciate (a lot) your help :-)
> I'm trying to fix the code with the following requirements in mind:
> 1) We have upstreams/downstreams with good (and bad) links; in
> general, upstream speed is higher than downstream speed but,…
by Anatoli Marinov - Nginx Development
Mates, is there any written info on how dynamic configuration for nginx works? I am wondering whether it is possible to add a new proxy_cache zone with it without reloading the worker processes. There are several examples of how to build dynamic configuration with Lua and Perl, but both approaches cannot dynamically create proxy_cache zones (because there is no simple method to transfer a shared memory segment from master…
by Anatoli Marinov - Nginx Development
I've been running a node with this patch on a production machine for 5 days and am seeing marked improvements. The instance hasn't needed to be restarted due to "ngx_slab_alloc() failed: no memory". The shared memory usage has been growing at a far slower rate compared to a node without the patch. Also, I'm not seeing any significant increase in CPU usage. nginx/1.2.9, mixture of HTTP/HTTPS…
by John Watson - Nginx Development
Hello! Thanks for your help. I hope the patch is OK now. I don't know whether the function and variable names follow the nginx pattern. Feel free to change the patch. If you have any other point before accepting it, it will be a pleasure to fix it.
--- src/core/ngx_slab.c 2013-05-06 07:27:10.000000000 -0300
+++ src/core/ngx_slab.c 2013-07-31 00:21:08.043034442 -0300
@@ -615,6 +615,26 @@ fail: sta…
by wandenberg - Nginx Development
Hello! On Mon, Jul 29, 2013 at 04:01:37PM -0300, Wandenberg Peixoto wrote: [...]
> What would be an alternative to not loop on pool->pages?
Free memory blocks are linked in the pool->free list; it should be enough to look there. [...]
-- Maxim Dounin http://nginx.org/en/donation.html
_______________________________________________
nginx-devel mailing list nginx-devel@nginx.org http://…
by Maxim Dounin - Nginx Development
Hello! I see your point, and I will split the patch to do both actions: on ngx_slab_free_pages() and on allocation when there is a failure. What would be an alternative to not loop on pool->pages? Regards, Wandenberg
On Mon, Jul 29, 2013 at 2:11 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> Hello!
> On Sat, Jul 27, 2013 at 04:10:51PM -0300, Wandenberg Peixoto wrote:
> …
by wandenberg - Nginx Development
Hello! On Sat, Jul 27, 2013 at 04:10:51PM -0300, Wandenberg Peixoto wrote:
> Hello Maxim.
> I've been looking into those functions and, guided by your comments,
> made the following patch to merge contiguous blocks of memory.
> Can you check if it is OK? Comments are welcome.
> --- src/core/ngx_slab.c 2013-05-06 07:27:10.000000000 -0300
> +++ src/core/ngx_slab.…
by Maxim Dounin - Nginx Development
Hello Maxim. I've been looking into those functions and, guided by your comments, made the following patch to merge contiguous blocks of memory. Can you check if it is OK? Comments are welcome.
--- src/core/ngx_slab.c 2013-05-06 07:27:10.000000000 -0300
+++ src/core/ngx_slab.c 2013-07-27 15:54:55.316995223 -0300
@@ -687,6 +687,25 @@ ngx_slab_free_pages(ngx_slab_pool_t *poo… page->next-…
by wandenberg - Nginx Development
Hello! On Fri, Jun 28, 2013 at 10:36:39PM -0300, Wandenberg Peixoto wrote:
> Hi,
> I'm trying to understand how the shared memory pool works inside nginx.
> To do that, I made a very small module which creates a shared memory zone
> of 2097152 bytes, then allocates and frees blocks of memory, starting from 0
> and increasing by 1kb until the allocation fails.
> …
by Maxim Dounin - Nginx Development
Hi, I'm trying to understand how the shared memory pool works inside nginx. To do that, I made a very small module which creates a shared memory zone of 2097152 bytes, then allocates and frees blocks of memory, starting from 0 and increasing by 1kb until the allocation fails. The strange parts to me were:
- the maximum block I could allocate was 128000 bytes
- each time the allocation fai…
by wandenberg - Nginx Development
Author: mdounin Date: 2013-04-02 12:34:21 +0000 (Tue, 02 Apr 2013) New Revision: 5165 URL: http://trac.nginx.org/nginx/changeset/5165/nginx
Log: nginx-1.2.8-RELEASE
Modified: branches/stable-1.2/docs/xml/nginx/changes.xml
===================================================================
--- branches/stable-1.2/docs/xml/nginx/changes…
by Anonymous User - Nginx Development
Author: mdounin Date: 2013-03-05 14:35:58 +0000 (Tue, 05 Mar 2013) New Revision: 5099 URL: http://trac.nginx.org/nginx/changeset/5099/nginx
Log: nginx-1.3.14-RELEASE
Modified: trunk/docs/xml/nginx/changes.xml
===================================================================
--- trunk/docs/xml/nginx/changes.xml 2013-03-04 15:39:03 UTC (rev 5098)
+…
by Anonymous User - Nginx Development
Hi! On 08.01.2013 13:09, Anatoli Marinov wrote:
> Hello Colleagues,
> I am wondering, is there a method for shared dictionary locking?
> In my script I have to flush all records from the dictionary, and
> after that the script will put in new records. During this time I do not
> want other workers to read the partially loaded dictionary.
> So is it possible to lock it for a very small pe…
by Vladimir Shebordaev - Nginx Development
Author: mdounin Date: 2012-12-10 18:17:32 +0000 (Mon, 10 Dec 2012) New Revision: 4957 URL: http://trac.nginx.org/nginx/changeset/4957/nginx
Log: Merge of r4933, r4934: shared memory fixes.
*) Fixed location of debug message in ngx_shmtx_lock().
*) Core: don't reuse shared memory zone that changed ownership (ticket #210).
nginx doesn't allow the same shared memory zone to be used for differe…
by Anonymous User - Nginx Development
Author: ru Date: 2012-11-23 12:43:58 +0000 (Fri, 23 Nov 2012) New Revision: 4934 URL: http://trac.nginx.org/nginx/changeset/4934/nginx
Log: Core: don't reuse shared memory zone that changed ownership (ticket #210).
nginx doesn't allow the same shared memory zone to be used for different purposes, but failed to check this on reconfiguration. If a shared memory zone was used for another purpose i…
by Anonymous User - Nginx Development
Hello! On Tue, Oct 23, 2012 at 11:14:52AM -0400, Jeff Kaufman wrote:
> My module wants to sit in the filter chain, passing buffers to an
> asynchronous optimization thread and then sending them out to the user
> when they finish. When a request comes in, I have my module roughly doing:
> body filter:
> - if first set of buffers:
>   - create pipe, pass pipe_write_fd to…
by Maxim Dounin - Nginx Development
My module wants to sit in the filter chain, passing buffers to an asynchronous optimization thread and then sending them out to the user when they finish. When a request comes in, I have my module roughly doing:
body filter:
- if first set of buffers: create pipe, pass pipe_write_fd to optimization thread
- pass all input data to optimization thread
- don't call ngx_http_next_body_filter
Is t…
by Jeff Kaufman - Nginx Development
Hello! On Fri, Oct 19, 2012 at 04:03:13PM -0400, YongFeng Wu wrote:
> What is an easy way to have a worker process exit and the master process
> start a new worker process? For example, in the case where a memory
> allocation fails.
You don't want to exit a worker process unless the problem encountered is really fatal. Obviously a memory allocation failure is only fatal if it affects…
by Maxim Dounin - Nginx Development
Hello! On Tue, Oct 02, 2012 at 12:09:48PM +1000, Daniel Black wrote:
> For a quick summary of session tickets, look at http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html and for a longer version read the RFC.
> Session tickets are supported in the Chrome and Firefox browsers.
> Both session tickets and session IDs (the current session implementation) allow…
by Maxim Dounin - Nginx Development
For a quick summary of session tickets, look at http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html and for a longer version read the RFC. Session tickets are supported in the Chrome and Firefox browsers. Both session tickets and session IDs (the current session implementation) allow the server to resume an SSL/TLS session with a quicker round trip and less cryptographic material generat…
by Daniel Black - Nginx Development
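On the nginx side, the session-ID mechanism mentioned above maps onto a couple of directives. A minimal sketch (exact directive availability depends on the nginx and OpenSSL versions in use; ticket support, when compiled in, is handled by OpenSSL for the same `server` block):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     cert.pem;
    ssl_certificate_key cert.key;

    # session-ID cache shared across workers
    # (the "current session implementation" from the post above)
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;
}
```

The `shared:` cache is itself an nginx shared memory zone, which ties this thread back to the slab-allocator discussions elsewhere in this digest.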
Hi, the task is internal to the module: either updating memory with data from the POST request itself, or triggering a memory update from an external source. But in both cases I need the data to be in nginx shared memory, not in an external source to which I could proxy. Thierry
-----Original Message-----
From: nginx-devel-bounces@nginx.org On behalf of Tom van der Woerdt Sent: Wednesday, 26 S…
by MAGNIEN, Thierry - Nginx Development