Hi All, Is there a way I can configure Nginx not to do SSL offload, i.e. Nginx should receive the HTTPS traffic (by listening on port 443) and forward it to the backend server without terminating the TLS. I think I can do this if I set up Nginx in TCP mode (using the third-party tcp proxy module). But I am wondering whether "no SSL offload" can be handled by standard Nginx.by nginxsantos - Nginx Mailing List - English
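A minimal sketch of how this is done in stock nginx 1.9.0 and later, which ships a stream (TCP) proxy module when built with --with-stream; earlier releases needed the third-party tcp proxy module mentioned above. The backend address is a placeholder:

```nginx
# TLS passthrough: nginx proxies raw TCP on port 443 and never
# terminates the TLS session (no "ssl" on the listen directive).
stream {
    upstream secure_backend {
        server 192.0.2.10:443;   # placeholder backend address
    }

    server {
        listen 443;              # plain TCP listener, no offload
        proxy_pass secure_backend;
    }
}
```

Because the handshake is passed through untouched, the backend must hold the certificates, and nginx cannot inspect HTTP headers or do Host-based routing in this mode.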
Thank you. I think agentzh's "memc-nginx-module" comes with a few more features than the default "ngx_http_memcached_module" that Valentin suggested. When I download nginx, I see "ngx_http_memcached_module" included, so it is not a third-party module. I will evaluate both and see which one to use. Thanks.
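For reference, a minimal sketch of the stock ngx_http_memcached_module in use (addresses and the fallback backend are placeholders): it can only read values by key, with writes left to the application, which is the main feature gap compared to memc-nginx-module:

```nginx
location / {
    set $memcached_key "$uri";        # key to look up in memcached
    memcached_pass 127.0.0.1:11211;   # placeholder memcached address
    default_type text/html;

    # On a miss or memcached error, fall through to the real backend.
    error_page 404 502 504 = @fallback;
}

location @fallback {
    proxy_pass http://backend;        # placeholder upstream
}
```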
Hi: Has anyone used the third-party "memc-nginx-module" for memcached operations? I am interested in a memcached module, so I am evaluating which one to use, their differences, and their stability. I also came across another third-party module, ngx_http_enhanced_memcached_module. Can anyone suggest a suitable memcached module that is used on production sites?
Has anyone started looking at nginx with a userspace TCP stack? Is there any open-source TCP stack available that works well with Nginx? Sandstorm seems to work in user space and claims to handle more connections per second than Nginx.
Hi Maxim, Thanks for the response. Are you saying that if I convert the processes to threads, perhaps through pthreads or rfork, it is not going to work? Is the thread model not supported at all? Thanks, Santos
I tried to compile 1.6.0 with --with-threads, but it looks like this is no longer supported; the option is commented out in the configure script: #--with-threads=*) USE_THREADS="$value" ;; #--with-threads) USE_THREADS="pthreads" ;; Can anyone please comment on this?
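For context: the old experimental threaded-worker support was indeed removed, and --with-threads was later reintroduced in nginx 1.7.11 with a different meaning, enabling thread pools that offload blocking disk reads from the worker, not a threaded process model. A hedged configuration sketch for those later versions:

```nginx
# Requires an nginx built with --with-threads (1.7.11+).
# Pool name and sizes below are illustrative, not recommendations.
thread_pool default threads=32 max_queue=65536;

http {
    server {
        location /downloads/ {
            aio threads=default;   # blocking file I/O runs in the pool
            sendfile off;
        }
    }
}
```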
Hi, Can anyone please help me run nginx in a single-process model (threads instead of processes)? I am interested in this because I am inclined to run it with a usermode TCP stack like netmap-rumptcpip. Has anyone done this or investigated it? Thanks, Santos
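nginx does not offer a threaded worker model, but it can be collapsed to a single process with stock directives, which may be enough to pair it with a usermode TCP stack. A minimal sketch (master_process off is intended for debugging, not production):

```nginx
daemon off;           # stay in the foreground
master_process off;   # no master/worker split: one process does everything
worker_processes 1;

events {
    worker_connections 1024;
}
```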
Thank you. But my question is: when we allocate a pool larger than one page, why do we still cap max at one page size, which then forces further memory allocations?
Any expert opinions?
Suppose I am allocating a pool greater than 4K (the page size). For example, I call ngx_create_pool with 8096, but the function sets max to 4095 even though it has allocated 8K. I am not sure why it is done like this: p->max = (size < NGX_MAX_ALLOC_FROM_POOL) ? size : NGX_MAX_ALLOC_FROM_POOL; I know I have created a pool of size 8K.
Thank you for the reply. I know it is simple. But would we not get more performance benefit if we created the pools beforehand? Say I create a memory pool for the connections (for example, with 4000 entries). Every time I need one, I get it from that pool, and when I free it, I return it to the pool. Would that not be more efficient than allocating for every connection and request?
Also, I noticed that initially a connection allocates a pool of size 256, and if that is exceeded, it calls ngx_palloc_large, which in turn calls malloc. So can we not allocate more in the first attempt?
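Worth noting: those initial sizes are tunable without patching the source. ngx_http_core_module exposes directives for both the per-connection and per-request pools; a sketch with illustrative values:

```nginx
http {
    connection_pool_size 512;   # initial per-connection pool (default 256|512)
    request_pool_size    8k;    # initial per-request pool (default 4k)
}
```

Raising them trades memory per connection for fewer mallocs when the defaults are routinely exceeded.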
This module just monitors the status page. Is there an SNMP module that can generate SNMP alarms when a certain threshold is exceeded or when there is a crash?
When Nginx accepts a connection, it creates a memory pool for that connection (allocated from the heap), after which further memory requirements for that connection are allocated from that pool. This is good. But why don't we pre-create the memory pools based on the number of connections and use those? In the current approach, if connections keep coming up and going down, we will be allocating and freeing pools from the heap each time.by nginxsantos - Nginx Mailing List - English