Show all posts by user
Results 1 - 11 of 11
Hi,
We have a custom configuration for a private server defined as:
ext_private_server http://private-server:8050;
under the server block. This configuration is parsed by our custom NGINX module, and we then create a socket to send packets to the server port.
We were wondering if we can somehow use the upstream load balancing of NGINX with this custom configuration, or will we have to
by dtandon - Nginx Mailing List - English
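A hedged sketch of how that backend might instead be expressed as a named upstream so NGINX's built-in balancing could apply; the upstream name and the second server are hypothetical, and the custom module would have to resolve the upstream name itself (for example via ngx_http_upstream_add(), the way proxy_pass does).

# Hypothetical sketch only: a named upstream the custom directive could point at,
# so NGINX round-robin / least_conn balancing applies across instances.
upstream private_backend {            # hypothetical name
    least_conn;
    server private-server-1:8050;     # hypothetical instances
    server private-server-2:8050;
}

# Inside the server block, the custom directive would then reference the
# upstream name instead of a single host:
ext_private_server http://private_backend;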
Hi,
In our module code, we are processing the HTTP request body when it is not stored in r->request_body->temp_file.
When I send a 9381-byte body, NGINX doesn't store the body in temp_file but in its internal buffers, so we are able to process the body.
However, when I enable chunked encoding, the same 9381-byte body gets stored in r->request_body->temp_file.
To avoid get
by dtandon - Nginx Mailing List - English
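For reference, a minimal sketch of the directives that control in-memory buffering of request bodies; the sizes are illustrative, and whether a chunked body still spills into the temp file can also depend on the NGINX version and on flags the module sets on the request.

# Illustrative sizes only: a larger in-memory buffer makes it more likely the
# body stays in r->request_body->bufs instead of the temp file.
client_body_buffer_size 16k;
client_max_body_size    1m;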
Hi,
I was wondering whether this question is better suited to the development forum, since I didn't receive any response on the user forum. Repeating the question below:
I tried clearing the Connection header, but NGINX is still sending the 5th response through a new source port. Let me describe the configuration we have in more detail. Just to inform you, we have our own auth module instead of using the
by dtandon - Nginx Development
Resending with the correct Subject. Sorry for the confusion.
Hi Sergey,
I tried clearing the Connection header, but NGINX is still sending the 5th response through a new source port. Let me describe the configuration we have in more detail. Just to inform you, we have our own auth module instead of using the NGINX auth module. We call ngx_http_post_request to post subrequests, and the code is almost th
by dtandon - Nginx Mailing List - English
Hi Sergey,
I tried clearing the Connection header, but NGINX is still sending the 5th response through a new source port. Let me describe the configuration we have in more detail. Just to inform you, we have our own auth module instead of using the NGINX auth module. We call ngx_http_post_request to post subrequests, and the code is almost the same as that of the auth module. For the subrequest sent by
by dtandon - Nginx Mailing List - English
Hi,
We have the following configuration:
location / {
    proxy_http_version 1.1;
    proxy_pass http://ext-authz-upstream-server;
}

upstream ext-authz-upstream-server {
    server 172.20.10.6:9006;
    keepalive 4;
}
With this configuration, as per the previous email in nginx-devel, I am expecting that when I sequential
by dtandon - Nginx Mailing List - English
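For comparison, the upstream keepalive example in the nginx documentation also clears the Connection header on the proxied request; this sketch only adds that one line to the configuration above.

location / {
    proxy_http_version 1.1;
    proxy_set_header Connection "";              # required so cached keepalive connections are reused
    proxy_pass http://ext-authz-upstream-server;
}

upstream ext-authz-upstream-server {
    server 172.20.10.6:9006;
    keepalive 4;                                 # at most 4 idle connections cached per worker
}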
Hi,
I am trying to implement HTTP pipelining through our module, but I am unable to figure out where the source port is allocated. The only function I saw that allocates the port is ngx_http_upstream_create_round_robin_peer; however, it doesn't get called in the path of ngx_http_run_posted_requests.
Further, ngx_http_upstream_get_round_robin_peer allocates the peer, but every time in my case th
by dtandon - Nginx Development
Hi,
We want to push some string-based configuration (maybe in JSON format) to the NGINX server from our own server. The requirement is to NOT put the configuration in the nginx.conf file; instead, we will only specify the configuration server in nginx.conf. Whenever we update the configuration on our server, we want NGINX to be updated with the new configuration. What is the safest and most secure way
by dtandon - Nginx Development
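One common pattern, sketched here only as an assumption about what might fit: have an external agent write the pushed settings into files under a directory that nginx.conf includes, then reload NGINX; the path below is hypothetical.

# Hypothetical include in nginx.conf: the external agent rewrites files under
# this directory and then triggers "nginx -s reload" (or sends SIGHUP) to apply them.
include /etc/nginx/pushed/*.conf;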
Hi Maxim,
Is HTTP Pipelining supported in NGINX? How can I pipeline requests?
I have the following configuration:
location /auth {
    internal;
    proxy_connect_timeout 5000ms;
    proxy_read_timeout 5000ms;
    proxy_http_version 1.1;
    proxy_set_header Connection "keep-alive";
    proxy_pass http://ext-au
by dtandon - Nginx Development
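A hedged sketch of the upstream side such a location would pair with (the real name and address are truncated above, so the ones below are placeholders); NGINX reuses idle keepalive connections one request at a time rather than pipelining multiple requests on a single upstream connection.

upstream auth_backend {          # hypothetical name
    server 192.0.2.10:9006;      # placeholder address
    keepalive 4;                 # idle connections are reused sequentially, not pipelined
}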
Hi,
We have an auth module in our code that sends requests to a server and waits for a response approving the request before forwarding it to the proxied server.
We use the function ngx_http_post_request to post the subrequest.
As I understand it, this function adds the request to a queue, which is then processed by the ngx_http_run_posted_requests function.
We observe that every single
by dtandon - Nginx Development
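For reference, the stock module this is being compared against (ngx_http_auth_request_module) is configured roughly as below; the locations and backend URLs are illustrative, and per the earlier posts the custom module posts its subrequests in much the same way.

# Illustrative auth_request setup (stock NGINX module); names and URLs are placeholders.
location / {
    auth_request /auth;                         # request proceeds only if /auth returns 2xx
    proxy_pass   http://backend;                # hypothetical upstream
}

location = /auth {
    internal;
    proxy_pass              http://auth-server; # hypothetical auth backend
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
}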
HI - 2 years ago
Hi,
I am new to this forum. At our company we are using NGINX as a proxy server and have been writing NGINX modules for our requirements. How can I join the development mailing list and forum?
Thanks.
by dtandon - New Member Introductions