Hi, I want my nginx listener to use SSL and do both server and client verification. However, I want it to use different certificates and keys for server verification than for client verification. The reason is that I want to use a properly signed certificate for server verification and a self-signed certificate for client verification (in order to manage allowed clients). Is there a way to achieve this? by spacerobot - Nginx Mailing List - English
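For reference, a minimal sketch of how the two roles are usually separated, assuming hypothetical paths and server names: the certificate nginx presents to clients and the CA used to verify client certificates are configured by different directives, so they can be different files.

```nginx
server {
    listen 443 ssl;
    server_name example.com;                         # hypothetical name

    # properly signed certificate presented to clients (server verification)
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # self-signed CA certificate used only to verify client certificates
    ssl_client_certificate /etc/nginx/ssl/ca-clients.crt;
    ssl_verify_client on;
}
```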
We use auth_request right now and it works great. However, we are making a change: in the future the authentication server will only accept SSL requests, and it also verifies client certificates. I couldn't find information online about how to pass a client SSL certificate when using auth_request. Current configuration: location = /_auth { internal; proxy_method PO... by spacerobot - Nginx Mailing List - English
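A minimal sketch of one way to handle this, with hypothetical hostnames and paths: nginx cannot replay the end user's certificate and key to the auth server, but the auth_request location can present nginx's own client certificate via proxy_ssl_certificate (available since 1.7.8), and optionally forward the original client certificate in a header via $ssl_client_escaped_cert (1.13.5+).

```nginx
location = /_auth {
    internal;
    proxy_pass https://auth.example.com/check;       # hypothetical auth server

    # client certificate nginx presents to the auth server
    proxy_ssl_certificate     /etc/nginx/ssl/nginx-client.crt;
    proxy_ssl_certificate_key /etc/nginx/ssl/nginx-client.key;

    # verify the auth server's own certificate
    proxy_ssl_trusted_certificate /etc/nginx/ssl/auth-ca.crt;
    proxy_ssl_verify on;

    # optionally pass the end user's certificate along for the auth server to inspect
    proxy_set_header X-Client-Cert $ssl_client_escaped_cert;

    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}
```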
Hi nginx experts, I'm trying to build a mechanism where, for requests to certain endpoints, nginx makes an HTTP call to a certain server (mostly POST and DELETE), gets a response back, parses the response code and body, and, based on the response code and the content of the response body, returns certain HTTP status codes/responses to the caller. Is there an existing module that can help achieve this... by spacerobot - Nginx Mailing List - English
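A minimal sketch of the status-code half of this with stock auth_request (hostnames and the 429 mapping are hypothetical): auth_request only acts on the subrequest's status code (2xx allows, 401/403 deny, anything else is treated as an error), and error_page can remap the denial to a different status for the caller. Inspecting the response *body* is beyond plain config and would need something like the njs or Lua modules.

```nginx
location /api/ {
    auth_request /_check;

    # remap denials from the control service to the status we want callers to see
    error_page 401 = @denied;
    error_page 403 = @denied;

    proxy_pass http://app_backend;                   # hypothetical upstream
}

location = /_check {
    internal;
    proxy_method            POST;                    # force the method of the outbound call
    proxy_pass              https://control.example.com/allow;
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}

location @denied {
    return 429 "request rejected by control service\n";
}
```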
Ran into an issue where I needed to set a larger proxy_buffer_size (e.g. 128k). It works after increasing it. My question is: what are the disadvantages of setting a large buffer size? If there are none, why is the default only 8k? Is there a value I definitely shouldn't go above? Thanks! by spacerobot - Nginx Mailing List - English
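A minimal sketch of the trade-off (upstream name is hypothetical): the buffer is allocated per proxied connection, so memory use scales with concurrency; 128k across 10,000 simultaneous requests is on the order of 1.25 GB just for header buffers, which is why the default tracks the 4k/8k memory page size.

```nginx
location / {
    proxy_pass http://app_backend;       # hypothetical upstream

    # buffer for the first part of the upstream response (the headers);
    # raise it only as far as your largest expected response headers
    proxy_buffer_size 128k;

    # buffers for the rest of the response body
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
}
```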
> Are you sure the error is returned by this nginx instance, not by your http backends? > Maxim Dounin -- Oh, thank you! It was from unicorn, not nginx. Everything began to make sense then. :D by spacerobot - Nginx Mailing List - English
Hello, spacerobot wrote: > > Most likely you are trying to configure client_header_buffer_size/large_client_header_buffers in a pure virtual server{}. This won't work as request headers parsing happens before the Host header is known (and the virtual... by spacerobot - Nginx Mailing List - English
> Most likely you are trying to configure client_header_buffer_size/large_client_header_buffers in a pure virtual server{}. This won't work as request header parsing happens before the Host header is known (and the virtual server is selected), hence parsing happens in the context of the default server for a listen socket. > > You hav... by spacerobot - Nginx Mailing List - English
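A minimal sketch of what the quoted advice implies (server names are hypothetical): because headers are parsed before the virtual server is chosen, the buffer sizes have to be set at http{} level or in the default server for the listen socket, not only in the virtual server that will ultimately handle the host.

```nginx
events {}

http {
    # effective for header parsing on every server of this listen socket
    large_client_header_buffers 4 16k;

    server {
        listen 80 default_server;
        server_name _;
        return 444;
    }

    server {
        listen 80;
        server_name app.example.com;
        # long-URI requests are limited by the buffers set above,
        # not by anything configured only in this block
    }
}
```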
It appears that no matter how big I set large_client_header_buffers, nginx just doesn't honor the setting and still returns 414 on a long request. I tried 16k, 32k, 256k, 512k, etc., and POSTing a request with a 1.5k-long URL returns 414. It works when I reduce the request URI length to about 1k, regardless of the large_client_header_buffers value as well. I... by spacerobot - Nginx Mailing List - English
My nginx server needs to accept requests whose URIs are about 4k long; it currently returns 414 with the message "Request-URI Too Long". I looked online and added: large_client_header_buffers 4 16k; to the http block in nginx.conf and restarted nginx, but it still returns 414. I also tried adding client_header_buffer_size set to 16k as well, but it didn't help. I have nginx 1.0.12, built with the following... by spacerobot - Nginx Mailing List - English
Is it possible to specify a range of ports for the servers in an upstream? For example, something like: upstream foo { server 10.123.111.100:6000-7000 weight=1; } by spacerobot - Nginx Mailing List - English
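As far as I know there is no range syntax for upstream server addresses, so each port has to be listed explicitly; a minimal sketch (the generated-include path is hypothetical):

```nginx
upstream foo {
    server 10.123.111.100:6000 weight=1;
    server 10.123.111.100:6001 weight=1;
    server 10.123.111.100:6002 weight=1;
    # ...one line per port up to 7000, or generate the list with a script and:
    # include /etc/nginx/conf.d/foo-ports.conf;   # hypothetical generated file
}
```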
Thanks guys, the split_clients module worked. by spacerobot - Nginx Mailing List - English
This is about traffic control: in case we get slammed by heavier traffic than expected, we want to be able to gracefully allow only part of the users' requests through. In my current nginx config, I have a whitelist based on an $http_x_user_id header in the request and only allow traffic from those users. The way I did it was a condition in the location / block: if ( $http_x_u... by spacerobot - Nginx Mailing List - English
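A minimal sketch of the split_clients approach mentioned in the follow-up post (the 20% share and upstream name are hypothetical): split_clients hashes the user id at http{} level, so a given user is consistently admitted or shed, without maintaining a whitelist.

```nginx
# admit a configurable share of user ids during an overload
split_clients "${http_x_user_id}" $user_allowed {
    20%     1;   # roughly 20% of user ids get through
    *       0;   # shed the rest
}

server {
    listen 80;

    location / {
        if ($user_allowed = 0) {
            return 503;
        }
        proxy_pass http://app_backend;   # hypothetical upstream
    }
}
```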