Hi, I have a FastCGI server listening on a UDS for HTTP requests from NGINX. For some reason, the requests stopped reaching the FastCGI server. netstat -nap shows one socket connection in LISTENING state (as expected), one in CONNECTED state (not sure there should be such a session hanging around), and many socket connections in CONNECTING state. To remove all these UDS fcgi connections, I sto…
by nginxuser100 - Nginx Mailing List - English
Thank you B.R. I wonder why 505 was not supported.
Thank you Francis. The body content did the trick ... not as aesthetically pleasing to the eyes as NGINX's "hard-coded reason phrase", but it is better than a blank page. I did not understand what you meant by a config to control the reason phrase. Thanks again.
I have my FCGI server send "HTTP/1.1 505 Version Not Supported\r\nStatus: 505 Version Not Supported\r\n\r\n". In nginx.conf, I have: fastcgi_intercept_errors on; error_page 505 /errpage; location /errpage { try_files /version_not_supported.html =505; } If version_not_supported.html is not found, I expected nginx to d…
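A minimal sketch of the configuration described above, with the directives laid out in context (the root path and the html filename are assumptions):

```nginx
fastcgi_intercept_errors on;        # let error_page handle upstream statuses >= 300

error_page 505 /errpage;

location = /errpage {
    internal;                       # not reachable by direct client requests
    root /usr/share/nginx/html;     # path is an assumption
    # serve the static page; if it is missing, fall back to a bare 505
    try_files /version_not_supported.html =505;
}
```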
Hi, does NGINX support the generation of the error message for HTTP error code 505? For example, I see "401 Authorization Required" when running nginx 1.6.2 but I don't see anything for 505. NGINX would return "505 OK" in the HTTP response. Thank you.
I also tried fastcgi_pass_header Status; along with the fastcgi_intercept_errors directive. NGINX still returned 200 OK instead of the 400 sent by the fastcgi server.
Hi, I expected fastcgi_intercept_errors to return a static error page AND to include the HTTP error code (e.g. 400) in the HTTP response header. From what I see, it returns the static error page but with 200 OK. Is this the expected behavior? If yes, is there a way to have nginx return both the error page and the error code to the client? Thank you!
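One plausible cause of the 200 OK, sketched below: error_page keeps the upstream status code by default, but an "=" (with or without an explicit code) replaces it with the status of the error-page request, which is 200 for a static file. The page path is an assumption:

```nginx
fastcgi_intercept_errors on;

# without "=", the original upstream status (400) is kept on the response
error_page 400 /400.html;

# a bare "=" or "=200" would substitute the error page's own status,
# which can produce the 200 OK behavior described above:
# error_page 400 = /400.html;

location = /400.html {
    internal;
    root /usr/share/nginx/html;   # path is an assumption
}
```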
Thank you Maxim, that was what I was looking for. However, it is still not returning the static error page. Does nginx expect a certain response format from the fcgi server? I tried: "HTTP/1.1 400 Bad Request\r\nStatus: 400 Bad Request\r\n"; and "HTTP/1.1 400 Bad Request"; The nginx.conf has: root ...; location xxx { include fastcgi_params; …
Hi, I would like nginx to map a fastcgi error response to a static error page, and include the HTTP error code in its HTTP response header; e.g. 1. have nginx return the proper error code in its header to the client. 2. have nginx return the proper error page based on the fastcgi_pass server's response error code. For example, if the fastcgi server returns '400 Bad Request', I would like NGINX to…
Hi, FastCGI support is built into NGINX. Can someone from the NGINX organization confirm that there is no plan to retire FastCGI support in NGINX? Thank you!
Thank you, that did the trick.
Hi, given client --(tcp)--> nginx --(fcgi)--> fcgi server --(tcp)--> back-end server: if the client initiates a TCP disconnect, is there a way for NGINX to propagate the termination to the fcgi server? Or if the back-end server disconnects, how can the fcgi server communicate the disconnect all the way to nginx and the client? From what I observed, a client could send a TCP FIN, but N…
Hi, I would like nginx to serve all requests of a given TCP connection to the same FCGI server. If a new TCP connection is established, then nginx would select the next UDS FCGI server in round-robin fashion. Can this be achieved with NGINX, and if yes, how? I thought turning on fastcgi_keep_conn would achieve this goal, but that is not what happened. My observation was that each FCGI server…
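A sketch of one possible approach, with the caveat that stock round-robin balances per request, not per client TCP connection; hashing on a client key approximates per-client stickiness instead (the socket paths are assumptions):

```nginx
upstream fcgi_backends {
    # pin each client to one backend; stock round-robin would not do this
    hash $remote_addr consistent;       # or: ip_hash;
    server unix:/tmp/fcgi1.sock;        # paths are assumptions
    server unix:/tmp/fcgi2.sock;
    keepalive 8;                        # reuse idle upstream connections
}

server {
    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;           # required for upstream keepalive to work
        fastcgi_pass fcgi_backends;
    }
}
```

Note that fastcgi_keep_conn by itself only keeps upstream connections open for reuse; it does not influence which backend a given client's requests are routed to.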
Hi, is there a way in nginx to set a limit on the number of "buffered" connections (I am referring to the client's request being buffered on disk)? I was not able to find a directive for this but wanted to confirm, thank you.
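As far as I know there is no directive that directly caps the count of disk-buffered requests; the closest controls bound per-request buffering and per-client concurrency. A sketch of those related knobs (values are illustrative):

```nginx
# http context
client_body_buffer_size 128k;   # in-memory buffer before spilling to disk
client_max_body_size 8m;        # hard cap on any single request body

limit_conn_zone $binary_remote_addr zone=perip:10m;

server {
    limit_conn perip 10;        # cap concurrent connections per client IP
}
```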
Hi, how do I get a patch for the fastcgi_request_buffering directive support for nginx version 1.6.2 or any other version going forward? Thank you.
I was using 1.7.9 and it was crashing so I now go by the stable version 1.6.2 per http://nginx.org/en/download.html. Whichever version I use, I will need the fastcgi_request_buffering directive patch. Thanks.
Hi Kurt, where can I get a patch for nginx version 1.6.2 (the 'official' stable version as of today)? Thank you!
Hi, the situation that I am trying to solve is what happens if the client's request is larger than the configured client_max_body_size. Turning off buffering by nginx should resolve the problem as nginx would forward every packet to the back-end server as it comes in. Did I misunderstand the purpose of "fastcgi_request_buffering off;"? Thanks.
Thanks Kurt. The patch compiled and got installed fine. I no longer get an unknown directive error msg. However, the client's POST request of 1.5M of data still gives me this error "413 Request Entity Too Large" even though I added "fastcgi_request_buffering off;" location / { include fastcgi_params; fastcgi_request_buffering off; fastcgi…
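The 413 above is consistent with client_max_body_size being checked against the request's Content-Length before any body is read, independently of request buffering. A sketch of a config that streams the body and also raises that limit (the socket path is an assumption):

```nginx
location / {
    include fastcgi_params;
    # unbuffered streaming does not bypass the body-size check,
    # so the limit must be raised (or disabled with 0) as well
    client_max_body_size 10m;          # or 0 to disable the check
    fastcgi_request_buffering off;
    fastcgi_pass unix:/tmp/fcgi.sock;  # socket path is an assumption
}
```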
Thanks Kurt. In the meantime, is there a way to access the patch? I was not able to access the link to a patch mentioned in this email thread http://trac.nginx.org/nginx/ticket/251 Thanks.
Hi, how can I tell nginx not to buffer clients' requests? I need this capability to upload files larger than nginx's max buffering size. I got an nginx unknown directive error when I tried the fastcgi_request_buffering directive. Is the directive supported and am I missing a module in my nginx build? I am running nginx 1.7.9. Thank you!
Hi, I would like to have the auth_request fastcgi auth server send some custom variables to the fastcgi back-end server. For example, the Radius server returned some parameters which the fastcgi auth server needs to send to the fastcgi back-end server. location / { auth_request /auth; fastcgi_pass <back-end server>; <--- would like this server to see the custom parameters…
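The auth_request module's auth_request_set directive can copy values out of the auth subrequest's response (e.g. a response header the auth app emits) into a variable, which can then be passed on as a FastCGI parameter. A sketch; the header name, parameter name, and socket paths are assumptions:

```nginx
location / {
    auth_request /auth;
    # copy a header produced by the auth subrequest into a variable
    # (the X-Auth-User header name is an assumption)
    auth_request_set $auth_user $upstream_http_x_auth_user;
    include fastcgi_params;
    fastcgi_param X_AUTH_USER $auth_user;   # parameter name is an assumption
    fastcgi_pass unix:/tmp/backend.sock;
}

location = /auth {
    internal;
    include fastcgi_params;
    fastcgi_pass_request_body off;          # auth check does not need the body
    fastcgi_param CONTENT_LENGTH "";
    fastcgi_pass unix:/tmp/auth.sock;
}
```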
In case it helps someone else, the problem turned out to be in the FastCGI auth server's printf: the last header line of the HTTP response should end with \n\n instead of \r\n. The following was wrong: printf("Content-type: text/html\n\n" "Set-Cookie: name=AuthCookie\r\n" "<html><head><title>FastCGI 9010: Hello!</title></head>…
Thank you Maxim, it is much better in the sense that I am not getting an error at NGINX start time, but the FastCGI back-end server listening on port 9000 does not seem to get the cookie set by the FastCGI auth server, nor any data from a POST request body or data generated by the FastCGI auth app. On a separate note, a GET request would get a response, but a POST request would get an Internal error…
Hi, Question 1: I would like to have a FastCGI authentication app assign a cookie to a client; the FastCGI auth app is called using auth_request. The steps are as follows: 1. Client sends a request. 2. NGINX auth_request forwards the request to a FastCGI app to authenticate. 3. The authentication FastCGI app creates a cookie, using "Set-Cookie: name=value". I would like this…
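One way to relay a cookie set by the auth subrequest back to the client is to capture its Set-Cookie header with auth_request_set and re-emit it on the main response. A sketch; the socket paths are assumptions:

```nginx
location / {
    auth_request /auth;
    # capture the cookie the auth app set on its subrequest response
    auth_request_set $auth_cookie $upstream_http_set_cookie;
    # re-emit it on the response nginx sends to the client
    add_header Set-Cookie $auth_cookie;
    include fastcgi_params;
    fastcgi_pass unix:/tmp/backend.sock;   # path is an assumption
}

location = /auth {
    internal;
    include fastcgi_params;
    fastcgi_pass_request_body off;
    fastcgi_param CONTENT_LENGTH "";
    fastcgi_pass unix:/tmp/auth.sock;      # path is an assumption
}
```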
Thanks Sergio, that was helpful!
Hi, I am a newbie at nginx and looking at its authentication capabilities. It appears that when using auth_request, every client request would still require an invocation of the auth_request fastcgi or proxy_pass server. Looking at auth_pam, I am not clear on how it works: 1. How does nginx pass the user credentials to the PAM module? 2. Would nginx remember that a user has been authenticated…
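On the first concern (a subrequest per client request), one common mitigation is to cache positive auth results for a short time, keyed on the credential, so repeat requests skip the auth server. A sketch using the FastCGI cache; the paths, zone name, key, and timings are assumptions:

```nginx
# http context
fastcgi_cache_path /var/cache/nginx/auth keys_zone=auth_cache:1m;

location = /auth {
    internal;
    include fastcgi_params;
    fastcgi_pass_request_body off;
    fastcgi_param CONTENT_LENGTH "";
    # cache successful auth responses briefly, keyed on the credential
    fastcgi_cache auth_cache;
    fastcgi_cache_key "$http_authorization";
    fastcgi_cache_valid 200 5m;
    fastcgi_pass unix:/tmp/auth.sock;   # path is an assumption
}
```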