> On the other hand, it looks like sending of the request is still in progress, and the upstream server replies before the request was completely sent. It might indicate it just doesn't wait long enough, and the problem is in the backend (and slow connectivity to the backend). I don't see any pause in request sending you've claimed in your initial message. On t… by speedfirst - Nginx Mailing List - English
Retried the test with client_max_body_size 0; the size of the tmp file is as expected, about 3.7M: root@zm-dev03:/opt/data/tmp/nginx/client# ll 0000000001 -rw------- 1 speedfirst speedfirst 3914486 2012-06-06 01:27 0000000001 Here is the client command with curl: curl -v -u admin@dev03.eng.test.com:test123 -F "file=@test.tgz;filename=test.tgz;type=application/x-compressed-tar" "h… by speedfirst - Nginx Mailing List - English
Thanks for your quick response. In my config, client_max_body_size is set to 0. Does that mean "unlimited"? I made this test in two versions, 0.9.3 and 1.2.0; both have the same problem. by speedfirst - Nginx Mailing List - English
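For reference, a value of 0 does disable the body-size check entirely; a minimal sketch of the setting in question:

```nginx
http {
    # 0 = do not check the size of the client request body at all
    client_max_body_size 0;
}
```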
By the way, my client request is a POST request. by speedfirst - Nginx Mailing List - English
Hey. In my env, the layout is: client <--> nginx <--> jetty. In the client there is an <input type=file> control. I tried to upload a file of 3.7 MB. In the client request, the content type is "multipart/form-data", and there is an "Expect: 100-continue" header. Through tcpdump, I could see nginx immediately return an "HTTP/1.1 100 Con… by speedfirst - Nginx Mailing List - English
I need to specify a CA file with the "ssl_client_certificate" directive. This crt is generated by the openssl x509 command with the "-trustout" parameter, so it starts with "-----BEGIN TRUSTED CERTIFICATE-----" rather than the common "-----BEGIN CERTIFICATE-----". Nginx reports the error: PEM routines:PEM_read_bio:no start line. Internally, nginx uses openssl's API "S… by speedfirst - Nginx Mailing List - English
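One workaround is to convert the trusted-certificate PEM back to the plain framing nginx's PEM reader accepts. A sketch with openssl (a throwaway self-signed certificate stands in for the real CA file, and all paths are hypothetical):

```shell
set -e
tmp=$(mktemp -d)

# Create a throwaway self-signed CA (stand-in for the real CA file).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
    -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null

# Re-export with -trustout: this produces the "BEGIN TRUSTED CERTIFICATE"
# framing that nginx rejects with "no start line".
openssl x509 -in "$tmp/ca.crt" -trustout -out "$tmp/ca.trusted.crt"
head -1 "$tmp/ca.trusted.crt"     # -----BEGIN TRUSTED CERTIFICATE-----

# Convert back to a plain certificate usable by ssl_client_certificate:
openssl x509 -in "$tmp/ca.trusted.crt" -out "$tmp/ca.plain.crt"
head -1 "$tmp/ca.plain.crt"       # -----BEGIN CERTIFICATE-----
```

openssl x509 accepts both PEM framings on input but writes the plain form by default, so the round trip drops the "TRUSTED" wrapper.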
add_header is effective only when the response code is 200, 204, 301, 302 or 304. You could try the headers_more module. by speedfirst - Nginx Mailing List - English
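With the third-party headers-more module, the header is set regardless of the status code; a minimal sketch:

```nginx
location / {
    # unlike add_header, more_set_headers also applies to 4xx/5xx responses
    more_set_headers "X-My-Header: some value";
}
```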
Thanks. The general idea of that patch is similar to mine, but better, so I'll apply it. If you can tell me why you feel that patch is not enough, I can help improve it. by speedfirst - Nginx Mailing List - English
A patch has been submitted to the nginx-devel mailing list; please review. After the fix, the test result is: telnet 2001:250:1800:1::88 Trying 2001:250:1800:1::88... Connected to 2001:250:1800:1::88. Escape character is '^]'. GET / HTTP/1.0 Host: [2001:250:1800:1::88] HTTP/1.1 200 OK Server: nginx/1.1.1 Date: Wed, 24 Aug 2011 12:26:00 GMT Content-Type: application/octet-stream Content-Le… by speedfirst - Nginx Mailing List - English
This is the diff to fix the problem of http://forum.nginx.org/read.php?2,214541 Can this be integrated into the main branch? Thanks. --- ngx_http_request.c 2011-08-24 05:21:59.354049000 -0700 +++ ngx_http_request.c.backup 2011-08-24 05:05:33.244048997 -0700 @@ -1658,20 +1658,10 @@ size_t i, last; ngx_uint_t dot; -#if (NGX_HAVE_INET6) - ngx_uint_t ipv6 = 0; -#endif - las… by speedfirst - Nginx Development
For a request with an IPv6 Host header like: Host: [2001:250:1800:1::88] the function "ngx_http_validate_host" considers the trailing ":88]" a port number and trims it. Therefore the value of "r->headers_in.server" and "$host" becomes: [2001:250:1800:1: Test (in nginx 1.1.1): Test config: listen [::]:80 loc… by speedfirst - Nginx Mailing List - English
This is not a real problem of nginx; I just want to confirm the solution in this case. My nginx serves as a web reverse proxy, and it also connects to a third server for authentication with the Cookie and a token in the URI. The authentication server's timeout is 3s. The authentication is handled by my own module. In a stress test, we make a 200MB fake response from the upstream and start 150… by speedfirst - Nginx Mailing List - English
The wiki page for "$host" says $host and $http_host differ only when there is no "Host" header or the "Host" header is empty. But I found that when "Host" contains a port number, $host never contains the port number, while $http_host equals the value of the "Host" header. That is, if "Host: foo:8080", then $http_host = foo:8080 $host… by speedfirst - Nginx Mailing List - English
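That matches documented behavior: $host is the Host header's name part without the port (lowercased, falling back to the server name), while $http_host is the raw header value. A config sketch to observe the difference (log path is a placeholder):

```nginx
log_format hostvars '$http_host | $host';
access_log /var/log/nginx/hostvars.log hostvars;

# For a request carrying "Host: foo:8080" the log line reads:
#   foo:8080 | foo
```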
Thanks. Tried it, but it doesn't work. by speedfirst - Nginx Mailing List - English
Hey, I have a config like this: server { server_name foo; listen 10.117.0.150:3443; ssl on; ssl_certificate /opt/mycrt/nginx.crt; ssl_certificate_key /opt/mycrt/nginx.key; ssl_verify_client on; ssl_client_certificate /opt/mycrt/nginx.foo.ca.crt; location = /certauth { } location / { return 403; } } server { server_name bar; list… by speedfirst - Nginx Mailing List - English
Normally, your backend server should issue the redirect with a relative URL, unless you really want to redirect to another host. That is something like sendRedirect("/redirectTo/anotherPath"); This way, the backend server builds the redirect URL from the Host header of the request. In your case, it would be: Location: http://mybackendserver:8080/redirectTo/anotherPa… by speedfirst - Nginx Mailing List - English
Here is my configuration scenario: one nginx host and several upstream servers. Given a request for "/abc", an upstream server redirects to "/abc/". The nginx config is: upstream ups { server xxxxxxx; server xxxxxxx; ... } location / { proxy_pass http://ups; proxy_set_header Host $host:$server_port; } Therefore, if a user accesses htt… by speedfirst - Nginx Mailing List - English
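For this scenario, the usual shape is to forward the original host and port so the backend's redirect already points back at nginx; a sketch with placeholder addresses:

```nginx
upstream ups {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

location / {
    proxy_pass http://ups;                      # the scheme is required
    # the directive is proxy_set_header, taking a name and a value:
    proxy_set_header Host $host:$server_port;
}
```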
I find that if I use proxy_pass to an upstream, the full-URL match fails: upstream upservers { server 10.1.2.3:1234; } location / { proxy_pass http://upservers; proxy_redirect http://upservers/ http://$host:$server_port; } However, if I directly proxy_pass to the backend, it works: location / { proxy_pass http://10.1.2.3:1234; proxy_redirect http://10.1.… by speedfirst - Nginx Mailing List - English
If the backend server uses a full URL to initiate a redirect, like sendRedirect("http://10.1.2.3:1234/redirect"); can proxy_redirect match this URL and rewrite it? I wrote this in the config but it doesn't seem to work: proxy_redirect http://10.1.2.3.4:1234/redirect http://$host:$server_port; but nginx still returns "Location: http://10.1.2.3.4:1234/redirect" by speedfirst - Nginx Mailing List - English
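proxy_redirect does rewrite absolute Location headers, but the first argument must match the backend's Location value exactly (same host, same port, same path prefix). A sketch, assuming the backend really redirects to http://10.1.2.3:1234/...:

```nginx
location / {
    proxy_pass http://10.1.2.3:1234;
    # matches "Location: http://10.1.2.3:1234/..." and rewrites the prefix
    proxy_redirect http://10.1.2.3:1234/ http://$host:$server_port/;
}
```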
So what happens with a config like this? Does only the 2nd proxy_pass take effect? by speedfirst - Nginx Mailing List - English
Sorry, typo. I meant: proxy_pass to upstream2 if proxy_pass to upstream1 fails? by speedfirst - Nginx Mailing List - English
I saw some nginx configs written like this: location ~ { ... proxy_pass http://upstream1; proxy_pass http://upstream2; ... } What does this config mean? proxy_pass to upstream2 if proxy_pass to upstream1 fails? Thanks. by speedfirst - Nginx Mailing List - English
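For context: nginx rejects a second proxy_pass in the same location ("proxy_pass" directive is duplicate), so this is not failover syntax. Failover between backends is expressed in the upstream block instead; a minimal sketch with placeholder addresses:

```nginx
upstream backends {
    server 10.0.0.1:8080;           # tried first
    server 10.0.0.2:8080 backup;    # used only when the primary fails
}

location / {
    proxy_pass http://backends;
    # which failures cause a retry on the next server:
    proxy_next_upstream error timeout;
}
```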
So could it trigger a bug like this? 1. the client accesses nginx via https; 2. nginx proxies the request to the upstream; 3. before the upstream responds, the client closes the connection, and the current ngx_http_request_t object and its memory pool are disposed of; 4. the upstream sends back the response and invokes the read handler; 5. the handler uses a disposed ngx_http_request_t object, and a segmentation fault follows. by speedfirst - Nginx Mailing List - English
In the function "ngx_http_upstream_check_broken_connection" of ngx_http_upstream.c, there is a code segment like this: n = recv(c->fd, buf, 1, MSG_PEEK); ... else { /* n == 0 */ err = 0; } Here, receiving 0 bytes from the downstream client is used to judge whether the connection has been closed. However, if the downstream connection is https, and the connecti… by speedfirst - Nginx Mailing List - English
Thanks for the reply. What I mean is changing the short error message after the error code in the HTTP response status line, not the response HTML page. My motivation is to keep nginx consistent with the upstream server. Although matching the error code is enough functionally, it would be better if the short error message could be changed by configuration or simple programming. by speedfirst - Nginx Mailing List - English
I think the best way is to write a config generator yourself with Velocity or FreeMarker. by speedfirst - Nginx Mailing List - English
For example, 403's message is "Forbidden". Is there some way to append a customized message to this HTTP response status line, like: HTTP/1.1 403 Forbidden, no id provided Date: xxxx Content-Type: text/html ... Thanks. by speedfirst - Nginx Mailing List - English
Is this correct? r->count++; then return NGX_DONE? by speedfirst - Nginx Mailing List - English
Hey, I'm making a custom http handler. The idea is: the user connects to this handler, and the handler connects to another server to fetch some information. Therefore the handler itself can't return the http response immediately. The code framework is as below: ngx_http_my_custom_handler (ngx_http_request_t * r) { // create a peer connection to another peer ... ngx_event_connect_peer(peer); … by speedfirst - Nginx Mailing List - English
OCSP is an important protocol for client cert authentication, so I'm wondering when nginx will support it. Do you have a plan? Thanks. by speedfirst - Nginx Mailing List - English