Show all posts by user
Results 1 - 28 of 28
Hi,
We recently received an sshscan report for an nginx listen port.
It reports that the nginx server accepts the SHA1 signature algorithm.
This can also be verified with "openssl s_client -connect <ip>:<port> -sigalgs "RSA+SHA1""
So, can this be fixed with any existing configuration?
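For reference, one approach I was considering (a sketch only, assuming nginx 1.19.4 or newer built with OpenSSL 1.0.2+, which support the ssl_conf_command directive; paths are placeholders) is restricting the accepted signature algorithms at the OpenSSL level:

```
server {
    listen 443 ssl;
    ssl_certificate     /path/to/cert.pem;   # placeholder
    ssl_certificate_key /path/to/cert.key;   # placeholder

    # List only SHA2-based algorithms so RSA+SHA1 is no longer accepted.
    ssl_conf_command SignatureAlgorithms RSA+SHA256:RSA+SHA384:ECDSA+SHA256;
}
```

I have not confirmed this is the intended way; corrections welcome.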
by allenhe - Nginx Mailing List - English
Hi,
As the nginx websocket proxying doc says, after the HTTP Upgrade procedure completes, the client and the server communicate with each other through a two-way tunnel. I am just wondering if this two-way communication is still proxied by nginx?
My understanding is that nginx only proxies the HTTP Upgrade messages; am I correct?
1) upgrade phase
client --> nginx --> server
2) payload
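For context, the websocket proxying configuration I am using follows the example from the nginx docs, roughly like this (location and upstream names are placeholders):

```
location /chat/ {
    proxy_pass http://backend;               # hypothetical upstream
    proxy_http_version 1.1;                  # required for the Upgrade mechanism
    proxy_set_header Upgrade $http_upgrade;  # forward the client's Upgrade header
    proxy_set_header Connection "upgrade";
}
```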
by allenhe - Nginx Mailing List - English
Hi,
Thanks for the quick reply.
Yes, I know that ngx_http_grpc_module already supports gRPC proxying. The fact is that we have a legacy product running a pretty old nginx version, and we hope it can be used to proxy gRPC as well. So, can the stream module proxy gRPC requests/responses correctly?
Thanks,
Allen
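What I have in mind is a plain TCP pass-through, something like this sketch (addresses and ports are hypothetical). Since gRPC runs over HTTP/2 on a single long-lived TCP connection, my hope is that the stream module can simply tunnel it:

```
stream {
    upstream grpc_backend {
        server 10.0.0.1:50051;     # hypothetical gRPC server
    }
    server {
        listen 50051;
        proxy_pass grpc_backend;   # opaque TCP proxying, no HTTP/2 awareness
    }
}
```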
by allenhe - Nginx Mailing List - English
Hi,
Can somebody show the minimal configuration to support large file uploads with the multipart/form-data Content-Type?
When I upload an 8 GB file with the following global configs, I always get a 600-second timeout with the error: upstream timed out (110: Connection timed out) while reading response header from upstream
client_body_buffer_size 128k;
client_header_buffer_size 16K;
client_ma
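For comparison, these are the directives I have seen suggested for large uploads (values are illustrative, not recommendations):

```
client_max_body_size    10g;     # allow request bodies up to 10 GB
proxy_request_buffering off;     # stream the body to the upstream as it arrives
proxy_read_timeout      3600s;   # upstream may need a long time to answer after 8 GB
proxy_send_timeout      3600s;
```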
by allenhe - Nginx Mailing List - English
Hi,
Thanks for the reply!
So, if there is no error and neither the downstream nor the upstream actively closes the connection, nginx won't time out and close the TCP connection with the downstream or the upstream at all? Is that correct? And I suppose the SO_KEEPALIVE option is turned on by default on connection sockets, right?
The "session" refers to the ngx_stream_session_t in th
by allenhe - Nginx Mailing List - English
Hi,
As we know, there are some keepalive options in the nginx http modules to reuse TCP connections,
but are there corresponding options in the nginx stream module to achieve the same?
How does nginx persist the TCP connection with the downstream?
How does nginx persist the TCP connection with the upstream?
What does "session" mean in the stream context?
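For illustration, the timeout-related knobs I could find in the stream module look like this (a sketch; the port and upstream name are hypothetical):

```
stream {
    server {
        listen 12345;
        proxy_pass backend;            # hypothetical upstream group
        proxy_connect_timeout 10s;     # timeout for establishing the upstream connection
        proxy_timeout         10m;     # close the session after this much inactivity
        proxy_socket_keepalive on;     # SO_KEEPALIVE on the upstream socket (1.15.6+)
    }
}
```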
BR,
Allen
by allenhe - Nginx Mailing List - English
Looking at the big loop code, can it happen that the worker process closes a keepalive connection before consuming pending read events?
for ( ;; ) {
    if (ngx_exiting) {
        if (ngx_event_no_timers_left() == NGX_OK) {
            ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "exiting");
            ngx_worker_process_exit(cycle);
by allenhe - Nginx Mailing List - English
Hi Maxim Dounin,
Is it possible that nginx closes the keepalive connection while there is input data queued?
As we know, if a stream socket is closed while there is input data queued, the TCP connection is reset rather than being cleanly closed.
Br,
Allen
by allenhe - Nginx Mailing List - English
Can someone elaborate on this a little bit?
"NGINX supports WebSocket by allowing a tunnel to be set up between both client and back-end servers."
What is the "tunnel" here?
Does it mean the client will talk to the back-end server directly after the HTTP Upgrade handshake?
by allenhe - Nginx Mailing List - English
A non-root process needs to signal reload to the nginx master (running as root) without sudo.
I've tried using setcap and setpriv with CAP_KILL; neither works.
# getcap nginx/sbin/nginx
nginx/sbin/nginx = cap_kill+ip
#su user01 -s /bin/sh -c 'nginx/sbin/nginx -s reload'
nginx: kill(68, 1) failed (1: Operation not permitted)
#setpriv --inh-caps +cap_5 --ambient-caps +cap_5 su user001 -s /bin/sh
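My current (unverified) theory is that the getcap output above shows only the permitted and inheritable bits (+ip) but not the effective bit, so a non-capability-aware binary never actually gains CAP_KILL at exec time. Something along these lines might be needed instead (a sketch only; the user name and paths match the attempts above):

```
# Sketch: add the effective bit so the kernel activates CAP_KILL
# for the (non-capability-aware) nginx binary at exec time.
setcap cap_kill+ep nginx/sbin/nginx

# With setpriv, capabilities are referred to by name, not number:
setpriv --reuid user01 --regid user01 --clear-groups \
        --inh-caps +kill --ambient-caps +kill \
        nginx/sbin/nginx -s reload
```

Both commands require root to set up, and I have not confirmed this works with the -s reload path.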
by allenhe - Nginx Mailing List - English
Hi,
I found that most of the time, using "r" after ngx_http_free_request() doesn't cause any problem; the core dump happens only once in a while under high load.
I am wondering if ngx_pfree() does not return the memory back to the OS when it's called?
by allenhe - Nginx Mailing List - English
To be more certain,
can somebody confirm that "r" is no longer accessible after this?
ngx_http_free_request(r, 0);
thank you in advance!
by allenhe - Nginx Mailing List - English
I was wrong; the request object was created on the fly together with the pool object.
Here the pool was destroyed before "r" was referenced, which caused the core dump.
by allenhe - Nginx Mailing List - English
version: 1.17.8
debug log:
------------------------------------------------------------
2020/09/03 14:09:21 320#320: *873195 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.68.23.2, server: , request: "POST /api/hmf-controller/v1/com
by allenhe - Nginx Mailing List - English
Hi Francis,
Thanks for the reply!
W.r.t. http://nginx.org/r/proxy_buffering, the doc does not mention whether the buffering applies to the header, the body, or both. I'm wondering if nginx can postpone sending the upstream header in any way? Otherwise the client will get the wrong status code in this case.
BR,
Allen
by allenhe - Nginx Mailing List - English
Will nginx buffer the header before receiving the whole body?
If not, what happens if an error occurs in the middle of receiving the body? nginx then has no chance to send the error status.
by allenhe - Nginx Mailing List - English
I understand that nginx proxies the header first and then the body. In the case where the connection with the upstream is broken during the transfer of the body, what status code will the client get? Since nginx first proxies the 200 OK from the upstream to the client, will nginx send another 5xx header to the client if the upstream connection breaks?
by allenhe - Nginx Mailing List - English
Patrick Wrote:
-------------------------------------------------------
> On 2019-07-07 22:39, allenhe wrote:
> > Per my understanding, the reloading would only replace the old
> workers with
> > new ones, while during testing (constantly reloading), I found the
> output of
> > "ps -ef" shows multiple masters and shutting down workers which
> would fa
by allenhe - Nginx Mailing List - English
Hi,
Per my understanding, reloading only replaces the old workers with new ones, but during testing (constantly reloading), I found that the output of "ps -ef" shows multiple masters and shutting-down workers which fade away very quickly, so I guess the master process may undergo the same replacement.
Could some experts help confirm this?
What's strange is that in a pr
by allenhe - Nginx Mailing List - English
Hi,
I found that this is valid, and I want to know what scenario it's used for.
deny 0.0.0.1;
Thanks,
Allen
by allenhe - Nginx Mailing List - English
Nginx version: 1.13.6.1
1) In our use case, nginx is reloaded constantly. You will see lots of worker processes hanging at "nginx: worker process is shutting down" after a couple of days:
58 root 0:00 nginx: master process ./openresty/nginx/sbin/nginx -p /opt/applicatio
1029 nobody 0:22 nginx: worker process is shutting down
1030 nobody 0:27 nginx: worker process
by allenhe - Nginx Mailing List - English
I see. So in this case the request was completely sent in a single write without blocking, so there is no need to schedule a write timer anymore; otherwise it would be necessary.
Thanks for the explanations!
B.t.w., have you ever seen the worker process listening on the socket?
by allenhe - Nginx Mailing List - English
I understand the connection-establishment timer, write timer and read timer should be set up and removed in order, but where is the write timer? Are there lines in the logs saying "I'm going to send the bytes", "the sending is on-going", and "the bytes have been sent out"?
by allenhe - Nginx Mailing List - English
But it looks to me like the timer was set for the write, not the read; also, isn't the subsequent message saying that nginx was interrupted while sending the request?
by allenhe - Nginx Mailing List - English
Hi,
Nginx hangs at proxying the request body to the upstream server, with no error indicating what's happening, until the client closes the front-end connection. Can somebody here give me a clue? The following is the debug log snippet:
2019/04/12 14:49:38 92#92: *405 epoll add connection: fd:29 ev:80002005
2019/04/12 14:49:38 92#92: *405 connect to 202.111.0.40:1084, fd:29 #406
2019/0
by allenhe - Nginx Mailing List - English
Hi,
My Nginx is configured with:
proxy_next_upstream error timeout http_429 http_503;
But I find it won't try the next available upstream server when the following error is returned:
2019/04/05 20:11:41 85#85: *4903418 recv() failed (104: Connection reset by peer) while reading response header from upstream....
The "error" part for the proxy_next_upstream states:
error
an er
by allenhe - Nginx Mailing List - English
Hi,
I understand that it is the master process that listens on the bound socket, since that's what I see in the netstat output most of the time:
tcp 0 0 0.0.0.0:28002 0.0.0.0:* LISTEN 12990/nginx: master
while sometimes I find a worker process also doing the same thing:
tcp 0 0 0.0.0.0:28886 0.0.0.0:* LISTEN 12987/ngi
by allenhe - Nginx Mailing List - English