Found the answer: The new ciphersuites are defined differently and do not specify the certificate type (e.g. RSA, DSA, ECDSA) or the key exchange mechanism (e.g. DHE or ECDHE). This has implications for ciphersuite configuration. So you only need to put the TLS 1.3 ciphers into the list.
by meteor8488 - German Forum
Hi All, In the past, with TLS 1.1/1.2, it was suggested to add both an ECC and an RSA certificate to the web server, so that browsers that support ECC use the ECC certificate to speed up the site, and browsers that don't fall back to the RSA certificate. Now I'm trying to enable TLS 1.3 for my website, but it seems TLS 1.3 doesn't support the ECC certificate. All the ssl_ciphers for TLS 1.3 are as follows:
by meteor8488 - German Forum
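A minimal sketch of a server block offering both certificate types with TLS 1.3 enabled (hostname and paths are placeholders). In TLS 1.3 the cipher suites are certificate-agnostic, so nginx still selects the RSA or ECDSA chain based on the signature algorithms the client offers; the dual-certificate setup keeps working unchanged:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;                             # placeholder hostname

    ssl_protocols TLSv1.2 TLSv1.3;

    # Both chains are offered; the handshake decides which one is used.
    ssl_certificate     /etc/ssl/example.com.rsa.crt;    # RSA chain (placeholder path)
    ssl_certificate_key /etc/ssl/example.com.rsa.key;
    ssl_certificate     /etc/ssl/example.com.ecdsa.crt;  # ECDSA chain (placeholder path)
    ssl_certificate_key /etc/ssl/example.com.ecdsa.key;

    # In stock nginx, ssl_ciphers only affects TLS 1.2 and below;
    # the TLS 1.3 suites (TLS_AES_*, TLS_CHACHA20_*) are used regardless.
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
}
```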
Thanks for the reply. Server 1 is for PHP and server 2 is for static files. I want to enable sndbuf on server 2. How can I do that?
by meteor8488 - Nginx Mailing List - English
Hi All, If I use

server {
    listen 443 accept_filter=dataready ssl http2;
}
server {
    listen 443 http2 sndbuf=512k;
}

I'll get the error "duplicate listen options for 0.0.0.0:443". I know it's caused by the http2 in server 2. But how can I enable http2 on both servers?
by meteor8488 - Nginx Mailing List - English
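A sketch of the usual fix, assuming both server blocks share the same listen socket: socket-level listen parameters (accept_filter, sndbuf, backlog, and friends) may be given on only one listen directive per address:port, while flags like ssl and http2 may be repeated freely, so all socket options go on one block and the other inherits them:

```nginx
# All socket-level options on exactly one listen for 0.0.0.0:443;
# the second server block reuses that socket and its options.
server {
    listen 443 ssl http2 accept_filter=dataready sndbuf=512k;
    server_name php.example.com;      # placeholder name
}
server {
    listen 443 ssl http2;             # no socket options here
    server_name static.example.com;   # placeholder name
}
```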
Well, then we are in an endless loop now. Nginx: "You should approach the FreeBSD folks. It still doesn't offer this functionality." FreeBSD: "Nginx using reuseport this way appears to be an inappropriate use of that method. Linux hacked the wrong thing to make this work that way instead of using what they should have. Asking FreeBSD to do the wrong thing is asking a bit much, though I k
by meteor8488 - Nginx Mailing List - English
It's been almost a year since I posted the question; is there any update yet for nginx to enable reuseport on FreeBSD?
by meteor8488 - Nginx Mailing List - English
Hi all, I just updated my configuration files as follows:

location ~ \.php$ {
    try_files $uri =404;
    if ($arg_mod = "upload") {
        return 485;
        break;
    }
    if ($request_method = POST) {
        return 484;
        break;
    }
    error_page 484 = @post;
    error_page 485 = @flash;
    fastcgi_param
by meteor8488 - Nginx Mailing List - English
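For context, a cleaned-up sketch of this internal-redirect pattern (the @post/@flash handler bodies and the upstream address are assumptions, since the snippet is cut off): 484 and 485 are arbitrary unused status codes that error_page maps to named locations, and the break after return is dead code that can be dropped:

```nginx
location ~ \.php$ {
    try_files $uri =404;
    error_page 484 = @post;
    error_page 485 = @flash;
    if ($arg_mod = "upload") { return 485; }     # uploads go to @flash
    if ($request_method = POST) { return 484; }  # other POSTs go to @post
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;   # placeholder upstream
}
location @post {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;   # placeholder upstream
}
location @flash {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;   # placeholder upstream
}
```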
Thanks for your quick response. One more question: with client_body_buffer_size 16K, if the request body is larger than 16K, it seems nginx puts the request body into a temp file, and then nothing appears in the log file, even though I enabled the request log. Does that mean the best way to keep the POST log is to enable client_body_in_file_only? But the thing is, enabling client_body_in_file_only will slow dow
by meteor8488 - Nginx Mailing List - English
Hi Team, I always use the configuration below to record the POST data of my webserver (for security reasons):

http {
    ...
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format plog '$remote_addr - $remote_user [$time_local]
by meteor8488 - Nginx Mailing List - English
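A minimal sketch of a POST-logging format along these lines (the plog name mirrors the post; the location and paths are placeholders). Note that $request_body is only populated when nginx has read the body into memory for an upstream pass; bodies spilled to a temp file will not appear in the log:

```nginx
http {
    log_format plog '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status body="$request_body"';
    server {
        location ~ \.php$ {
            access_log /var/log/nginx/post.log plog;  # placeholder path
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;  # the pass causes the body to be read
        }
    }
}
```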
Hi all, I'm running a website which is based on PHP, and I'm trying to use naxsi for it. It seems that to enable naxsi, we need to put the line below inside a location: include /etc/nginx/naxsi.rules; An interesting thing I found about location is that people use different location sections, as follows: example 1

server {
    root ...
    location / {
by meteor8488 - Nginx Mailing List - English
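For reference, a sketch of how naxsi is commonly wired up (the file paths follow the naxsi defaults and may differ on your install): the core signatures are included at http level, and the per-location rules go in every location that actually serves the requests you want checked, including the PHP handler:

```nginx
http {
    include /etc/nginx/naxsi_core.rules;     # core signatures, http level
    server {
        location / {
            include /etc/nginx/naxsi.rules;  # enables checks for this location
            try_files $uri $uri/ /index.php?$args;
        }
        location ~ \.php$ {
            include /etc/nginx/naxsi.rules;  # and for the PHP handler too
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;     # placeholder upstream
        }
    }
}
```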
Thanks. You're right. After I load the module, it works. Another question: for now I have 3 modules:

-r-xr-xr-x  1 root  wheel   17K  4  6 07:27 ngx_http_geoip_module.so*
-r-xr-xr-x  1 root  wheel   25K  4  6 07:27 ngx_http_headers_more_filter_module.so*
-r-xr-xr-x  1 root  wheel  328K  4  6 07:27 ngx_http_lua_module.so*

But it seems I only need to load ngx_http_headers_more_filter_
by meteor8488 - Nginx Mailing List - English
Hi All, I'm using FreeBSD with nginx-devel. It seems this problem has been around for a long time (at least since nginx 1.9.10). Even though I built the source with this module, the module is still not working. After adding the configuration below into http {}: more_set_headers "Server: my_server"; if I try to start nginx, I always get the error: nginx: unknown direct
by meteor8488 - Nginx Mailing List - English
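Since nginx 1.9.11, third-party modules built as dynamic modules must be loaded explicitly with load_module before their directives are recognized; otherwise nginx reports "unknown directive". A sketch, assuming the FreeBSD port's default module path (verify the path on your system):

```nginx
# At the top of nginx.conf, in the main context (outside http/events):
load_module /usr/local/libexec/nginx/ngx_http_headers_more_filter_module.so;

http {
    more_set_headers "Server: my_server";
}
```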
Maxim Dounin Wrote:
-------------------------------------------------------
> Hello!
>
> On Mon, Mar 28, 2016 at 03:54:40AM -0400, meteor8488 wrote:
>
> > Hi All,
> >
> > I'm using deny to deny some IPs for my server.
> >
> > http {
> > deny 192.168.1.123; # this is an example
> >
> > server {
> >
by meteor8488 - Nginx Mailing List - English
Hi All, I'm using deny to deny some IPs for my server.

http {
    deny 192.168.1.123;  # this is an example

    server {
        error_page 403 /error/403.htm;
        error_page 404 /error/404.htm;
        error_page 502 /error/502.htm;
        error_page 503 /error/503.htm;
        location = /error/403.htm {
            index 403.htm;
            access_log /var/log/403.log main;
        }
        location ~* ^/(data|image)
by meteor8488 - Nginx Mailing List - English
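A sketch of the usual pitfall with this layout: an http-level deny is inherited by every location, including the 403 error page itself, so the internal redirect to /error/403.htm is denied again. Declaring access rules inside the error location replaces the inherited list, and internal keeps outside clients from requesting the page directly:

```nginx
http {
    deny 192.168.1.123;  # example blocked address

    server {
        error_page 403 /error/403.htm;
        location = /error/403.htm {
            allow all;   # replaces the inherited deny list for this location
            internal;    # reachable only via the internal error_page redirect
            access_log /var/log/403.log main;
        }
    }
}
```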
Thanks for your reply. The default value of http2_max_concurrent_streams is 128. I tried changing it to 64; no big difference. I also checked the HTTP/2 documentation, which suggests that this value should not be less than 100.
by meteor8488 - Nginx Mailing List - English
Hi, Thanks for your reply. I tried disabling HTTP/2, and the issue went away, so I'm pretty sure it is caused by HTTP/2.
by meteor8488 - Nginx Mailing List - English
Hi All, After I upgraded nginx to 1.9.12 and enabled http2 for my website, I found a weird issue related to downloading pictures. My website is a photo-sharing site, so each page has about 100-200 pictures, each between 10K and 500K in size. In the past (http, and https with SPDY), I was using the settings below:

client_body_timeout 10;
client_header_timeout 10;
by meteor8488 - Nginx Mailing List - English
Hi Guys, Thanks for all this information. But is there any way to enable it on FreeBSD?
by meteor8488 - Nginx Mailing List - English
Hi All, I just upgraded Nginx from 1.8 to 1.9 on my FreeBSD box. After I enabled reuseport on my server, there is one worker that always takes up 100% CPU while all the rest use less than 1%. During the day it's OK because my website doesn't have many users, but at night it's very slow.

Active connections: 5716
server accepts handled requests
175477 175477 180564
by meteor8488 - Nginx Mailing List - English
Hi all, I just enabled http2 for my website. Because my website has lots of big pictures, it seems that the HTTP/2 feature of a single, multiplexed connection causes the browser to download lots of data at the same time and use up all the bandwidth. On some mobile devices it freezes the web browser for several seconds. So, is it possible to add a limit for the "single, mu
by meteor8488 - Nginx Mailing List - English
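A sketch of two knobs that limit how much a single HTTP/2 connection can pull at once; the values are illustrative starting points, not recommendations. http2_max_concurrent_streams caps parallel streams per connection, and limit_rate throttles each response (it applies per request, i.e. per stream):

```nginx
server {
    listen 443 ssl http2;
    http2_max_concurrent_streams 32;   # fewer parallel image streams

    location ~* \.(jpg|jpeg|gif|png)$ {
        limit_rate_after 512k;         # first 512k of each image at full speed
        limit_rate 256k;               # then throttle that response
    }
}
```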
Hi Francis, I put the "deny" directives in the http{} part. Here is my nginx.conf:

http {
    deny 4.176.128.153;
    deny 23.105.85.0/24;
    deny 36.44.146.99;
    deny 42.62.36.167;
    deny 42.62.74.0/24;
    deny 50.116.28.209;
    deny 50.116.30.23;
    deny 52.0.0.0/11;
    deny 54.72.0.0/13;
    deny 54.80.0.0/12;
    deny 54.160.0.0/12;
    deny 54.176.0.0/12;
    deny 54.176.195.13;
    deny 54.193
by meteor8488 - Nginx Mailing List - English
Thanks for your suggestion. My thought is: 1. Is it a robot? 2. If yes, does it have an X-Forwarded-For IP? 3. If yes, deny. Your method is: 1. Is it a robot? 2. If yes, is the X-Forwarded-For IP the same as the real IP? 3. If no, deny. I think there is no big difference...
by meteor8488 - Nginx Mailing List - English
It seems that I can't edit my post, so I have to post my question here: I tried to use "deny" to deny access from an IP, but it seems it can still access my server. In my http part:

deny 69.85.92.0/23;
deny 69.85.93.235;

But when I check the log, I can still find:

69.85.93.235 - - [05/May/2015:19:44:22 +0800] "GET /thread-1251687-1-1.html HTTP/1.0" 302 154
by meteor8488 - Nginx Mailing List - English
Hi All, Recently I found that some guys are trying to mirror my website. They are doing this in two ways: 1. Pretending to be Google spiders. Access logs are as follows:

89.85.93.235 - - [05/May/2015:20:23:16 +0800] "GET /robots.txt HTTP/1.0" 444 0 "http://www.example.com" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "66.249.
by meteor8488 - Nginx Mailing List - English
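One way to catch the fake-Googlebot case sketched above: flag requests whose User-Agent claims to be Googlebot but whose source address falls outside Google's crawler ranges. The range shown is only an example; real verification should use reverse DNS or Google's published IP list, and all variable names here are illustrative:

```nginx
map $http_user_agent $claims_googlebot {
    default 0;
    "~*Googlebot" 1;
}
geo $from_google {
    default 0;
    66.249.64.0/19 1;   # example Google crawler range; verify before use
}
# fake = claims to be Googlebot AND not from a Google range
map "$claims_googlebot:$from_google" $fake_googlebot {
    default 0;
    "1:0" 1;
}
server {
    if ($fake_googlebot) {
        return 444;     # drop the connection without a response
    }
}
```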
Thanks for your reply. I know that I can use if to enable conditional logging. But what I want to do is: if $spiderbot=0, log to location_access.log; if $spiderbot=1, log to spider_access.log. And I don't want the same log lines written to both files. How can I do that? Thanks
by meteor8488 - Nginx Mailing List - English
Hi all, I'm trying to separate the robot access log and the human access log, so I'm using the configuration below:

http {
    ....
    map $http_user_agent $ifbot {
        default 0;
        "~*rogerbot" 3;
        "~*ChinasoSpider" 3;
        "~*Yahoo" 1;
        "~*Bot" 1;
        "~*Spider" 1;
by meteor8488 - Nginx Mailing List - English
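A sketch of routing each request to exactly one log file using the access_log if= parameter (available since nginx 1.7.0); the $ifbot map mirrors the post, the complementary $ishuman map and file paths are assumptions. A request is skipped when the condition evaluates to "0" or empty, so two complementary flags split the traffic cleanly:

```nginx
http {
    map $http_user_agent $ifbot {
        default 0;
        "~*Bot" 1;
        "~*Spider" 1;
    }
    # Complementary flag: 1 exactly when $ifbot is 0.
    map $ifbot $ishuman {
        default 0;
        0 1;
    }
    server {
        access_log /var/log/nginx/spider_access.log main if=$ifbot;
        access_log /var/log/nginx/human_access.log  main if=$ishuman;
    }
}
```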
I'm using valid_referers to block access to my website's pictures from other sites. I'm using the config below:

location ~* ^.+\.(jpg|jpeg|gif|png|bmp)$ {
    expires 7d;
    valid_referers none blocked www.example.com;
    if ($invalid_referer) {
        return 403;
    }
}

It worked well in the past. But now I found
by meteor8488 - How to...
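A common reason such a config stops working is referers arriving with a different host form (bare domain, subdomains, or a host not listed). A sketch widening the match: the server_names keyword accepts any referer whose host matches one of this server's server_name values, and a wildcard covers subdomains (example.com is a placeholder):

```nginx
location ~* \.(jpg|jpeg|gif|png|bmp)$ {
    expires 7d;
    # none    = no Referer header at all
    # blocked = Referer present but stripped by a proxy/firewall
    valid_referers none blocked server_names example.com *.example.com;
    if ($invalid_referer) {
        return 403;
    }
}
```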
Hi All, These days, php-fpm takes up all CPU resources several times a day. Each php-fpm process may take up 25% CPU, which should be less than 5% in normal status. Below is the php-fpm.log:

[30-Mar-2013 22:18:47] NOTICE: about to trace 29600
[30-Mar-2013 22:18:47] NOTICE: finished trace of 29600
[30-Mar-2013 22:18:47] NOTICE: child 29599 stopped for tracing
[30-Mar-2013 22:18:47] NOTICE: abo
by meteor8488 - Nginx Mailing List - English