February 13, 2019 03:40AM
Hi Jeff,

This is pretty much what I'm now looking at doing, with some HAProxy servers behind the external Google LB, their backend being an internal Google LB which then balances across our Varnish caching layer and eventually the Nginx app servers.

Thank you for sharing your config; it'll be a good base for us to start from. We moved from AWS for cost/performance reasons, but also because Google LBs allow us to have a static public-facing IP address. Currently our customers point the base of their domain at a server which just redirects requests to the www subdomain, and the www subdomain is pointed at a friendly CNAME, which used to point at an AWS ELB CNAME. It now points at an IP address, and we can slowly get our customers to update their DNS to point the root of the domain at the Google LB IP before this work is ready.
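(For reference, the apex-to-www redirect described above can be done with a tiny Nginx server block along these lines; this is a sketch, and `example-customer.com` is a stand-in for a customer domain:

```nginx
server {
    listen 80;
    # catch requests to the bare/apex domain
    server_name example-customer.com;
    # preserve scheme and path while moving the client to the www subdomain
    return 301 $scheme://www.$host$request_uri;
}
```

With `server_name _;` and `default_server` instead of a literal name, one such block can redirect every apex domain pointed at the box.)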

Once again, many thanks Jeff, and thanks to everyone else for their replies,

Kind regards,
Richard

On Tue, 2019-02-12 at 10:37 -0500, Jeff Dyke wrote:
Hi Richard. HAProxy defaults to reading all certs in a directory and matching host names via SNI. Here is the top of my HAProxy config; you can see how I redirect LE requests to another server, which solely serves up responses to acme-challenges:

frontend http
    mode http
    bind 0.0.0.0:80

    # if this is a LE request, send it to a server on this host for renewals
    acl letsencrypt-request path_beg -i /.well-known/acme-challenge/
    redirect scheme https code 301 unless letsencrypt-request
    use_backend letsencrypt-backend if letsencrypt-request

frontend https
    mode tcp
    bind 0.0.0.0:443 ssl crt /etc/haproxy/certs alpn h2,http/1.1 ecdhe secp384r1
    timeout http-request 10s
    log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts \ %ac/%fc/%bc/%sc/%rc %sq/%bq SSL_version:%sslv SSL_cypher:%sslc SNI:%[ssl_fc_has_sni]"
    # send all HTTP/2 traffic to a specific backend
    use_backend http2-nodes if { ssl_fc_alpn -i h2 }
    # send HTTP/1.1 and HTTP/1.0 to the default backend, whose nodes don't speak HTTP/2
    default_backend http1-nodes
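(The frontend above references a `letsencrypt-backend` that isn't shown; a minimal sketch of what it might look like, assuming certbot's standalone responder listens on localhost port 8888 — the port is an assumption, not from Jeff's mail:

```haproxy
backend letsencrypt-backend
    mode http
    # e.g. certbot certonly --standalone --http-01-port 8888 answers the
    # /.well-known/acme-challenge/ requests forwarded here
    server letsencrypt 127.0.0.1:8888
```

Any small HTTP server that serves the challenge files works equally well as the target.)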

I'm not sure exactly how this would work with GCP, but if you use AWS ELBs they will issue you certs (you have to prove you own the domain); the catch is you have to be able to use an ELB, which can change IPs at any time. Unfortunately this didn't work for us because a few of our larger customers whitelist IPs rather than domain names, which is why I have stayed with HAProxy.

Jeff

On Tue, Feb 12, 2019 at 4:04 AM Richard Paul <Richard@primarysite.net> wrote:
Hi Jeff

That's interesting; how do you manage the programming to load the right certificate for the domain coming in as the server name? We need to load the right certificate for the incoming domain, and the 12000 figure is the number of unique vanity domains, not counting the www. subdomains.

We're planning to follow the same path as you: we're essentially putting these Nginx TLS terminators (fronted by GCP load balancers) in front of our existing Varnish caching and Nginx backend infrastructure, which currently only listens on port 80.
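(For an Nginx TLS terminator, one way to load the right certificate per incoming domain without 12000 server blocks is the variable support in `ssl_certificate`, available since Nginx 1.15.9; the paths below are illustrative:

```nginx
server {
    listen 443 ssl;
    server_name _;
    # $ssl_server_name is the SNI hostname the client sent (nginx >= 1.15.9)
    ssl_certificate     /etc/nginx/certs/$ssl_server_name.crt;
    ssl_certificate_key /etc/nginx/certs/$ssl_server_name.key;
}
```

Note that with variables the certificate is loaded on every handshake rather than cached at startup, so there is some per-connection cost.)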

I couldn't work out what the limits are at LE, as the documentation isn't clear about limits on adding new unique domains. I'm going to have to ask in their forums at some point so that I can work out what our daily batches should be.

Kind regards,
Richard

On Mon, 2019-02-11 at 14:33 -0500, Jeff Dyke wrote:
I use HAProxy in a similar way to that stated by Rainer. Rather than having hundreds and hundreds of config files (yes, there are other ways), I have one config for HAProxy and two for Nginx (on multiple machines, defined in HAProxy): one for my main domain that listens on a "real" server_name, and another that listens with `server_name _;`. All of the Nginx servers simply listen on ports 80 and 81 to handle non-H2 clients, and the application does the correct thing with the domain, which is where YMMV as all applications differ.
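(A sketch of that two-server Nginx layout; the domain, ports, and upstream address are assumptions for illustration:

```nginx
# "real" server for the main domain
server {
    listen 80;
    server_name example.com www.example.com;
    proxy_pass_header Server;
    location / { proxy_pass http://127.0.0.1:8000; }
}

# catch-all for every vanity domain; the application inspects Host
server {
    listen 80 default_server;
    listen 81;
    server_name _;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8000;
    }
}
```

The point is that no per-domain Nginx config exists at all; the Host header is passed through and the application routes on it.)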

I found this much simpler and easier to maintain over time. I got around the LE limits with a staggered migration, so I was only requesting what was within the limit each day; I then have a custom script that calls LE (which also runs on the same machine as HAProxy) when certs are about 10 days from expiry, so the staggering stays within the limits. When I was using custom configuration, I was building the configs in Python from a YAML file, with the Nginx config effectively being a Jinja2 template, but even that became onerous. If you go down the Nginx path, make sure you pay attention to the variables that control the server name hash sizes: http://nginx.org/en/docs/hash.html
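(Jeff's renewal script isn't shown; a minimal sketch of the "about 10 days out" check it describes might look like the following. The certbot invocation at the end is an assumption, shown commented out:

```python
import socket
import ssl
from datetime import datetime, timezone

RENEW_THRESHOLD_DAYS = 10  # renew when a cert is about 10 days from expiry


def needs_renewal(not_after, now=None, threshold_days=RENEW_THRESHOLD_DAYS):
    """True when a cert expiring at `not_after` is inside the renewal window."""
    now = now or datetime.now(timezone.utc)
    return (not_after - now).days <= threshold_days


def cert_expiry(host, port=443):
    """Fetch the notAfter timestamp of the cert a live host presents."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    # getpeercert() returns e.g. "May  1 12:00:00 2019 GMT"
    parsed = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return parsed.replace(tzinfo=timezone.utc)


# For each domain due for renewal, something like:
#   subprocess.run(["certbot", "renew", "--cert-name", domain])
```

Running a check like this daily from cron keeps the original stagger, since each cert only re-enters the renewal window when its own expiry approaches.)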

HTH, good luck!
Jeff

On Mon, Feb 11, 2019 at 1:58 PM Rainer Duffner <rainer@ultra-secure.de> wrote:


On 11.02.2019 at 16:16, rick_pri <nginx-forum@forum.nginx.org> wrote:

However, our customers, with about 12000 domain names at present have


Let’s Encrypt rate limits will likely make certificates for that many domains very difficult to obtain, and also to renew.

If you own the DNS, maybe using Wildcard DNS entries is more practical.

Then, HAProxy allows you to just drop all the certificates in a directory and figures out by itself which domain names it has to answer for.
At least, that’s what my co-worker told me.

Also, there’s the fabio LB with similar goal-posts.




_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Thread: I'm about to embark on creating 12000 vhosts