Re: Load Balancing NTLM over HTTP with NGINX

November 19, 2022 07:38PM
On Sat, Nov 19, 2022 at 4:04 PM Maxim Dounin <mdounin@mdounin.ru> wrote:

> Hello!
> On Fri, Nov 18, 2022 at 10:30:29PM -0500, Michael B Allen wrote:
> > NTLM over HTTP is a 3 request "handshake" that must occur over the same
> > connection.
> > My HTTP service implements the NTLMSSP acceptor and uses the clients
> remote
> > address and port like "" to track the authentication
> state
> > of each TCP connection.
> >
> > My implementation also uses a header called 'Jespa-Connection-Id' that
> > allows the remote address and port to be supplied externally.
> > NGINX can use this to act as a proxy for NTLM over HTTP with a config
> like
> > the following:
> >
> > server {
> >     location / {
> >         proxy_pass http://localhost:8080;
> >         proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
> >     }
> > }
> I'm pretty sure you're aware of this, but just for the record.
> Note that NTLM authentication is not HTTP-compatible, but rather
> requires very specific client behaviour. Further, NTLM
> authentication can easily introduce security issues as long as any
> proxy servers are used between the client and the origin server,
> since it authenticates a connection rather than particular
> requests, and connections are not guaranteed to contain only
> requests from a particular client.

Hi Maxim,

Hijacking NTLM-authenticated TCP connections is not THAT easy.
But generally, we assume TLS is being used if people care at all about
security. AFAIK TLS can't go through proxies without tunnelling, so either
way you shouldn't be able to hijack a TLS connection.

NTLM is used because it's fast, reliable and provides a truly password-free
SSO experience.
While Kerberos provides superior security, it can be fickle (client access
to DC, time sync, depends heavily on DNS, SPNs, ...).
Since NTLM is the fallback mechanism, it always works.

NTLM has issues that are more significant than what you described.
But they can be managed.
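Since NTLM authenticates the connection rather than individual requests, the backend has to track where each connection is in the 3-request handshake. A minimal sketch of that per-connection state, keyed by the same "addr:port" id, might look like the following (illustrative class and method names only, not Jespa's actual API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: track the 3-message NTLM-over-HTTP handshake
// per connection id ("addr:port"). Names here are made up for illustration.
public class NtlmConnectionState {
    enum Phase { NEGOTIATE, CHALLENGE_SENT, AUTHENTICATED }

    private final Map<String, Phase> states = new ConcurrentHashMap<>();

    // Advance the handshake one step for this connection id and
    // return the new phase.
    public Phase advance(String connectionId) {
        return states.merge(connectionId, Phase.NEGOTIATE, (old, unused) ->
                old == Phase.NEGOTIATE ? Phase.CHALLENGE_SENT : Phase.AUTHENTICATED);
    }

    public boolean isAuthenticated(String connectionId) {
        return states.get(connectionId) == Phase.AUTHENTICATED;
    }

    // Forget state when the TCP connection closes.
    public void close(String connectionId) {
        states.remove(connectionId);
    }
}
```

If the connection closes mid-handshake, the entry is simply dropped and the client starts over.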

> > More generally, do you see any problems with this scheme?
> As of now, nginx by default does not use keepalive connections to
> the upstream servers. These can, however, be configured by
> using the "keepalive" directive (http://nginx.org/r/keepalive),
> and obviously enough this will break the suggested scheme as there
> will be requests from other clients on the same connection.

My implementation works with connection caching (keepalive) to backends.
Here's the config I'm testing right now and so far it's holding up:

upstream backend {
    server localhost:8080;
    server localhost:8081;
    keepalive 16;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Jespa-Connection-Id $remote_addr:$remote_port;
    }
}

Loopback captures look right.

Note the key difference in my scheme is the Jespa-Connection-Id header,
which gives the backend the id it needs to map clients to security contexts.
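To make that concrete, the backend-side lookup could be as simple as preferring the externally supplied header over the socket's own peer address, so contexts are keyed by the client's connection rather than the proxy's (hypothetical names, not Jespa's real API):

```java
// Hypothetical sketch: resolve the connection id a backend should key
// security contexts by. If the proxy forwarded a Jespa-Connection-Id
// header, use it; otherwise fall back to the socket's remote addr:port.
public class ConnectionIdResolver {
    public static String resolve(String headerValue, String socketAddr, int socketPort) {
        if (headerValue != null && !headerValue.isEmpty()) {
            return headerValue;                   // id supplied by the proxy
        }
        return socketAddr + ":" + socketPort;     // direct connection
    }
}
```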


Michael B Allen
Java AD DS Integration
nginx mailing list -- nginx@nginx.org