Problem with TCP load balancing and Windows upstream

Posted by pulsemanchaotix
August 07, 2017 02:54PM
Hello everyone.

I have run into some trouble after putting an Nginx TCP load balancer in front of an MS Windows upstream application.
The setup consists of a backend application connected to a SQL database.
Clients use a desktop app that connects to these servers for their main operations, and they have noticed some odd behavior.
When connected directly to a single server, everything works fine. Behind the balancer, certain write operations fail, particularly when the user has been inactive for a few minutes beforehand. They can only continue with that operation after closing and reopening the app.
When we asked our client, he said the app already has a built-in health check to the server, which in theory should ensure that a timeout between client and server never occurs.
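To illustrate what I understand their health check to be, here is a rough sketch (only my guess at the mechanism; the 60-second interval and the NOOP payload are invented, since the real protocol is proprietary):

import socket
import threading
import time

def keep_session_alive(sock: socket.socket, interval: float = 60.0) -> None:
    """Periodically send a harmless message so the TCP session never sits idle.

    Only my guess at what the vendor app's "health check" does; the NOOP
    payload and the interval are made up, the real protocol is proprietary.
    """
    def loop() -> None:
        while True:
            time.sleep(interval)
            try:
                sock.sendall(b"NOOP\n")
            except OSError:
                return  # connection is gone; the real app would reconnect here

    threading.Thread(target=loop, daemon=True).start()

If traffic like that really flows over the same connection, no idle timeout in the balancer should ever fire.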
Analyzing the Nginx logs, we can't find anything: no errors are reported and there are no disconnections.
Our suspicion is that the balancer is mishandling these timeouts. Here is my config file:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
worker_rlimit_nofile 30000;

events {
    worker_connections 4096;
}

stream {
    upstream upstream_backend {
        zone upstream_backend 128k;
        hash $remote_addr consistent;  # pin each client IP to the same backend
        server 10.101.34.7:212;
        server 10.101.34.8:212;
        server 10.101.34.29:212;
    }
    server {
        # so_keepalive=keepidle:keepintvl:keepcnt
        listen 212 so_keepalive=4h:1h:10;
        proxy_timeout 2h;          # close the session after 2h with no data in either direction
        proxy_connect_timeout 2h;  # time allowed to establish the upstream connection
        proxy_pass upstream_backend;
    }
}
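To reproduce this outside the desktop app, I have in mind a test along these lines (a rough sketch: the balancer address, the idle period, and the PING payload are placeholders, since the real protocol is proprietary). It connects through the balancer, sits idle, and then tries to write, just as our users do:

import socket
import time

BALANCER = ("10.101.34.100", 212)  # placeholder: the VIP of the nginx balancer
IDLE_SECONDS = 10 * 60             # longer than the suspected idle timeout

sock = socket.create_connection(BALANCER, timeout=30)
sock.sendall(b"PING\n")            # placeholder payload; the real protocol is proprietary
print("reply before idling:", sock.recv(1024))

time.sleep(IDLE_SECONDS)           # sit idle, like the desktop users do

try:
    sock.sendall(b"PING\n")        # the write that fails for our users
    print("reply after idling:", sock.recv(1024))
except OSError as exc:             # socket.timeout is a subclass of OSError
    print("connection broke during the idle period:", exc)
finally:
    sock.close()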

I'd appreciate any help.
Re: Problem with TCP load balancing and Windows upstream
Posted by pulsemanchaotix
August 10, 2017 05:08PM
I've changed my config to:

worker_processes auto;
error_log /var/log/nginx/nginx_error.log error;
error_log /var/log/nginx/info.log info;
pid /run/nginx.pid;
worker_rlimit_nofile 60000; # before 30000

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 6144; # before 4096
}

stream {
    upstream upstream_backend {
        hash $remote_addr consistent;
        server 10.101.34.7:212;
        server 10.101.34.8:212;
        server 10.101.34.29:212;
    }
    server {
        listen 212 so_keepalive=30m:1:10; # before 4h:1h:10
        proxy_timeout 1h;                 # before 2h
        proxy_connect_timeout 120s;       # before 2h
        proxy_pass upstream_backend;
    }
}
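For reference, the three fields of so_keepalive map directly onto the kernel's TCP keepalive options, so the new value means: start probing after 30 minutes of idle, probe every second, and drop the connection after 10 failed probes. In Python terms (a Linux-only sketch, independent of nginx):

import socket

# What nginx's "so_keepalive=30m:1:10" configures on the listening socket,
# expressed as raw socket options (Linux option names):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30 * 60)  # idle seconds before the first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 1)       # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)        # failed probes before the connection is dropped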

The problem still persists. We now believe it is an application problem, but I'd be grateful for any help all the same.


