Problem with big files

jakubp, June 16, 2014 05:05PM
Hi

Recently I hit a fairly big problem with huge files. nginx is a cache fronting an origin that serves huge files (several GB each). Clients mostly use range requests (often for parts towards the end of a file), and I use a patch Maxim provided some time ago that lets a range request receive HTTP 206 when the resource is not yet in cache but is determined to be cacheable...
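
For context, the relevant part of the setup looks roughly like this (a minimal sketch; the zone name, paths and sizes are illustrative, not my exact values):

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=bigfiles:100m
                     max_size=500g inactive=7d;

    server {
        listen 80;

        location / {
            proxy_pass        http://origin;
            proxy_cache       bigfiles;
            proxy_cache_valid 200 24h;
            # let only one request per key populate a new cache entry
            proxy_cache_lock  on;
        }
    }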

When a file is not in cache and a flurry of requests for it arrives, I see that after proxy_cache_lock_timeout expires (at which point the ongoing download has often not yet reached the first byte requested by many of the waiting clients), nginx establishes a new upstream connection for each client and starts another download of the same file. I understand why this happens, and that it's by design, but...
That kills the server. The multiple parallel writes to the temp directory basically destroy disk performance, which in turn blocks the nginx worker processes.
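
To illustrate which knobs are involved (the values here are made up, not my real settings):

    # only the first request for an uncached key goes to upstream;
    # the others wait on the lock...
    proxy_cache_lock         on;
    # ...but only this long; after the timeout, each request that was
    # waiting opens its own upstream connection and spools its own
    # copy of the response to the temp directory
    proxy_cache_lock_timeout 5s;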

Is there anything that can be done about this? Bear in mind that I can't afford to serve HTTP 200 to a range request, and I'd also like to avoid clients waiting forever for their first requested byte...

Thanks in advance!

Regards,
Kuba


