Hello, hopefully someone can tell me whether I am attempting the impossible.
We use nginx and php5-fpm to handle uploads of test files for our web app; these files may be several gigabytes in size.
To achieve this we use a location block for the upload URL and set the fastcgi_pass directive to point at the FastCGI socket, i.e.:
location /api/filter/analysis/upload {
    fastcgi_pass unix:/tmp/php5-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/PHP/uploadFile.php;
}
The above configuration works; however, we have observed that nginx buffers the entire request body to a temporary file before calling the PHP script, which then effectively copies the uploaded file to a different location.
Under certain circumstances this is a major issue for us: our host environment is an instrument designed to capture data continuously, and it will fill the hard drive to close to 100% capacity before stopping. If a user starts the upload of a 1 GB file that is first buffered by nginx and then copied elsewhere by the PHP script, we can run out of disk space, at which point nginx (and other processes) understandably stop functioning, leaving the instrument in a state that is difficult to recover from.
If we can prevent nginx from buffering the entire request body and have it stream straight to our PHP script, we should be able to take preventative steps in the script and reject the upload attempt if disk space is too low; the sketch below shows the kind of check we have in mind.
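For illustration, this is roughly what we would like to run at the top of uploadFile.php before accepting the body. It is only a sketch: the target directory '/data/uploads', the 512 MB safety margin, and the reliance on the CONTENT_LENGTH header (which nginx passes via fastcgi_params) are assumptions for this example.

<?php
// Sketch: refuse the upload up front if free space is too low.
// '/data/uploads' and the 512 MB margin are placeholders.
$required = isset($_SERVER['CONTENT_LENGTH'])
    ? (float) $_SERVER['CONTENT_LENGTH']    // float: files may exceed 2 GB
    : 0.0;
$free   = disk_free_space('/data/uploads'); // hypothetical upload directory
$margin = 512 * 1024 * 1024;                // keep some headroom free

if ($free === false || $required + $margin > $free) {
    // Reject before any more of the body is consumed.
    header('HTTP/1.1 507 Insufficient Storage');
    exit('Not enough disk space for this upload.');
}

// ...otherwise carry on and move the uploaded file into place...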
I have looked at the directives 'proxy_request_buffering off;' and 'fastcgi_request_buffering off;' (the former, as I understand it, applies only to proxy_pass locations), but I have not been successful in stopping nginx from buffering the upload before our PHP script is called.
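For reference, this is roughly how I tried it; fastcgi_request_buffering is only available in nginx 1.7.11 and later, and everything else is our existing block:

location /api/filter/analysis/upload {
    fastcgi_request_buffering off;  # stream the body instead of buffering it
    fastcgi_pass unix:/tmp/php5-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/PHP/uploadFile.php;
}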
If this is the wrong approach, can anyone suggest a different one? Basically, we need to intercept the file upload request and reject it if we are too low on disk space; the current setup effectively completes the upload before we can perform such a check.
Many thanks