Hi, Reinis has probably covered this already, but the default php.ini file has a 'File Uploads' section with...
; Temporary directory for HTTP uploaded files (will use system default if not
; specified).
; upload_tmp_dir =
I just uncommented the directive and set it to a location on our main disk, e.g.
upload_tmp_dir = /opt/tmp
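After reloading php-fpm you can confirm the setting actually took effect with a short check like this (just a sketch; an empty value means PHP falls back to the system default):

```php
<?php
// Show where PHP will actually stage uploaded files. If upload_tmp_dir is
// empty or unset, PHP uses the system default temp directory instead.
$dir = ini_get('upload_tmp_dir');
$effective = ($dir !== false && $dir !== '') ? $dir : sys_get_temp_dir();
echo $effective, PHP_EOL;
```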
--
The complete solution involved using the ngx_http_auth_request_module. So in our nginx configuration file, for the big-file upload URL...
# PHP - file upload - bigf
location /api/bigf/analysis/upload {
    auth_request /bigf/auth;
    error_page 403 =413 /bigfLowDiskSpace.html;
    error_page 413 /bigfTooBigError.html;
    fastcgi_request_buffering off; # pass the request straight to PHP without buffering
    fastcgi_read_timeout 1h;
    fastcgi_pass unix:/opt/tmp/php-fpm.sock;
    include fastcgi_params;
    # Command-specific parameters
    fastcgi_param PERMITTED_FILETYPE "bigf";
    fastcgi_param HOME_FOLDER "/home/instrument";
    fastcgi_param DEST_FOLDER "Analysis/Filters";
    fastcgi_param SCRIPT_FILENAME $document_root/PHP/uploadFile.php;
}

# Called from the auth_request directive in the block above
location /bigf/auth {
    internal;
    fastcgi_pass_request_body off;
    fastcgi_pass unix:/opt/tmp/php-fpm.sock;
    fastcgi_intercept_errors on;
    include fastcgi_params;
    fastcgi_param BIGF_UPLOAD_SIZE $content_length;
    fastcgi_param BIGF_UPLOAD_MARGIN_BYTES 10737418240; # reject if < 10GB free after upload
    fastcgi_param HOME_FOLDER "/home/instrument";
    fastcgi_param DEST_FOLDER "Analysis/Filters";
    fastcgi_param CONTENT_LENGTH "";
    fastcgi_param SCRIPT_FILENAME $document_root/PHP/bigfAuthUpload.php;
}

# Custom error pages for the error_page directives specified above
location /bigfLowDiskSpace.html {
    root /opt/lib/webapp/errorPages;
    allow all;
}
location /bigfTooBigError.html {
    root /opt/lib/webapp/errorPages;
    allow all;
}
---
The bigfAuthUpload.php script just checks the space available on the destination drive. If the space available minus the approximate incoming file size (approximate because $content_length also counts some bytes of multipart framing, not just the file itself) breaks the allowed margin (10GB in this case), we reject the upload by calling http_response_code(403). If there is enough space, http_response_code(200) is set, which 'authorizes' the upload and allows the uploadFile.php script to be called.
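The core of that auth script can be sketched roughly as follows. The parameter names match the fastcgi_param directives in the config above; the exact destination-path construction is an assumption:

```php
<?php
// bigfAuthUpload.php (sketch) - authorizes the upload only if enough disk
// space would remain afterwards. BIGF_UPLOAD_SIZE, BIGF_UPLOAD_MARGIN_BYTES,
// HOME_FOLDER and DEST_FOLDER come from the nginx config; how the destination
// path is assembled here is an assumption.

// True when freeBytes minus the incoming size still leaves the margin free.
function hasEnoughSpace(float $freeBytes, float $uploadBytes, float $marginBytes): bool
{
    return ($freeBytes - $uploadBytes) >= $marginBytes;
}

$uploadSize = (float) ($_SERVER['BIGF_UPLOAD_SIZE'] ?? 0);
$margin     = (float) ($_SERVER['BIGF_UPLOAD_MARGIN_BYTES'] ?? 0);
$dest       = ($_SERVER['HOME_FOLDER'] ?? '/') . '/' . ($_SERVER['DEST_FOLDER'] ?? '');

$free = disk_free_space($dest);

if ($free !== false && hasEnoughSpace($free, $uploadSize, $margin)) {
    http_response_code(200); // authorized: nginx goes on to call uploadFile.php
} else {
    http_response_code(403); // rejected: mapped to 413 by 'error_page 403 =413'
}
```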
The 'error_page 403 =413' redirect allows us to return an error page specific to a rejected bigfAuthUpload.php call, while still responding with status 413.
The 'error_page 413' redirect allows us to intercept nginx's own file-size restriction (we set the 'client_max_body_size' directive to 5G in the server block of our nginx configuration file); I believe the use of 'fastcgi_intercept_errors on;' in our auth block facilitates this.
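For completeness, that size cap sits in the server block of the same configuration file, alongside the location blocks shown above:

```nginx
server {
    # ...
    client_max_body_size 5G;  # requests larger than this get nginx's own 413
    # ...
}
```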
By using the 'fastcgi_pass_request_body off;' directive in the '/bigf/auth' location block, the bigfAuthUpload.php script is passed the request headers without the body, so we can reject the upload before the request body is ever written to /opt/tmp.
The uploadFile.php script effectively moves the file from /opt/tmp to the destination location. It also lets us handle the different $_FILES keys (depending on the client used to call our upload service, we need to cope with $_FILES['upload'], $_FILES['Data'] and $_FILES['file'] variants to extract the file name) and to rename the file to cope with duplicates.
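A rough sketch of that script is below. The field names 'upload', 'Data' and 'file' are the variants our clients actually send; the duplicate-renaming scheme and everything else here is a simplified assumption:

```php
<?php
// uploadFile.php (sketch) - moves the uploaded file from upload_tmp_dir
// (/opt/tmp) to the destination folder, renaming on name collisions.

// Pick whichever of the known $_FILES fields the client used, if any.
function pickUploadField(array $files): ?string
{
    foreach (['upload', 'Data', 'file'] as $field) {
        if (isset($files[$field])) {
            return $field;
        }
    }
    return null;
}

// Append " (1)", " (2)", ... before the extension until the name is unused.
function uniqueName(string $dir, string $name): string
{
    $info = pathinfo($name);
    $ext  = isset($info['extension']) ? '.' . $info['extension'] : '';
    $candidate = $name;
    for ($i = 1; file_exists($dir . '/' . $candidate); $i++) {
        $candidate = $info['filename'] . " ($i)" . $ext;
    }
    return $candidate;
}

if (php_sapi_name() !== 'cli') { // skip the web-only part when run from the CLI
    $field = pickUploadField($_FILES);
    if ($field === null) {
        http_response_code(400); // no recognized upload field present
        exit;
    }
    $destDir = ($_SERVER['HOME_FOLDER'] ?? '/') . '/' . ($_SERVER['DEST_FOLDER'] ?? '');
    $name    = uniqueName($destDir, basename($_FILES[$field]['name']));
    move_uploaded_file($_FILES[$field]['tmp_name'], $destDir . '/' . $name);
}
```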
The fastcgi_params file we include in the above location blocks includes the lines
Hope this helps someone else out, thanks to everyone who contributed!