Hello,
I have a website serving a number of different PHP-based applications.
Some of them natively serve their own gzipped pages; others compress their HTML but serve uncompressed .css and .js files; others don't compress anything at all.
So far I have a per-location Nginx gzip configuration: gzip is globally disabled and explicitly enabled per location.
The apps I need or want to be compressed by Nginx simply have this:
location /blah/ {
    include my_gzip_custom.conf;
}
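For reference, my_gzip_custom.conf is just a typical per-location gzip block along these lines (the exact values here are illustrative, not my real settings):

```nginx
# my_gzip_custom.conf -- illustrative contents; actual values may differ
gzip            on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied    any;
gzip_vary       on;
gzip_types      text/css application/javascript application/json image/svg+xml;
```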
That conf file works great for all but one web app, which serves a weird mix of compressed and uncompressed pages and related files.
What happens if PHP-FPM hands Nginx a compressed page plus, say, 1 compressed .css file, 5 uncompressed .css files, and 6 uncompressed .js files?
Does it "spend time and resources" re-compressing the already compressed streams?
Does it wrap the gzipped streams inside its own compressed streams? (I admit I don't know whether nested compressed streams are even supported by web standards.)
Or does it - hopefully - detect the incoming stream's magic numbers or headers and transparently skip re-compression / wrapping?
That is, can I get an efficient result where Nginx compresses only the specific page-included files (.css etc.) that need it and passes the already-compressed content through untouched?
If not, is it possible to implement such a mechanism through configuration, and if so, how?
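For what it's worth, my reading of the gzip filter documentation suggests it may already skip responses that arrive from the upstream with a Content-Encoding header set; if that's correct, a plain per-location block like this might already do the right thing (location name and values are illustrative, untested against this app):

```nginx
location /weirdapp/ {
    # Hypothetical location for the problematic app.
    gzip            on;
    gzip_proxied    any;   # also compress proxied/FastCGI responses
    gzip_min_length 256;
    gzip_types      text/css application/javascript;
    # Responses that already carry "Content-Encoding: gzip" from PHP-FPM
    # would then pass through untouched -- please correct me if that's wrong.
}
```

Is relying on that behaviour safe, or do I need something more explicit?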
Thanks in advance.