Hi *B. R.*!
Thanks a lot for the reply and information! The KEY, however, does not contain different data from $http_accept_encoding. When viewing the contents of the cache files, both MD5 hashes contain the exact same KEY. It also does not matter which browser makes the first request. For example, a Google PageSpeed test as the first request creates the expected MD5 hash for the KEY, and a subsequent request from Chrome then creates a file with a new hash, even though that file's "KEY: ..." line matches the KEY of the first hash.
The third request gets yet another KEY. I did not test any further; it may be that the KEY changes for every new client. The KEY does remain stable for the same client, however: the first request uses the expected MD5 hash as the KEY and keeps using it on subsequent requests.
As gzip compression causes a huge overhead on servers with high traffic, I was wondering whether Nginx caches the gzip-compressed result, and if so, whether there is a setting for a maximum cache size. Caching the compressed result would, however, waste cache space.
In tests, the compression overhead added 4 to 10 ms per request on a powerful server, compared with serving pre-compressed gzip HTML directly. It makes me wonder what the effect will be on servers with high traffic.
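For serving pre-compressed files, one workaround might be the gzip_static directive, which serves an existing .gz sibling file instead of compressing on every request (a minimal sketch, assuming the .gz files are generated beforehand and ngx_http_gzip_static_module is compiled in):

```nginx
location / {
    # Serve file.html.gz directly when the client accepts gzip,
    # skipping per-request compression entirely.
    gzip_static on;

    # Fall back to on-the-fly compression for files without a .gz sibling.
    gzip on;
    gzip_types text/css application/javascript;
}
```

This only helps for static content, though; it does not answer how Nginx keys the cached, compressed responses, which is the issue above.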
As Google turns up no solution, finding an answer may be helpful for a lot of websites, and it would make Nginx the best option for full-page caching.
Best Regards,
Jan Jaap