In Linux, a deep directory structure holding many files causes substantial I/O overhead, for the following reasons:
* directory path components longer than 15 characters do not fit in the dentry's inline name buffer, so they are more expensive to keep in the VFS d-cache
* too many blocks are needed to store the directory entries, so they do not all fit in the in-memory buffer cache
* when uncached, accessing many small files in a deep directory tree causes a large number of block reads just to resolve each path
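To make the path-resolution cost concrete, here is a small Python sketch (the paths are hypothetical, chosen only for illustration) counting how many directory lookups each form of a name requires:

```python
# Hypothetical deep path versus a flat hard link named after the inode.
deep = "/var/www/static/2014/05/17/gallery/thumbs/img_00042.jpg"
flat = "/x/1048710"

def lookups(path):
    # Each path component triggers a separate directory lookup during
    # resolution; when uncached, each lookup can cost one or more block reads.
    return len(path.strip("/").split("/"))

print(lookups(deep))  # 9 components to resolve
print(lookups(flat))  # 2 components to resolve
```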
As a workaround for file accesses in deep directories, it would be nice to have an nginx module that does the following:
* caching hard links to accessed static files in a flat directory near the root, something like /x
* naming the hard links after their unique inode numbers to keep file names short
* storing url/file name -> hard link mappings in memory (a few hundred megabytes for a few million files)
* collecting garbage by deleting hard links whose files are older than a certain age
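The steps above can be sketched in user space as plain Python (this is not an actual nginx module; the class and directory names are made up for illustration):

```python
import os
import time

class HardLinkCache:
    """Sketch of the proposed scheme: flat hard links named by inode."""

    def __init__(self, cache_dir):
        self.cache_dir = cache_dir       # the flat directory, e.g. /x
        self.links = {}                  # url/file name -> hard-link path, in memory

    def get(self, url, real_path):
        # Return the flat hard link for the file, creating it on first access.
        link = self.links.get(url)
        if link is None:
            inode = os.stat(real_path).st_ino
            link = os.path.join(self.cache_dir, str(inode))
            if not os.path.exists(link):
                os.link(real_path, link)  # one-time cost per file
            self.links[url] = link
        return link

    def gc(self, max_age):
        # Delete hard links for files older than max_age seconds.
        now = time.time()
        for url, link in list(self.links.items()):
            if now - os.stat(link).st_mtime > max_age:
                os.remove(link)
                del self.links[url]
```

Because a hard link shares the file's inode, deleting the link in the flat directory never touches the original file, and the link's `st_mtime` is the file's own modification time, which is what the age check uses.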
In my opinion, this solution would add only a small one-time overhead for creating each hard link, but in exchange it would save a lot of I/O by making most of the block reads for path resolution unnecessary, since the link paths are much shorter.