Caching static files as hard links to solve deep directory I/O overhead

Posted by rpspace 
March 04, 2010 04:45AM
In Linux, a deep directory structure with a lot of files causes huge I/O overhead due to the following problems:

* the directory path does not fit in 15 characters, so it won't be cached in the VFS d-cache
* too many blocks are used to store directory information, so they don't fit in the memory buffers
* with nothing cached, accessing a lot of small files in a deep directory structure causes a lot of block reads just to resolve each path (see the sketch after this list)
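To make the cost concrete, here is a minimal sketch that times stat() on a deep path versus a flat one. The paths are made up; substitute real files on your own disk to measure. With cold caches, every extra component in the deep path adds directory-block reads during resolution:

/* compare path-resolution cost: deep vs. flat path (hypothetical paths) */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

static double stat_ns(const char *path)
{
    struct stat st;
    struct timespec a, b;

    clock_gettime(CLOCK_MONOTONIC, &a);
    if (stat(path, &st) != 0)
        perror(path);                   /* path must exist to measure */
    clock_gettime(CLOCK_MONOTONIC, &b);

    return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    printf("deep: %.0f ns\n", stat_ns("/data/a/b/c/d/e/f/g/h/file.html"));
    printf("flat: %.0f ns\n", stat_ns("/x/1234567"));
    return 0;
}

(A single stat() is a noisy measurement of course; drop the caches and loop it for real numbers.)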

As a workaround for file accesses in deep directories, it would be nice to have an nginx module which does the following (a sketch follows the list):

* cache accessed static files as hard links in a flat directory structure near the root path, something like: /x
* name the hard links after their unique inode numbers to keep file names short
* store the url/file name -> hard link mappings in memory (a few hundred megabytes for a few million files)
* collect garbage by deleting hard links for files that are older than a certain age
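Here is a minimal sketch of the idea in plain C, outside nginx. The /x directory comes from the list above; MAX_AGE, cache_link() and cache_gc() are illustrative names I made up, not an existing API, and the in-memory url -> hard link map is omitted (in a real module it would live in worker memory). Note that link() only works if /x is on the same filesystem as the source files, since hard links cannot cross filesystems:

#include <stdio.h>
#include <unistd.h>
#include <dirent.h>
#include <sys/stat.h>
#include <time.h>
#include <errno.h>
#include <limits.h>

#define CACHE_DIR "/x"          /* flat cache directory near the root */
#define MAX_AGE   (24 * 3600)   /* GC cutoff in seconds -- assumed value */

/* Hard-link src into CACHE_DIR under its inode number and write the
 * resulting short link path into out. */
static int cache_link(const char *src, char *out, size_t outlen)
{
    struct stat st;

    if (stat(src, &st) != 0)
        return -1;

    /* inode numbers are unique per filesystem, so the link name is
     * short and collision-free */
    snprintf(out, outlen, CACHE_DIR "/%llu",
             (unsigned long long) st.st_ino);

    if (link(src, out) != 0 && errno != EEXIST)
        return -1;              /* EEXIST just means already cached */

    return 0;
}

/* Garbage-collect: unlink cache entries not accessed for MAX_AGE.
 * Deleting the hard link never touches the original file, which
 * keeps its own link in the deep tree. */
static void cache_gc(void)
{
    DIR *d = opendir(CACHE_DIR);
    struct dirent *e;
    char path[PATH_MAX];
    struct stat st;
    time_t now = time(NULL);

    if (d == NULL)
        return;

    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path), CACHE_DIR "/%s", e->d_name);
        if (stat(path, &st) == 0 && now - st.st_atime > MAX_AGE)
            unlink(path);
    }
    closedir(d);
}

int main(void)
{
    char linkpath[PATH_MAX];

    /* hypothetical deep source file */
    if (cache_link("/srv/www/a/b/c/d/index.html",
                   linkpath, sizeof(linkpath)) == 0)
        printf("cached as %s\n", linkpath);

    cache_gc();
    return 0;
}

After that, lookups resolve /x/<inode>, a two-component path, instead of walking the whole deep tree; the in-memory url -> inode map (omitted above) decides which link to open.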

In my opinion this solution would add only a little overhead, creating each hard link once, but in exchange it would save a lot of I/O resources by making most of the block reads for path resolution unnecessary, thanks to the shorter path names.