Re: Question about optimal filesystem with many small files.




On Thu, 2009-07-09 at 10:09 -0700, James A. Peltier wrote:
> On Thu, 9 Jul 2009, oooooooooooo ooooooooooooo wrote:
> 
> >
> > It's possible that I will be able to name the directory tree based on the hash of the file, so I would get the structure described in one of my previous posts (4 directory levels, each directory name would be a single character from 0-9 and A-F, and 65536 (16^4) leaves, each leaf containing 200 files). Do you think that this would really improve performance? Could this structure be improved?
> >
> 
> If you don't plan on modifying the file after creation I could see it 
> working.  You could consider the use of a Berkeley DB style database for 
> quick and easy lookups on large amounts of data, but depending on your 
> exact needs, maintenance might be a chore and not really feasible.

A MUMPS DB would go at it even faster.
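
If anyone wants to prototype the lookup side of the Berkeley DB idea
above, Python's dbm module is a quick way to try it. A rough sketch
only; the database file name and sample key here are made up:

    import dbm
    import hashlib

    # Key the store on a hash of the file name, as discussed in this
    # thread. 'smallfiles.db' is an invented name for this example.
    key = hashlib.md5(b'some-file.dat').hexdigest().encode()

    with dbm.open('smallfiles.db', 'c') as db:   # 'c' = create if missing
        db[key] = b'<file contents, or a path to them>'
        data = db[key]                           # single lookup, no directory walk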

> It's an interesting suggestion, but I don't know if it would actually
> work as you describe, given that the hash has to be computed first.
> 
Indeed interesting. In effect it would be the same as base-64 encoding
the file name for final storage. My thought is that it would work. Even
faster would be to implement this with the lookup table held in RAM;
see the sketch below.
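
The path computation itself is cheap. A rough sketch in Python,
assuming the key is a hash of the file name (shard_path and the paths
are invented names for illustration):

    import hashlib
    import os

    def shard_path(root, name):
        # The first four hex digits of the hash pick the four
        # directory levels (16^4 = 65536 leaves).
        h = hashlib.md5(name.encode()).hexdigest().upper()
        return os.path.join(root, h[0], h[1], h[2], h[3], name)

    p = shard_path('/srv/store', 'some-file.dat')
    os.makedirs(os.path.dirname(p), exist_ok=True)

Keeping the table in RAM then just means memoizing that name-to-path
mapping instead of recomputing the hash on every access.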

john

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
