On Wed, Feb 23, 2011 at 06:32:17AM -0500, Theodore Tso wrote:
> On Feb 22, 2011, at 11:44 PM, Rogier Wolff wrote:
> > I'll shoot off an Email to the TDB guys as well.
>
> I'm pretty sure this won't come as a surprise to them. I'm using
> the last version of TDB which was licensed under the GPLv2, and they
> relicensed to GPLv3 quite a while ago. I remember hearing they had
> added a new hash algorithm to TDB since the relicensing, but those
> newer versions aren't available to e2fsprogs....

Well then... you're free to use my "new" hash function, provided it is
kept under GPLv2 and not moved to GPLv3. Mine is a "cleanroom"
implementation in the sense that I only looked at the specification and
implemented it from there, although I have no external attestation that
I was completely shielded from the newer GPLv3 version...

On a slightly different note: a pretty good estimate of the number of
inodes is available in the superblock (total inodes - free inodes). A
good hash size would be "a rough estimate of the number of inodes";
being off by a factor of two or three either way doesn't matter much,
and CPU is cheap. I'm not sure what the estimate for the "dircount"
tdb should be.

The amount of disk space that the tdb will use is at least:

  overhead + hash_size * 4 + numrecords * (keysize + datasize + perrecordoverhead)

There must also be some overhead to store the sizes of the keys and
data, as both are variable length. By implementing the "database"
ourselves we could optimize that out, but I don't think it's worth the
trouble.

With keysize and datasize both equal to 4, and hash_size equal to
numinodes (== numrecords), that works out to:

  overhead + numinodes * (12 + perrecordoverhead)

(the 12 being 4 bytes of hash slot plus 4 bytes of key plus 4 bytes of
data per record). In fact, my icount database grew to about 750 MB with
only 23M inodes, so apparently the per-record overhead is about 20
bytes. That is the price you pay for using a much more versatile
database than you really need. Disk is cheap (except when checking a
root filesystem!).

So...

 -- I suggest that for the icount tdb we use the superblock info as the
    hash size.

 -- I suggest that we use our own hash function. tdb allows us to pass
    one in, so instead of modifying the bad tdb hash we can keep tdb
    intact and pass a better (local) hash function (a sketch follows at
    the end of this message).

Does anybody know what the "dircount" tdb database holds, and what a
reasonable estimate for the number of elements eventually in that
database would be? (I could find out myself: I have the source. But I'm
lazy. I'm a programmer, you know...)

On a separate note, my filesystem finished the fsck (33 hours (*)), and
I started the backups again... :-)

	Roger.

*) That might include an estimated 1-5 hours of "Fix <y>?" waiting.

-- 
** R.E.Wolff@xxxxxxxxxxxx ** http://www.BitWizard.nl/ ** +31-15-2600998 **
** Delftechpark 26 2628 XH Delft, The Netherlands. KVK: 27239233 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
Q: It doesn't work. A: Look buddy, doesn't work is an ambiguous statement.
Does it sit on the couch all day? Is it unemployed? Please be specific!
Define 'it' and what it isn't doing. --------- Adapted from lxrbot FAQ
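
For concreteness, the size arithmetic above can be written out as a few
lines of C. This is only an illustration: the function and variable
names are made up, the fixed per-file overhead is ignored, and the
20-byte per-record overhead is just what falls out of the
750 MB / 23M-inode observation above, not a number read out of the tdb
source.

    #include <stdio.h>

    /*
     * Rough tdb on-disk size estimate, following the formula above:
     *   overhead + hash_size * 4
     *            + numrecords * (keysize + datasize + perrecordoverhead)
     * All names (and the 20-byte per-record overhead used below) are
     * illustrative guesses.
     */
    static unsigned long long
    tdb_size_estimate(unsigned long long numrecords,
                      unsigned long long hash_size,
                      unsigned keysize, unsigned datasize,
                      unsigned perrecordoverhead)
    {
            unsigned long long fixed_overhead = 0;  /* header etc., ignored */

            return fixed_overhead + hash_size * 4ULL +
                   numrecords * (unsigned long long)
                                (keysize + datasize + perrecordoverhead);
    }

    int main(void)
    {
            /* icount case: 4-byte keys, 4-byte counts, hash_size == numinodes */
            unsigned long long est =
                    tdb_size_estimate(23000000ULL, 23000000ULL, 4, 4, 20);

            printf("estimated icount tdb size: %llu bytes (~%llu MB)\n",
                   est, est >> 20);
            return 0;
    }

For 23M inodes that prints roughly 700 MB, which is in the same
ballpark as the 750 MB observed above.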
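
As for the two suggestions: the Samba tdb that e2fsprogs carries
exposes, as far as I can tell, a tdb_open_ex() entry point that takes
both a hash_size and a tdb_hash_func, so neither change should require
touching tdb itself. Below is a minimal sketch assuming that interface;
the FNV-1a function is only a placeholder for "a better hash than the
default", not the cleanroom hash discussed above, and
open_icount_tdb()/used_inodes are made-up names for illustration
(used_inodes standing for s_inodes_count - s_free_inodes_count from the
superblock). The exact tdb_open_ex() prototype should be checked
against the tdb.h that e2fsprogs actually bundles.

    #include <fcntl.h>
    #include <stdint.h>
    #include "tdb.h"            /* the GPLv2 tdb bundled with e2fsprogs */

    /*
     * Illustrative replacement hash: FNV-1a.  A stand-in for "something
     * better than the default tdb hash", not the cleanroom hash function
     * discussed above.
     */
    static unsigned int my_tdb_hash(TDB_DATA *key)
    {
            uint32_t h = 2166136261u;
            size_t i;

            for (i = 0; i < key->dsize; i++) {
                    h ^= key->dptr[i];
                    h *= 16777619u;
            }
            return h;
    }

    /*
     * Open the icount tdb with a hash table sized from the superblock's
     * in-use inode estimate; the caller would pass
     * sb->s_inodes_count - sb->s_free_inodes_count as used_inodes.
     */
    static struct tdb_context *open_icount_tdb(const char *fn,
                                               unsigned int used_inodes)
    {
            return tdb_open_ex(fn, used_inodes, TDB_NOLOCK | TDB_NOSYNC,
                               O_RDWR | O_CREAT | O_TRUNC, 0600,
                               NULL /* default logging */, my_tdb_hash);
    }

The point is only that the stock tdb hash stays untouched: the open
call changes, and everything else keeps using the normal tdb API.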