On Tue, 2 Jan 2007, Miklos Szeredi wrote:
It seems like the POSIX idea of a unique <st_dev, st_ino> pair doesn't
hold water for modern filesystems
Are you really sure?
Well, Jan's example was Coda, which uses 128-bit internal file IDs.
And if so, why don't we fix *THAT* instead?
Hmm, sometimes you can't fix the world, especially when the filesystem
is exported over NFS and its file IDs simply won't fit uniquely into a
64-bit identifier.
Note that it's pretty easy to fit _anything_ into a 64-bit identifier
using a good hash function. The chance of an accidental collision is
infinitesimally small. For a set of
  100 files: 0.00000000000003%
  1,000,000 files: 0.000003%
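A minimal sketch of what that could look like, assuming a hypothetical
128-bit Coda-style file ID; FNV-1a stands in here for "a good hash
function", which nothing in this thread otherwise prescribes, and the
struct and function names are made up for illustration:

#include <stdint.h>
#include <stddef.h>

/* FNV-1a over an arbitrary byte range; any decent 64-bit hash
 * would do just as well for this purpose. */
static uint64_t fnv1a_64(const void *data, size_t len)
{
        const unsigned char *p = data;
        uint64_t h = 14695981039346656037ULL;   /* FNV offset basis */

        while (len--) {
                h ^= *p++;
                h *= 1099511628211ULL;          /* FNV prime */
        }
        return h;
}

/* Hypothetical 128-bit file ID, loosely modelled on Coda's. */
struct fid128 {
        uint64_t hi, lo;
};

/* Fold the full 128-bit ID down to a 64-bit st_ino candidate. */
static uint64_t fid_to_ino64(const struct fid128 *fid)
{
        return fnv1a_64(fid, sizeof(*fid));
}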
I do not think we want to play with probability like this. I mean...
imagine 4G files, 1KB each. That's 4TB of disk space, not _completely_
unreasonable, and thanks to the birthday paradox the collision
probability stops being anywhere near negligible.
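Plugging the numbers into the birthday bound, P(collision) ~=
1 - exp(-n(n-1)/2^65) for n items hashed uniformly into 2^64 values, is
easy to check with a small standalone program (an illustration, not
anything from the original mails); for 4e9 files it comes out nearer
35% than certainty:

#include <math.h>
#include <stdio.h>

int main(void)
{
        double space = ldexp(1.0, 64);          /* 2^64 possible hash values */
        double counts[] = { 100.0, 1e6, 4e9 };

        for (int i = 0; i < 3; i++) {
                double n = counts[i];
                /* birthday bound: P ~= 1 - e^(-n(n-1)/(2 * 2^64)) */
                double p = 1.0 - exp(-n * (n - 1.0) / (2.0 * space));
                printf("%12.0f files: %.2g%% chance of a collision\n",
                       n, 100.0 * p);
        }
        return 0;
}

/* prints approximately (build with -lm):
 *          100 files: 2.7e-14% chance of a collision
 *      1000000 files: 2.7e-06% chance of a collision
 *   4000000000 files: 35% chance of a collision
 */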
You'll still want to back up your 4TB server...
Certainly, but tar isn't going to remember all the inode numbers. Even
if you solve the storage requirements (not impossible), it would have
to do (4e9)^2/2 = 8e18 comparisons, for which computers just don't have
enough CPU power yet.
It does remember all inode numbers with nlink > 1, and many other tools
remember all directory inode numbers (see my other post in this topic).
Of course it doesn't compare each number against all the others; it
uses hashing, as sketched below.
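To make that concrete, here is a minimal sketch, not taken from any
real tar, of the usual approach: a chained hash table keyed on
(st_dev, st_ino). All names and sizes are illustrative assumptions.

#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>

#define NBUCKETS 65536

struct seen {
        dev_t dev;
        ino_t ino;
        char *path;             /* path the inode was first archived under */
        struct seen *next;
};

static struct seen *table[NBUCKETS];

/*
 * If this (dev, ino) pair was seen before, return the first path so
 * the caller can emit a hard-link entry; otherwise remember the pair
 * and return NULL. Expected O(1) per file thanks to hashing, instead
 * of the pairwise-comparison blowup mentioned above.
 */
static const char *seen_before(const struct stat *st, const char *path)
{
        unsigned bucket = (unsigned)(st->st_ino ^ st->st_dev) % NBUCKETS;
        struct seen *s;

        for (s = table[bucket]; s; s = s->next)
                if (s->dev == st->st_dev && s->ino == st->st_ino)
                        return s->path;

        s = malloc(sizeof(*s));
        if (!s)
                abort();        /* sketch only; a real tool handles ENOMEM */
        s->dev = st->st_dev;
        s->ino = st->st_ino;
        s->path = strdup(path);
        s->next = table[bucket];
        table[bucket] = s;
        return NULL;
}

A caller would consult this only for entries whose st_nlink is greater
than 1, which is exactly why only those inode numbers need to be
remembered.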
It doesn't matter if there are collisions within the filesystem, as
long as there are no collisions within the set of files an application
is working on at the same time.
... and in the case of a backup, that set is all of the files.
Miklos
Mikulas