> > > > High probability is all you have.  Cosmic radiation hitting your
> > > > computer will more likely cause problems than colliding 64-bit
> > > > inode numbers ;)
> > >
> > > Some of us have machines designed to cope with cosmic rays, and would be
> > > unimpressed with a decrease in reliability.
> >
> > With the suggested samefile() interface you'd get a failure with just
> > about 100% reliability for any application which needs to compare
> > more than a few files.  The fact is open files are _very_ expensive,
> > no wonder they are limited in various ways.
> >
> > What should 'tar' do when it runs out of open files while searching
> > for hardlinks?  Should it just give up?  Then the samefile()
> > interface would be _less_ reliable than the st_ino one by a
> > significant margin.
>
> You need at most two simultaneously open files for examining any
> number of hardlinks.  So yes, you can make it reliable.

Well, sort of.  Samefile without keeping fds open doesn't have any
protection against the tree changing underneath between first
registering a file and later opening it.  The inode number is more
useful in this respect.  In fact inode number + generation number will
give you a unique identifier in time as well, which is a _lot_ more
useful for determining whether the file you are checking is actually
the same as one that you've come across previously.

So instead of samefile() I'd still suggest an extended attribute
interface which exports the file's unique (in space and time)
identifier as an opaque cookie.

For filesystems like FAT you can basically only guarantee that two
files are the same as long as those files are in the icache, no matter
whether you use samefile() or inode numbers.  Userspace _can_ make the
inodes stay in the cache by keeping the files open, which works for
samefile as well as for checking by inode number.
Miklos