Hi!
If the user (or script) doesn't specify that flag, it
doesn't help. I think
the best solution for these filesystems would be
either to add a new syscall
int is_hardlink(char *filename1, char *filename2)
(but I know adding syscall bloat may be objectionable)
It's also the wrong API; the filenames may have been
changed under you just as you return from this call, so it really is a
"was_hardlink_at_some_point()" as you specify it.
If you make it work on fds, it has a chance at least.
Yes, but it doesn't matter: if the tree changes under
the "cp -a" command, no one guarantees what you get.
int fis_hardlink(int handle1, int handle2);
is another possibility, but it can't detect hardlinked
symlinks.
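
For illustration, here is a compile-only C sketch that restates the two
interfaces proposed above. Neither syscall exists; the return convention
(1 = same inode, 0 = different, -1 = error) and the same_file() helper
are assumptions made only for this sketch:

#include <fcntl.h>
#include <unistd.h>

/* Proposed path-based variant; racy, because either name can be
 * renamed or replaced right after the call returns. */
int is_hardlink(char *filename1, char *filename2);

/* Proposed fd-based variant; avoids the name race, but you cannot get
 * an fd for a symlink itself, so hardlinked symlinks are not covered. */
int fis_hardlink(int handle1, int handle2);

/* How a cp -a style tool might use the fd-based call instead of
 * trusting ino_t (assumed return convention as above). */
static int same_file(const char *a, const char *b)
{
	int fd1 = open(a, O_RDONLY);
	int fd2 = open(b, O_RDONLY);
	int ret = -1;

	if (fd1 >= 0 && fd2 >= 0)
		ret = fis_hardlink(fd1, fd2);
	if (fd1 >= 0)
		close(fd1);
	if (fd2 >= 0)
		close(fd2);
	return ret;
}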
Ugh. Is it even legal to hardlink symlinks?
Why shouldn't it be? It seems to work quite fine in Linux.
Anyway, cp -a is not the only application that wants to do hardlink
detection.
I tested programs for ino_t collisions (I intentionally injected them) and
found that cp from coreutils 6.7 fails to copy directories, but it does
display error messages (coreutils 5 works fine). MC and ARJ skip directories
with colliding ino_t and pretend that the operation completed successfully.
The FTS library fails to walk directories, returning the FTS_DC error.
Diffutils, find, and grep fail to search directories with colliding inode
numbers. Tar seems tolerant, except for incremental backup (which I didn't
try). All programs except diff were tolerant of colliding ino_t on files.
ino_t is no longer unique in many filesystems, so this looks like quite a
serious data corruption possibility.
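
To make the failure mode concrete, here is a minimal C sketch of the
(st_dev, st_ino) comparison that cp, tar, diff and friends effectively
rely on. The looks_hardlinked() helper is hypothetical, but the check it
performs is the standard one; a filesystem that reuses an ino_t for two
distinct files on the same device makes it report a false hardlink:

#include <stdio.h>
#include <sys/stat.h>

/* Returns 1 if both paths report the same (st_dev, st_ino) pair,
 * 0 if they differ, -1 on error. With colliding inode numbers this
 * says "same file" for files that are in fact distinct. */
static int looks_hardlinked(const char *a, const char *b)
{
	struct stat sa, sb;

	if (lstat(a, &sa) != 0 || lstat(b, &sb) != 0)
		return -1;
	return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
}

int main(int argc, char **argv)
{
	if (argc == 3)
		printf("%s\n", looks_hardlinked(argv[1], argv[2]) == 1
		       ? "same (dev, ino) pair" : "different");
	return 0;
}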
Mikulas
Pavel