I've been reading through this whole thread, and it appears to me that the only real, long-term solution is to rely on file system UUIDs (for those file systems that support real 128-bit UUIDs), and optionally, for those file systems which support it, a new "snapshot" or "fs-instance" UUID. The FS UUID is pretty simple; we just need an ioctl (or similar interface) which returns the UUID for a particular file system.

The Snapshot UUID is the one which is a bit trickier. If the underlying block device can supply something unique --- for example, the WWN or WWID defined by FC, ATA, SATA, SCSI, NVMe, etc. --- then that plus a partition identifier could be hashed to form a Snapshot UUID. LVM volumes have an LV UUID that could be used for a similar purpose. If you have a block device which doesn't reliably provide a WWN or WWID, we could instead imagine that the file system has a field in its superblock, and a file-system-specific program could be used to set that field to a random UUID as part of the snapshot process. This may be problematic for remote/network file systems which don't have such a concept, but life's a bitch....

With that, userspace can fetch the st_dev, st_ino, the FS UUID, and the Snapshot UUID, and use some combination of those fields (as available) to try to determine whether two objects are distinct or not.

Is this perfect? Heck, no. But ultimately, this is a hard problem, and trying to wave our hands and create something that works given one set of assumptions --- and completely breaks in a different operating environment --- is going to lead to angry users blaming the fs developers. It's a messy problem, and I think all we can do is expose the entire mess to userspace, and make it a userspace problem. That way, the angry users can blame the userspace developers instead. :-)

- Ted
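
P.S. To make the userspace side a bit more concrete, here's a rough sketch of the kind of check I'm describing. The FS_IOC_GETFSUUID_SKETCH ioctl number and struct below are made-up placeholders, not an existing kernel ABI --- they stand in for whatever per-file-system interface ends up exporting the superblock UUID. When the UUID isn't available the code falls back to plain st_dev/st_ino, and a Snapshot UUID could be folded into the comparison the same way.

/*
 * Sketch only: decide whether two paths refer to the same object,
 * using (FS UUID, st_ino) when a UUID interface is available and
 * falling back to (st_dev, st_ino) when it is not.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>

struct fs_uuid_sketch {
	uint8_t uuid[16];		/* 128-bit file system UUID */
};

/* Placeholder request number; a real interface would be defined by the
 * kernel, not here.  On current kernels this just fails with ENOTTY. */
#define FS_IOC_GETFSUUID_SKETCH	_IOR('S', 1, struct fs_uuid_sketch)

struct file_identity {
	int	has_uuid;		/* 1 if the fs gave us a UUID */
	uint8_t	uuid[16];
	dev_t	dev;
	ino_t	ino;
};

static int get_identity(const char *path, struct file_identity *id)
{
	struct stat st;
	struct fs_uuid_sketch fu;
	int fd;

	memset(id, 0, sizeof(*id));
	if (stat(path, &st) != 0)
		return -1;
	id->dev = st.st_dev;
	id->ino = st.st_ino;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return 0;		/* st_dev/st_ino still usable */
	if (ioctl(fd, FS_IOC_GETFSUUID_SKETCH, &fu) == 0) {
		memcpy(id->uuid, fu.uuid, sizeof(id->uuid));
		id->has_uuid = 1;
	}
	close(fd);
	return 0;
}

/*
 * Same object if the inode numbers match and the strongest available
 * file system identifier (UUID when both sides have one, else st_dev)
 * also matches.  A Snapshot UUID would be one more field compared here.
 */
static int same_object(const struct file_identity *a,
		       const struct file_identity *b)
{
	if (a->ino != b->ino)
		return 0;
	if (a->has_uuid && b->has_uuid)
		return memcmp(a->uuid, b->uuid, sizeof(a->uuid)) == 0;
	return a->dev == b->dev;
}

int main(int argc, char **argv)
{
	struct file_identity a, b;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <path1> <path2>\n", argv[0]);
		return 2;
	}
	if (get_identity(argv[1], &a) != 0 || get_identity(argv[2], &b) != 0) {
		perror("stat");
		return 2;
	}
	printf("%s\n", same_object(&a, &b) ? "same object" : "different objects");
	return 0;
}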