Matt Helsley wrote:
> These are the same kinds of problems encountered during backup. You
> can play fast and loose -- like taking a backup while everything is
> running -- or you can play it conservative and freeze things.

Not really. The issue isn't files getting deleted during the checkpoint; it's files deleted or renamed over _prior_ to beginning the checkpoint. That's a common situation. For example, after a software package update you can easily have processes which reference deleted files running for months. The same happens when a program keeps a data file open while it is edited by a text editor, since the editor renames over the original when saving. Etc., etc.

> I think btrfs snapshots are just one possible solution and it's not
> overkill.

I don't think btrfs snapshots solve the problem anyway, unless you also have a way to look up a file by inode number or equivalent, or one of the other ideas discussed, such as making a link to a deleted file.

Note that it isn't _just_ deleted files. The name in question may be deleted while other links to the file still exist. Or the file could have been opened via different link names, some or all of which have since been deleted or renamed over. In those cases it would be a bug to make a copy of the deleted file in the checkpoint state, or in the filesystem, as was mentioned earlier...

> I imagine fanotify could also be useful so long as userspace has marked
> things correctly prior to checkpoint. My high level understanding of
> fanotify was we'd be able to delay (or deny) deletion until checkpoint
> is complete.

Yes, that might be a way to block filesystem changes during checkpoint, although fanotify's capabilities weren't complete enough for this the last time I looked. (It didn't give sufficient information about directory operations.)

-- Jamie
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html