On Wed, 2020-12-23 at 19:51 -0700, Chris Murphy wrote:
> (I did read the whole thread but I'm gonna reply three times anyway :D)
>
> On Wed, Dec 23, 2020 at 6:29 AM Patrick O'Callaghan
> <pocallaghan@xxxxxxxxx> wrote:
> >
> > I have a directory I use to hold a Windows VM disk image
> > (/home/Windows/...), and would like to snapshot it before playing with
> > the QEMU settings. However it's already part of the /home subvolume, so
> > is there a way of splitting it off on its own without having to create
> > a new subvolume and sending the contents over? AFAIK subvolumes can be
> > hierarchical, so it would seem like a useful thing to be able to convert
> > a subtree without all the copying, but the man page doesn't seem to
> > address it.

I'm going to take some time to digest the full answer, but I'll just
reply to this part for now:

> Your best bet really is to
> fallocate a raw file in a subvolume or directory with chattr +C set
> and don't snapshot or reflink copy it.

The Windows image file is in fact just an fallocated file used as a raw
disk, not a qcow. It's on an SSD and seems quite fast, so I wouldn't
worry overmuch about fragmentation. This system is basically only used
for gaming with GPU passthrough, so pretty much everything on it is
expendable. My interest in snapshotting it is just to avoid the pain of
a misconfig in QEMU, which is notoriously picky (long story short, I'm
trying to switch it from the default i440FX machine type to Q35).

The VM was set up in virt-manager when the filesystem was ext4, so the
image won't have any of the special BTRFS attributes unless I add them
myself.

Expect further questions later :-)

poc
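
P.S. For anyone finding this in the archives later, the rough shape of
what I'm considering -- paths, sizes and names below are illustrative,
not my exact setup:

1) Splitting the image directory into its own subvolume. As far as I can
   tell there's no in-place conversion, but a reflink copy on btrfs shares
   the existing extents rather than duplicating data, so with the VM shut
   down it should be quick:

    # btrfs subvolume create /home/Windows.new
    # cp -a --reflink=always /home/Windows/. /home/Windows.new/
    # mv /home/Windows /home/Windows.old
    # mv /home/Windows.new /home/Windows
    # rm -rf /home/Windows.old      # only once everything checks out

   After that, the snapshot before touching the QEMU config is just:

    # btrfs subvolume snapshot /home/Windows /home/Windows.pre-q35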
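2) Chris's suggestion of a preallocated raw image with COW disabled. The
   +C attribute only sticks to files that don't have data yet, so the
   usual approach seems to be setting it on a directory and creating the
   image inside it:

    # mkdir /home/Windows/images
    # chattr +C /home/Windows/images
    # fallocate -l 100G /home/Windows/images/win10.img
    # lsattr /home/Windows/images/win10.img     # should show the C flag

   For my existing image that would mean copying it into such a directory
   with a regular (non-reflink) cp, since the flag can't usefully be added
   to a file that already has contents.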
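3) The i440FX -> Q35 switch itself is a machine-type change (not a CPU
   change) in the libvirt domain XML, something along the lines of:

    # virsh dumpxml win10 | grep -i machine   # see the current type
    # virsh edit win10                        # set machine='q35' in <os><type ...>

   ("win10" is just an example domain name.) Q35 means a different chipset
   and a PCIe topology, so the passthrough device addresses will probably
   need redoing afterwards, which is exactly why I want a snapshot to fall
   back on.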