On Wed, Dec 23, 2020 at 7:30 AM Qiyu Yan <yanqiyu@xxxxxxxxxxxxxxxxx> wrote:
>
> Patrick O'Callaghan <pocallaghan@xxxxxxxxx> wrote on Wed, Dec 23, 2020 at 9:29 PM:
> >
> > I have a directory I use to hold a Windows VM disk image
> > (/home/Windows/...), and would like to snapshot it before playing with
> > the QEMU settings. However, it's already part of the /home subvolume, so
> > is there a way of splitting it off on its own without having to create
> > a new subvolume and send the contents over? AFAIK subvolumes can be
> > hierarchical, so it would seem useful to be able to convert a subtree
> > without all the copying, but the man page doesn't seem to address it.
>
> You could create a subvolume, use cp --reflink=always to create a
> reflink, and delete the old directory; this turns a directory into a
> subvolume without any actual copying. But I don't suggest you do so,
> since using snapshot/reflink on VM images makes them copy-on-write,
> and VM images should be nocow for performance.
>
> But can anyone figure out whether creating a reflink and then removing
> the old references, before any write to the file happens, turns on CoW
> for the file? I am not sure. But writing after a snapshot definitely
> turns on CoW.

The extents are immediately marked as shared upon snapshot or reflink
copy. But if nothing writes to either file, there's no issue. Once you
delete the original file, the extents become exclusive to the new file
again, and thus nodatacow applies. Btrfs COWs only writes that would
overwrite a shared extent; such a write is COWed into a new exclusive
extent, and subsequent writes to that same block of the file are nocow.

-- 
Chris Murphy
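For anyone wanting to try the conversion Qiyu describes, here is a
minimal, untested sketch. The ".new" suffix is a made-up name, and the
"/home/Windows/..." path from Patrick's message stands in for the real
layout. One caveat, hedged: as far as I know, the clone ioctl behind
--reflink refuses to clone between files whose nodatacow status
differs, so if the image is already chattr +C, the destination needs
+C as well.

    # Create the subvolume that will replace the plain directory:
    btrfs subvolume create /home/Windows.new

    # Only if the existing image is nocow: make new files created
    # inside the subvolume inherit the +C attribute, so the reflink
    # below does not fail with "Invalid argument":
    chattr +C /home/Windows.new

    # Reflink-copy the contents; no file data is duplicated on disk:
    cp -a --reflink=always /home/Windows/. /home/Windows.new/

    # Drop the old references, then move the subvolume into place:
    rm -rf /home/Windows
    mv /home/Windows.new /home/Windows

    # Optional sanity check after the delete:
    btrfs filesystem du -s /home/Windows

If that last command reports the image's full size as Exclusive, the
extents are no longer shared and the one-time COW penalty Chris
describes no longer applies.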