Dear ceph users and developers,

we're struggling with a strange issue which I think might be a bug
causing snapshot data corruption while migrating an RBD image.
We've tracked it down to a minimal set of steps to reproduce, using
a VM with one 32G drive:

rbd create --size 32768 sata/D2
virsh create xml_orig.xml
rbd snap create ssd/D1@snap1
rbd export-diff ssd/D1@snap1 - | rbd import-diff - sata/D2

rbd export --export-format 1 --no-progress ssd/D1@snap1 - | xxh64sum
505dde3c49775773
rbd export --export-format 1 --no-progress sata/D2@snap1 - | xxh64sum
505dde3c49775773
# <- checksums match - OK

virsh shutdown VM
rbd migration prepare ssd/D1 sata/D1Z
virsh create xml_new.xml
rbd snap create sata/D1Z@snap2
rbd export-diff --from-snap snap1 sata/D1Z@snap2 - | rbd import-diff - sata/D2
rbd migration execute sata/D1Z
rbd migration commit sata/D1Z

rbd export --export-format 1 --no-progress sata/D1Z@snap2 - | xxh64sum
19892545c742c1de
rbd export --export-format 1 --no-progress sata/D2@snap2 - | xxh64sum
cc045975baf74ba8
# <- snapshots differ

The OS is Alma 9 based, kernel 5.15.105, Ceph 17.2.6, qemu 8.0.3.
We tried disabling VM disk caches as well as discard, to no avail.

My first question is: is it correct to assume that creating snapshots
while live-migrating data is safe? If so, any ideas on where the
problem could be?

If I can provide more info, please let me know.

with regards

nikola ciprich

--
-------------------------------------
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28. rijna 168, 709 00 Ostrava

tel.:   +420 591 166 214
fax:    +420 596 621 273
mobil:  +420 777 093 799
www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: servis@xxxxxxxxxxx
-------------------------------------
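P.S. in case it helps with narrowing things down, here is a rough sketch of
what we could try next to see where the two snapshots actually diverge (the
file names below are just examples, and the raw exports will each need ~32G
of local scratch space):

# compare the allocated extents reported for each snapshot
rbd diff sata/D1Z@snap2 > /tmp/D1Z_snap2.extents
rbd diff sata/D2@snap2 > /tmp/D2_snap2.extents
diff /tmp/D1Z_snap2.extents /tmp/D2_snap2.extents

# or export both snapshots and find the first differing byte offset
rbd export --no-progress sata/D1Z@snap2 /tmp/D1Z_snap2.raw
rbd export --no-progress sata/D2@snap2 /tmp/D2_snap2.raw
cmp /tmp/D1Z_snap2.raw /tmp/D2_snap2.raw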