Hi,

Our Ceph cluster is used as backend storage for OpenStack. We use the "images" pool for Glance and the "compute" pool for instances. We need to migrate our images pool from HDD drives to SSD drives.

I copied all the data from the "images" pool (on HDD disks) to an "ssdimages" pool (on SSD disks) and made sure the CRUSH rules are all correct. I used "rbd deep copy" to migrate all the objects. Then I renamed the pools: "images" to "hddimages" and "ssdimages" to "images".

Our OpenStack instances are in the "compute" pool. All instances created from an image show their parent as an image in the "images" pool. I expected that after the rename they would point to the new pool on SSD disks (now named "images"), but interestingly the rbd info of all the instances now shows the parent as "hddimages". How can I make sure the parent pointers stay as "images" instead of changing to "hddimages"?

Before renaming the pools:

lab [root@ctl01 /]# rbd info compute/e669fe16-dd2a-4a17-a2c3-c7f5428d781f_disk
rbd image 'e669fe16-dd2a-4a17-a2c3-c7f5428d781f_disk':
        size 100GiB in 12800 objects
        order 23 (8MiB objects)
        block_name_prefix: rbd_data.8f51c347398c89
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        create_timestamp: Tue Mar 15 21:36:55 2022
        parent: images/909e6734-6f84-466a-b2fa-487b73a1f50a@snap
        overlap: 10GiB
lab [root@ctl01 /]#

After renaming the pools, the parent value automatically gets modified:

lab [root@ctl01 /]# rbd info compute/e669fe16-dd2a-4a17-a2c3-c7f5428d781f_disk
rbd image 'e669fe16-dd2a-4a17-a2c3-c7f5428d781f_disk':
        size 100GiB in 12800 objects
        order 23 (8MiB objects)
        block_name_prefix: rbd_data.8f51c347398c89
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
        create_timestamp: Tue Mar 15 21:36:55 2022
        parent: hddimages/909e6734-6f84-466a-b2fa-487b73a1f50a@snap
        overlap: 10GiB
lab [root@ctl01 /]#

Thanks,
Pardhiv
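
P.S. For reference, this is roughly what I did for the migration; the loop below is a simplified sketch of the actual run (destination pool, CRUSH rule changes, and snapshot re-protection handled separately), not the exact script:

# copy every Glance image (and its snapshots) from the HDD pool to the new SSD pool
for img in $(rbd ls images); do
    rbd deep copy images/$img ssdimages/$img
done

# then swap the pool names so clients keep using "images"
ceph osd pool rename images hddimages
ceph osd pool rename ssdimages images

Afterwards I checked one of the instance disks with "rbd info compute/<uuid>_disk" (output above), which is where I noticed the parent reference had followed the rename to "hddimages".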