Hi team,

I'm experimenting with CentOS Stream 9 on our infrastructure as we migrate away from CentOS Stream 8. Our deployment model is hyperconverged: Ceph and OpenStack run on the same hosts (OSDs + Nova/Cinder), which prevents me from keeping the Ceph nodes on CentOS Stream 8.

However, I've noticed an annoying LVM-related issue when running the CentOS Stream 8 based ceph/daemon container on a CentOS Stream 9 host. When the ceph-volume lvm batch is processed and performs its LVM subcommand calls, the pvcreate no longer persists on the host. Consequently, after a reboot all Ceph-related disks disappear, as LVM can't find their PVIDs or device references; even a pvscan doesn't find them.

The only workaround we've found so far is to pvcreate the host disks, pvremove them, dd them, and then run the installation process again. From my understanding, this kind of warms up the host's LVM state and consequently lets it keep up with the ceph-volume run later on.

For additional information: we deploy the cluster using the ceph-ansible stable-6.0 branch (Pacific). The container comes from quay.io and is the 6.0.11 CentOS Stream 8 release. This release works perfectly on a CentOS Stream 8 based host.

One weird thing we caught: on a CentOS Stream 8 host using this CentOS Stream 8 based image, lvm/dm creates new dm block devices named after the Ceph VG/LV, whereas on a CentOS Stream 9 based host, lvm/dm creates the new VG/LV as symlinks to dm-X block devices.

And a last question: is there any planned build of the ceph/daemon:6.0.11 (or greater) image based on CentOS Stream 9?

Thanks a lot!

PS: I forgot the tag in my first post and don't know whether that matters for the mailing-list distribution.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
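[Editor's note: for reference, the per-disk "warm-up" workaround described above could be sketched roughly as below. The device path /dev/sdX is a placeholder, not from the original post, and because these commands destroy data the sketch only prints them (dry run); swap the echo for real execution at your own risk.]

```shell
#!/bin/sh
# Sketch of the workaround: pvcreate, pvremove, then dd the disk,
# before re-running the ceph-ansible / ceph-volume deployment.
# DRY RUN: 'run' only echoes each command instead of executing it.
run() { echo "+ $*"; }

DISK=/dev/sdX   # placeholder -- substitute each Ceph data disk

run pvcreate "$DISK"   # register the device with the host's LVM
run pvremove "$DISK"   # remove the PV again
run dd if=/dev/zero of="$DISK" bs=1M count=10 oflag=direct   # zap leftover metadata
# ...then re-run the ceph-ansible playbook so ceph-volume lvm batch
# recreates the PVs/VGs/LVs on the warmed-up host.
```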