For Ceph-based LVM volumes, you would do this to import:

Map every one of the RBDs to the host.

Include this in /etc/lvm/lvm.conf (in the devices section):

    types = [ "rbd", 1024 ]

Then run:

    pvscan
    vgscan
    pvs
    vgs

If you see the VG:

    vgimportclone -n <a new name for the VG> /dev/rbd0 /dev/rbd1 ... --import

Now you should be able to `vgchange -a y <your VG>` and see the LVs. There
is a consolidated sketch at the end of this message, below the quoted
thread.

--
Alex Gorbachev
www.iss-integration.com

On Sun, Feb 4, 2024 at 2:55 AM duluxoz <duluxoz@xxxxxxxxx> wrote:
> Hi All,
>
> All of this is using the latest versions of RL and Ceph Reef.
>
> I've got an existing RBD image (with data on it -- not "critical", as
> I've got a backup, but it's rather large, so I was hoping to avoid the
> restore scenario).
>
> The RBD image used to be served out via a (Ceph) iSCSI gateway, but we
> are now looking to use the plain old kernel module.
>
> The RBD image has been mapped (via rbd map) to /dev/rbd0 on the client.
>
> So now I'm trying a straight `mount /dev/rbd0 /mount/old_image/` as a
> test.
>
> What I'm getting back is `mount: /mount/old_image/: unknown filesystem
> type 'LVM2_member'`.
>
> All my Google-fu is telling me that to solve this issue I need to
> reformat the image with a new file system, which would mean "losing"
> the data.
>
> So my question is: how can I get to this data using the rbd kernel
> module (the iSCSI gateway is no longer available, so not an option), or
> am I stuck with the restore option?
>
> Or is there something I'm missing (which would not surprise me in the
> least)? :-)
>
> Thanks in advance (as always, you guys and gals are really, really
> helpful).
>
> Cheers
>
> Dulux-Oz
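
To pull the steps above together, here is a minimal end-to-end sketch.
The pool name ("rbd"), image name ("old_image"), new VG name ("rbdvg"),
and LV name ("data") are all placeholders, so substitute your own; the
mount point matches the test from the original message:

    # 1. Map each RBD image backing the LVM volume
    rbd map rbd/old_image        # appears as /dev/rbd0; repeat per image

    # 2. Let LVM scan rbd devices: in the devices { } section of
    #    /etc/lvm/lvm.conf, add:
    #        types = [ "rbd", 1024 ]

    # 3. Rescan and check that the PV and VG are visible
    pvscan
    vgscan
    pvs
    vgs

    # 4. If the VG shows up, import it under a new name
    vgimportclone -n rbdvg --import /dev/rbd0

    # 5. Activate the VG, list its LVs, and mount one
    vgchange -a y rbdvg
    lvs rbdvg
    mount /dev/rbdvg/data /mount/old_image

With multiple RBDs backing the VG, map each one first and pass every
/dev/rbdX device to vgimportclone, as in the reply above.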