Out of curiosity, how are you mapping the RBD? Have you tried using guestmount? I'm just spitballing; I have no experience with your issue, so I'm probably not much help.

On Mon, 5 Feb 2024, 10:05 duluxoz, <duluxoz@xxxxxxxxx> wrote:

> ~~~
> Hello,
> I think the /dev/rbd* devices are filtered "out" (or not filtered "in") by
> the filter option in the devices section of /etc/lvm/lvm.conf.
> So pvscan (and pvs, vgs and lvs) don't look at your device.
> ~~~
>
> Hi Gilles,
>
> The lvm filter in the lvm.conf file is set to the default of `filter
> = [ "a|.*|" ]`, which accepts every block device, so no luck there :-(
>
> ~~~
> For Ceph-based LVM volumes, you would do this to import:
> Map every one of the RBDs to the host.
> Include this in /etc/lvm/lvm.conf:
> types = [ "rbd", 1024 ]
> pvscan
> vgscan
> pvs
> vgs
> If you see the VG:
> vgimportclone -n <make a name for the VG> /dev/rbd0 /dev/rbd1 ... --import
> Now you should be able to vgchange -a y <your VG> and see the LVs.
> ~~~
>
> Hi Alex,
>
> I did the above as you suggested. The rbd devices (three of them, none of
> which were originally part of an LVM volume on the Ceph servers, at least
> not set up manually by me) still do not show up in pvscan, etc.
>
> So I still can't mount any of them (not without re-creating a filesystem,
> anyway, and thus losing the data I'm trying to read/import); they all
> return the same error message (see original post).
>
> Anyone got any other ideas? <hopeful tone in voice> :-)
>
> Cheers
>
> Dulux-Oz
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
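
For anyone following along, Alex's quoted steps can be sketched as a shell session. This is a hypothetical sketch, not a tested recipe: the pool/image names (mypool/image0, etc.), the VG name restored_vg, and the LV placeholder are made up for illustration, and it assumes the krbd client and the LVM2 tools are installed and run as root:

~~~
# Map each RBD image to the host (pool/image names are examples)
rbd map mypool/image0    # creates e.g. /dev/rbd0
rbd map mypool/image1    # creates e.g. /dev/rbd1

# Tell LVM that "rbd" is a valid block-device type: add this line to
# the devices { } section of /etc/lvm/lvm.conf:
#     types = [ "rbd", 1024 ]

# Rescan so LVM picks up the newly mapped devices
pvscan
vgscan
pvs
vgs

# If the VG shows up, import it under a new name so its UUIDs don't
# clash with any VG already active on this host
vgimportclone -n restored_vg --import /dev/rbd0 /dev/rbd1

# Activate the VG and mount an LV read-only to inspect the data
vgchange -a y restored_vg
mount -o ro /dev/restored_vg/<lv-name> /mnt
~~~

Mounting read-only is deliberate here, since the goal is to read the existing data without risking it.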