Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please


 



Hi Cedric,

That's what I thought - the access method shouldn't make a difference.

No, no LVM details at all - I mean, yes, the OSDs show up with the `lvs` command on the Ceph node(s), but nothing shows up for the individual pools/images (on the Ceph nodes or the client) - this is, of course, assuming that I'm doing this right (and there's no guarantee of that).

To clarify: entering `lvs` on the client (which has the rbd image "attached" as /dev/rbd0) returns nothing, and `lvs` on any of the ceph nodes only returns the data for each OSD/HDD.
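
Side note for anyone following along: from what I've read, LVM's default device filter can ignore rbd devices entirely, which would explain `lvs`/`pvs` showing nothing on the client even if the image really does hold a PV. A rough sketch of what I mean - the lvm.conf snippet is only what I've seen suggested elsewhere, not something I've verified here:

  # does LVM see the mapped device as a physical volume?
  pvs /dev/rbd0

  # if not, the device filter may be the culprit; on some distros
  # /etc/lvm/lvm.conf reportedly needs rbd added to the accepted types:
  # devices {
  #     types = [ "rbd", 1024 ]
  # }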

Full disclosure (as I should have mentioned in the first post): the pool/image was/is used as a block device for oVirt VM disk images - but as far as I'm aware this shouldn't be the cause of the issue, because we also use GlusterFS, we've got similar VM disk images on Gluster drives/bricks, and those VM images show up as "simple" files (yes, I'm simplifying things a bit with that last statement).

On 04/02/2024 19:16, Cedric wrote:
Hello,

Data on a volume should be the same independently on how they are being accessed.

I would think the volume was previously initialized with an LVM layer - does `lvs` show any logical volumes on the system?
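
If it does turn out to be LVM, something along these lines should get you back to the data - the VG and LV names below are placeholders for whatever `vgs`/`lvs` report, not real names:

  pvscan                 # rescan block devices for LVM physical volumes
  vgscan                 # rescan for volume groups
  vgchange -ay           # activate any volume groups that were found
  lvs                    # note the VG/LV names reported here
  mount /dev/<vg_name>/<lv_name> /mount/old_image/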

On Sun, Feb 4, 2024, 08:56 duluxoz <duluxoz@xxxxxxxxx> wrote:

    Hi All,

    All of this is using the latest versions of RL and Ceph Reef.

    I've got an existing RBD image (with data on it - not "critical" as
    I've got a backup, but it's rather large, so I was hoping to avoid
    the restore scenario).

    The RBD image used to be served out via a (Ceph) iSCSI Gateway, but
    we are now looking to use the plain old kernel module.

    The RBD image has been rbd-mapped to the client's /dev/rbd0 device.
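
    (For what it's worth, the mapping was done with the usual `rbd map` -
    the pool and image names below are placeholders, not the real ones:

        rbd map <pool_name>/<image_name>

    which is what produced /dev/rbd0.)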

    So now I'm trying a straight `mount /dev/rbd0 /mount/old_image/`
    as a test

    What I'm getting back is `mount: /mount/old_image/: unknown
    filesystem
    type 'LVM2_member'.`
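
    (From what I can tell, "LVM2_member" just means the device holds an
    LVM physical volume signature rather than a mountable filesystem,
    so the data presumably lives inside a logical volume rather than
    directly on /dev/rbd0; `lsblk -f /dev/rbd0` shows the same
    signature in its FSTYPE column.)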

    All my Google Foo is telling me that to solve this issue I need to
    reformat the image with a new file system - which would mean "losing"
    the data.

    So my question is: how can I get to this data using the rbd kernel
    module (the iSCSI Gateway is no longer available, so that's not an
    option), or am I stuck with the restore option?

    Or is there something I'm missing (which would not surprise me in the
    least)?  :-)

    Thanks in advance (as always, you guys and gals are really, really
    helpful)

    Cheers


    Dulux-Oz

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



