Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

Hi Jayanth,

Only a couple of GlusterFS volumes - i.e. the GlusterFS bricks are sitting on LVs (sparse volumes) in a VG which spans two PVs.

My Google-fu led me to believe that the above set-up would (should?) be entirely independent of anything to do with RBD/Ceph - was I wrong about this?
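
For anyone playing along at home, the kind of check I mean - assuming /dev/rbd0 is still the mapped image - would be something like:

    blkid /dev/rbd0    # reports TYPE="LVM2_member" if the image holds an LVM PV
    pvs /dev/rbd0      # names the owning VG, if LVM is willing to scan rbd devices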

Cheers

On 04/02/2024 19:34, Jayanth Reddy wrote:
Hi,
Does anything show up under "pvs" and "vgs" on the client machine where /dev/rbd0 is mapped?
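For instance (assuming the image is still mapped):

    pvs -o pv_name,vg_name          # any PV sitting on /dev/rbd0?
    vgs                             # any VG that isn't expected?
    lvs -o lv_name,vg_name,devices  # which devices back each LV?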

Thanks
------------------------------------------------------------------------
*From:* duluxoz <duluxoz@xxxxxxxxx>
*Sent:* Sunday, February 4, 2024 1:59:04 PM
*To:* yipikai7@xxxxxxxxx <yipikai7@xxxxxxxxx>; matthew@xxxxxxxxxxxxxxx <matthew@xxxxxxxxxxxxxxx>
*Cc:* ceph-users@xxxxxxx <ceph-users@xxxxxxx>
*Subject:* Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please
Hi Cedric,

That's what I thought - the access method shouldn't make a difference.

No, no lvs details at all - I mean, yes, the OSDs show up with the lvs
command on the Ceph node(s), but nothing shows for the individual
pools/images (on either the Ceph nodes or the client) - assuming, of
course, that I'm doing this right (and there's no guarantee of that).

To clarify: entering `lvs` on the client (which has the rbd image
"attached" as /dev/rbd0) returns nothing, and `lvs` on any of the ceph
nodes only returns the data for each OSD/HDD.
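
(One caveat on that test which I can't rule out: on some distros LVM won't scan rbd devices at all unless they're added to the device types in /etc/lvm/lvm.conf, along the lines of:

    devices {
        types = [ "rbd", 1024 ]
    }

so an empty `lvs` on the client doesn't necessarily prove there's no LVM metadata on the image.)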

Full disclosure (as I should have done in the first post): the
pool/image was/is used as a block device for oVirt VM disk images - but
as far as I'm aware this shouldn't be the cause of the issue, because we
also use GlusterFS, we've got similar VM disk images on Gluster
drives/bricks, and those VM images show up as "simple" files (yes, I'm
simplifying things a bit with that last statement).
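
(And if the image really does carry an LVM layer - which would fit, given oVirt builds its block storage domains as LVM VGs - then, as a rough sketch with placeholder names, getting at the data would presumably look something like:

    vgscan                          # re-scan for VGs now that /dev/rbd0 is mapped
    vgchange -ay <vg_name>          # activate the VG found on the image
    lvs <vg_name>                   # list the LVs inside it
    mount -o ro /dev/<vg_name>/<lv_name> /mount/old_image/

i.e. mounting an LV from inside the image rather than /dev/rbd0 itself.)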

On 04/02/2024 19:16, Cedric wrote:
> Hello,
>
> Data on a volume should be the same independently on how they are
> being accessed.
>
> I would think the volume was previously initialized with an LVM layer;
> does "lvs" show any logical volumes on the system?
>
> On Sun, Feb 4, 2024, 08:56 duluxoz <duluxoz@xxxxxxxxx> wrote:
>
>     Hi All,
>
>     All of this is using the latest version of RL and Ceph Reef
>
>     I've got an existing RBD Image (with data on it - not "critical",
>     as I've got a backup, but it's rather large, so I was hoping to
>     avoid the restore scenario).
>
>     The RBD Image used to be served out via a (Ceph) iSCSI Gateway,
>     but we are now looking to use the plain old kernel module.
>
>     The RBD Image has been RBD Mapped to the client's /dev/rbd0 location.
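>
>     (Mapped, for reference, with something along the lines of `rbd map
>     <pool>/<image>` - pool and image names are placeholders here.)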
>
>     So now I'm trying a straight `mount /dev/rbd0 /mount/old_image/`
>     as a test
>
>     What I'm getting back is `mount: /mount/old_image/: unknown
>     filesystem
>     type 'LVM2_member'.`
>
>     All my Google-fu is telling me that to solve this issue I need to
>     reformat the image with a new file system - which would mean
>     "losing" the data.
>
>     So my question is: how can I get to this data using the RBD kernel
>     module (the iSCSI Gateway is no longer available, so that's not an
>     option), or am I stuck with the restore option?
>
>     Or is there something I'm missing (which would not surprise me in the
>     least)?  :-)
>
>     Thanks in advance (as always, you guys and gals are really, really
>     helpful)
>
>     Cheers
>
>
>     Dulux-Oz
--

*Matthew J BLACK*
  M.Inf.Tech.(Data Comms)
  MBA
  B.Sc.
  MACS (Snr), CP, IP3P

When you want it done /right/ ‒ the first time!

Phone: +61 4 0411 0089
Email: matthew@xxxxxxxxxxxxxxx
Web: www.peregrineit.net


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



