Re: Automanage block devices

Interesting, but weird...

I use Quincy:
root@hvs001:/# ceph versions
{
    "mon": {
        "ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)": 6
    },
    "mds": {
        "ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)": 2
    },
    "overall": {
        "ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)": 13
    }
}

And the ceph-volume inventory does show rbd devices...

root@hvs001:/# rbd --pool libvirt-pool map --image PCVIRTdra
/dev/rbd0
root@hvs001:/# ceph-volume inventory

Device Path               Size         rotates available Model name
/dev/rbd0                 140.00 GB    False   False
/dev/sda                  894.25 GB    False   False     MZILS960HEHP/007
/dev/sdb                  894.25 GB    False   False     MZILS960HEHP/007
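
(For completeness, the test mapping can be removed again afterwards; a minimal
example using the device from above:

root@hvs001:/# rbd unmap /dev/rbd0
root@hvs001:/# rbd showmapped
)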

I put a comment on the GitHub page...
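
In the meantime, the only workaround I see is the one I mention below: set the
OSD service to unmanaged so cephadm stops trying to create OSDs on available
devices. A sketch, assuming the default all-available-devices spec shown in the
log below; it does mean losing the automatic behaviour until the fix is in place:

root@hvs001:/# ceph orch ls osd --export
root@hvs001:/# ceph orch apply osd --all-available-devices --unmanaged=true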

> -----Original Message-----
> From: Etienne Menguy <etienne.menguy@xxxxxxxxxxx>
> Sent: Monday, 29 August 2022 14:34
> To: Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>
> CC: ceph-users@xxxxxxx
> Subject: RE: Automanage block devices
> 
> I now understand, thanks for the explanation.
> 
> Are you using the latest ceph-volume version? I see there was a change to
> make ceph-volume ignore rbd devices.
> 
> https://tracker.ceph.com/issues/53846
> https://github.com/ceph/ceph/pull/44604
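> 
> (A quick way to see which ceph-volume cephadm actually runs is to check the
> version inside the container it uses, for example:
> 
> cephadm shell -- ceph --version
> 
> since ceph-volume ships with that container image.)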
> 
> Étienne
> 
> > -----Original Message-----
> > From: Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>
> > Sent: Monday, 29 August 2022 14:15
> > To: Etienne Menguy <etienne.menguy@xxxxxxxxxxx>
> > Cc: ceph-users@xxxxxxx
> > Subject: RE: Automanage block devices
> >
> > Hi Etienne,
> >
> > Maybe I didn't make myself clear...
> >
> > When I map an rbd image from my cluster to a /dev/rbd device, ceph wants to
> > automatically add that /dev/rbd as an OSD. This is undesirable behavior:
> > trying to add a /dev/rbd that is mapped to an image in the same cluster?
> > Scary...
> >
> > Luckily the automatic creation of the OSD fails.
> >
> > Nevertheless, I would feel better if ceph just didn't try to add the
> > /dev/rbd to the cluster.
> >
> > Do I risk a conflict between my operations on a mapped rbd image/device?
> >
> > Will ceph at some point alter my image unintentionally?
> >
> > Do I risk ceph actually adding such an image as an OSD?
> >
> > I can disable the managed feature of the OSD management, but then I lose
> > ceph's automatic functions. Is there a way to tell ceph to exclude
> > /dev/rbd* devices from the autodetect/automanage?
> >
> > Greetings,
> >
> > Dominique.
> >
> > > -----Original Message-----
> > > From: Etienne Menguy <etienne.menguy@xxxxxxxxxxx>
> > > Sent: Monday, 29 August 2022 13:44
> > > To: Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>
> > > CC: ceph-users@xxxxxxx
> > > Subject: RE: Automanage block devices
> > >
> > > Hey,
> > >
> > > /usr/sbin/ceph-volume ... lvm batch --no-auto /dev/rbd0
> > >
> > > You want to add an OSD using rbd0?
> > >
> > > To map a block device, just use rbd map
> > > ( https://docs.ceph.com/en/quincy/man/8/rbdmap/ )
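> > >
> > > For a persistent mapping across reboots, an entry in /etc/ceph/rbdmap plus
> > > the rbdmap service also works; a minimal sketch with the pool/image from
> > > your mail (the client id and keyring path are placeholders):
> > >
> > > # /etc/ceph/rbdmap
> > > libvirt-pool/PCVIRTdra  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
> > >
> > > systemctl enable --now rbdmap.service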
> > >
> > > Étienne
> > >
> > > > -----Original Message-----
> > > > From: Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>
> > > > Sent: Monday, 29 August 2022 12:32
> > > > To: ceph-users@xxxxxxx
> > > > Subject:  Automanage block devices
> > > >
> > > > Hi,
> > > >
> > > > I really like ceph's behavior of auto-managing block devices, but I get
> > > > ceph status warnings when I map an image to a /dev/rbd device.
> > > >
> > > > Some log output:
> > > > Aug 29 11:57:34 hvs002 bash[465970]: Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:43f6e905f3e34abe4adbc9042b9d6f6b625dee8fa8d93c2bae53fa9b61c3df1a -e NODE_NAME=hvs002 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=all-available-devices -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585:/var/run/ceph:z -v /var/log/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585:/var/log/ceph:z -v /var/lib/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmpke1ihnc_:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpaqbxw8ga:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:43f6e905f3e34abe4adbc9042b9d6f6b625dee8fa8d93c2bae53fa9b61c3df1a lvm batch --no-auto /dev/rbd0 --yes --no-systemd
> > > >
> > > > Aug 29 11:57:34 hvs002 bash[465970]: /usr/bin/docker: stderr  stderr: lsblk: /dev/rbd0: not a block device
> > > >
> > > > Aug 29 11:57:34 hvs002 bash[465970]: cluster 2022-08-29T09:57:33.973654+0000 mon.hvs001 (mon.0) 34133 : cluster [WRN] Health check failed: Failed to apply 1 service(s): osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)
> > > >
> > > > If I map an image to an rbd device, the automanage feature wants to add
> > > > it as an OSD. It fails (as it apparently isn't detected as a block
> > > > device), so I guess my images are untouched, but I still worry because I
> > > > can't find much information about these warnings.
> > > >
> > > > Do I risk a conflict between my operations on a mapped rbd image/device?
> > > > Will ceph at some point alter my image unintentionally?
> > > >
> > > > Do I risk ceph actually adding such an image as an OSD?
> > > >
> > > > I can disable the managed feature of the OSD management, but then I lose
> > > > ceph's automatic functions. Is there a way to tell ceph to exclude
> > > > /dev/rbd* devices from the autodetect/automanage?
> > > >
> > > > Greetings,
> > > >
> > > > Dominique.
> > > >
> > > >
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



