Re: How to speed up OSD deployment process

Hi,

Thank you. The Ceph cluster is running smoothly so far. However, during our
testing we re-installed it multiple times and observed that the ceph-volume
command took over a minute to activate an OSD.
In the activation stage, ceph-volume calls "ceph-bluestore-tool
show-label", and it appears that the command scans all disks to identify
which disk is being activated.
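
For what it's worth, "ceph-bluestore-tool show-label" can also be pointed at
a single device, e.g. (the device path below is just an example):

  ceph-bluestore-tool show-label --dev /dev/sdb

so the per-activation cost seems to come from probing every device on the
host rather than from reading one label.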

Best regards,
Yufan Chen
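
P.S. On Eugen's point below that ceph-volume activates the OSDs one by one:
here is a rough, untested sketch of how the per-OSD activations could be
launched in parallel by hand on one host. It assumes bash, jq, root, and
that "ceph-volume lvm list --format json" reports a "block" entry with
ceph.osd_id / ceph.osd_fsid tags for every OSD:

  # kick off each activation concurrently instead of serially
  while read -r id fsid; do
      ceph-volume lvm activate "$id" "$fsid" &
  done < <(ceph-volume lvm list --format json \
             | jq -r 'to_entries[] | .value[] | select(.type == "block")
                      | .tags["ceph.osd_id"] + " " + .tags["ceph.osd_fsid"]')
  wait    # block until all background activations have finished

Whether this actually helps depends on where the time is spent, so please
treat it as an idea rather than a recommendation.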

Joachim Kraftmayer <joachim.kraftmayer@xxxxxxxxx> wrote on Sat, Nov 23, 2024 at 00:50:

> Hi,
> I remember that, with the tools we had before cephadm, it took something
> like 8 hours to deploy a Ceph cluster with more than 2000 OSDs.
>
> But I also know that CBT takes a much faster approach to installing a Ceph
> cluster.
> Just an idea: maybe you can look at CBT's approach to make cephadm
> faster.
>
> Regards, Joachim
>
>   joachim.kraftmayer@xxxxxxxxx
>
>   www.clyso.com
>
>   Hohenzollernstr. 27, 80801 Munich
>
> Utting | HR: Augsburg | HRB: 25866 | USt. ID-Nr.: DE275430677
>
>
>
> On Fri, Nov 22, 2024 at 15:46, Eugen Block <eblock@xxxxxx> wrote:
>
> > Hi,
> >
> > I don't see how it would be currently possible. The OSD creation is
> > handled by ceph-volume, which activates each OSD separately:
> >
> > [2024-11-22 14:03:08,415][ceph_volume.main][INFO  ] Running command:
> > ceph-volume  activate --osd-id 0 --osd-uuid
> > aacabeca-9adb-465c-88ee-935f06fa45f7 --no-systemd --no-tmpfs
> >
> > [2024-11-22 14:03:09,343][ceph_volume.devices.raw.activate][INFO  ]
> > Activating osd.0 uuid aacabeca-9adb-465c-88ee-935f06fa45f7 cluster
> > e57f7b6a-a8d9-11ef-af3c-fa163e2ad8c5
> >
> > The ceph-volume lvm activate description [0] states:
> >
> > > It is possible to activate all existing OSDs at once by using the
> > > --all flag. For example:
> > >
> > > ceph-volume lvm activate --all
> > >
> > > This call will inspect all the OSDs created by ceph-volume that are
> > > inactive and will activate them one by one.
> >
> > I assume that even if the OSD creation process could be tweaked so that
> > all OSDs are created first without separate activation, and cephadm then
> > issued "ceph-volume lvm activate --all", the OSDs would still be
> > activated one by one.
> >
> > But as Tim already stated, an hour for almost 200 OSDs is not that
> > bad. ;-) I guess you could create a tracker issue for an enhancement,
> > maybe some of the devs can clarify why the OSDs need to be activated
> > one by one.
> >
> > Regards,
> > Eugen
> >
> > [0] https://docs.ceph.com/en/latest/ceph-volume/lvm/activate/
> >
> >
> > Quoting YuFan Chen <wiz.chen@xxxxxxxxx>:
> >
> > > Hi,
> > >
> > > I’m setting up a 6-node Ceph cluster using Ceph Squid.
> > > Each node is configured with 32 OSDs (32 HDDs and 8 NVMe SSDs for
> > > db_devices).
> > >
> > > I’ve created an OSD service specification and am using cephadm to
> > > apply the configuration.
> > > The deployment of all 192 OSDs takes about an hour to complete.
> > >
> > > However, I’ve noticed that cephadm creates the OSDs sequentially.
> > > Then, on each node, it starts a single OSD and waits for it to become
> > > ready before moving on to the next.
> > >
> > > Is there a way to speed up the OSD deployment process?
> > > Thanks in advance for your help!
> > >
> > > Best regards,
> > > Yufan Chen
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



