Re: Manual deployment of an OSD failed

On Wed, 18 Aug 2021 at 21:49, Francesco Piraneo G. <fpiraneo@xxxxxxxxxxx> wrote:
>
> On 17.08.21 16:34, Marc wrote:
>
> > ceph-volume lvm zap --destroy /dev/sdb
> > ceph-volume lvm create --data /dev/sdb --dmcrypt
> >
> > systemctl enable ceph-osd@0
>
>
> Hi Marc,
>
> It worked! Thank you very much!
>
> I have some questions:
>
> 1. ceph-volume already enables and runs ceph-osd, so I'm not required to
> run systemctl enable; is this correct?

Correct.
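
(If you want to double-check on the node, something like this should do it;
the OSD id 0 is just taken from your example:

    systemctl status ceph-osd@0
    ceph-volume lvm list

The first shows the unit state, and the second lists the LVs ceph-volume
created and which OSD they belong to.)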

> 2. I know it is also possible to define two different partitions on the
> OSD for journal and data, sizing the journal partition properly to account
> for device throughput; in our case, how is the journal sized? Is it on the
> same partition (I suppose I can inspect the device with lsblk)? Is there a
> significant performance gain from manually sizing the journal and putting
> it on a different device partition?

If your kernel is old (especially something like 2.6.x, from when we started
with ceph), having more separate devices (i.e. partitions) seemed to let
SSD and NVMe drives take more IO, but the more recent your kernel is, the
better it handles such things, so the gains might not be as big as they used
to be. Also, sizing the journal/DB correctly can be hard, and if it is too
small it spills over to the data device anyhow, so I think it mostly evens
out.
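
If you do want to try it anyway, ceph-volume can place the RocksDB (and WAL)
on a separate device at creation time. A minimal sketch, assuming you have a
spare NVMe partition /dev/nvme0n1p1 to use for the DB (that partition name is
only an example):

    # data on /dev/sdb, DB on the NVMe partition; the WAL follows the DB
    # unless you also pass --block.wal
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1 --dmcrypt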

> 3. As I read in the Red Hat docs, for production clusters they suggest
> introducing new OSDs with a prepare/activate sequence instead of a one-shot
> OSD creation, "avoiding large amounts of data being rebalanced"; in your
> opinion, is a gradual OSD integration possible on a running cluster with
> several OSDs, watching for when each rebalancing operation ends?

You can set the ceph.conf [osd] section so that new devices get an
initial crush weight of 0.001 or something really small but non-zero;
then they join the cluster but don't receive any large amounts of data,
often no data at all. After all the new OSDs are in and up, you can slowly
increase the crush weights to what they should actually be for the
size of each drive, at your own pace.
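
A minimal sketch of that, assuming the new drive comes up as osd.12 and its
full weight should end up around 3.6 (both are just example values):

    # ceph.conf, [osd] section, on hosts where the new OSDs are created
    [osd]
    osd crush initial weight = 0.001

    # later, bump the weight in steps, waiting for "ceph -s" to show the
    # rebalance has finished before each next step
    ceph osd crush reweight osd.12 1.0
    ceph osd crush reweight osd.12 2.5
    ceph osd crush reweight osd.12 3.6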

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


