Re: Best Practice for OSD Balancing

It's a complicated topic with no single answer; it varies from cluster to
cluster. You have a good lay of the land.

I just wanted to mention that the correct "foundation" for equally utilized
OSDs within a cluster relies on two important factors:

- Symmetry of disk/OSD quantity and capacity (weight) between hosts.
- Achieving the correct number of PGs per OSD (typically between 100 and
200). A quick way to check both is sketched below.
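
As a rough check (stock Ceph CLI; exact output columns can vary a bit by
release), "ceph osd df tree" shows CRUSH weight, utilization, and the per-OSD
PG count with per-host subtotals, which covers both points at a glance:

    # Per-host / per-OSD view: WEIGHT shows capacity symmetry,
    # the PGS column is the PGs-per-OSD figure, %USE is utilization.
    ceph osd df tree

    # If PGs-per-OSD is well below ~100, see what the autoscaler thinks
    # each pool's pg_num should be.
    ceph osd pool autoscale-status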

Without reasonable settings/configurations for these two factors, the various
higher-level balancing techniques won't work well, or at all.
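
Once those two are in order, the usual approach is to let the balancer module
handle the rest in upmap mode; roughly something like the below (the deviation
of 1 is just an illustration, the default is 5):

    # See what the balancer is currently doing.
    ceph balancer status

    # Use upmap mode (needs min-compat-client luminous or newer) and enable it.
    ceph balancer mode upmap
    ceph balancer on

    # Optionally allow less PG-count deviation per OSD than the default of 5.
    ceph config set mgr mgr/balancer/upmap_max_deviation 1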

Respectfully,

*Wes Dillingham*
wes@xxxxxxxxxxxxxxxxx
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Tue, Nov 28, 2023 at 3:27 PM Rich Freeman <r-ceph@xxxxxxxxx> wrote:

> I'm fairly new to Ceph and running Rook on a fairly small cluster
> (half a dozen nodes, about 15 OSDs).  I notice that OSD space use can
> vary quite a bit - upwards of 10-20%.
>
> In the documentation I see multiple ways of managing this, but no
> guidance on what the "correct" or best way to go about this is.  As
> far as I can tell there is the balancer, manual manipulation of upmaps
> via the command line tools, and OSD reweight.  The last two can be
> optimized with tools to calculate appropriate corrections.  There is
> also the new read/active upmap (at least for non-EC pools), which is
> manually triggered.
>
> The balancer alone is leaving fairly wide deviations in space use, and
> at times during recovery this can become more significant.  I've seen
> OSDs hit the 80% threshold and start impacting IO when the entire
> cluster is only 50-60% full during recovery.
>
> I've started using ceph osd reweight-by-utilization and that seems
> much more effective at balancing things, but this seems redundant with
> the balancer which I have turned on.
>
> What is generally considered the best practice for OSD balancing?
>
> --
> Rich
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



