Re: Best Practice for OSD Balancing

>> 
>> 1) They’re client aka desktop SSDs, not “enterprise”
>> 2) They’re a partition of a larger OSD shared with other purposes
> 
> Yup.  They're a mix of SATA SSDs and NVMes, but everything is
> consumer-grade.  They're only 10% full on average and I'm not
> super-concerned with performance.  If they did get full I'd allocate
> more space for them.  Performance is more than adequate for the very
> light loads they have.

Fair enough.  We sometimes see people bringing a toothpick to a gun fight and expecting a different result, so I had to ask.  Just keep an eye on their endurance burn.
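For watching the endurance burn, smartctl is usually enough; a quick sketch, assuming a SATA SSD at /dev/sda and an NVMe at /dev/nvme0 (device paths and attribute names vary by vendor):

```shell
# SATA SSD: vendor wear/endurance attributes (names differ across vendors)
smartctl -A /dev/sda | grep -iE 'wear|percent.*used|total.*written'

# NVMe: the standard SMART/Health log reports "Percentage Used" directly
smartctl -A /dev/nvme0 | grep -i 'percentage used'
```

Once "Percentage Used" starts climbing faster than the drive's age would suggest, it's time to think about replacements.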

> 
> 
> It is interesting because Quincy had no issues with the autoscaler
> with the exact same cluster config.  It might be a Rook issue, or it
> might just be because so many PGs are remapped.  I'll take another
> look at that once it reaches more of a steady state.
> 
> In any case, if the balancer is designed more for equal-sized OSDs I
> can always just play with reweights to balance things.

Look into the JJ balancer; I’ve read good things about it.
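If you do go the manual reweight route, the usual workflow looks something like this (the 115% threshold and OSD id 12 are just examples, adjust to taste):

```shell
# Find the outliers first
ceph osd df tree

# Dry run: show what reweight-by-utilization would change, without applying it
ceph osd test-reweight-by-utilization 115

# Apply it cluster-wide, or nudge a single overfull OSD by hand
ceph osd reweight-by-utilization 115
ceph osd reweight 12 0.9
```

Note that `ceph osd reweight` sets the temporary override weight (0-1), not the CRUSH weight, so it survives rebalancing but not marking the OSD out and back in.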

> 
> --
> Rich
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



