Re: Uneven OSD data distribution

There are a lot of threads in the ML about rebalancing the data distribution in a cluster.  The CRUSH algorithm is far from perfect when it comes to evenly distributing PGs, but it's fairly simple to work around, and there are Ceph tools that help with it, reweight-by-utilization being one of them.  Poke around in the archives, or search for that term in the Ceph docs, and you should be on your way.  The gist of it is that reweighting the full OSDs down a little and the empty OSDs up a little should normalize your OSD usage; reweight-by-utilization automates that process.
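
As a rough sketch of the workflow (the threshold of 120 and the OSD id below are illustrative, not recommendations for your cluster):

    # Dry run: report which OSDs would be reweighted, without changing anything
    ceph osd test-reweight-by-utilization 120

    # Apply: lower the reweight of OSDs that are more than 120% as full as the cluster average
    ceph osd reweight-by-utilization 120

    # Or nudge a single overfull OSD down by hand (reweight range is 0.0-1.0)
    ceph osd reweight 12 0.85

    # Check the resulting distribution
    ceph osd df

Note that reweight-by-utilization triggers data movement, so run the test variant first and apply it in small steps.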

On Thu, Feb 15, 2018 at 9:14 AM Osama Hasebou <osama.hasebou@xxxxxx> wrote:
Hi All,

I am seeing a lot of uneven data distribution among the OSDs on Jewel, even though their weight values are the same. Some are at 30% usage, some at 45%, some at 70%. Is there a way to fix this so they are all evenly distributed?

Thanks!

Regards,
Ossi

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
