There is a ceph command, "reweight-by-utilization", you can run to adjust the OSD weights automatically based on their utilization:

http://docs.ceph.com/docs/master/rados/operations/control/#osd-subsystem

Some people run this on a periodic basis (cron script). Check the mailing list archives; for example, this thread provides a bit of background information, but there are many others:

https://www.spinics.net/lists/ceph-devel/msg15083.html

You probably want to run "ceph osd test-reweight-by-utilization" first, just to see what it would do, before too much data moves around (a sketch of that sequence, plus an example cron entry, is at the bottom of this mail).

And last but not least, there is some work being done on adding an automatic balancer to Ceph, which runs periodically and adjusts the weights to achieve an even distribution, but I don't think that's fully baked yet.

On Thu, Sep 14, 2017 at 8:30 AM, Sinan Polat <sinan@xxxxxxxx> wrote:
> Hi,
>
> I have 52 OSDs in my cluster, all with the same disk size and the same
> weight.
>
> When I perform a:
>
> ceph osd df
>
> The disk with the least available space: 863G
> The disk with the most available space: 1055G
>
> I expect the available space or the usage on the disks to be the same,
> since they have the same weight, but there is a difference of almost
> 200GB.
>
> Due to this, the MAX AVAIL in ceph df is lower than expected (the MAX
> AVAIL is based on the disk with the least available space).
>
> - How can I balance the disk usage over the disks, so the usage /
>   available space on each disk is more or less the same?
>
> - What will happen if I hit the MAX AVAIL, while most of the disks
>   still have space?
>
> Thanks!
> Sinan
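
A minimal sketch of the dry-run-then-apply sequence. The argument values are only illustrative, and defaults and option names vary a bit between releases, so check the docs link above for your version. The three positional arguments are: utilization threshold as a percentage of the cluster mean, maximum weight change per OSD, and maximum number of OSDs touched per run.

  # Dry run: report which OSDs would be reweighted and by how much,
  # without actually changing anything.
  ceph osd test-reweight-by-utilization 110 0.05 10

  # If the proposed changes look sane, apply them: only touch OSDs above
  # 110% of mean utilization, change each weight by at most 0.05, and
  # adjust at most 10 OSDs per run.
  ceph osd reweight-by-utilization 110 0.05 10

  # Watch the rebalance and the resulting distribution.
  ceph -s
  ceph osd df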
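
If you do put it in cron, something along these lines could work; the path, schedule and log file here are hypothetical, so pick your own:

  # Root crontab entry: run nightly at 03:00, keep each run gentle
  # (small max_change, few OSDs), and log the output for review.
  0 3 * * * /usr/bin/ceph osd reweight-by-utilization 110 0.02 8 >> /var/log/ceph-reweight.log 2>&1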