Interesting re-shuffling of PGs after adding a new OSD

Hi,

Two days ago I added a new OSD to one of my Ceph machines, because one
of the existing OSDs was getting rather full. There was quite a
difference in disk-space usage between the OSDs, but I understand this
is just how Ceph works: it spreads data across OSDs, but not perfectly
evenly.
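
(For what it's worth, I'm judging the usage from the per-OSD numbers;
on a reasonably recent release something like this shows them,
per-OSD utilisation, %USE and PG counts:

    ceph osd df tree
)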

Now look at the attached graph of free disk space. You can clearly see
the new 4 TB OSD being added and starting to fill up. It is also quite
visible that some existing OSDs benefit more than others.
And not only is data moved onto the new OSD; data is also exchanged
between the existing OSDs. This is why filling the new OSD takes so
incredibly long: Ceph spends most of its time shuffling data between
existing OSDs instead of moving it to the new one.
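
In case it's relevant: I haven't changed any backfill settings, so
this is running with the defaults. I assume the shuffling could be
throttled or paused with something like

    ceph tell osd.* injectargs '--osd-max-backfills 1'
    ceph osd set norebalance

but that would only slow it down or stop it, not change where the data
ends up.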

Anyway, what is especially troubling is that the OSD that was already
lowest on free disk space is actually filling up even further during
this process (!)
What is causing that, and how can I get Ceph to do the reasonable
thing?

All CRUSH weights are identical.
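
The only stopgap I can think of is temporarily lowering the override
weight of the nearly-full OSD, something like (the 0.9 here is just an
example value, and <osd-id> is whichever OSD is fullest)

    ceph osd reweight <osd-id> 0.9
    # or let ceph choose: ceph osd reweight-by-utilization

but that feels like treating the symptom rather than the cause.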

Thanks,

Erik.

Attachment: ceph-new-osd.png
Description: PNG image

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
