Hi,

Two days ago I added a new OSD to one of my Ceph machines because one of the existing OSDs had become rather full. There was quite a difference in disk space usage between the OSDs, but I understand that is simply how Ceph works: it spreads data over the OSDs, just not perfectly evenly.

Now look at the attached graph of free disk space. You can clearly see the new 4 TB OSD being added and starting to fill up. It is also quite visible that some existing OSDs profit more than others, and that data is not only put onto the new OSD but also exchanged between the existing OSDs. This is also why it takes so incredibly long to fill up the new OSD: Ceph spends most of its time shuffling data between existing OSDs instead of moving it to the new one.

What is especially troubling is that the OSD that was already lowest on free disk space is actually filling up even more during this process (!). What is causing that, and how can I get Ceph to do the reasonable thing? All CRUSH weights are identical.

Thanks,
Erik
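P.S. For reference, a minimal sketch of how I'm watching the rebalance and how a single overfull OSD could be nudged. The OSD id and weight values below are only illustrative, not taken from this cluster, and reweight-by-utilization / the balancer module may or may not be the right tool here:

  # Per-OSD utilization and CRUSH weights
  ceph osd df
  ceph osd tree

  # Overall progress of the backfill after adding the new OSD
  ceph -s
  ceph pg dump pgs_brief | grep -c backfill   # rough count of PGs still moving

  # Throttle backfill so client I/O isn't starved (values are examples)
  ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

  # Temporarily lower the override reweight of the overfull OSD so PGs move off it
  # (osd.7 and 0.85 are made-up examples)
  ceph osd reweight osd.7 0.85

  # Or let Ceph pick candidates itself (Luminous and later also have "ceph balancer")
  ceph osd reweight-by-utilization 110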
Attachment: ceph-new-osd.png (PNG image)