ceph osd crush tunables optimal AND add new OSD at the same time

Hi,

after completing the Ceph upgrade (0.72.2 to 0.80.3) I issued "ceph osd crush
tunables optimal", and only a few minutes later I added 2 more OSDs to the
Ceph cluster...

So these 2 changes were made more or less at the same time - rebalancing
because of the optimal tunables, and rebalancing because of the newly added OSDs...
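
For reference, this is roughly the sequence involved (the ceph-deploy lines are
just my reconstruction of the OSD add; host and device names are placeholders):

    # after the 0.72.2 -> 0.80.3 upgrade
    ceph osd crush tunables optimal          # changes CRUSH behaviour -> large data reshuffle

    # a few minutes later, 2 new OSDs added (host/disk names are placeholders)
    ceph-deploy osd create node1:/dev/sdX
    ceph-deploy osd create node1:/dev/sdY    # each new OSD changes the CRUSH map again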

Result - all VMs living on the Ceph storage went mad: effectively no disk
access, blocked so to speak.
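
For completeness, the state of the rebalance can be watched with the usual
status commands while it runs (the blocked-requests wording below is from
memory, so treat it as approximate):

    ceph -s              # degraded/misplaced objects and recovery progress
    ceph health detail   # per-OSD "requests are blocked > 32 sec" warnings
    ceph -w              # live recovery/backfill activity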

Since this rebalancing took 5-6 hours, I had a bunch of VMs down for that long...

Did I do wrong by causing 2 rebalances to happen at the same time?
Is this behaviour normal - that it causes such heavy load that all VMs are
effectively unable to access the Ceph storage?
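
Or should I have throttled recovery/backfill before making either change?
Something like this is what I have in mind (just a sketch of the standard
recovery knobs; the values are picked arbitrarily, not tested by me):

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'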

Thanks for any input...
-- 

Andrija Panić

