Edge effect with multiple RBD kernel clients per host?

Hi,

I seem to be hitting a bad edge effect in my setup; I don't know whether
it's an RBD problem or a Xen problem.

So, I have one Ceph cluster, in which I set up two different storage
pools: one on SSD and one on SAS. With appropriate CRUSH rules, those
pools are completely separated; only the MONs are shared.
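
In case it matters, here is a rough sketch of how I double-check that the
two pools really point at different CRUSH rules, using the python-rados
bindings (the pool names "ssd-pool" and "sas-pool" are placeholders for my
real ones, and on older releases the variable may be "crush_ruleset"
instead of "crush_rule"):

# Hypothetical sketch: verify that each pool uses a different CRUSH rule.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    for pool in ("ssd-pool", "sas-pool"):   # placeholder pool names
        cmd = json.dumps({
            "prefix": "osd pool get",
            "pool": pool,
            "var": "crush_rule",            # "crush_ruleset" on older Ceph
            "format": "json",
        })
        ret, out, errs = cluster.mon_command(cmd, b"")
        print(pool, out.decode() if ret == 0 else errs)
finally:
    cluster.shutdown()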

Then, on Xen host A, I run "VMSSD" and "VMSAS". If I launch a big
rebalance on the SSD pool, then "VMSSD" *and* "VMSAS" both slow down
(a lot of iowait). But if I move "VMSAS" to a different Xen host (B),
"VMSSD" is still slow, while "VMSAS" becomes fast again.
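
To make the symptom more concrete, this is the kind of quick sampling I do
on the Xen host during the rebalance, reading the per-device counters from
/sys/block/rbd*/stat (field layout from the kernel's block-layer stat
documentation; just an illustrative sketch, the 10-second window is
arbitrary):

# Hypothetical sketch: sample in-flight I/O and queue time per mapped rbd
# device, to see whether SAS-backed devices stall along with SSD-backed ones.
import glob
import time

def snapshot():
    stats = {}
    for path in glob.glob("/sys/block/rbd*/stat"):
        dev = path.split("/")[3]              # e.g. "rbd12"
        with open(path) as f:
            fields = f.read().split()
        stats[dev] = {
            "in_flight": int(fields[8]),      # field 9: I/Os currently in flight
            "queue_ms": int(fields[10]),      # field 11: weighted time in queue (ms)
        }
    return stats

before = snapshot()
time.sleep(10)
after = snapshot()
for dev in sorted(after):
    delta = after[dev]["queue_ms"] - before.get(dev, after[dev])["queue_ms"]
    print(f"{dev}: in_flight={after[dev]['in_flight']} queue_ms_delta={delta}")

If the SAS-backed devices also show their queue time climbing while only
the SSD pool is rebalancing, that matches what I see from inside the VMs.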

The first thing I checked was the network of Xen host A, but I didn't
find any problem there.

So, is there a queue shared by all RBD kernel clients running on the same
host? Or something else that could explain this edge effect?


Olivier

PS: one more detail: I have about 60 RBDs mapped on Xen host A; I don't
know whether that could be the key to the problem.
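
For reference, here is a minimal sketch of how I count the mapped images
per pool from the krbd sysfs entries (assuming /sys/bus/rbd/devices/<id>/pool
is present on this kernel):

# Hypothetical sketch: tally mapped RBD images per pool on this host.
import collections
import glob

per_pool = collections.Counter()
for dev_dir in glob.glob("/sys/bus/rbd/devices/*"):
    with open(dev_dir + "/pool") as f:
        per_pool[f.read().strip()] += 1

for pool, count in per_pool.most_common():
    print(f"{pool}: {count} mapped images")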

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



