How much iowait is too much iowait?

I have a 4-node cluster. Each node has one SSD for the OS and the block.dbs (a 50 GB partition per OSD), one 4 TB HDD, and two 8 TB HDDs.

I see 15% iowait on average.

On any other server, 15% would seem like too much. But Ceph is a storage service cluster.

Is there a way to minimize the iowait, or to better measure where my bottleneck is? (Maybe one of the HDDs?)
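For what it's worth, here is a rough sketch of how one might sample system-wide iowait directly from /proc/stat, assuming a Linux host (the field layout and the one-second sampling interval are just the conventional defaults, not anything Ceph-specific; per-disk numbers would instead come from /proc/diskstats or `iostat -x`):

```python
import time

def cpu_times():
    # First line of /proc/stat looks like:
    # "cpu  user nice system idle iowait irq softirq steal ..."
    # Drop the leading "cpu" label and return the jiffy counters.
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

def iowait_percent(interval=1.0):
    """Sample system-wide iowait over `interval` seconds."""
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    # Index 4 is iowait (user, nice, system, idle, iowait, ...).
    return 100.0 * deltas[4] / total if total else 0.0

print(f"iowait: {iowait_percent():.1f}%")
```

If one HDD shows much higher utilization than the others in the per-device view, that disk is the likely bottleneck.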

Is it time to stop swapping my HDDs for bigger ones and add a new node instead?

--

Alfrenovsky


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
