High IOWAIT On OpenStack Instance

Hi All,

We are testing Ceph with OpenStack. The cluster has 3 monitor nodes (these three also serve as the OpenStack controller and network nodes) and 6 OSD nodes (3 of which also act as Nova compute nodes).

There are 24 OSDs in total (21 SAS, 3 SSD, with all journals on SSD).

There is no cache tiering for now.

Before a power problem, I ran a successful test and achieved the following results:

Running VMs: 106
Software: iometer
Hardware: HP DL 360e Gen8
Network: 10G Network for Storage
IOPS: 40K (30% write / 70% read)

After the incident, I reinstalled the cluster from scratch, and now I am seeing 90 to 100% iowait on every VM I create.

I know this might be caused by a hardware failure or the network, but I need to pinpoint the culprit.

Does anyone have a good procedure for pinpointing this kind of problem?
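For what it's worth, a first step is measuring iowait directly on the hypervisors and OSD hosts rather than inside the guests, to see which layer is stalling. Below is a minimal sketch of how iowait can be sampled from /proc/stat on Linux; the field order and interval are standard, but treat this as an illustration, not a polished tool:

```python
# Minimal sketch: sample /proc/stat twice and compute system-wide iowait%.
# Field order on the aggregate "cpu" line of /proc/stat is:
#   user nice system idle iowait irq softirq steal guest guest_nice
import time


def cpu_times():
    """Return the aggregate CPU time counters (in jiffies) from /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]
    return [int(x) for x in fields]


def iowait_percent(interval=1.0):
    """Sample twice, `interval` seconds apart, and return iowait as a percent."""
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    # Index 4 is the iowait counter; guard against a zero-length sample.
    return 100.0 * deltas[4] / total if total else 0.0


if __name__ == "__main__":
    print(f"iowait: {iowait_percent():.1f}%")
```

Running this on each OSD host (or comparing it against `iostat -x 1` per device) should show whether the waits originate on a specific disk, a specific host, or only inside the VMs, which narrows it down to hardware versus network.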

Thx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
