Re: Ceph performance laggy (requests blocked > 32) on OpenStack

If I use slow HDDs, I can get the same outcome. Placing journals on fast SAS or NVMe SSDs will make a difference; SATA SSDs are much slower. Instead of guessing why Ceph is lagging, have you looked at ceph -w, iostat, and vmstat during your tests? iostat will show you HDD and SSD statistics (I use iostat -tzxm 5 to show only active disks). If the journals are on dedicated LUNs, look at %util and service times. In vmstat, check the 'b' column, which shows how many processes are blocked waiting on I/O.
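As a rough sketch of the kind of check described above: the snippet below flags devices whose %util is pegged in iostat's extended output. The sample report and the 90% threshold are illustrative assumptions, and since column order can vary between sysstat versions, the %util column is located from the header rather than hard-coded.

```python
# Sketch: flag disks that look saturated in `iostat -tzxm` extended output.
# SAMPLE is made-up illustrative data, not output from the poster's cluster.

SAMPLE = """\
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 1.20 85.0 40.0 10.5 4.2 240.0 8.5 65.0 7.8 97.5
nvme0n1 0.00 0.00 10.0 900.0 0.1 30.0 68.0 0.4 0.4 0.1 12.0
"""

def busy_devices(report, util_threshold=90.0):
    """Return device names whose %util exceeds the threshold."""
    lines = report.strip().splitlines()
    header = lines[0].split()
    util_idx = header.index("%util")  # locate the column instead of assuming its position
    busy = []
    for line in lines[1:]:
        fields = line.split()
        if float(fields[util_idx]) > util_threshold:
            busy.append(fields[0])
    return busy

print(busy_devices(SAMPLE))  # → ['sda']  (the spinner is pegged, the journal SSD is not)
```

A disk sitting near 100% util with high await times while the journal SSD is idle points at the HDDs, not the journals, as the bottleneck.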

On Nov 25, 2016, at 8:48 AM, Kevin Olbrich <ko@xxxxxxx> wrote:

Hi,

we are running 80 VMs using KVM in OpenStack via RBD in Ceph Jewel on a total of 53 disks (RAID parity already excluded).
Our nodes are using Intel P3700 DC-SSDs for journaling.

Most VMs are Linux-based and load is low to medium. There are also about 10 VMs running Windows 2012R2, two of which run remote (terminal) services.

My question is: are 80 VMs hosted on 53 disks (mostly 7.2k SATA) too much? We sometimes experience lags where nearly all servers suffer from "requests blocked > 32 seconds".
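A back-of-the-envelope way to reason about the question above, assuming typical figures not given in the mail (~75 random IOPS per 7.2k SATA disk and the default 3x replication, where each client write costs roughly three backend writes):

```python
# Rough IOPS budget for 80 VMs on 53 spinning disks.
# IOPS_PER_DISK and REPLICATION are assumptions, not figures from the thread.

DISKS = 53
VMS = 80
IOPS_PER_DISK = 75   # typical for a 7.2k SATA drive
REPLICATION = 3      # Ceph default pool size

raw_iops = DISKS * IOPS_PER_DISK            # total backend random IOPS
client_write_iops = raw_iops / REPLICATION  # effective client-facing write IOPS
per_vm = client_write_iops / VMS            # write IOPS available per VM

print(round(per_vm, 1))  # → 16.6
```

Around 16 sustained write IOPS per VM is very little headroom; a handful of busy guests can exhaust it and push requests into the blocked state, which is consistent with the symptoms described.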

What are your experiences?

Mit freundlichen Grüßen / best regards,
Kevin Olbrich.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Rick Stehno

