All SSD Pool - Odd Performance

Hi,

I have a performance question for anyone running an SSD only pool. Let me detail the setup first.

12 x Dell PowerEdge R630 (2 x Xeon E5-2620 v3, 64GB RAM)
8 x Intel DC S3710 800GB
Dual-port Solarflare 10Gb/s NIC (one front, one back)
Ceph 0.94.5
Ubuntu 14.04 (3.13.0-68-generic)

The above is in one pool that is used for QEMU guests. A 4k FIO test against the SSD directly yields around 55k IOPS, but the same test inside a QEMU guest seems to hit a limit of around 4k IOPS. If I deploy multiple guests, they can all reach 4k IOPS simultaneously.
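For reference, a 4k random-read test of this kind can be expressed as an fio job along these lines. This is only a sketch: the device path, queue depth, and read/write mix are assumptions, since the post does not give the exact fio options used.

```ini
; Hypothetical 4k random-read job; filename and iodepth are assumptions,
; not taken from the original post.
[global]
ioengine=libaio
direct=1
bs=4k
rw=randread
runtime=60
time_based=1

[ssd-test]
filename=/dev/sdb   ; replace with the SSD (or guest block device) under test
iodepth=32
numjobs=1
```

One thing worth noting when comparing results: with a low iodepth, per-client IOPS are bounded by roughly one operation per round-trip latency per outstanding request, so a hard per-guest ceiling like the 4k IOPS described can reflect client-side queue depth and network/RBD round-trip latency rather than an OSD-side bottleneck.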

I don't see any evidence of a bottleneck on the OSD hosts. Is this limit inside the guest expected, or am I just not looking deep enough yet?

Thanks
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
