Re: Quick comparison of Hammer & Infernalis on NVMe


 



Thanks for sharing!

The results are impressive; it's great to see that writes are finally improving.

I just wonder how much more you could get with rbd_cache=false.
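
Just to illustrate what I mean: this assumes your fio clients go through librbd and pick up the [client] section of ceph.conf (the section and option names are the standard ones, but treat it as a sketch, not your exact config):

[client]
    rbd cache = false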

Also, with such a high load, using jemalloc for fio itself could help too (I have seen around a 20% improvement on the fio client):

LD_PRELOAD=${JEMALLOC_PATH}/lib/libjemalloc.so.1 fio ....
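
For example, something along these lines (the library path, pool, and image name are just placeholders; the rbd ioengine options are fio's standard ones):

LD_PRELOAD=/usr/lib64/libjemalloc.so.1 fio --name=randwrite-4k --ioengine=rbd \
    --clientname=admin --pool=rbd --rbdname=fio_test \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --direct=1 \
    --runtime=300 --time_based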


Regards,

Alexandre
----- Original Message -----
From: "Blinick, Stephen L" <stephen.l.blinick@xxxxxxxxx>
To: "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>, "Sage Weil" <sweil@xxxxxxxxxx>, "Mark Nelson" <mnelson@xxxxxxxxxx>, "Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>
Sent: Wednesday, November 25, 2015 22:52:59
Subject: Quick comparison of Hammer & Infernalis on NVMe

(2nd time w/o HTML formatting) 
As I mentioned in the meeting today, we have a first-pass set of numbers from Infernalis. This is the same hardware, configuration (including ceph.conf), and clients as the previous data from this presentation: http://www.slideshare.net/Inktank_Ceph/accelerating-cassandra-workloads-on-ceph-with-allflash-pcie-ssds 

The comparison data is here: https://www.docdroid.net/X0kJcIp/quick-hammer-vs-infernalis-nvme-comparison.pdf.html 

Reads are a bit slower than we measured before at higher queue depths, but roughly the same up to 1M IOPS. Writes look a lot better! Also, if you look at the long-tail latency numbers (not graphed; in the backup slides), the long-tail latency is much lower for writes and mixed workloads, by 1/3rd to 1/8th. We are now doing analysis and building tools focused on 80th/90th/95th/99th-percentile latency across workloads at various queue depths, so we'll see if we can get any more insight. 
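
(In case it helps anyone reproducing this: fio can report an arbitrary percentile list directly, so a run along the lines below prints the same percentiles in the client-side output. The workload parameters, pool, and image name here are only placeholders.)

fio --name=randrw-70-30 --ioengine=rbd --clientname=admin --pool=rbd --rbdname=fio_test \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=16 --runtime=300 --time_based \
    --clat_percentiles=1 --percentile_list=80:90:95:99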

We did find at some point during the upgrade that SELinux was set to 'enforcing', and originally this gave us a much lower performance measurement. We have some comparison data for that if anyone is interested. 
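
(For reference, checking the mode on a test node and relaxing it temporarily looks like the following; the persistent setting lives in /etc/selinux/config. This is a generic sketch, not the exact steps we used.)

getenforce                            # prints Enforcing / Permissive / Disabled
sudo setenforce 0                     # Permissive until the next reboot
grep ^SELINUX= /etc/selinux/config    # persistent setting, e.g. SELINUX=permissive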

If you're in the U.S., have a great holiday! 

Thanks, 

Stephen 


-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
the body of a message to majordomo@xxxxxxxxxxxxxxx 
More majordomo info at http://vger.kernel.org/majordomo-info.html 


