Ongoing Bluestore Performance Testing Results

Hi Guys,

Now that folks are starting to dig into bluestore with the Jewel release, I wanted to share some of our ongoing performance test data. These results are from 10.1.0, so almost, but not quite, Jewel. Generally bluestore is looking very good on HDDs, but there are a couple of strange things to watch out for, especially with NVMe devices. Mainly:

1) In HDD+NVMe configurations, performance increases dramatically when the stock CentOS 7 kernel is replaced with kernel 4.5.1.

2) In NVMe-only configurations, performance is often lower at mid-sized IOs. Kernel 4.5.1 doesn't really help here; in fact, it seems to amplify both the cases where bluestore is faster and the cases where it is slower.

3) Medium-sized sequential reads are where bluestore most consistently tends to be slower than filestore. It's not clear yet whether this is simply due to bluestore not doing readahead at the OSD (i.e. being entirely dependent on client readahead) or whether something else is going on as well.
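
For anyone who wants to poke at the readahead angle in 3), the relevant client-side knobs are just the normal librbd/krbd ones, nothing bluestore-specific. A rough sketch (the values here are examples only, not what we tested with):

   # ceph.conf on the client -- librbd readahead (example values only)
   [client]
       rbd readahead max bytes = 4194304
       rbd readahead trigger requests = 10
       rbd readahead disable after bytes = 0

   # krbd readahead is handled by the block layer instead, e.g.:
   blockdev --setra 4096 /dev/rbd0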

I wanted to post this so other folks have some idea of what to look for as they do their own bluestore testing. The data is shown as percentage differences vs filestore, but I can also release the raw throughput values if people are interested in those as well.

https://drive.google.com/file/d/0B2gTBZrkrnpZOTVQNkV0M2tIWkk/view?usp=sharing
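
If anyone wants to reproduce a data point themselves, a sequential read test against an RBD image with fio's rbd engine is roughly the right shape. This is only a sketch -- the pool/image names and sizes are made up, and it assumes fio was built with rbd support:

   # 128k sequential reads from an existing RBD image (names/values are examples)
   fio --name=seqread-128k --ioengine=rbd --clientname=admin \
       --pool=rbd --rbdname=fio-test --rw=read --bs=128k \
       --iodepth=16 --numjobs=1 --runtime=300 --time_based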

Thanks!
Mark
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


