Re: Improvement on write performance is more than 100% with the bluefs_buffered_io from false to true in ceph version 11.2.0 and HDD disks

On Thu, 16 Feb 2017, Ruiz Alonso, Jaime Jesus (Nokia - ES) wrote:
> 
> Thanks a lot for your help,
> 
> We left the test running with this change (bluefs_buffered_io set to true)
> and observed a radical increase in performance.
> 
> Effects of the change on our write-only test:
> 
> - No reads from disk (before the change, they were in the 500 r/s range)
> - Minimum write bandwidth rose from 12 MBps to 24 MBps
> 
> Attached Sheet "Test1-2_Kraken_11.2_iotrue" in the xls.
> 
> See attached pictures.
> 
> Best Regards,
> Jaime
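
[Editor's note: for reference, the option under test is normally set in
ceph.conf. A minimal fragment, assuming it is applied in the [osd]
section; the option name is as given in the subject line, and it defaults
to false in 11.2.0:]

```ini
[osd]
# Route BlueFS reads/writes through the kernel page cache instead of
# direct I/O (default: false in Kraken 11.2.0)
bluefs_buffered_io = true
```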

Thanks for testing!  This suggests to me that even a simple LRU write 
buffer cache in bluefs to catch recently compacted SSTs would be enough to 
prime the rocksdb kv cache.  Either that, or we need to do the same in 
rocksdb itself, but upstream didn't seem very interested in doing that for 
the general case.  Our situation is a bit atypical in that we'd prefer to 
eliminate the dependence on the OS page cache so that we can work with 
SPDK.

sage
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


