Re: Slow rbd reads (fast writes) with luminous + bluestore

On Mon, Aug 13, 2018 at 10:44 AM Emmanuel Lacour <elacour@xxxxxxxxxxxxxxx> wrote:
On 13/08/2018 at 16:29, Jason Dillaman wrote:


For such a small benchmark (2 GiB), I wouldn't be surprised if you are just seeing the Filestore-backed OSDs serving the reads from the page cache, whereas the Bluestore-backed OSDs need to actually hit the disk. Are the two clusters similar in terms of the number of HDD-backed OSDs?


I looked at iostat on both clusters while running fio, and yes: on the new cluster I see disk reads, but on the old cluster everything comes from the page cache.

So is there a way to simulate the page cache for bluestore, or on the rbd side?

See [1] for ways to tweak the bluestore cache sizes. I believe that by default, bluestore will not cache any object data but will instead only attempt to cache its key/value store and metadata. In general, however, tuning bluestore to cache data here would just be optimizing for the benchmark instead of for actual workloads. Personally, I think it would be more worthwhile to run 'fio --ioengine=rbd' directly against a pre-initialized image after you have dropped the page cache on the OSD nodes.
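To make that concrete, a rough sketch of both approaches follows. The option names come from the bluestore config reference linked below; the values, pool name ('rbd'), and image name ('testimg') are placeholders, not recommendations, and some cache options may only take effect after an OSD restart rather than via injectargs:

```shell
# 1) (Optional) let bluestore cache object data, not just KV/metadata.
#    Illustrative values only -- tune for your hardware and memory budget.
ceph tell osd.* injectargs \
    '--bluestore_cache_size_hdd=3221225472 --bluestore_cache_kv_ratio=0.4 --bluestore_cache_meta_ratio=0.3'

# 2) Or benchmark without page-cache effects: drop the page cache on
#    each OSD node, then read from a pre-initialized (fully written)
#    image using fio's rbd engine.
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # run on each OSD node

fio --name=rbd-read --ioengine=rbd --clientname=admin \
    --pool=rbd --rbdname=testimg --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based
```

Pre-initializing the image (e.g. with a full sequential write pass) matters because reads from never-written RBD extents return zeroes without touching the disks, which would inflate the read numbers.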

[1] http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/

--
Jason
