Re: Slow rbd reads (fast writes) with luminous + bluestore

On 13/08/2018 at 16:58, Jason Dillaman wrote:
>
> See [1] for ways to tweak the bluestore cache sizes. I believe that by
> default, bluestore will not cache any data but instead will only
> attempt to cache its key/value store and metadata.

I assume so too, since the default ratio favors caching as much
key/value data as possible (up to 512 MB) and the HDD cache defaults to
1 GB.

I tried increasing the HDD cache to 4 GB and it does appear to be used:
the 4 OSD processes now use about 20 GB of memory.
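In case it helps someone else, a sketch of the change I made in ceph.conf
(the option names are the luminous bluestore ones; the sizes are what I
used and should be adapted to available RAM):

```
[osd]
# total bluestore cache per OSD backed by an HDD (default 1 GiB)
bluestore_cache_size_hdd = 4294967296
# cap on how much of the cache may hold rocksdb key/value data (default 512 MiB)
bluestore_cache_kv_max = 536870912
```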

> In general, however, I would think that attempting to have bluestore
> cache data is just an attempt to optimize to the test instead of
> actual workloads. Personally, I think it would be more worthwhile to
> just run 'fio --ioengine=rbd' directly against a pre-initialized image
> after you have dropped the cache on the OSD nodes.
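For reference, Jason's suggestion could look roughly like this (a sketch;
the pool, image name and client name are assumptions to adapt to your
cluster):

```shell
# on each OSD node: flush and drop the kernel page cache
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

# on the client: read directly through librbd, bypassing any VM page cache
fio --name=rbd-read --ioengine=rbd --clientname=admin \
    --pool=rbd --rbdname=fio-test \
    --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based
```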

So with bluestore, I assume we need to rely more on the client-side page
cache (at least when using a VM), whereas with the old filestore both
the OSD page cache and the client cache were used.
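On that note, for VM clients the librbd cache can be enabled in the
client-side ceph.conf; a sketch with the default-ish values (adjust to
taste):

```
[client]
rbd cache = true
rbd cache size = 33554432            # 32 MiB per image (default)
rbd cache writethrough until flush = true
```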
 
As for benchmarking, I ran a real benchmark here against the expected
application workload of this new cluster, and the results are fine for us :)


Thanks for your help Jason.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



