Re: cephfs cache tiering - hitset

i'm not an expert but here is my understanding of it. a hit_set keeps track of whether or not an object was accessed during the timespan of that hit_set. for example, if you have a hit_set_period of 600, then each hit_set covers a period of 10 minutes. hit_set_count defines how many of these hit_sets to keep a record of; setting it to 12 with the 10-minute hit_set_period means there is a record of objects accessed over a 2-hour window. min_read_recency_for_promote, and its newer min_write_recency_for_promote sibling, define how many of these hit_sets an object must appear in before that object is promoted from the storage pool into the cache pool. if this were set to 6 with the previous examples, the cache tier would promote an object only if it has been accessed at least once in 6 of the 12 10-minute periods. it doesn't matter how many times the object was used within each period, so 6 requests inside a single 10-minute hit_set will not cause a promotion; it takes any number of accesses in 6 separate 10-minute periods over the 2 hours.
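to make the counting rule concrete, here's a tiny python sketch of the behavior described above (this is a simplified model of my understanding, not ceph source code; the names would_promote, hit_sets, and recency are made up for illustration):

```python
def would_promote(obj, hit_sets, recency):
    # count the number of hit_sets (periods) the object appears in;
    # repeated hits within one period count only once, because a
    # hit_set records presence, not frequency
    periods_seen = sum(1 for hs in hit_sets if obj in hs)
    return periods_seen >= recency

# 12 hit_sets of 10 minutes each cover a 2-hour window
hit_sets = [set() for _ in range(12)]

# obj_a: one access in each of 6 different periods -> promoted
for i in range(6):
    hit_sets[i].add("obj_a")

# obj_b: hit 6 times, but all inside a single period -> not promoted
for _ in range(6):
    hit_sets[0].add("obj_b")   # a set records the object once per period

print(would_promote("obj_a", hit_sets, recency=6))  # True
print(would_promote("obj_b", hit_sets, recency=6))  # False
```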

this is just an example and might not fit well for your use case. the systems i run have a lower hit_set_period, higher hit_set_count, and higher recency options. that means the osds use somewhat more memory (each hit_set takes space, but i think they use the same amount of space regardless of period) while each hit_set covers a smaller amount of time. the longer the period, the more likely a given object is in the hit_set. without knowing your access patterns, it would be hard to recommend settings. the overhead of a promotion can be substantial, so i'd probably go with settings that only promote after many requests to an object.
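for reference, these values are all per-pool settings; something in the direction i described would look like the below (the pool name "cachepool" is a placeholder, and the numbers are only an illustration of the shorter-period / higher-count / higher-recency idea, not a recommendation):

```shell
# shorter 5-minute hit_sets, but more of them, still covering 2 hours;
# an object then has to be re-requested across many separate short
# windows before the (expensive) promotion happens
ceph osd pool set cachepool hit_set_type bloom
ceph osd pool set cachepool hit_set_period 300
ceph osd pool set cachepool hit_set_count 24
ceph osd pool set cachepool min_read_recency_for_promote 8
ceph osd pool set cachepool min_write_recency_for_promote 8
```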

one thing to note is that the recency options only seemed to work for me in jewel. i haven't tried infernalis. the older versions of hammer didn't seem to use min_read_recency_for_promote properly, and 0.94.6 definitely had a bug that could corrupt data when min_read_recency_for_promote was more than 1. even though that was fixed in 0.94.7, i was hesitant to increase it while still on hammer. min_write_recency_for_promote wasn't added until after hammer.

hopefully that helps.
mike

On Fri, Mar 17, 2017 at 2:02 PM, Webert de Souza Lima <webert.boss@xxxxxxxxx> wrote:
Hello everyone,

I'm deploying a ceph cluster with cephfs and I'd like to tune ceph cache tiering, but I'm
a little bit confused by the settings hit_set_count, hit_set_period and min_read_recency_for_promote. The docs are very lean and I can't find any more detailed explanation anywhere.

Could someone give me a better understanding of these?

Thanks in advance!

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

