Cache tier hit_set values

Hi everyone.

I added a lot more storage to our cluster, and we now have a lot of slower
hard drives that could hold archival data. So I thought setting up a cache
tier on the fast drives would be a good idea.
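
For reference, a tier like this is attached roughly as follows; I'm using
cephfs_data as the base pool name here just for illustration:

---------------------------------------------
# Attach the cache pool to the backing pool (base pool name is an assumption)
ceph osd tier add cephfs_data cephfs_data_cache
ceph osd tier cache-mode cephfs_data_cache writeback
# Route client traffic through the cache tier
ceph osd tier set-overlay cephfs_data cephfs_data_cache
---------------------------------------------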

We want to retain data in the cache pool for about a week, since the data
could be interesting for at least that long; after that it will probably
rarely be accessed.

You can see my config below.

Would you change any of these values?
Do you have any better suggestions?

This is my interpretation:
Data is kept for at least 30 minutes before it is flushed back to the slower drives.
Data is retained for seven days before it is evicted from the cache.
Every object is promoted to the cache on its first access.

Now only the hit_set values are left, and I'm not sure about the right
strategy here. I've read the SUSE documentation where they explain it:
https://documentation.suse.com/ses/6/html/ses-all/cha-ceph-tiered.html#ses-tiered-hitset

My understanding from that documentation is that I should change
hit_set_count to 42, so that the hit sets cover the entire duration I want
to keep data in the cache pool: with hit_set_period at 14400 seconds
(4 hours), 42 hit sets span 42 x 14400 = 604800 seconds, i.e. the same
seven days as cache_min_evict_age. Is this correct?

We have plenty of disk headroom, and the hit sets should be pretty small in
this context, so I don't see a reason not to keep 42 of them.
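
If that reasoning holds, the only change to the config below would be:

---------------------------------------------
# 42 hit sets x 14400 s = 604800 s = 7 days, matching cache_min_evict_age
ceph osd pool set cephfs_data_cache hit_set_count 42
---------------------------------------------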

---------------------------------------------
# Don't flush objects younger than 30 minutes back to the base pool
ceph osd pool set cephfs_data_cache cache_min_flush_age 1800
# Don't evict objects younger than 7 days from the cache
ceph osd pool set cephfs_data_cache cache_min_evict_age 604800

# 12 hit sets of 14400 s (4 h) each, i.e. 2 days of access history
ceph osd pool set cephfs_data_cache hit_set_count 12
ceph osd pool set cephfs_data_cache hit_set_period 14400
# Bloom filter false positive probability
ceph osd pool set cephfs_data_cache hit_set_fpp 0.01
# Promote objects to the cache on their first write/read
ceph osd pool set cephfs_data_cache min_write_recency_for_promote 0
ceph osd pool set cephfs_data_cache min_read_recency_for_promote 0
---------------------------------------------
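
The values can be read back to verify them like this (note that hit_set_fpp
only takes effect when hit_set_type is bloom):

---------------------------------------------
# List every parameter on the cache pool to double-check the settings
ceph osd pool get cephfs_data_cache all
# hit_set_fpp presumes a bloom-type hit set
ceph osd pool get cephfs_data_cache hit_set_type
---------------------------------------------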

I appreciate any help you can provide.

Best regards

Daniel