> -----Original Message-----
> From: Robert LeBlanc [mailto:robert@xxxxxxxxxxxxx]
> Sent: Monday, September 21, 2015 12:21 PM
> To: Wang, Zhiqiang
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: Clarification of Cache settings
>
> On Sun, Sep 20, 2015 at 9:49 PM, Wang, Zhiqiang wrote:
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> >> Of Robert LeBlanc
> >> Sent: Saturday, September 19, 2015 12:18 PM
> >> To: ceph-users@xxxxxxxxxxxxxx
> >> Subject: Clarification of Cache settings
> >>
> >> There was some discussion where limiting promotions to only 10%
> >> increased the performance of the cache tier (sorry, I can't find that
> >> discussion at the moment to reference).
> >
> > Probably here:
> > http://tracker.ceph.com/projects/ceph/wiki/Rados_cache_tier_promotion_queue_and_throttling
>
> Yep, that sounds like it.
>
> >> I've been reading through
> >> http://ceph.com/docs/master/rados/operations/cache-tiering/#configuring-a-cache-tier
> >> trying to figure out how to configure this type of promotion and try
> >> different values. I've reviewed the concepts of Bloom filters, so
> >> here are my questions:
> >>
> >> 1. Is a hit set an individual Bloom filter? Or does the Bloom filter
> >> keep track of the objects in the cache tier?
> >
> > Yes, a hit set is a Bloom filter if you set its type to bloom, and it
> > keeps track of object accesses in the cache tier.
> >
> >> 2. If each hit set is a Bloom filter... It seems limiting the rate of
> >> promotion could be configured by setting
> >> min_{read,write}_recency_for_promote > 1 (the object would need to be
> >> in more than one hit set, where each hit set covers 3,600 seconds).
> >> But the documentation specifies "Currently there is minimal benefit
> >> for hit_set_count > 1 since the agent does not yet act intelligently
> >> on that information." My assumption would be to set
> >> min_{read,write}_recency_for_promote = 4, hit_set_count = 15, and
> >> hit_set_period = 300. This would require an object to be accessed in
> >> at least 4 different 5-minute intervals within the last 75 minutes
> >> (15 hit sets of 300 seconds each) to be promoted. Is this how these
> >> values are intended to be used? Does hit_set_count > 1 still not do
> >> anything?
> >
> > The sentence "Currently there is minimal benefit for hit_set_count > 1
> > since the agent does not yet act intelligently on that information."
> > is no longer valid and has been removed from the documentation. The
> > current meaning of min_{read,write}_recency_for_promote also differs
> > from your understanding: currently, setting it to 1 makes promotion
> > the most difficult. Check the latest documentation on the master
> > branch. We've had some discussions about changing its semantics to
> > match your understanding, since that seems to make more sense, but
> > this hasn't been implemented yet.
>
> I just did a git pull and checked out the master branch. The file
> /doc/rados/operations/cache-tiering.rst is the same as the website.
> Have the changes been merged yet? Which version of Ceph is able to take
> advantage of multiple hit sets? If I'm just git challenged, can you
> provide the text here for me?

It has been removed in http://docs.ceph.com/docs/master/dev/cache-pool/,
but it seems I forgot to remove it from
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/.

> >> 3. I understand that there was some discussion about changing the
> >> tracking for promotion. Will the new method be available in Jewel?
> >> Is the current approach still being developed?
> >
> > I had an implementation of the promotion queue and throttling in
> > https://github.com/ceph/ceph/pull/5486.
>
> I'm excited about improvements in the cache tier as experience is gained.
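For reference, the cache-tier parameters discussed above are set per pool with `ceph osd pool set`. Below is a sketch of the configuration described in question 2; the pool name `cachepool` is a placeholder, and, per the answer above, the current recency semantics may not throttle promotion in the way the question assumes:

```shell
# Placeholder cache pool name -- substitute your own.
POOL=cachepool

# Track object accesses with Bloom-filter hit sets.
ceph osd pool set $POOL hit_set_type bloom

# Keep 15 hit sets of 300 seconds each (~75 minutes of access history).
ceph osd pool set $POOL hit_set_count 15
ceph osd pool set $POOL hit_set_period 300

# Require recency of 4 before promoting on reads/writes.
# Note: the current semantics differ from "present in at least 4 of the
# last 15 hit sets" -- see the discussion above.
ceph osd pool set $POOL min_read_recency_for_promote 4
ceph osd pool set $POOL min_write_recency_for_promote 4
```

These commands apply to the cache pool (not the backing pool) and take effect without restarting OSDs.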
>
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com