On Fri, 1 Aug 2014, Kenneth Waegeman wrote:
> > On Thu, 31 Jul 2014, Kenneth Waegeman wrote:
> > > Hi all,
> > >
> > > We have an erasure-coded pool 'ecdata' and a replicated pool 'cache'
> > > acting as a writeback cache on top of it.
> > > When running 'rados -p ecdata bench 1000 write', it starts filling up
> > > the 'cache' pool as expected.
> > > I want to see what happens when it starts evicting, therefore I've done:
> > > ceph osd pool set cache target_max_bytes $((200*1024*1024*1024))
> > >
> > > When it starts to evict the objects to 'ecdata', the cache OSDs all
> > > crash. I logged an issue: http://tracker.ceph.com/issues/8982
> > >
> > > I enabled the cache with these commands:
> > >
> > > ceph osd pool create cache 1024 1024
> > > ceph osd erasure-code-profile set profile11 k=8 m=3
> > > ruleset-failure-domain=osd
> > > ceph osd pool create ecdata 128 128 erasure profile11
> > >
> > > ceph osd tier add ecdata cache
> > > ceph osd tier cache-mode cache writeback
> > > ceph osd tier set-overlay ecdata cache
> > >
> > > Is there something else that I should configure?
> >
> > I think you just need to enable hit_set tracking. It's obviously not
> > supposed to crash when you don't, though; I'll fix that up shortly.
>
> Thanks, it seems to work now! I saw that the values of hit_set_count,
> etc. are set to zero. What does this mean? Is this the default value, or
> does it actually mean '0' (and should they then always be set)?

0 means that hit set tracking is disabled (which means the OSD isn't
paying attention to which objects are being accessed and won't have any
information to inform its flushing/evicting decisions).

Glad to hear that cleared things up!

sage
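
For reference, a minimal sketch of enabling hit set tracking on the cache
pool; hit_set_type, hit_set_count, and hit_set_period are real pool
settings, but the particular values here are only illustrative, not tuned
recommendations:

  ceph osd pool set cache hit_set_type bloom
  ceph osd pool set cache hit_set_count 1
  ceph osd pool set cache hit_set_period 3600

hit_set_type selects the tracking data structure (a bloom filter is the
usual choice), hit_set_count sets how many hit sets to keep around, and
hit_set_period sets how many seconds of access history each hit set
covers; together they give the OSD the access information it uses when
deciding what to flush or evict.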