cache tier write-back upper bound?

Hi, I'm wondering: when using a cache pool tier, is there an upper bound on how long after something is written to the cache it gets flushed back to the backing pool? Something like a cache_max_flush_age setting? Basically, if I hit the unfortunate case where all of the SSD replicas for a cache-pool object go at once, how far behind the latest data could the backing-pool object be?

Also, am I reading things correctly that if you wanted to turn write-back mode into something close to write-through (though not exactly), you'd do something like the following?

# ceph osd pool set cachepool cache_target_dirty_ratio 0.00
# ceph osd pool set cachepool cache_min_flush_age 0
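(For anyone following along, the settings can be read back with the `get` counterpart of the `set` commands above; `cachepool` is just the example pool name here:)

```shell
# Verify the cache-tier flush settings took effect on the cache pool
ceph osd pool get cachepool cache_target_dirty_ratio
ceph osd pool get cachepool cache_min_flush_age
```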

That should still ack the client as soon as the replicas are confirmed at the cachepool layer, but then immediately let the background flusher start writing the updates to the backing pool, all while leaving the object available for further client updates, correct? Or does the background flusher need to lock the object while it writes it to the backing pool, stalling further client updates to the object until that completes?

I'm guessing that setting cache_target_dirty_ratio to 0 and cache_min_flush_age to N still wouldn't quite implement a cache_max_flush_age: if the object is continually getting updated, that age timer is continually getting reset, so the object never becomes a candidate to be flushed to the backing store, right?
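In the absence of a real cache_max_flush_age, one workaround I can think of (an untested sketch, not a recommendation) would be to force flushes periodically from cron using the rados cache-flush operations, which write a dirty object back to the backing pool without evicting it; the pool name and the brute-force object listing here are purely illustrative:

```shell
# Untested sketch: periodically push dirty objects in the cache tier back
# to the backing pool. cache-try-flush is the non-blocking variant and
# skips objects that are currently busy, so it shouldn't stall client I/O.
for obj in $(rados -p cachepool ls); do
    rados -p cachepool cache-try-flush "$obj"
done
```

Listing every object on each pass obviously doesn't scale to large pools, but run from cron every M minutes it would bound how stale the backing pool can get at roughly M minutes, independent of the per-object timers.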

Thanks,
Brian
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


