Hi,

I use a cache tier on SSDs in front of a data pool on HDDs, but I don't understand the logic behind the flushing of the cache.

When I start writing data to the pool, it all ends up in the cache pool at first. So far so good; this is what I expected. However, Ceph never actually starts flushing the objects to the data pool, which is not what I expected. Why not use some of the idle time now and then to flush some data?

If I keep writing, the cache pool eventually fills up completely (to target_max_bytes or target_max_objects, at least), and only then does Ceph finally start evicting objects to make room for the new writes. But this eviction is expensive, because the data hasn't been flushed yet -- every eviction is really a flush+evict. At that point the write performance is no better than without a cache pool, because it is limited to the speed at which objects can be flushed and evicted to the HDD pool. So why doesn't Ceph flush when it has the time?

I noticed I can tune settings like cache_min_flush_age and cache_min_evict_age, but there is no -max- age, only -min-. Again I don't understand: why would I want to force Ceph *not* to flush objects that are "too young"? I can only imagine that under heavy write load the cache pool might run out of objects old enough to be allowed to flush. In any case, these settings don't seem to matter, since Ceph doesn't appear to flush at all until it really has to.

By the way, my cache_target_dirty_ratio and cache_target_full_ratio are set to the default values. Maybe those need tuning?

Any insight you could provide would be appreciated.

Thanks,
Erik
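P.S. In case it helps, these are the kinds of commands I've been using to inspect and set the values mentioned above. The pool name "hot-cache" is just a placeholder for my cache pool, and the target_max_bytes value is only an example:

    # show the current cache-tier settings on the cache pool
    ceph osd pool get hot-cache target_max_bytes
    ceph osd pool get hot-cache cache_target_dirty_ratio
    ceph osd pool get hot-cache cache_target_full_ratio

    # what I have now (the ratios are at their defaults, 0.4 and 0.8)
    ceph osd pool set hot-cache target_max_bytes 1099511627776   # 1 TiB, example value
    ceph osd pool set hot-cache cache_target_dirty_ratio 0.4
    ceph osd pool set hot-cache cache_target_full_ratio 0.8

    # the -min- age knobs I mentioned, in seconds
    ceph osd pool set hot-cache cache_min_flush_age 600
    ceph osd pool set hot-cache cache_min_evict_age 1800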