Re: Cache tiers flushing logic

On Tue, Dec 30, 2014 at 7:56 AM, Erik Logtenberg <erik@xxxxxxxxxxxxx> wrote:
>
> Hi,
>
> I use a cache tier on SSDs in front of a data pool on HDDs.
>
> I don't understand the logic behind the flushing of the cache, however.
> If I start writing data to the pool, it all ends up in the cache pool at
> first. So far so good; this is what I expected. However, Ceph never
> actually starts flushing the objects to the data pool, which is not what
> I expected. Why not use some of the idle time now and then to flush some
> data?
>
> When I continue to write data, the cache pool eventually fills up
> completely (up to target_max_bytes or target_max_objects at least), and
> only then does it start evicting objects to make room for new writes.
> But this eviction is expensive, because the data hasn't been flushed yet
> -- so it's a flush+evict. At that point the write performance is no
> better than without a cache pool, because it is limited to the speed at
> which objects can be flushed and evicted to the HDD pool.
>
> So why doesn't Ceph flush stuff when it has the time?
>
> I noticed I can tune some settings like cache_min_flush_age and
> cache_min_evict_age, but there is no -max- age, just -min-. Again, I
> don't understand: why would I want to force Ceph *not* to flush objects
> that are "too young"? If the cache pool is under heavy write load, I can
> only imagine that Ceph might run out of objects old enough to be allowed
> to flush. Anyway, these settings don't seem to matter, since Ceph
> doesn't flush at all until it really has to.
>
> By the way, my cache_target_dirty_ratio and cache_target_full_ratio
> settings are at their default values. Maybe those need tuning?
>
> Any insight you could provide would be appreciated.
>
> Thanks,
>
> Erik.

Hi Erik,

I have tiering working on a couple of test clusters. It seems to be
working with Ceph v0.90 when I set:

ceph osd pool set POOL hit_set_type bloom
ceph osd pool set POOL hit_set_count 1
ceph osd pool set POOL hit_set_period 3600
ceph osd pool set POOL cache_target_dirty_ratio .5
ceph osd pool set POOL cache_target_full_ratio .9
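
One thing worth checking: as far as I understand it, the dirty/full
ratios are computed relative to target_max_bytes (or target_max_objects),
so the tiering agent stays idle until at least one of those limits is
set on the cache pool. A minimal sketch, assuming a 100 GiB cache pool
named POOL (substitute your own size and pool name):

ceph osd pool set POOL target_max_bytes 107374182400   # 100 GiB cache
# with cache_target_dirty_ratio .5 the agent should start flushing in
# the background once roughly 50 GiB of the cache is dirty, and with
# cache_target_full_ratio .9 it should start evicting near 90 GiB used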

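To see whether flushing is actually happening, you can watch the DIRTY
column that "ceph df detail" reports for the cache pool, or (if your
rados build has it) drain the cache by hand:

ceph df detail                        # DIRTY objects per cache pool
rados -p POOL cache-flush-evict-all   # manually flush and evict everything
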
Eric