Re: Cache Settings

Hi Nick

It is correct that the ratios are relative to the size directives, target_max_bytes and target_max_objects, whichever is crossed first when both are set. These parameters are specific to each cache pool, so you can create multiple cache pools that all use the same OSDs (same CRUSH rule assignment) but have different settings for size, flushing and eviction. In that case, all cache pool PGs are hosted by the same OSDs and therefore “compete” for space, so the sum of the max size directives across all your cache pools should not exceed the capacity of the OSDs hosting them.
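To make that concrete, here is a rough sketch of what two cache pools sharing the same SSD-backed CRUSH rule could look like (the pool names and the rule id are made up for the example; the documented name of the eviction threshold is cache_target_full_ratio):

  # both cache pools mapped to the same SSD-backed CRUSH rule
  ceph osd pool set hot-a crush_ruleset 1
  ceph osd pool set hot-b crush_ruleset 1

  # per-pool size and flush/evict settings; the sum of the two
  # target_max_bytes values should stay below the usable SSD capacity
  ceph osd pool set hot-a target_max_bytes 10000000000
  ceph osd pool set hot-a cache_target_dirty_ratio 0.4
  ceph osd pool set hot-a cache_target_full_ratio 0.8
  ceph osd pool set hot-a cache_min_flush_age 60
  ceph osd pool set hot-a cache_min_evict_age 120

  ceph osd pool set hot-b target_max_bytes 5000000000
  ceph osd pool set hot-b cache_target_dirty_ratio 0.4
  ceph osd pool set hot-b cache_target_full_ratio 0.8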

See comments inline

Cheers
JC


> On Feb 7, 2015, at 12:23, Nick Fisk <nick@xxxxxxxxxx> wrote:
> 
> Hi All,
> 
> Time for a little Saturday evening Ceph related quiz.
> 
> From this documentation page
> 
> http://ceph.com/docs/master/rados/operations/cache-tiering/
> 
> It seems to indicate that you can either flush/evict using relative sizing
> (cache_target_dirty_ratio) or absolute sizing (target_max_bytes), but that
> the two are separate, mutually exclusive methods, i.e. flush at 80% or when
> you hit the number of bytes specified.
> 
> The same goes for the max_age parameters, i.e. it will flush all objects
> older than 300 seconds no matter how full the pool is.
> 
> However this documentation page
> 
> https://ceph.com/docs/master/dev/cache-pool/
> 
> Seems to indicate that the target_max_bytes is actually the number of bytes
> that the cache_target_dirty_ratio uses to calculate the size of the cache
> pool it has to work with. And that the max_age parameters just make sure
> objects aren't evicted too quickly.
> 
> 1. Maybe I'm reading it wrong, but they appear conflicting to me. Which is
> correct?
> 
> The following questions may be invalid depending on the answer to #1
> 
> 2. Assuming link #1 is correct, is it possible to have multiple cache pools
> on a group of SSDs, and how does Ceph work out the capacity for each pool?
> 3. Assuming #2 is correct, can I also specify the min_age variables without
> overriding target_max_bytes and cache_target_dirty_ratio?
> 
> 
> So assuming link #2 is correct (which makes more sense to me), if I had the
> following configuration
> 
> target_max_bytes = 10,000,000,000
> cache_target_dirty_ratio = .4
> cache_full_dirty_ratio = .8
> cache_min_flush_age = 60
> cache_min_evict_age = 120
> 
> Then are the following assumptions true:-
> 
> 1. I have a cache pool that is 10G total in size, regardless of the actual
> size of the pool

Yes, and remember that the pool does not have an explicit size, only a number of PGs hosted by the OSDs chosen according to your CRUSH rule.

> 2. When the pool has 4G of dirty bytes in it, it will start trying to flush
> them as long as they are older than 60 seconds

Yes, flushing will kick in at 40% usage of your max value.
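With your numbers that is 10,000,000,000 * 0.4 = 4,000,000,000 bytes (roughly 4G) of dirty data, and cache_min_flush_age (60 seconds in your example) keeps very recently written objects from being flushed straight away.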

> 3. When the pool is 8G full it will start evicting all objects that are
> older than 120 seconds, in LRU order

Yes, eviction will kick in at 80% usage of your max value.
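With your numbers that is 10,000,000,000 * 0.8 = 8,000,000,000 bytes (roughly 8G), and cache_min_evict_age (120 seconds in your example) plays the same role for eviction.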

> 4. If I manage to fill the pool up to 10G, Ceph will block until free space
> becomes available from evictions

Yes

> 5. If I had 100G worth of SSD capacity after replication, I could have 10
> of these cache pools (disregarding performance concerns)

Yes
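For the arithmetic: 10 pools * 10,000,000,000 bytes = 100,000,000,000 bytes, which matches your 100G of post-replication SSD capacity, so the sum of the target_max_bytes values just fits.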

> 
> Many Thanks for any answers,
> Nick
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




