Cache Settings

Hi All,

Time for a little Saturday evening Ceph-related quiz.

From this documentation page:

http://ceph.com/docs/master/rados/operations/cache-tiering/

It seems to indicate that you can flush/evict using either relative sizing
(cache_target_dirty_ratio) or absolute sizing (target_max_bytes), but that the
two are separate methods and are mutually exclusive, i.e. flush at 80% or when
you hit the specified number of bytes.

The same goes for the max_age parameters, i.e. it will flush all objects older
than 300 seconds no matter how full the pool is.
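
To be explicit about how I'm reading that first page, here's a minimal Python
sketch of the interpretation. This is only my reading, not actual Ceph agent
code; the function name, pool_capacity_bytes and max_age_s are made up purely
for illustration:

# My reading of link #1: the ratio, the absolute byte count and the age
# threshold are three independent triggers, not values derived from each other.
def should_flush(dirty_bytes, used_bytes, oldest_dirty_age_s,
                 pool_capacity_bytes,
                 cache_target_dirty_ratio=0.8,
                 target_max_bytes=None,
                 max_age_s=300):
    # Relative trigger: dirty data exceeds a fraction of the pool's own capacity.
    if dirty_bytes > cache_target_dirty_ratio * pool_capacity_bytes:
        return True
    # Absolute trigger: a fixed byte count, unrelated to the ratio above.
    if target_max_bytes is not None and used_bytes > target_max_bytes:
        return True
    # Age trigger: anything older than max_age gets flushed, however empty the pool is.
    if oldest_dirty_age_s > max_age_s:
        return True
    return False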

However, this documentation page:

https://ceph.com/docs/master/dev/cache-pool/

It seems to indicate that target_max_bytes is actually the number of bytes that
cache_target_dirty_ratio uses to calculate the size of the cache pool it has to
work with, and that the max_age parameters just make sure objects aren't
evicted too quickly.

1. Maybe I'm reading it wrong, but the two pages appear to conflict. Which is
correct?

The following questions may be invalid depending on the answer to #1.

2. Assuming link #1 is correct, is it possible to have multiple cache pools on
a group of SSDs, and how does Ceph work out the capacity available to each pool?
3. Assuming #2 is correct, can I also specify the min_age variables without
overriding target_max_bytes and cache_target_dirty_ratio?


So assuming link #2 is correct (which makes more sense to me), if I had the
following configuration:

target_max_bytes = 10,000,000,000
cache_target_dirty_ratio = .4
cache_target_full_ratio = .8
cache_min_flush_age = 60
cache_min_evict_age = 120

Then are the following assumptions true (a rough sketch of how I read this
follows the list):

1. I have a cache pool that is effectively 10G in size, regardless of the
actual capacity of the pool
2. When the pool has 4G of dirty bytes in it, it will start trying to flush
them as long as they are older than 60 seconds
3. When the pool is 8G full, it will start evicting objects that are older
than 120 seconds, in LRU order
4. If I manage to fill the pool up to 10G, Ceph will block writes until free
space becomes available through evictions
5. If I had 100G worth of SSD capacity after replication, I could have 10 of
these cache pools (disregarding performance concerns)
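
To make that concrete with the numbers above, here is the arithmetic I'm
assuming, sketched in Python. The variable names and the helper function are
mine, not Ceph's, and the thresholds are only what I *think* the tiering agent
would use under interpretation #2:

target_max_bytes = 10_000_000_000        # assumption 1: effective pool size of 10G
cache_target_dirty_ratio = 0.4
cache_target_full_ratio = 0.8
cache_min_flush_age = 60                 # seconds
cache_min_evict_age = 120                # seconds

flush_threshold = cache_target_dirty_ratio * target_max_bytes   # 4G, assumption 2
evict_threshold = cache_target_full_ratio * target_max_bytes    # 8G, assumption 3

def crossed_thresholds(dirty_bytes, used_bytes):
    """Which thresholds a pool in this state has crossed, per my reading of link #2."""
    actions = []
    if dirty_bytes > flush_threshold:
        actions.append("flush dirty objects older than cache_min_flush_age (assumption 2)")
    if used_bytes > evict_threshold:
        actions.append("evict objects older than cache_min_evict_age, LRU first (assumption 3)")
    if used_bytes >= target_max_bytes:
        actions.append("block client writes until evictions free space (assumption 4)")
    return actions

# e.g. a pool holding 8.5G of data, 4.5G of which is dirty:
print(crossed_thresholds(dirty_bytes=4_500_000_000, used_bytes=8_500_000_000))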

Many thanks for any answers,
Nick
