cache tier on rgw index pool

Hi,

I am evaluating setting up a cache tier for the rgw index pool and have
a few questions about it. The rgw index pool is unusual in that it
stores its data entirely in leveldb, as omap. On a 1 PB cluster with
500 million objects running ceph 0.94.2, 'rados df' reports the size in
KB of the existing index pool as 0.
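For context, here is roughly how the discrepancy shows up on our
cluster (the pool name and the bucket index object name below are
illustrative and will differ per setup):

    # per-pool stats report the index pool size in KB as 0
    rados df

    # yet listing the omap keys on an individual bucket index object
    # returns that bucket's entries
    rados -p .rgw.buckets.index listomapkeys .dir.default.12345.1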

I am seeking clarification on the following points:

1. How are cache tier parameters such as target_max_bytes,
cache_target_dirty_ratio and cache_target_full_ratio honoured, given
that the size of the index pool is reported as 0, and how do flush and
eviction work in this case? Is there a specific reason why omap data is
not reflected in the pool size, as Sage mentions here [1]? (A sketch of
the tiering setup I have in mind follows the questions below.)

2. I found a mail archive on ceph-devel where Greg mentions that
"there's no cross-OSD LevelDB replication or communication" [2]. In
that case, how does Ceph handle re-balancing of leveldb instance data
in the event of a node failure?

3. Are there any surprises to expect when deploying a cache tier in
front of the rgw index pool?
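For reference, this is roughly the tiering setup I am evaluating. It is
a minimal sketch assuming a dedicated cache pool named index-cache; the
pool names, PG count and threshold values are illustrative, not
recommendations:

    # create the cache pool and attach it in front of the index pool
    ceph osd pool create index-cache 64 64
    ceph osd tier add .rgw.buckets.index index-cache
    ceph osd tier cache-mode index-cache writeback
    ceph osd tier set-overlay .rgw.buckets.index index-cache

    # hit set tracking is required for a writeback cache tier
    ceph osd pool set index-cache hit_set_type bloom

    # the flush/eviction thresholds from question (1) -- it is unclear
    # how these apply when the base pool's data lives entirely in omap
    ceph osd pool set index-cache target_max_bytes 100000000000
    ceph osd pool set index-cache cache_target_dirty_ratio 0.4
    ceph osd pool set index-cache cache_target_full_ratio 0.8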

[1] http://www.spinics.net/lists/ceph-devel/msg28635.html
[2] http://www.spinics.net/lists/ceph-devel/msg24990.html

Thanks
Abhishek Varshney