Cache pool with replicated pool doesn't work properly.

Hi All.

My name is John Haan.

I've been testing the cache pool feature on the Jewel release, running on Ubuntu 16.04.

I set up two types of cache tier.

The first is a cache pool in front of an erasure-coded pool; the second is a cache pool in front of a replicated pool.

Both cache tiers use writeback cache mode.
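
(For reference, a tier like this is attached with the standard tier commands; a rough sketch using the pool names from the osd dump below, so treat the exact invocations as illustrative:)

ceph osd tier add r-pool c-pool
ceph osd tier cache-mode c-pool writeback
ceph osd tier set-overlay r-pool c-pool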

I used vdbench and rados bench as test tools.
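
(The rados bench runs looked roughly like the following; the duration and options here are illustrative, not the exact ones I used:)

rados bench -p r-pool 60 write --no-cleanup
rados bench -p r-pool 60 rand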

Here is what I observed:

1. cache pool + erasure pool
  (1) write operation: objects are saved to the cache pool.
  (2) read operation: objects are loaded from the cache pool.

2. cache pool + replicated pool
  (1) write operation: objects are saved to the replicated pool.
  (2) read operation: objects are loaded from the cache pool.


In other words, with the cache pool + replicated pool combination, written objects end up in the replicated base pool rather than in the cache pool, even though the erasure-coded setup caches them as expected.
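
(I am judging this from the per-pool object counts; they can be watched while the benchmark runs with, for example:)

watch -n 5 'rados df'

See also the "ceph df detail" output in section 3 below.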

That seems very strange to me.

Could anybody explain this?

Below is my environment.

=======================================================
1. ceph osd dump | grep pool
pool 23 'erasure-pool' erasure size 13 min_size 10 crush_ruleset 1 object_hash rjenkins pg_num 512 pgp_num 512 last_change 630 lfor 577 flags hashpspool tiers 24 read_tier 24 write_tier 24 stripe_width 4160
pool 24 'cache-pool' replicated size 3 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 512 pgp_num 512 last_change 660 flags hashpspool,incomplete_clones tier_of 23 cache_mode writeback target_bytes 4000000000000 target_objects 1000000 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 1200s x4 decay_rate 0 search_last_n 0 min_read_recency_for_promote 1 min_write_recency_for_promote 1 stripe_width 0
pool 32 'c-pool' replicated size 3 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 512 pgp_num 512 last_change 825 flags hashpspool,incomplete_clones tier_of 33 cache_mode writeback target_bytes 4000000000000 target_objects 1000000 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 1200s x6 decay_rate 0 search_last_n 0 min_read_recency_for_promote 1 min_write_recency_for_promote 1 stripe_width 0
pool 33 'r-pool' replicated size 3 min_size 2 crush_ruleset 3 object_hash rjenkins pg_num 512 pgp_num 512 last_change 801 lfor 787 flags hashpspool tiers 32 read_tier 32 write_tier 32 stripe_width 0

2. crush rule list

rule erasure-pool {
        ruleset 1
        type erasure
        min_size 3
        max_size 13
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take sata
        step choose indep 0 type osd
        step emit
}
rule ssd_ruleset {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
rule sata_ruleset {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take sata
        step chooseleaf firstn 0 type host
        step emit
}

3. ceph df detail

GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED     OBJECTS
    43518G     43327G         148G          0.34       35630
POOLS:
    NAME             ID     CATEGORY     QUOTA OBJECTS     QUOTA BYTES     USED       %USED     MAX AVAIL     OBJECTS     DIRTY     READ      WRITE     RAW USED
    rbd              0      -            N/A               N/A                  0         0        14387G           1         1     8213k     2523k            0
    erasure-pool     23     -            N/A               N/A                 40         0        23118G           3         3        33      171k           52
    cache-pool       24     -            N/A               N/A             29946M      0.20         4418G       17081     15030     8643k     7815k       89839M
    c-pool           32     -            N/A               N/A              1021M         0         4418G        2876       206     27316     23965        3064M
    r-pool           33     -            N/A               N/A             62127M      0.42        10018G       15669     15669     2901k     2501k         182G


4. cache pool options

hit_set_type: bloom
hit_set_count: 6
hit_set_period: 1200
min_read_recency_for_promote: 1
min_write_recency_for_promote: 1
cache_min_flush_age: 600
cache_min_evict_age: 1800
target_max_bytes: 4000000000000
target_max_objects: 1000000
cache_target_dirty_ratio: 0.6
cache_target_dirty_high_ratio: 0.7
cache_target_full_ratio: 0.8
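
(Options like these are applied per cache pool with "ceph osd pool set"; for example, for c-pool:)

ceph osd pool set c-pool hit_set_type bloom
ceph osd pool set c-pool hit_set_count 6
ceph osd pool set c-pool hit_set_period 1200
ceph osd pool set c-pool cache_target_dirty_ratio 0.6
ceph osd pool set c-pool target_max_bytes 4000000000000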

===============================================================

