rados bench leaves objects in tiered pool

Hi,

While benchmarking a tiered pool with rados bench, I noticed that the objects are not removed after the test.

The test was performed with "rados -p rbd bench 3600 write". The pool is not used by anything else.
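As far as I understand the rados(8) man page, a write bench is supposed to delete the objects it created once the run finishes, unless --no-cleanup is passed:

    rados -p rbd bench 3600 write                 (default: removes its objects afterwards)
    rados -p rbd bench 3600 write --no-cleanup    (keeps the objects for later read tests)

--no-cleanup was not used here, so I expected the pools to end up empty.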

Just before the end of the test:
POOLS:
    NAME                      ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd-cache                 36     33110M      3.41          114G        8366
    rbd                       37     43472M      4.47          237G       10858

Some time later (a few hundred objects have been flushed and the automatic rados cleanup has finished):
POOLS:
    NAME                      ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd-cache                 36      22998         0          157G       16342
    rbd                       37     46050M      4.74          234G       11503

# rados -p rbd-cache ls | wc -l
16242
# rados -p rbd ls | wc -l
11503
#

# rados -p rbd cleanup
error during cleanup: -2
error 2: (2) No such file or directory
#

# rados -p rbd cleanup --run-name "" --prefix ""
 Warning: using slow linear search
 Removed 0 objects
#
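For reference, the next thing I planned to try was a cleanup with the exact prefix the leftover objects carry (prefix copied from the listings below), something along the lines of:

    rados -p rbd cleanup --prefix benchmark_data_dropbox01.tzk_7641
    rados -p rbd-cache cleanup --prefix benchmark_data_dropbox01.tzk_7641

I have not verified whether this behaves any differently from the attempt above.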

# rados -p rbd ls | head -5
benchmark_data_dropbox01.tzk_7641_object10901
benchmark_data_dropbox01.tzk_7641_object9645
benchmark_data_dropbox01.tzk_7641_object10389
benchmark_data_dropbox01.tzk_7641_object10090
benchmark_data_dropbox01.tzk_7641_object11204
#

#  rados -p rbd-cache ls | head -5
benchmark_data_dropbox01.tzk_7641_object10901
benchmark_data_dropbox01.tzk_7641_object9645
benchmark_data_dropbox01.tzk_7641_object10389
benchmark_data_dropbox01.tzk_7641_object5391
benchmark_data_dropbox01.tzk_7641_object10090
#

So it looks like the objects are still in place (in both pools?), but it is not possible to remove them:

# rados -p rbd rm benchmark_data_dropbox01.tzk_7641_object10901
error removing rbd>benchmark_data_dropbox01.tzk_7641_object10901: (2) No such file or directory
#

# ceph health
HEALTH_OK
#


Can somebody explain this behavior? And is it possible to clean up the benchmark data without recreating the pools?
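For clarity, the kind of cleanup I am hoping for is roughly the following (a sketch only; cache-flush-evict-all is taken from the cache tiering documentation, and the prefix is from the listings above):

    rados -p rbd-cache cache-flush-evict-all
    rados -p rbd ls | grep '^benchmark_data_dropbox01.tzk_7641' | \
        while read obj; do rados -p rbd rm "$obj"; done

but, as shown above, the per-object removals currently fail with ENOENT.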


ceph version 0.94.5

# ceph osd dump | grep rbd
pool 36 'rbd-cache' replicated size 3 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 100 pgp_num 100 last_change 755 flags hashpspool,incomplete_clones tier_of 37 cache_mode writeback target_bytes 107374182400 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 3600s x1 stripe_width 0
pool 37 'rbd' erasure size 5 min_size 3 crush_ruleset 2 object_hash rjenkins pg_num 100 pgp_num 100 last_change 745 lfor 745 flags hashpspool tiers 36 read_tier 36 write_tier 36 stripe_width 4128
#

# ceph osd pool get rbd-cache hit_set_type
hit_set_type: bloom
# ceph osd pool get rbd-cache hit_set_period
hit_set_period: 3600
# ceph osd pool get rbd-cache hit_set_count
hit_set_count: 1
# ceph osd pool get rbd-cache target_max_objects
target_max_objects: 0
# ceph osd pool get rbd-cache target_max_bytes
target_max_bytes: 107374182400
# ceph osd pool get rbd-cache cache_target_dirty_ratio
cache_target_dirty_ratio: 0.1
# ceph osd pool get rbd-cache cache_target_full_ratio
cache_target_full_ratio: 0.2
#
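For what it is worth, my reading of the agent thresholds above (assuming they are applied against target_max_bytes, since target_max_objects is 0) is roughly:

    0.1 * 107374182400 B  ~ 10 GiB  -> start flushing dirty objects
    0.2 * 107374182400 B  ~ 20 GiB  -> start evicting clean objects

so with ~33 GB in rbd-cache near the end of the run, flushing and eviction should indeed have been active.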

CRUSH map (relevant parts):
root cache_tier {   
        id -7           # do not change unnecessarily
        # weight 0.450
        alg straw   
        hash 0  # rjenkins1
        item osd.0 weight 0.090
        item osd.1 weight 0.090
        item osd.2 weight 0.090
        item osd.3 weight 0.090
        item osd.4 weight 0.090
}
root store_tier {   
        id -8           # do not change unnecessarily
        # weight 0.450
        alg straw   
        hash 0  # rjenkins1
        item osd.5 weight 0.090
        item osd.6 weight 0.090
        item osd.7 weight 0.090
        item osd.8 weight 0.090
        item osd.9 weight 0.090
}
rule cache {
        ruleset 1
        type replicated
        min_size 0
        max_size 5
        step take cache_tier
        step chooseleaf firstn 0 type osd
        step emit
}
rule store {
        ruleset 2
        type erasure
        min_size 0
        max_size 5
        step take store_tier
        step chooseleaf firstn 0 type osd
        step emit
}

Thanks

--
Dmitry Glushenok
Jet Infosystems



