Hi,
While inspecting a freshly installed cluster (Nautilus), I get the result below. The ssd-test pool is a cache tier for the hdd-test pool. After running some RBD bench tests and deleting all the RBD images used for benchmarking, there are still hidden objects left in both pools (apart from rbd_directory, rbd_info and rbd_trash). What are they, and how can I clear them?
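For reference, the benchmark workflow was roughly the following; the image name and sizes are only placeholders here, the original images are already removed:

# create a throwaway image in the base (erasure-coded) pool
rbd create --size 100G hdd-test/benchimg
# run a write benchmark against it (I/O goes through the ssd-test cache tier)
rbd bench --io-type write --io-size 4M --io-total 10G hdd-test/benchimg
# remove the image again afterwards
rbd rm hdd-test/benchimg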
[root@c10-ctrl ~]# ceph df detail
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       256 TiB     255 TiB     1.5 TiB      1.6 TiB          0.62
    ssd       7.8 TiB     7.8 TiB     5.9 GiB       24 GiB          0.30
    TOTAL     264 TiB     262 TiB     1.5 TiB      1.6 TiB          0.61

POOLS:
    POOL         ID     STORED     OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY     USED COMPR     UNDER COMPR
    hdd-test      8     16 KiB          15     384 KiB         0       161 TiB     N/A               N/A                15            0 B             0 B
    ssd-test      9     25 MiB      17.57k     2.7 GiB      0.04       2.5 TiB     N/A               N/A                 3            0 B             0 B
[root@c10-ctrl ~]# rados -p hdd-test ls
rbd_directory
rbd_info
rbd_trash
[root@c10-ctrl ~]# rados -p ssd-test ls
[root@c10-ctrl ~]# ceph osd dump | grep test
pool 8 'hdd-test' erasure size 6 min_size 5 crush_rule 2 object_hash rjenkins pg_num 2048 pgp_num 2048 autoscale_mode warn last_change 6701 lfor 1242/6094/6102 flags hashpspool,selfmanaged_snaps tiers 9 read_tier 9 write_tier 9 stripe_width 16384 application rbd
pool 9 'ssd-test' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 1024 pgp_num 1024 autoscale_mode warn last_change 6702 lfor 1242/3890/6106 flags hashpspool,incomplete_clones,selfmanaged_snaps tier_of 8 cache_mode writeback target_bytes 1374389534720 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 3600s x24 decay_rate 0 search_last_n 0 stripe_width 0
[root@c10-ctrl ~]#
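For the "how to clear them" part, would something like the commands below be the right approach? I am only guessing that listing all namespaces and flushing/evicting the cache tier is what applies here:

# list objects in all namespaces of the cache pool, not just the default one
rados -p ssd-test --all ls
# flush and evict everything held in the cache tier
rados -p ssd-test cache-flush-evict-all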
Best regards,