Why did rbd rm not clean up the used pool?

Hi!

Configuration:
rbd - erasure pool
rbdtier - tier pool for rbd
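
The pools themselves were created roughly along these lines (the PG counts and the default erasure profile here are only illustrative, not necessarily the exact values used):

ceph osd pool create rbd 128 128 erasure      # PG counts / EC profile illustrative
ceph osd pool create rbdtier 128 128          # PG count illustrative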

ceph osd tier add-cache rbd rbdtier 549755813888
ceph osd tier cache-mode rbdtier writeback

Create new rbd block device:
rbd create --size 16G rbdtest
rbd feature disable rbdtest object-map fast-diff deep-flatten
rbd device map rbdtest

Then fill /dev/rbd0 with data (dd, fio and the like).
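
For example, one illustrative run (the actual dd/fio jobs varied):

dd if=/dev/zero of=/dev/rbd0 bs=4M count=4096 oflag=direct   # ~16 GiB, values illustrative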

Remove the rbd block device:
rbd device unmap rbdtest
rbd rm rbdtest

Now the pool usage looks like this:

POOLS:
    NAME        ID     USED        %USED     MAX AVAIL     OBJECTS
    rbd         9       16 GiB         0           0 B        4094
    rbdtier     14     104 KiB         0       1.7 TiB        5110
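
The leftover objects can also be counted directly, for example:

rados -p rbd ls | wc -l
rados -p rbdtier ls | wc -l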

Both rbd and rbdtier still contain objects:
rados -p rbdtier ls
rbd_data.14716b8b4567.0000000000000dc4
rbd_data.14716b8b4567.00000000000002fc
rbd_data.14716b8b4567.0000000000000e82
rbd_data.14716b8b4567.00000000000003d7
rbd_data.14716b8b4567.0000000000000fb1
rbd_data.14716b8b4567.0000000000000018
[...]

rados -p rbd ls
rbd_data.14716b8b4567.0000000000000dc4
rbd_data.14716b8b4567.00000000000002fc
rbd_data.14716b8b4567.0000000000000e82
rbd_data.14716b8b4567.00000000000003d7
rbd_data.14716b8b4567.0000000000000fb1
[...]

Why does rbd rm not remove all of the image's objects from the pools?



WBR,
    Fyodor.


