Re: Fwd: “ceph df” pool section metrics is wrong during backfill

Hi Xiaoxi,

what Ceph version are you talking about?

There were significant changes in Ceph df reporting in Nautilus...


Thanks,

Igor

On 4/4/2019 11:48 AM, Xiaoxi Chen wrote:
Hi list,


     The fs_data pool is being backfilled: 1 out of 16 hosts was rebuilt
with the same OSD IDs. Since then, the fs_data STORED size has not been
correct, although it is increasing.

RAW STORAGE:
     CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
     hdd       1.2 PiB     653 TiB     547 TiB      547 TiB         45.60
     meta       25 TiB      25 TiB      40 GiB      107 GiB          0.42
     ssd       219 TiB     147 TiB      72 TiB       73 TiB         33.11
     TOTAL     1.4 PiB     824 TiB     619 TiB      620 TiB         42.92

POOLS:
     POOL           ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
     cache_tier      3     8.0 TiB       9.94M      24 TiB     16.80        40 TiB
     fs_data         4     1.6 TiB      63.63M     4.8 TiB      1.12       143 TiB
     fs_meta         5      35 GiB     343.66k      40 GiB      0.18       7.0 TiB


     The per-class RAW STORAGE numbers are correct; only the POOLS section
looks wrong.
     Any insight?
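
     To cross-check the formatted table against the raw counters, something
like the following can compare each pool's STORED and USED values from the
JSON output. This is only a minimal sketch: it assumes Nautilus-style field
names such as stats["stored"] and stats["bytes_used"] in "ceph df -f json",
which may differ on other releases.

#!/usr/bin/env python3
# Cross-check "ceph df" pool accounting during backfill.
# Minimal sketch: assumes Nautilus-style JSON output where each pool entry
# carries stats["stored"] and stats["bytes_used"]; adjust the field names
# for other releases.
import json
import subprocess

def ceph(*args):
    # Run a ceph CLI command and parse its JSON output.
    return json.loads(subprocess.check_output(["ceph", *args, "-f", "json"]))

df = ceph("df")
for pool in df.get("pools", []):
    name = pool["name"]
    stats = pool["stats"]
    stored = stats.get("stored", 0)      # client-visible data
    used = stats.get("bytes_used", 0)    # raw space including replication
    # For a replicated pool, USED should be roughly STORED * size; a large
    # gap while backfilling points at the accounting question above.
    size = ceph("osd", "pool", "get", name, "size")["size"]
    expected = stored * size
    ratio = used / expected if expected else 0.0
    print(f"{name}: stored={stored} used={used} expected~{expected} "
          f"ratio={ratio:.2f}")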

Xiaoxi


