Re: Fwd: “ceph df” pool section metrics are wrong during backfill

Hi Xiaoxi,

Could you please provide the output of 'ceph osd df tree'?

And the corresponding (updated) 'ceph df' output as well, please.
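
Something like this should capture both at roughly the same moment,
adjusted to your environment as needed:

    ceph osd df tree  > osd-df-tree.txt   # per-OSD utilization laid out along the CRUSH tree
    ceph df detail    > ceph-df.txt       # pool-level STORED/USED plus raw usage per class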


Thanks,

Igor

On 4/4/2019 5:13 PM, Xiaoxi Chen wrote:
Hi Igor,

Yes, it is Nautilus.

We upgraded to Nautilus a few days ago and the "df" reporting was fine.  It
only went wrong when a node in the hdd CLASS was down (all of its OSDs were
down and out) and all OSDs on that node were then rebuilt.

Below is my latest snapshot.  Each CLASS maps exactly to one pool,
i.e. fs_data on hdd, cache_tier on ssd, fs_meta on meta.  The raw
storage usage is right; only the fs_data numbers are wrong.
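
For reference, the pool-to-class mapping can be double-checked with
something like this (the rule name below is just a placeholder):

    ceph osd pool get fs_data crush_rule    # which CRUSH rule the pool uses
    ceph osd crush rule dump <rule_name>    # confirm the rule is limited to class hdd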

It is still in the process of backfilling:

   data:
     pools:   3 pools, 17408 pgs
     objects: 73.40M objects, 195 TiB
     usage:   614 TiB used, 830 TiB / 1.4 PiB avail
     pgs:     119921/220195989 objects degraded (0.054%)
              4093191/220195989 objects misplaced (1.859%)
              16916 active+clean
              289   active+remapped+backfill_wait
              107   active+remapped+backfilling
              96    active+undersized+degraded+remapped+backfilling

Current "ceph df"
RAW STORAGE:
     CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
     hdd       1.2 PiB     659 TiB     541 TiB      541 TiB         45.06
     meta       25 TiB      25 TiB      44 GiB      110 GiB          0.43
     ssd       219 TiB     146 TiB      73 TiB       73 TiB         33.36
     TOTAL     1.4 PiB     830 TiB     613 TiB      614 TiB         42.51

POOLS:
     POOL           ID     STORED      OBJECTS     USED       %USED     MAX AVAIL
     cache_tier      3     8.1 TiB       9.72M     24 TiB     17.03        39 TiB
     fs_data         4     6.6 TiB      63.33M     20 TiB      4.22       151 TiB
     fs_meta         5      35 GiB     344.19k     40 GiB      0.18       7.0 TiB
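
If it helps, the raw per-pool counters behind this table can also be
dumped in machine-readable form, e.g.:

    ceph df detail -f json-pretty    # same data as the table above, as JSON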
Xiaoxi

Igor Fedotov <ifedotov@xxxxxxx> wrote on Thu, Apr 4, 2019 at 5:52 PM:
Hi Xiaoxi,

what Ceph version are you talking about?

There were significant changes in Ceph df reporting in Nautilus...
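
The version each daemon is actually running can be listed with e.g.:

    ceph versions    # per-daemon version summary across the cluster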


Thanks,

Igor

On 4/4/2019 11:48 AM, Xiaoxi Chen wrote:
Hi list,


      The fs_data pool is backfilling; 1 out of 16 hosts was rebuilt
with the same OSD ids.  Since then the fs_data STORED size has not been
correct, although it is increasing.

RAW STORAGE:
      CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
      hdd       1.2 PiB     653 TiB     547 TiB      547 TiB         45.60
      meta       25 TiB      25 TiB      40 GiB      107 GiB          0.42
      ssd       219 TiB     147 TiB      72 TiB       73 TiB         33.11
      TOTAL     1.4 PiB     824 TiB     619 TiB      620 TiB         42.92

POOLS:
      POOL           ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
      cache_tier      3     8.0 TiB       9.94M      24 TiB     16.80        40 TiB
      fs_data         4     1.6 TiB      63.63M     4.8 TiB      1.12       143 TiB
      fs_meta         5      35 GiB     343.66k      40 GiB      0.18       7.0 TiB


      The RAW STORAGE by class is correct.
      Any insight?
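
For a cross-check, the per-pool counters can also be read with e.g.:

    rados df    # per-pool object counts and space usage from pool stats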

Xiaoxi


