Hi Xiaoxi,

As we learned offline, you currently have a mixture of new OSDs created
by Nautilus and old ones created by earlier releases.
New OSDs report per-pool statistics in a different manner than old
ones. Merging the two is not really feasible, so once the cluster
contains any OSD with the new format, the 'df' report starts showing
pool statistics from the new-format OSDs only.
To fix the issue you need to run the 'ceph-bluestore-tool repair'
command against each old OSD.
Please note that the repair is a non-reversible OSD upgrade; you won't
be able to downgrade to pre-Nautilus releases after that.
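
In case it helps, a minimal sketch of the per-OSD procedure, assuming
the default data path /var/lib/ceph/osd/ceph-<id> and systemd-managed
OSDs (replace <id> with the actual OSD id):

    # stop the OSD so the daemon no longer holds the BlueStore instance
    systemctl stop ceph-osd@<id>
    # repair rewrites the on-disk statistics into the new per-pool format
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-<id>
    # bring the OSD back up
    systemctl start ceph-osd@<id>

Doing one OSD at a time and waiting for the cluster to return to
HEALTH_OK in between keeps redundancy intact while each daemon is down.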
Thanks,
Igor
On 4/4/2019 11:48 AM, Xiaoxi Chen wrote:
Hi list,
The fs_data pool was under backfilling; 1 out of 16 hosts was
rebuilt with the same OSD ids. After doing that, the fs_data STORED
size is not correct, though it is increasing.
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       1.2 PiB     653 TiB     547 TiB     547 TiB          45.60
    meta       25 TiB      25 TiB      40 GiB     107 GiB           0.42
    ssd       219 TiB     147 TiB      72 TiB      73 TiB          33.11
    TOTAL     1.4 PiB     824 TiB     619 TiB     620 TiB          42.92

POOLS:
    POOL           ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cache_tier      3     8.0 TiB       9.94M      24 TiB     16.80        40 TiB
    fs_data         4     1.6 TiB      63.63M     4.8 TiB      1.12       143 TiB
    fs_meta         5      35 GiB     343.66k      40 GiB      0.18       7.0 TiB
The RAW STORAGE by class is correct.
Any insight?
Xiaoxi