Does anyone have any ideas?
On 10/9/2020 4:20 PM, norman wrote:
Hi,
I have changed most of the pools in my cluster from 3-replica to EC 4+2.
When I use the ceph df command to show the used capacity of the cluster:
RAW STORAGE:
    CLASS         SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd           1.8 PiB     788 TiB     1.0 PiB      1.0 PiB         57.22
    ssd           7.9 TiB     4.6 TiB     181 GiB      3.2 TiB         41.15
    ssd-cache     5.2 TiB     5.2 TiB      67 GiB       73 GiB          1.36
    TOTAL         1.8 PiB     798 TiB     1.0 PiB      1.0 PiB         56.99

POOLS:
    POOL                               ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    default-oss.rgw.control             1     0 B               8     0 B             0       1.3 TiB
    default-oss.rgw.meta                2     22 KiB           97     3.9 MiB         0       1.3 TiB
    default-oss.rgw.log                 3     525 KiB         223     621 KiB         0       1.3 TiB
    default-oss.rgw.buckets.index       4     33 MiB           34     33 MiB          0       1.3 TiB
    default-oss.rgw.buckets.non-ec      5     1.6 MiB          48     3.8 MiB         0       1.3 TiB
    .rgw.root                           6     3.8 KiB          16     720 KiB         0       1.3 TiB
    default-oss.rgw.buckets.data        7     274 GiB     185.39k     450 GiB      0.14       212 TiB
    default-fs-metadata                 8     488 GiB     153.10M     490 GiB     10.65       1.3 TiB
    default-fs-data0                    9     374 TiB       1.48G     939 TiB     74.71       212 TiB
...
For the 3-replica pools, USED = 3 * STORED, which is completely right. But for
the EC 4+2 pool default-fs-data0, USED should be about 1.5 * STORED (6 chunks
written for every 4 chunks of data), i.e. roughly 561 TiB for the 374 TiB
stored, yet ceph df reports 939 TiB, which is about 2.5 * STORED.
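Working the numbers through (a rough check in Python, assuming the profile
really is k=4, m=2 and using the STORED/USED figures from the output above):

    # Sanity-check the EC 4+2 space overhead reported by `ceph df`.
    # Numbers copied from the output above; k=4, m=2 assumed from "ec 4+2".
    k, m = 4, 2
    stored = 374.0        # TiB, STORED for default-fs-data0
    used   = 939.0        # TiB, USED   for default-fs-data0

    expected_ratio = (k + m) / k          # 1.5 for a 4+2 profile
    expected_used  = stored * expected_ratio

    print(f"expected USED ~= {expected_used:.0f} TiB (ratio {expected_ratio:.2f})")
    print(f"reported USED  = {used:.0f} TiB (ratio {used / stored:.2f})")
    # expected USED ~= 561 TiB (ratio 1.50)
    # reported USED  = 939 TiB (ratio 2.51)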
P.S. I have another cluster with the same configuration, and its ceph df
output is correct. The difference between them is that this cluster has HDD
OSDs of mixed sizes (8 TB and 12 TB). I'm not sure whether this is a bug or
something else, but the reported space usage is not reasonable.
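To narrow it down, something like this could be run on both clusters to
compare the per-pool USED/STORED ratio (just a sketch; the JSON field names
stored and bytes_used are what I see on my release and may differ on others):

    #!/usr/bin/env python3
    """Compare per-pool USED/STORED ratios from `ceph df --format=json`.

    Run on each cluster and diff the output; EC 4+2 pools should sit near 1.5.
    """
    import json
    import subprocess

    def pool_ratios():
        out = subprocess.check_output(["ceph", "df", "--format=json"])
        report = json.loads(out)
        for pool in report["pools"]:
            stats = pool["stats"]
            stored = stats.get("stored", 0)
            used = stats.get("bytes_used", 0)
            ratio = used / stored if stored else 0.0
            yield pool["name"], stored, used, ratio

    if __name__ == "__main__":
        for name, stored, used, ratio in pool_ratios():
            print(f"{name:35s} stored={stored:>15d} used={used:>15d} ratio={ratio:5.2f}")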
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx