Hi all. I deployed a Ceph cluster with Mimic 13.2.4. There are 26 nodes, 286 OSDs, and 1.4 PiB of available space.
I created nearly 5,000,000,000 objects via ceph-rgw, each 4 KiB in size, so with 3x replication I expect roughly 18 TiB * 3 of disk to be used. But the `ceph df detail` output shows that RAW USED is 889 TiB.
Is this a bug, or did I miss something?
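For reference, here is the back-of-the-envelope calculation behind my expectation, as a rough Python sketch. It assumes pure 3x replication with no RGW or BlueStore per-object/allocation overhead (that assumption is mine); the object count is taken from the `ceph df` output below.

    # Rough estimate: ~4.83 billion 4 KiB objects with 3x replication.
    # Assumption: no per-object metadata or allocation overhead is counted.
    objects = 4_834_552_282        # object count from `ceph df` below
    object_size = 4 * 1024         # 4 KiB per object, in bytes
    replicas = 3                   # replicated pool, size = 3

    logical_bytes = objects * object_size          # data as written by clients
    expected_raw = logical_bytes * replicas        # what I expect RAW USED to be

    print(f"logical data:      {logical_bytes / 2**40:.1f} TiB")  # ~18 TiB
    print(f"expected raw used: {expected_raw / 2**40:.1f} TiB")   # ~54 TiB, not 889 TiB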
ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    1.4 PiB     541 TiB     889 TiB      62.15
POOLS:
    NAME                        ID     USED        %USED     MAX AVAIL     OBJECTS
    .rgw.root                   7      4.6 KiB     0         12 TiB        20
    default.rgw.control         8      0 B         0         12 TiB        8
    default.rgw.meta            9      0 B         0         12 TiB        0
    default.rgw.log             10     0 B         0         12 TiB        175
    test.rgw.buckets.index      11     0 B         0         39 TiB        35349
    test.rgw.buckets.data       12     18 TiB      59.35     12 TiB        4834552282
    test.rgw.buckets.non-ec     13     0 B         0         12 TiB        0
    test.rgw.control            17     0 B         0         39 TiB        8
    test.rgw.meta               18     3.0 KiB     0         39 TiB        13
    test.rgw.log                19     63 B        0         39 TiB        211
Here is the `ceph -s` output:
  cluster:
    id:     a61656e0-6086-42ce-97b7-9999330b3e44
    health: HEALTH_WARN
            4 backfillfull osd(s)
            9 nearfull osd(s)
            6 pool(s) backfillfull

  services:
    mon: 3 daemons, quorum ceph-test01,ceph-test03,ceph-test04
    mgr: ceph-test03(active), standbys: ceph-test01, ceph-test04
    osd: 286 osds: 286 up, 286 in
    rgw: 3 daemons active

  data:
    pools:   10 pools, 8480 pgs
    objects: 4.83 G objects, 18 TiB
    usage:   889 TiB used, 541 TiB / 1.4 PiB avail