Hi Marcelo,
most likely the situation is as follows:
1) Your OSDs use spinners (HDDs) as their main devices, hence a 64 KiB BlueStore allocation unit. Correct?
2) The pool has replication factor 3?
Each object therefore occupies at least 64 KiB on disk, and each object has 3 replicas, so 100k objects * 64 KiB * 3 ≈ 18 GiB shown as "USED".
Please also note the "STORED" column, which shows the 969 KiB you expect.
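A quick sketch of that arithmetic (assuming a 64 KiB min_alloc_size, the default for HDD-backed BlueStore OSDs in Nautilus):

```python
# Why "USED" shows ~18 GiB for 100k tiny objects.
# Assumption: BlueStore min_alloc_size of 64 KiB for HDD (Nautilus default);
# every object, however small, consumes at least one allocation unit per replica.
min_alloc_size = 64 * 1024   # bytes, smallest on-disk allocation per object
replicas = 3                 # pool size (replication factor)
objects = 100_000            # 1000 buckets * 100 objects each

used_bytes = objects * min_alloc_size * replicas
print(f"USED ≈ {used_bytes / 2**30:.1f} GiB")  # ≈ 18.3 GiB
```

So the ~18 GiB in "USED" is allocation overhead, not actual data: "STORED" reflects the logical bytes written, "USED" the raw space those allocations consume across replicas.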
Thanks,
Igor
On 10/22/2020 5:35 PM, Marcelo wrote:
Hello. I've searched a lot but couldn't find why the size in the USED column of
the ceph df output is many times bigger than the actual size. I'm using
Nautilus (14.2.8), and I have 1000 buckets with 100 objects in each bucket.
Each object is around 10 B.
ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       511 GiB     147 GiB     340 GiB     364 GiB      71.21
    TOTAL     511 GiB     147 GiB     340 GiB     364 GiB      71.21

POOLS:
    POOL                          ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    .rgw.root                     1      1.1 KiB     4           768 KiB     0         36 GiB
    default.rgw.control           11     0 B         8           0 B         0         36 GiB
    default.rgw.meta              12     449 KiB     2.00k       376 MiB     0.34      36 GiB
    default.rgw.log               13     3.4 KiB     207         6 MiB       0         36 GiB
    default.rgw.buckets.index     14     0 B         1.00k       0 B         0         36 GiB
    default.rgw.buckets.data      15     969 KiB     100k        18 GiB      14.52     36 GiB
    default.rgw.buckets.non-ec    16     27 B        1           192 KiB     0         36 GiB
Does anyone know the math behind this, showing 18 GiB used when I
have something like 1 MiB of data?
Thanks in advance, Marcelo.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx