Hi everyone,
we have a small testing cluster: one node with 4 OSDs of 3 TB each. I
created one RBD image of 4 TB, and now the cluster is nearly full:
# ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED
    11178G     1783G     8986G        80.39
POOLS:
    NAME         ID     USED       %USED     OBJECTS
    data         0      0          0         0
    metadata     1      40100K     0         30
    rbd          2      3703G      33.13     478583
# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/cephtest1-root  181G   19G  153G  11% /
udev                         48G  4.0K   48G   1% /dev
tmpfs                        19G  592K   19G   1% /run
none                        5.0M     0  5.0M   0% /run/lock
none                         48G     0   48G   0% /run/shm
/dev/sde1                   228M   27M  189M  13% /boot
/dev/sda                    2.8T  2.1T  566G  79% /var/lib/ceph/osd/ceph-0
/dev/sdb                    2.8T  2.4T  316G  89% /var/lib/ceph/osd/ceph-1
/dev/sdc                    2.8T  2.2T  457G  84% /var/lib/ceph/osd/ceph-2
/dev/sdd                    2.8T  2.2T  447G  84% /var/lib/ceph/osd/ceph-3
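For what it's worth, the Used column on the four OSD mounts adds up to about the same figure as the RAW USED in ceph df (roughly 8.9T vs 8986G), so the raw number itself looks consistent with what is on disk. This is just my own sum of the rounded df -h values above, not a Ceph command:

# echo "2.1 + 2.4 + 2.2 + 2.2" | bc
8.9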
# rbd list -l
NAME    SIZE    PARENT  FMT  PROT  LOCK
share2  3906G           1
# rbd info share2
rbd image 'share2':
        size 3906 GB in 500000 objects
        order 23 (8192 KB objects)
        block_name_prefix: rb.0.1056.2ae8944a
        format: 1
# ceph osd pool get rbd min_size
min_size: 1
# ceph osd pool get rbd size
size: 2
Four disks at 3 TB should give me 12 TB of raw space, and a 4 TB image with 2 replicas should use 8 TB. That is about 66%, not the 80% that ceph df shows as %RAW USED.
Where is this space leaking, and how can I fix it? Or is this normal behaviour, just overhead?
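To spell out my arithmetic (just shell and bc on the numbers from the ceph df output above, nothing Ceph-specific): the rbd pool holds 3703G, so with size 2 I would expect about 7406G of raw usage, roughly 66% of the 11178G total, yet RAW USED is 8986G (80%), which leaves about 1580G I cannot account for:

# echo "3703 * 2 * 100 / 11178" | bc
66
# echo "8986 * 100 / 11178" | bc
80
# echo "8986 - 3703 * 2" | bc
1580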
Thanks,
Ali