'Missing' capacity

Hi,

I have a cluster with 97TiB of storage and a single pool on it, size 3, using 17.4TiB, which comes to 52.5TiB of raw usage on the cluster. I would expect the remaining 45TiB of raw space to leave 45TiB/3 = 15TiB available to the pool, but Ceph reports a MAX AVAIL of only 4.57TiB, as you can see below.

root@proxmox01:~#  ceph osd pool ls detail
pool 3 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 21954 flags hashpspool min_write_recency_for_promote 1 stripe_width 0 application rbd
        removed_snaps [1~55,57~2,5a~1,5d~1b,7a~15,90~1,98~6]

root@proxmox01:~# ceph df detail
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED     OBJECTS
    97.4TiB     45.0TiB      52.5TiB         53.84       4.59M
POOLS:
    NAME     ID     QUOTA OBJECTS     QUOTA BYTES     USED        %USED     MAX AVAIL     OBJECTS     DIRTY     READ        WRITE       RAW USED
    rbd      3      N/A               N/A             17.4TiB     79.20       4.57TiB     4587951     4.59M     1.16GiB     4.44GiB      52.2TiB
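Just to make my expectation explicit, here is the arithmetic as a quick sketch (all figures copied from the `ceph df detail` output above; the variable names are my own):

```python
# Expected-vs-reported free-space arithmetic. All figures are TiB,
# copied from the `ceph df detail` output above.

RAW_AVAIL = 45.0      # GLOBAL AVAIL
REPLICA_SIZE = 3      # pool 'rbd' is replicated, size 3

# Naive expectation: every raw byte is usable, divided by the replica count.
expected_max_avail = RAW_AVAIL / REPLICA_SIZE
print(f"expected MAX AVAIL: {expected_max_avail:.2f} TiB")  # 15.00

reported_max_avail = 4.57  # MAX AVAIL reported for pool 'rbd'
print(f"reported MAX AVAIL: {reported_max_avail:.2f} TiB")
print(f"'missing' space:    {expected_max_avail - reported_max_avail:.2f} TiB")  # 10.43
```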


So where is the rest of the free space? :X
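My rough understanding (I may well be wrong) is that MAX AVAIL is not simply raw-avail divided by the replica count: Ceph projects it from the fullest OSD in the pool's CRUSH subtree, since CRUSH keeps filling OSDs in proportion to their weight, so one unbalanced OSD caps the whole pool. A minimal sketch of that idea, simplified (real Ceph also accounts for things like the full ratio; `pool_max_avail` is a hypothetical helper, not a Ceph API):

```python
# Sketch (my assumption) of how a pool's MAX AVAIL could be derived:
# project free space from the *fullest* OSD, not from the raw average.

def pool_max_avail(osds, replicas):
    """osds: list of (crush_weight, avail) tuples. Hypothetical helper."""
    total_weight = sum(w for w, _ in osds)
    # Each OSD receives roughly weight/total_weight of every written byte,
    # so the OSD that would fill up first limits the whole pool.
    usable_raw = min(avail / (w / total_weight) for w, avail in osds)
    return usable_raw / replicas

# Balanced cluster: every OSD has the same free space.
balanced = [(1.0, 10.0)] * 4                # 40 units of raw free space
print(pool_max_avail(balanced, 3))          # ~13.33 usable

# One nearly full OSD drags MAX AVAIL down for the entire pool.
skewed = [(1.0, 10.0), (1.0, 10.0), (1.0, 10.0), (1.0, 2.0)]  # 32 raw free
print(pool_max_avail(skewed, 3))            # ~2.67, far below 32/3
```

If that is the mechanism, `ceph osd df` should show one or more OSDs much fuller than the 53.84% average.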

--
Mark Schouten <mark@xxxxxxxx>
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208 
 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
