I think you'll find the "ceph df" command more useful -- in recent
versions it is pretty smart about reporting the effective space
available for each pool.

John

On Fri, Feb 6, 2015 at 3:38 PM, pixelfairy <pixelfairy@xxxxxxxxx> wrote:
> Here's the output of 'ceph -s' from a KVM instance running as a Ceph
> node. All 3 nodes are monitors, each with six 4 GB OSDs.
>
> mon_osd_full ratio: .611
> mon_osd_nearfull ratio: .60
>
> What's the 23689 MB used? Is that a buffer because of the
> mon_osd_full ratio?
>
> Is there a way to query a pool for how much usable space is really
> available to clients? For example, in this case: 3 nodes, 6 OSDs
> each, 4 GB per OSD = 72 GB, so with a replica size of 3 I'd like to
> see something that says close to 20 GB available, 1.7 GB in use.
>
> ceph3:~# ceph -s
>     cluster 2198abdb-2669-438a-8673-fc4f226a226c
>      health HEALTH_OK
>      monmap e1: 3 mons at
> {ceph1=172.21.0.31:6789/0,ceph2=172.21.0.32:6789/0,ceph3=172.21.0.33:6789/0},
> election epoch 16, quorum 0,1,2 ceph1,ceph2,ceph3
>      osdmap e104: 18 osds: 18 up, 18 in
>       pgmap v5557: 600 pgs, 1 pools, 1694 MB data, 432 objects
>             23689 MB used, 49858 MB / 73548 MB avail
>                  600 active+clean
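
If you want those numbers from a script rather than by eye, "ceph df"
also takes --format json. Below is a minimal, untested sketch: it
assumes the JSON layout of recent releases (a "pools" list whose
"stats" carry "bytes_used" and "max_avail" in bytes). The field names
have varied across versions, so check the output of
"ceph df --format json" on your own cluster first.

import json
import subprocess

# Ask the cluster for the same data "ceph df" prints, as JSON.
out = subprocess.check_output(["ceph", "df", "--format", "json"])
df = json.loads(out.decode("utf-8"))

for pool in df.get("pools", []):
    stats = pool["stats"]
    # "max_avail" is already adjusted for the pool's replica count
    # (or erasure-code overhead), so it is the figure clients can
    # actually still write -- the number you were asking for.
    print("%-12s used=%6d MB  max_avail=%6d MB" % (
        pool["name"],
        stats.get("bytes_used", 0) // (1024 * 1024),
        stats.get("max_avail", 0) // (1024 * 1024)))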
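
As for the 23689 MB "used": that is raw usage summed across all 18
OSD filesystems, so it counts every replica of your data plus per-OSD
overhead (filesystem metadata, OSD journals and the like) -- it is not
a reserve held back by the mon_osd_full ratio. A quick
back-of-the-envelope check, using only the numbers quoted above
(nothing here queries a cluster):

replicas  = 3
raw_avail = 49858  # MB, from "ceph -s"
raw_used  = 23689  # MB, from "ceph -s"
data      = 1694   # MB of client data in the pool
osds      = 18

# Client data stored 3x accounts for only part of the raw usage.
replicated = data * replicas          # 5082 MB
overhead = raw_used - replicated      # 18607 MB across the cluster
print("per-OSD overhead: ~%d MB" % (overhead // osds))   # ~1033 MB

# Effective space left for clients is roughly raw free space divided
# by the replica count (before the full ratio stops writes).
print("usable: ~%d MB" % (raw_avail // replicas))        # ~16619 MB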