Do mind that drives may hold more than one pool, so RAW space is exactly what it says: how much raw space is used and how much is free. The MAX AVAIL and %USED in the per-pool stats, on the other hand, take replication into account, so they tell you how much data you can still write into that particular pool, given that pool's replication or EC settings (see the small arithmetic sketch at the bottom of this mail).

On Sat, 20 Oct 2018 at 19:09, Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx> wrote:
>
> Dear Cephalopodians,
>
> like many others, I'm also a bit confused by the "ceph df" output
> in a pretty straightforward configuration.
>
> We have a CephFS (12.2.7) running, with a 4+2 EC profile.
>
> I get:
> ----------------------------------------------------------------------------
> # ceph df
> GLOBAL:
>     SIZE     AVAIL     RAW USED     %RAW USED
>     824T     410T      414T         50.26
> POOLS:
>     NAME                ID     USED     %USED     MAX AVAIL     OBJECTS
>     cephfs_metadata     1      452M     0.05      860G          365774
>     cephfs_data         2      275T     62.68     164T          75056403
> ----------------------------------------------------------------------------
>
> So about 50 % of the raw space is used, but already ~63 % of the filesystem space is used.
> Is this purely from imperfect balancing?
> In "ceph osd df", I do indeed see OSD usages spreading from 65.02 % down to 37.12 %.
>
> We did not yet use the balancer plugin.
> We don't have any pre-luminous clients.
> In that setup, I take it that "upmap" mode would be recommended - correct?
> Any "gotchas" using that on luminous?
>
> Cheers,
> Oliver

--
May the most significant bit of your life be positive.
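
P.S. A rough back-of-the-envelope sketch (plain Python, numbers taken from the quoted "ceph df" output) of how those figures seem to fit together, assuming the 4+2 profile gives a 1.5x raw multiplier, that the pool %USED is roughly USED / (USED + MAX AVAIL), and that MAX AVAIL is capped by the fullest OSDs rather than by the raw average:

----------------------------------------------------------------------------
K, M = 4, 2                                  # EC profile 4+2
raw_multiplier = (K + M) / K                 # 1.5x raw overhead per byte stored

used_tb = 275.0                              # cephfs_data USED
max_avail_tb = 164.0                         # cephfs_data MAX AVAIL

# Raw space consumed by the data pool alone:
print(used_tb * raw_multiplier)              # ~412.5T, close to the 414T RAW USED

# Pool %USED looks like USED / (USED + MAX AVAIL), not USED / raw capacity:
print(100 * used_tb / (used_tb + max_avail_tb))   # ~62.6 %, matching the 62.68 %USED

# MAX AVAIL translated back into raw terms:
print(max_avail_tb * raw_multiplier)         # ~246T raw, well below the 410T global AVAIL,
                                             # because the most-filled OSDs limit how much
                                             # the pool can still safely accept
----------------------------------------------------------------------------

If that reading is right, the gap between ~50 % RAW USED and ~63 % pool %USED would largely come from the imbalance you already see in "ceph osd df": the fullest OSDs, not the average, determine MAX AVAIL.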