On Wed, 16 Jan 2019, 02:20 David Young <funkypenguin@xxxxxxxxxxxxxx> wrote:
Hi folks,

My ceph cluster is used exclusively for cephfs, as follows:

---
root@node1:~# grep ceph /etc/fstab
node2:6789:/ /ceph ceph auto,_netdev,name=admin,secretfile=/root/ceph.admin.secret
root@node1:~#
---

"rados df" shows me the following:

---
root@node1:~# rados df
POOL_NAME       USED     OBJECTS   CLONES  COPIES     MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS     RD       WR_OPS     WR
cephfs_metadata 197 MiB  49066     0       98132      0                   0        0         9934744    55 GiB   57244243   232 GiB
media           196 TiB  51768595  0       258842975  0                   1        203534    477915206  509 TiB  165167618  292 TiB

total_objects   51817661
total_used      266 TiB
total_avail     135 TiB
total_space     400 TiB
root@node1:~#
---

But "df" on the mounted cephfs volume shows me:

---
root@node1:~# df -h /ceph
Filesystem          Size  Used  Avail  Use%  Mounted on
10.20.30.22:6789:/  207T  196T  11T    95%   /ceph
root@node1:~#
---

And "ceph -s" shows me:

---
  data:
    pools:   2 pools, 1028 pgs
    objects: 51.82 M objects, 196 TiB
    usage:   266 TiB used, 135 TiB / 400 TiB avail
---

"media" is an EC pool with a size of 5 (4+1), so I expect 1 TB of data to consume 1.25 TB of raw space.

My question is: why does "df" show me I have 11 TB free, when "rados df" shows me I have 135 TB (raw) available?
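The 4+1 overhead arithmetic mentioned above can be sketched as a quick check (the 196 TiB figure is the "media" pool's stored data from the outputs above; this only covers the EC pool, not the replicated metadata pool):

```python
# Expected raw consumption for a k=4, m=1 erasure-coded pool.
k, m = 4, 1
data_stored_tib = 196  # "media" pool, per "rados df" / "ceph -s"

overhead = (k + m) / k           # 1.25x for a 4+1 profile
raw_used_tib = data_stored_tib * overhead
print(f"~{raw_used_tib:.0f} TiB raw for {data_stored_tib} TiB of data")
# -> ~245 TiB raw for 196 TiB of data
```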
Probably because your OSDs are quite unbalanced. What does your 'ceph osd df' look like?
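For context (a simplified sketch, not Ceph's exact accounting): the "Avail" that df reports for a CephFS mount tracks the pool's MAX_AVAIL, which Ceph projects from the *fullest* OSD, since writes must stop once any OSD in the pool reaches the full ratio. One overfull OSD can therefore drag the reported free space far below the raw total. The per-OSD numbers below are hypothetical, chosen only to illustrate the effect:

```python
# Simplified sketch: one overfull OSD limits a pool's projected free space.
# Hypothetical (capacity_tib, used_tib) per OSD -- not from this cluster.
k, m = 4, 1  # EC 4+1, as in the "media" pool
osds = [(10, 6.0), (10, 6.5), (10, 6.2), (10, 9.5), (10, 6.1)]

# Assuming new data spreads evenly, the pool can only grow until the
# fullest OSD fills up; spare room on emptier OSDs goes unusable.
min_free_ratio = min((cap - used) / cap for cap, used in osds)
total_capacity = sum(cap for cap, _ in osds)

projected_raw_avail = min_free_ratio * total_capacity
projected_data_avail = projected_raw_avail * k / (k + m)  # after EC overhead
print(f"raw free on disk: {total_capacity - sum(u for _, u in osds):.1f} TiB, "
      f"but projected usable: ~{projected_data_avail:.1f} TiB")
```

So a cluster can show plenty of raw free space in "rados df" while df reports very little available. Rebalancing (e.g. 'ceph osd reweight-by-utilization') typically recovers the gap.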
Thanks!
D
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com