Hi folks,
My Ceph cluster is used exclusively for CephFS and is mounted as follows:
---
root@node1:~# grep ceph /etc/fstab
node2:6789:/ /ceph ceph auto,_netdev,name=admin,secretfile=/root/ceph.admin.secret
root@node1:~#
---
"rados df" shows me the following:
---
root@node1:~# rados df
POOL_NAME        USED     OBJECTS   CLONES  COPIES     MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS     RD       WR_OPS     WR
cephfs_metadata  197 MiB  49066     0       98132      0                   0        0         9934744    55 GiB   57244243   232 GiB
media            196 TiB  51768595  0       258842975  0                   1        203534    477915206  509 TiB  165167618  292 TiB

total_objects  51817661
total_used     266 TiB
total_avail    135 TiB
total_space    400 TiB
root@node1:~#
---
But "df" on the mounted CephFS volume shows me:
---
root@node1:~# df -h /ceph
Filesystem          Size  Used  Avail  Use%  Mounted on
10.20.30.22:6789:/  207T  196T   11T   95%   /ceph
root@node1:~#
---
And "ceph -s" shows me:
---
  data:
    pools:   2 pools, 1028 pgs
    objects: 51.82 M objects, 196 TiB
    usage:   266 TiB used, 135 TiB / 400 TiB avail
---
"media" is an EC pool with k=4, m=1 (size 5), so I expect 1 TiB of data to consume 1.25 TiB of raw space.
My question is: why does "df" show only 11 TiB free, when "rados df" shows 135 TiB (raw) available?
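For concreteness, here is the arithmetic behind my expectation, as a minimal Python sketch (the TiB figures come from the outputs above; the 1.25x factor comes from the 4+1 profile):
---
# Sketch of the space arithmetic; numbers are from "rados df" and
# "df -h" above, overhead from the 4+1 EC profile on "media".
raw_avail_tib = 135        # "rados df" total_avail
ec_overhead = 5 / 4        # 4 data + 1 coding chunk = 1.25x raw per logical TiB

expected_usable_tib = raw_avail_tib / ec_overhead
print(f"expected usable: {expected_usable_tib:.0f} TiB")  # -> 108 TiB

df_avail_tib = 11          # what "df -h /ceph" reports
print(f"df reports:      {df_avail_tib} TiB")
---
So by my math there should be roughly 108 TiB of usable space left, not 11 TiB.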
Thanks!
D