Hi all,
I have a cluster used exclusively for cephfs (an EC "media" pool, and a standard metadata pool for the cephfs).
"ceph -s" shows me:
---
  data:
    pools:   2 pools, 260 pgs
    objects: 37.18 M objects, 141 TiB
    usage:   177 TiB used, 114 TiB / 291 TiB avail
    pgs:     260 active+clean
---
But 'df' against the mounted cephfs shows me:
---
root@node1:~# df | grep ceph
Filesystem           1K-blocks         Used Available Use% Mounted on
10.20.30.1:6789:/ 151264890880 151116939264 147951616 100% /ceph
root@node1:~# df -h | grep ceph
Filesystem         Size  Used Avail Use% Mounted on
10.20.30.1:6789:/  141T  141T  142G 100% /ceph
root@node1:~#
---
And "rados df" shows me:
---
root@node1:~# rados df
POOL_NAME          USED   OBJECTS CLONES    COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED    RD_OPS      RD   WR_OPS     WR
cephfs_metadata 173 MiB     27239      0     54478                  0       0        0   1102765 9.8 GiB  8810925  43 GiB
media           141 TiB  37152647      0 185763235                  0       0        0 110377842 120 TiB 74835385 183 TiB

total_objects   37179886
total_used      177 TiB
total_avail     114 TiB
total_space     291 TiB
root@node1:~#
---
The amount used that df reports seems accurate (141 TiB at 4+1 EC), but the amount of remaining space is baffling me. Have I hit a limitation due to the number of PGs I created, or is the remaining free space just being misreported by df/cephfs?
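As a rough sanity check on my own numbers (assuming "4+1 EC" means k=4, m=1), the used side lines up, but a naive calculation of the available side gives something nowhere near what df shows:

  # Rough sanity check of the used/avail figures above.
  # Assumes the "media" pool uses an EC profile of k=4, m=1 ("4+1");
  # adjust if your profile differs.
  k, m = 4, 1
  overhead = (k + m) / k                 # raw bytes written per logical byte

  logical_used_tib = 141                 # "media" USED from rados df
  print(logical_used_tib * overhead)     # ~176 TiB raw, matches the 177 TiB total_used

  raw_avail_tib = 114                    # total_avail from rados df / ceph -s
  print(raw_avail_tib / overhead)        # ~91 TiB logical, yet df shows only 142 GiB free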
Thanks!
D