incorrect cephfs size reported by system df

Ceph Jewel 10.2.10, with 3 OSDs, OSD-based replication rules (min 1, max 10)
Linux Kernel 4.14.19

We have an older Ceph cluster that uses only 2/1 replication rules
(size = 2, min_size = 1) for the cephfs_data and metadata pools. The
total raw capacity of the cluster is 5.5T, and the pools themselves
report the expected MAX AVAIL value of about 2.6T in "ceph df".

However, when CephFS is mounted and we run the system "df -h" command
on the client, it reports the full raw capacity of the cluster (5.5T)
as the size of the cephfs filesystem. We were expecting to see the
2.6T value. Any idea why df reports the full cluster capacity instead
of the MAX AVAIL of the cephfs data pool?
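
For comparison, df takes its totals from statfs(2) on the mount
point; a minimal way to dump the raw fields it reads (the mount path
/mnt/cephfs below is just a placeholder for wherever the filesystem
is actually mounted):

    import os

    MOUNT = "/mnt/cephfs"  # placeholder; substitute the real CephFS mount point

    st = os.statvfs(MOUNT)  # the same statfs(2) data that df consults

    size_tib = st.f_blocks * st.f_frsize / 2.0 ** 40   # df's "Size" column
    avail_tib = st.f_bavail * st.f_frsize / 2.0 ** 40  # df's "Avail" column

    print("size : %.1f TiB" % size_tib)
    print("avail: %.1f TiB" % avail_tib)

The "size" printed here is the same number that "df -h" formats as
the filesystem size.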

thanks!


