Re: incorrect cephfs size reported by system df

I believe it reports correctly in Luminous (I haven't tested Mimic yet).
The issue is that 'ceph df' reports the expected value, but the system
/usr/bin/df reports the entire raw cluster size, which seems very wrong
and gives the user the wrong impression.  Our cephfs consists of only
the data and metadata pools.
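
For reference, this is roughly what we are comparing on our cluster (the
mount point /mnt/cephfs is just an example path; the numbers are the ones
from this thread):

    # pool-level accounting; MAX AVAIL for cephfs_data shows roughly 2.6T here
    ceph df

    # statfs-based view through the kernel client; Size shows the full 5.5T raw capacity
    df -h /mnt/cephfs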

On Fri, Jan 4, 2019 at 3:21 PM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>
> I believe this is expected behavior. There was a recent email (on
> ceph-users?) pointing at a particular commit which improved it
> somewhat — CephFS will report the pool's available space if only one
> pool is assigned to the FS. But I think it falls back to the total raw
> space if there are multiple pools, since they might have different
> values and it can't reconcile them in the general case. (If two pools
> have 200TB available, is that because they are the same 200TB? Or are
> they on different servers and the total space is 400TB? But how do you
> report that, since it's not available for all folders? etc.)
> And Jewel may be too old for even that limited detection; not sure.
> -Greg
>
> On Thu, Jan 3, 2019 at 7:12 AM Wyllys Ingersoll
> <wyllys.ingersoll@xxxxxxxxxxxxxx> wrote:
> >
> > Ceph Jewel 10.2.10, with 3 OSDs, OSD-based replication rules (min 1, max 10)
> > Linux Kernel 4.14.19
> >
> > We have an older ceph cluster which uses only 2/1 replication rules
> > (min_size = 1, size = 2) for the cephfs_data and metadata pools. The
> > total raw capacity of the cluster is 5.5T.  The pools themselves
> > report the expected "MAX AVAIL" value of about 2.6T when using "ceph
> > df".
> >
> > However, when cephfs is mounted and we run the system 'df -h'
> > command, it reports the size of the cephfs filesystem as the full
> > raw capacity of the cluster (5.5T). We are expecting to see the 2.6T
> > value; any idea why it reports the full cluster capacity instead of
> > the max available for the cephfs data pool?
> >
> > thanks!
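
Following up on Greg's point above: a quick way to confirm how many data
pools are attached to the filesystem, and what replication factor feeds
into "MAX AVAIL", is something like the following (the filesystem name
"cephfs" is assumed; the pool names are the ones from this thread):

    # list filesystems with their metadata and data pools
    ceph fs ls
    # -> name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

    # replication factor of the data pool; size 2 is why MAX AVAIL is
    # roughly half of the raw cluster capacity
    ceph osd pool get cephfs_data size
    # -> size: 2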



