Re: Cephfs free space vs ceph df free space disparity

On 27.05.19 09:08, Stefan Kooman wrote:
> Quoting Robert Ruge (robert.ruge@xxxxxxxxxxxxx):
>> Ceph newbie question.
>>
>> I have a disparity between the free space that my cephfs file system
>> is showing and what ceph df is showing.  As you can see below my
>> cephfs file system says there is 9.5TB free however ceph df says there
>> is 186TB which with replication size 3 should equate to 62TB free
>> space.  I guess the basic question is how can I get cephfs to see and
>> use all of the available space?  I recently changed my number of pg's
>> on the cephfs_data pool from 2048 to 4096 and this gave me another 8TB
>> so do I keep increasing the number of pg's or is there something else
>> that I am missing? I have only been running ceph for ~6 months so I'm
>> relatively new to it all and not being able to use all of the space is
>> just plain bugging me.
> 
> My guess is that you have a lot of small files in your CephFS, is that
> right? And are your OSDs backed by HDD or SSD/NVMe?
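> 
> A quick way to check is to compare per-pool usage with object counts
> (the exact column names differ a bit between releases):
> 
>     ceph df detail    # per-pool usage and OBJECTS
>     rados df          # similar, with per-pool object counts
> 
> Dividing the used bytes by the object count gives a rough average
> object size; if that is well below the allocation size discussed
> below, allocation overhead is a likely culprit.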
> 
> Mohamad Gebai gave a talk about this at Cephalocon 2019:
> https://static.sched.com/hosted_files/cephalocon2019/d2/cephalocon-2019-mohamad-gebai.pdf
> for the slides and the recording:
> https://www.youtube.com/watch?v=26FbUEbiUrw&list=PLrBUGiINAakNCnQUosh63LpHbf84vegNu&index=29&t=0s
> 
> TL;DR: there are two relevant settings, bluestore_min_alloc_size_ssd
> (16K by default) and bluestore_min_alloc_size_hdd (64K by default).
> With lots of small objects this can add up to *a lot* of overhead. You
> can change both to 4K:
> 
> bluestore min alloc size ssd = 4096
> bluestore min alloc size hdd = 4096
> 
> You will have to rebuild _all_ of your OSDs though.
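> 
> Rough numbers to illustrate (assuming 3x replication on HDD OSDs with
> the default 64K): a 4 KiB file still occupies one 64 KiB allocation
> unit per replica, i.e. 192 KiB of raw space for 12 KiB of replicated
> data. Ten million such files would consume roughly 1.8 TiB of raw
> capacity for only ~38 GiB of actual data. You can check the currently
> configured value via the admin socket (note it is only applied when an
> OSD is created), e.g. for osd.0:
> 
>     ceph daemon osd.0 config get bluestore_min_alloc_size_hdd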
> 
> Here is another thread about this:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/thread.html#24801
> 
> Gr. Stefan

Hi Robert,

Some more questions: are all your OSDs of equal size? If so, have you
enabled balancing for your cluster (see [0])? The MAX AVAIL that ceph
df reports per pool (and hence the free space CephFS shows) is derived
from the fullest OSD the pool maps to, so an uneven PG distribution can
hide a lot of otherwise free space; the commands below show how to turn
the balancer on.
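
If it is not enabled yet, it boils down to something like the following
(a sketch rather than a recipe; upmap mode additionally requires all
clients to be Luminous or newer):

    ceph osd set-require-min-compat-client luminous
    ceph mgr module enable balancer
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status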

You might also be interested in this thread [1].

Peter

[0] http://docs.ceph.com/docs/master/rados/operations/balancer
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030765.html
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


