Re: Cephfs free space vs ceph df free space disparity

Thanks for everyone's suggestions, which have helped me fix the free space problem.
The newbie mistake was not knowing anything about rebalancing. After turning on the balancer in upmap mode I have gone from 7TB free to 50TB free on my cephfs. Given that the object store reports 180TB free and I'm using 3x replication, the theoretical maximum is 60TB free, so I'm close and pretty happy with upmap.
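
For anyone else hitting this, the setup is roughly the following (a sketch from memory; double-check the balancer documentation for your release, and note that upmap mode needs all clients to be Luminous or newer):

    ceph osd set-require-min-compat-client luminous   # upmap requires luminous+ clients
    ceph balancer mode upmap
    ceph balancer on
    ceph balancer status                               # check mode and progress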

The links provided were a great help.

I also need to look at bluestore_min_alloc_size_hdd for a second cluster I am building, which shows:
POOLS:
    POOL                ID     STORED     OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data          1     36 TiB     107.94M     122 TiB     90.94       4.1 TiB
    cephfs_metadata      2     60 GiB       6.29M      61 GiB      0.49       4.1 TiB

The gap between stored and used TiB would indicate that this pool holds many small files, which it does, and I presume it could be helped by a smaller allocation size. Is that correct?
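
My rough back-of-envelope, assuming 3x replication and that the gap is mostly allocation padding:

    36 TiB stored x 3 replicas      = ~108 TiB expected raw usage
    122 TiB used - 108 TiB          = ~14 TiB overhead (~4.7 TiB per replica)
    ~4.7 TiB / 107.94M objects      = ~45 KiB average padding per object

That average would fit a workload where most files are well under the 64 KiB HDD default.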

Does anyone also have experience with running compression on the cephfs data pool?
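
For context, what I had in mind is something along these lines (untested on my side; the algorithm and mode are just examples, and BlueStore compression only applies to data written after it is enabled):

    ceph osd pool set cephfs_data compression_algorithm snappy
    ceph osd pool set cephfs_data compression_mode aggressive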

Regards
Robert Ruge


-----Original Message-----
From: Peter Wienemann <wienemann@xxxxxxxxxxxxxxxxxx>
Sent: Tuesday, 28 May 2019 9:53 PM
To: Robert Ruge <robert.ruge@xxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Cephfs free space vs ceph df free space disparity

On 27.05.19 09:08, Stefan Kooman wrote:
> Quoting Robert Ruge (robert.ruge@xxxxxxxxxxxxx):
>> Ceph newbie question.
>>
>> I have a disparity between the free space that my cephfs file system
>> is showing and what ceph df is showing.  As you can see below my
>> cephfs file system says there is 9.5TB free however ceph df says
>> there is 186TB which with replication size 3 should equate to 62TB
>> free space.  I guess the basic question is how can I get cephfs to
>> see and use all of the available space?  I recently changed my number
>> of pg's on the cephfs_data pool from 2048 to 4096 and this gave me
>> another 8TB so do I keep increasing the number of pg's or is there
>> something else that I am missing? I have only been running ceph for
>> ~6 months so I'm relatively new to it all and not being able to use
>> all of the space is just plain bugging me.
>
> My guess here is you have a lot of small files in your cephfs, is that
> right? Do you have HDD or SSD/NVMe?
>
> Mohamad Gebai gave a talk about this at Cephalocon 2019. Slides:
> https://static.sched.com/hosted_files/cephalocon2019/d2/cephalocon-2019-mohamad-gebai.pdf
> Recording:
> https://www.youtube.com/watch?v=26FbUEbiUrw&list=PLrBUGiINAakNCnQUosh63LpHbf84vegNu&index=29&t=0s
>
> TL;DR: there are two settings, bluestore_min_alloc_size_ssd (default
> 16K for SSD) and bluestore_min_alloc_size_hdd (default 64K for HDD).
> With lots of small objects this can add up to *a lot* of overhead.
> You can change both to 4K:
>
> bluestore min alloc size ssd = 4096
> bluestore min alloc size hdd = 4096
>
> You will have to rebuild _all_ of your OSDs though.
>
> Here is another thread about this:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/thread.html#24801
>
> Gr. Stefan

Hi Robert,

Some more questions: are all your OSDs of equal size? If yes, have you enabled balancing for your cluster (see [0])?

You might also be interested in this thread [1].

Peter

[0] http://docs.ceph.com/docs/master/rados/operations/balancer
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030765.html



