Re: cephfs does not seem to properly free up space

To delete these orphan objects:

List all objects in the cephfs data pool. Object names are of the form [inode
number in hex].[offset in hex]. If an object has 'offset > 0' but there is no
object with 'offset == 0' and the same inode number, it is an orphan object.
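For example, listing the pool with 'rados -p <data pool> ls' might show names
like these (the inode numbers are made up for illustration):

  10000000001.00000000
  10000000001.00000001
  10000000002.00000004

Here 10000000002.00000004 is an orphan candidate, because there is no
10000000002.00000000 object for that inode.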

It's not difficult to write a script that finds all orphan objects and
deletes them; a rough sketch follows below. If there are multiple data pools,
repeat the above steps for each data pool.
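For illustration, here is a rough, untested sketch of such a script. The pool
name 'cephfs_data' and the --delete flag are assumptions; without --delete it
only prints candidates:

#!/usr/bin/env python
# Sketch only: find objects in a CephFS data pool whose inode has no
# 'offset == 0' object (orphans) and optionally delete them.
import subprocess
import sys
from collections import defaultdict

POOL = "cephfs_data"             # assumption: change to your data pool name
DELETE = "--delete" in sys.argv  # without --delete this is a dry run

# Object names in the data pool look like <inode hex>.<offset hex>.
out = subprocess.check_output(["rados", "-p", POOL, "ls"]).decode()

objects_by_inode = defaultdict(list)
for name in out.splitlines():
    try:
        ino, off = name.rsplit(".", 1)
        objects_by_inode[ino].append((int(off, 16), name))
    except ValueError:
        continue  # skip names that don't match <inode>.<offset>

for ino, objs in objects_by_inode.items():
    if 0 in (off for off, _ in objs):
        continue  # the inode still has its first object, not an orphan
    for off, name in objs:
        if DELETE:
            subprocess.check_call(["rados", "-p", POOL, "rm", name])
            print("removed orphan object " + name)
        else:
            print("orphan object " + name + " (dry run)")

As John noted elsewhere in this thread, clients can hold references to
unlinked inodes, so it is safer to run this while clients are unmounted (or
at least idle) and to check the dry-run output before deleting anything.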

Regards
Yan, Zheng


On Wed, Apr 20, 2016 at 4:20 PM, Simion Rad <Simion.Rad@xxxxxxxxx> wrote:
> Yes, we do use customized layout settings for most of our folders.
> We have some long-running backup jobs which require high-throughput writes in order to finish in a reasonable amount of time.
> ________________________________________
> From: Florent B <florent@xxxxxxxxxxx>
> Sent: Wednesday, April 20, 2016 11:07
> To: Yan, Zheng; Simion Rad
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  cephfs does not seem to properly free up space
>
> That seems to be the bug we have had for years now with CephFS. We have always
> used customized layouts.
>
> On 04/20/2016 02:20 AM, Yan, Zheng wrote:
>> Have you ever used a fancy layout?
>>
>> see http://tracker.ceph.com/issues/15050
>>
>>
>> On Wed, Apr 20, 2016 at 3:17 AM, Simion Rad <Simion.Rad@xxxxxxxxx> wrote:
>>> Mounting and unmounting doesn't change anything.
>>> The used space reported by the df command is nearly the same as the values returned by the ceph -s command.
>>>
>>> Example 1, df output:
>>> ceph-fuse       334T  134T  200T  41% /cephfs
>>>
>>> Example 2, ceph -s output:
>>>  health HEALTH_WARN
>>>             mds0: Many clients (22) failing to respond to cache pressure
>>>             noscrub,nodeep-scrub,sortbitwise flag(s) set
>>>      monmap e1: 5 mons at {r730-12=10.103.213.12:6789/0,r730-4=10.103.213.4:6789/0,r730-5=
>>> 10.103.213.5:6789/0,r730-8=10.103.213.8:6789/0,r730-9=10.103.213.9:6789/0}
>>>             election epoch 132, quorum 0,1,2,3,4 r730-4,r730-5,r730-8,r730-9,r730-12
>>>      mdsmap e14637: 1/1/1 up {0=ceph2-mds-2=up:active}
>>>      osdmap e6549: 68 osds: 68 up, 68 in
>>>             flags noscrub,nodeep-scrub,sortbitwise
>>>       pgmap v4394151: 896 pgs, 3 pools, 54569 GB data, 56582 kobjects
>>>             133 TB used, 199 TB / 333 TB avail
>>>                  896 active+clean
>>>   client io 47395 B/s rd, 1979 kB/s wr, 388 op/s
>>>
>>>
>>> ________________________________________
>>> From: John Spray <jspray@xxxxxxxxxx>
>>> Sent: Tuesday, April 19, 2016 22:04
>>> To: Simion Rad
>>> Cc: ceph-users@xxxxxxxxxxxxxx
>>> Subject: Re:  cephfs does not seem to properly free up space
>>>
>>> On Tue, Apr 19, 2016 at 2:40 PM, Simion Rad <Simion.Rad@xxxxxxxxx> wrote:
>>>> Hello,
>>>>
>>>>
>>>> At my workplace we have a production cephfs cluster (334 TB on 60 OSDs)
>>>> which was recently upgraded from Infernalis 9.2.0 to Infernalis 9.2.1 on
>>>> Ubuntu 14.04.3 (linux 3.19.0-33).
>>>>
>>>> It seems that cephfs still doesn't free up space at all, or at least that's
>>>> what the df command tells us.
>>> Hmm, historically there were bugs with the purging code, but I thought
>>> we fixed them before Infernalis.
>>>
>>> Does the space get freed after you unmount the client?  Some issues
>>> have involved clients holding onto references to unlinked inodes.
>>>
>>> John
>>>
>>>> Is there a better way of getting df-like output with another command for
>>>> cephfs?
>>>>
>>>>
>>>> Thank you,
>>>>
>>>> Marius Rad
>>>>
>>>> SysAdmin
>>>>
>>>> www.propertyshark.com
>>>>
>>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


