Re: Force cephfs delayed deletion

From: John Spray <jspray@xxxxxxxxxx>
Date: Wednesday, August 1, 2018 at 4:02 AM
To: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
Cc: "aryabov@xxxxxxxxxxxxxx" <aryabov@xxxxxxxxxxxxxx>, "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Force cephfs delayed deletion

 

[External Email]


On Tue, Jul 31, 2018 at 11:43 PM Kamble, Nitin A <Nitin.Kamble@xxxxxxxxxxxx> wrote:

Hi John,

 

I am running the Ceph Luminous 12.2.1 release on the storage nodes, with the v4.4.114 kernel on the cephfs clients.

 

3 client nodes are running 3 instances of a test program.

The test program does the following repeatedly in a loop:

  • sequentially write a 256GB file on cephfs
  • delete the file
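The loop can be sketched in shell roughly like this (a scaled-down sketch: the file path and size below are assumptions, and the real test wrote a 256GB file per iteration):

```shell
# Sketch of the reproduction loop. TESTFILE and the 4 MB size are
# illustrative stand-ins; the real test wrote a 256GB file on cephfs.
TESTFILE="${TESTFILE:-./testfile}"
for i in 1 2 3; do
    # sequential write (scaled down for illustration)
    dd if=/dev/zero of="$TESTFILE" bs=1M count=4 2>/dev/null
    # delete the file immediately after the write completes
    rm -f "$TESTFILE"
done
echo "loop finished"
```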

 

‘ceph df’ shows that the space is not freed from cephfs after a delete, and cephfs space utilization (number of objects, space used, and % utilization) keeps growing continuously.

 

I double checked, and no process is holding an open handle to the closed files.
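One way to double-check this without `lsof` is to scan /proc for file descriptors that still point at unlinked files under the mount (the mount point below is an assumption):

```shell
# Look for processes holding unlinked files open under the cephfs mount.
# /mnt/cephfs is an assumption; substitute the real mount point.
MOUNT="${MOUNT:-/mnt/cephfs}"
found=0
for fd in /proc/[0-9]*/fd/*; do
    target=$(readlink "$fd" 2>/dev/null) || continue
    case "$target" in
        # the kernel appends " (deleted)" to unlinked-but-open targets
        "$MOUNT"/*" (deleted)") echo "open deleted file: $fd -> $target"; found=1 ;;
    esac
done
[ "$found" -eq 0 ] && echo "no open deleted files under $MOUNT"
```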

 

When the test program is stopped, the writing workload stops and then the cephfs space utilization starts going down as expected.

 

It looks like the cephfs write load does not leave enough opportunity to actually purge the objects of the deleted files. The behavior is consistent and easy to reproduce.

 

Deletes are not prioritised ahead of writes, and we probably wouldn't want them to be: client workloads are in general a higher priority than purging the objects from deleted files.

 

This only becomes an issue when a filesystem is almost completely full: at that point it would be nice to block the clients on the purging, rather than give them ENOSPC.

 

John

 

 

 

I see your approach. Currently, multiple file deletes complete in a loop before any of the associated Ceph objects are purged. When the space is almost full, there may not be any further purge requests, and the earlier purges will still be on hold because of the current write pressure. So that approach may not work as expected.

 

I understand some would like to deprioritize purges relative to writes, but for those who want to prioritize purges over writes, there should be a way to do it.

 

Thanks,

Nitin

 

I tried playing with these advanced MDS config parameters:

  • mds_max_purge_files
  • mds_max_purge_ops
  • mds_max_purge_ops_per_pg
  • mds_purge_queue_busy_flush_period
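For reference, these can be changed at runtime with `ceph tell` (the values below are purely illustrative examples, not tuning recommendations):

```shell
# Illustrative only -- raise the MDS purge throttles at runtime.
# The numeric values are arbitrary examples, not advice.
ceph tell mds.* injectargs \
    '--mds_max_purge_files 256 --mds_max_purge_ops 32768 --mds_max_purge_ops_per_pg 2'
```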

 

But they are not helping with this workload.

 

Is this a known issue? And is there a workaround that gives the object purging operations more priority?

 

Thanks in advance,

Nitin

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Alexander Ryabov <aryabov@xxxxxxxxxxxxxx>
Date: Thursday, July 19, 2018 at 8:09 AM
To: John Spray <jspray@xxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Force cephfs delayed deletion

 

>Also, since I see this is a log directory, check that you don't have some processes that are holding their log files open even after they're unlinked.

Thank you very much - that was the case.

lsof /mnt/logs | grep deleted

 

After dealing with those processes, the space was reclaimed in about 2-3 minutes.

 

 


From: John Spray <jspray@xxxxxxxxxx>
Sent: Thursday, July 19, 2018 17:24
To: Alexander Ryabov
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] Force cephfs delayed deletion

 

On Thu, Jul 19, 2018 at 1:58 PM Alexander Ryabov <aryabov@xxxxxxxxxxxxxx> wrote:

Hello,

I see that free space is not released after files are removed on CephFS.

I'm using Luminous with replica=3, without any snapshots etc., and with default settings.

 

From client side:

$ du -sh /mnt/logs/

4.1G /mnt/logs/

$ df -h /mnt/logs/

Filesystem   Size  Used Avail Use% Mounted on

h1,h2:/logs  125G   87G   39G  70% /mnt/logs

 

These stats are after a couple of large files were removed in the /mnt/logs dir, but that only dropped the Used space a little.

 

Check what version of the client you're using -- some older clients had bugs that would hold references to deleted files and prevent them from being purged.  If you find that the space starts getting freed when you unmount the client, this is likely to be because of a client bug.
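A quick sketch of that check (the mount path is an assumption; the remount test only applies if you can afford to unmount):

```shell
# Check the kernel client version on each client node; older kernel
# clients had bugs that kept references to deleted files alive.
uname -r
# For a FUSE client, check the userspace client version instead:
#   ceph-fuse --version
# If space starts freeing only after an unmount, suspect a client bug:
#   umount /mnt/logs && mount -t ceph h1,h2:/logs /mnt/logs
```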

 

Also, since I see this is a log directory, check that you don't have some processes that are holding their log files open even after they're unlinked.

 

John

 

 

 

Doing a 'sync' also changes nothing.

 

From server side:

# ceph  df

GLOBAL:

    SIZE     AVAIL      RAW USED     %RAW USED

    124G     39226M       88723M         69.34

POOLS:

    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS

    cephfs_data         1      28804M     76.80         8703M        7256

    cephfs_metadata     2        236M      2.65         8703M         101
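(As a quick sanity check on these numbers: per-pool USED is logical data, while RAW USED counts all replicas, so with replica=3 the figures roughly line up.)

```shell
# cephfs_data USED (28804 MB, from the output above) times 3 replicas:
echo $(( 28804 * 3 ))
# prints 86412 -- close to the 88723 MB RAW USED; the remainder is
# the cephfs_metadata replicas plus overhead.
```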

 

Why is there such a large difference between 'du' and 'USED'?

I've found that it could be due to 'delayed delete' http://docs.ceph.com/docs/luminous/dev/delayed-delete/

And previously, it seems, it could be tuned by adjusting the "mds max purge files" and "mds max purge ops" options:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013679.html

But those options no longer appear in http://docs.ceph.com/docs/luminous/cephfs/mds-config-ref/

 

So the question is: how can deleted data be purged and the free space reclaimed?

Thank you.

 

______________________________________________

ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



