From: John Spray <jspray@xxxxxxxxxx>
Date: Wednesday, August 1, 2018 at 4:02 AM
To: "Kamble, Nitin A" <Nitin.Kamble@xxxxxxxxxxxx>
Cc: "aryabov@xxxxxxxxxxxxxx" <aryabov@xxxxxxxxxxxxxx>, "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Force cephfs delayed deletion
Hi John,
I am running ceph Luminous 12.2.1 release on the storage nodes with v4.4.114 kernel on the cephfs clients.
3 client nodes are running 3 instances of a test program.
The test program is doing this repeatedly in a loop:
- sequentially write a 256GB file on cephfs
- delete the file
‘ceph df’ shows that after the delete the space is not freed from cephfs, and the cephfs space utilization (number of objects, space used and %
utilization) keeps growing continuously.
I double checked, and no process is holding an open handle to the closed files.
When the test program is stopped, the writing workload stops and then the cephfs space utilization starts going down as expected.
Looks like the cephfs write load is not giving enough opportunity to actually perform the file delete operations from the clients. The behavior is
consistent and easy to reproduce.
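For reference, a minimal sketch of what the test loop amounts to (the mount path, file name and dd invocation below are illustrative, not the actual test program):

    #!/bin/sh
    # Repeatedly write a ~256GB file sequentially on cephfs, then delete it.
    # /mnt/cephfs/testfile is an assumed path, not the one used by the real test.
    while true; do
        dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=262144
        rm /mnt/cephfs/testfile
    done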
Deletes are not prioritised ahead of writes, and we probably wouldn't want them to be: client workloads are in general a higher priority than purging the objects from deleted files.
This only becomes an issue if a filesystem is almost completely full: at that point it would be nice to block the clients on the purging, rather than give them ENOSPC.
I see your approach. Currently, multiple file deletes are completed in a loop before purging any of the associated ceph objects. When the space is almost full, there may not be any more purge requests, and the
earlier purges will still be on hold because of the current write pressure. So, that approach may not work as expected.
I understand some would like to deprioritize purges relative to writes, but if one wants to prioritize purges over writes, there should be a way to do it.
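(For what it's worth, the purge backlog can be watched through the MDS admin socket to see whether purges make any progress under write load; mds.a below is a placeholder daemon name, and the purge_queue counter names may vary between builds:)

    # run on the host of the active MDS
    ceph daemon mds.a perf dump purge_queue
    # pq_executing / pq_executed should keep moving if purging is progressing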
Thanks,
Nitin
I tried playing with these advanced MDS config parameters:
- mds_max_purge_files
- mds_max_purge_ops
- mds_max_purge_ops_per_pg
- mds_purge_queue_busy_flush_period
But it is not helping with the workload.
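(For the record, a rough sketch of how such MDS settings can be adjusted at runtime; the values and the mds.a name are only illustrative, not a recommendation:)

    ceph tell mds.a injectargs '--mds_max_purge_files 256 --mds_max_purge_ops 32768'
    # or persistently in ceph.conf on the MDS host, under [mds]:
    #   mds_max_purge_files = 256
    #   mds_max_purge_ops = 32768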
Is this a known issue? And is there a workaround to give more priority to the objects purging operations?
Thanks in advance,
Nitin
>Also, since I see this is a log directory, check that you don't have some processes that are holding their log files open even after they're unlinked.
Thank you very much - that was the case.
lsof /mnt/logs | grep deleted
After dealing with these, space was reclaimed in about 2-3min.
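(For anyone hitting the same thing: if your lsof supports it, +L1 lists files that are unlinked but still held open, which is exactly this case; /mnt/logs is the mount from this thread:)

    lsof +L1 /mnt/logs    # deleted-but-open files; restarting the owning processes frees the space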
Hello,
I see that free space is not released after files are removed on CephFS.
I'm using Luminous with replica=3 without any snapshots etc and with default settings.
From client side:
Filesystem Size Used Avail Use% Mounted on
h1,h2:/logs 125G 87G 39G 70% /mnt/logs
These stats are after a couple of large files were removed in the /mnt/logs dir, but that only dropped Used space a little.
Check what version of the client you're using -- some older clients had bugs that would hold references to deleted files and prevent them from being
purged. If you find that the space starts getting freed when you unmount the client, this is likely to be because of a client bug.
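(One way to check, assuming access to the admin socket of the active MDS; mds.a is a placeholder name:)

    ceph daemon mds.a session ls    # client_metadata shows each client's kernel / ceph version
    ceph features                   # summarizes the release level of connected clients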
Also, since I see this is a log directory, check that you don't have some processes that are holding their log files open even after they're unlinked.