Re: cephfs slow delete

On Thu, Oct 13, 2016 at 12:44 PM, Heller, Chris <cheller@xxxxxxxxxx> wrote:
> I have a directory I’ve been trying to remove from cephfs (via
> cephfs-hadoop), the directory is a few hundred gigabytes in size and
> contains a few million files, but not in a single sub directory. I started
> the delete yesterday at around 6:30 EST, and it’s still progressing. I can
> see from (ceph osd df) that the overall data usage on my cluster is
> decreasing, but at the rate it's going it will be a month before the entire
> sub directory is gone. Is a recursive delete of a directory known to be a
> slow operation in CephFS or have I hit upon some bad configuration? What
> steps can I take to better debug this scenario?

Is it the actual unlink operation taking a long time, or just the
reduction in used space? Unlinks unfortunately require a round trip to
the MDS, but you should be able to speed things up at least somewhat
by issuing them in parallel on different directories.
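For what it's worth, a rough sketch of what I mean (Python, assuming the
tree is reachable over an ordinary kernel or FUSE mount rather than through
cephfs-hadoop; the path and worker count below are hypothetical):

    import shutil
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    # Hypothetical mount point and directory; substitute your own.
    ROOT = Path("/mnt/cephfs/dir_to_delete")

    def remove_tree(subdir: Path) -> str:
        # Each worker drives its own stream of unlink round trips to the MDS.
        shutil.rmtree(subdir, ignore_errors=True)
        return str(subdir)

    subdirs = [p for p in ROOT.iterdir() if p.is_dir()]
    with ThreadPoolExecutor(max_workers=8) as pool:
        for done in pool.map(remove_tree, subdirs):
            print("removed", done)

The worker count is just a guess; the MDS is still the bottleneck, so past
some point more threads won't buy you anything.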

If it's the used space, you can let the MDS issue more RADOS delete
ops by adjusting the "mds max purge files" and "mds max purge ops"
config values.
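For example (values purely illustrative, not recommendations; tune them to
what your OSDs can absorb), in the [mds] section of ceph.conf:

    [mds]
    mds max purge files = 256
    mds max purge ops = 32768

or injected at runtime on whichever MDS is active (mds.0 here is just a
placeholder name):

    ceph tell mds.0 injectargs '--mds_max_purge_files 256 --mds_max_purge_ops 32768'

Keep an eye on OSD load when raising these, since the whole point of the
throttle is to keep purge traffic from swamping client I/O.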
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



