Re: cephfs rm -rf on directory of 160TB /40M files

On Wed, Apr 6, 2016 at 2:42 PM, Scottix <scottix@xxxxxxxxx> wrote:
> I have been running some speed tests of POSIX file operations and I noticed
> that even just listing files can take a while compared to an attached HDD. I
> am wondering whether there is a reason it takes so long just to list files.
>
> Here is the test I ran
>
> time for i in {1..100000}; do touch $i; done
>
> Internal HDD:
> real 4m37.492s
> user 0m18.125s
> sys 1m5.040s
>
> Ceph Dir
> real 12m30.059s
> user 0m16.749s
> sys 0m53.451s
>
> ~2.7x faster on the HDD
>
> *I am actually OK with this, but it would be nice if it were quicker.
>
> Listing the directory also takes a lot longer than it does on the attached
> HDD:
>
> time ls -1
>
> Internal HDD
> real 0m2.112s
> user 0m0.560s
> sys 0m0.440s
>
> Ceph Dir
> real 3m35.982s
> user 0m2.788s
> sys 0m4.580s
>
> ~100x faster on the HDD

This might be a bad interaction between your MDS cache size and the
size of the directory. The subsequent run is a lot faster because
after running an "ls" once you've got most of the information you need
for it cached locally on the client (but perhaps not all of it,
depending on various things).
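If the directory has outgrown the MDS cache, one thing worth trying is
raising the cache limit so all of its dentries fit. A minimal ceph.conf
sketch, assuming the hammer-era option name; the figure is illustrative,
not a recommendation from this thread:

    [mds]
    # number of inodes to keep in the MDS cache (default 100000);
    # sized here to cover a large directory with some headroom
    mds cache size = 400000

A larger cache means more MDS memory, so size it to the biggest
directory you actually need to handle.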

>
> *I understand some of that time goes to rendering the output, which is what
> makes the following test really odd:
>
> time ls -1 > /dev/null
>
> Internal HDD
> real 0m0.367s
> user 0m0.324s
> sys 0m0.040s
>
> Ceph Dir
> real 0m2.807s
> user 0m0.128s
> sys 0m0.052s
>
> ~7.6x faster on the HDD
>
> My guess is that the performance issue is with the batch requests, as you
> stated. So I am wondering whether deleting the 40M files is slow not just
> because of the unlinks themselves, but because merely traversing that many
> files takes a while.
>
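
One way to check that hypothesis is to time a pure traversal separately
from the deletion. A rough sketch (the path is illustrative, not from
your setup):

    # Walk the tree and count entries -- readdir/lookup traffic only
    time find /mnt/cephfs/bigdir -mindepth 1 | wc -l

    # The actual delete -- the same traversal plus an unlink per file
    time rm -rf /mnt/cephfs/bigdir

If the bare walk already accounts for most of the wall time, the
bottleneck is metadata traversal against the MDS rather than the unlink
operations themselves.
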
> I am running this on 0.94.6 with the ceph-fuse client and this in my
> config:
>
> fuse multithreaded = false
>
> since the multithreaded client crashes in hammer.

Oh, that's probably hurting things in various ways. The fix for
http://tracker.ceph.com/issues/13729 ended up getting into the hammer
branch after all and should go out whenever there's another stable
release, FYI.
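
(Once on a release that includes that fix, the workaround can presumably
be dropped so ceph-fuse runs multithreaded again; assuming the option
name from your config, that just means removing the line or setting

    fuse multithreaded = true

which is the default.)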
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


