Re: CephFS - Small file - single thread - read performance.

On Fri, Jan 18, 2019 at 2:12 PM <jesper@xxxxxxxx> wrote:
Hi.

We have the intention of using CephFS for some of our shares, which we'd
like to spool to tape as part of our normal backup schedule. CephFS works
nicely for large files, but for "small" ones (< 0.1 MB) there seems to be
an overhead of 20-40 ms per file. I tested like this:

root@abe:/nfs/home/jk# time cat /ceph/cluster/rsyncbackups/13kbfile > /dev/null

real    0m0.034s
user    0m0.001s
sys     0m0.000s

And from the local page cache right after:
root@abe:/nfs/home/jk# time cat /ceph/cluster/rsyncbackups/13kbfile > /dev/null

real    0m0.002s
user    0m0.002s
sys     0m0.000s

Giving a ~20 ms overhead on a single file.
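To average this over many files rather than timing a single one, something
like this works (the path and size cutoff are just illustrative):

# time a batch of small files in one go
time find /ceph/cluster/rsyncbackups -type f -size -100k -print0 | xargs -0 cat > /dev/null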

This is about 3x higher than on our local filesystems (XFS) on the same
spindles.

CephFS metadata is on SSDs - everything else is on big, slow HDDs (in both
cases).

Is this what everyone else sees?

Pretty much. Reading a file from a pool of Filestore spinners:

# time cat 13kb > /dev/null

real    0m0.013s
user    0m0.000s
sys     0m0.003s

That's after dropping the caches on the client; however, the file would still have been in the page cache on the OSD nodes, as I had just created it. If the file was coming straight off the spinners, I'd expect to see something closer to your time.
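To take the OSD-side page cache out of the picture you'd have to drop caches on the OSD hosts as well, e.g.:

# on each OSD node: flush dirty pages, then drop page cache, dentries and inodes
sync; echo 3 > /proc/sys/vm/drop_caches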

I guess if you wanted to improve the latency you would be looking at the usual stuff, e.g. (off the top of my head):

- Faster network links / tuning your network
- Turning down Ceph debug logging (sketch below)
- Trying a different striping layout on the dirs with the small files (unlikely to have much effect; sketch below)
- If you're using the FUSE mount, try the kernel mount, or maybe vice versa (sketch below)
- Playing with mount options
- Tuning CPU on the MDS node
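A few sketches for the above. Debug logging can be turned down in ceph.conf (these values are a starting point, not a recommendation):

[global]
    # silence per-message debug output on the hot path
    debug ms = 0/0
    debug osd = 0/0
    debug mds = 0/0

A directory layout is set via virtual xattrs and only applies to files created afterwards (the 64K figures are illustrative):

# smaller stripe unit and object size for a dir full of tiny files
setfattr -n ceph.dir.layout.stripe_unit -v 65536 /ceph/cluster/rsyncbackups
setfattr -n ceph.dir.layout.object_size -v 65536 /ceph/cluster/rsyncbackups
getfattr -n ceph.dir.layout /ceph/cluster/rsyncbackups

And the kernel mount, if you're currently on ceph-fuse (monitor address and secret file path are placeholders):

mount -t ceph mon1:6789:/ /ceph -o name=admin,secretfile=/etc/ceph/admin.secret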

Still, even with all of that, it's unlikely you'll get to local-filesystem performance; as Burkhard says, you have the locking overhead. You'll probably need to look at getting more parallelism going in your rsyncs, e.g. something like the sketch below.
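A rough way to fan the copy out (paths and job count are illustrative):

# one rsync per top-level directory, 8 in flight at a time
ls /ceph/cluster/rsyncbackups | xargs -I{} -P8 \
    rsync -a /ceph/cluster/rsyncbackups/{}/ /backup/{}/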

 

Thanks

--
Jesper

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com