Hi.

We intend to use CephFS for some of our shares, which we'd like to
spool to tape as part of our normal backup schedule. CephFS works
nicely for large files, but for "small" files (< 0.1 MB) there seems
to be an overhead of 20-40 ms per file. I tested like this:

root@abe:/nfs/home/jk# time cat /ceph/cluster/rsyncbackups/13kbfile > /dev/null

real    0m0.034s
user    0m0.001s
sys     0m0.000s

And from the local page cache right after:

root@abe:/nfs/home/jk# time cat /ceph/cluster/rsyncbackups/13kbfile > /dev/null

real    0m0.002s
user    0m0.002s
sys     0m0.000s

That gives a ~30 ms overhead on a single cold read, about 3x higher
than on our local filesystems (xfs) backed by the same spindles.
CephFS metadata is on SSD; everything else is on big, slow HDDs (in
both cases).

Is this what everyone else sees?

Thanks.

--
Jesper
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
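
P.S. For anyone who wants to reproduce this over many files instead of
a single `time cat`, here is a minimal sketch of the same measurement
in Python. The /tmp path and 13 KB size are placeholders; point it at
an uncached file on your CephFS mount to see the cold-read cost (a
freshly written local file will be in the page cache, so both reads
come back warm there):

    import os
    import time

    def read_latency_ms(path):
        """Time one full open + read + close of a file, in milliseconds."""
        start = time.perf_counter()
        with open(path, "rb") as f:
            f.read()
        return (time.perf_counter() - start) * 1000.0

    # Placeholder path; substitute e.g. /ceph/cluster/rsyncbackups/13kbfile.
    path = "/tmp/13kbfile"
    with open(path, "wb") as f:
        f.write(os.urandom(13 * 1024))

    # On CephFS the first read of an uncached file pays the MDS lookup
    # plus an OSD read; the second is served from the local page cache.
    cold = read_latency_ms(path)
    warm = read_latency_ms(path)
    print("cold: %.3f ms, warm: %.3f ms" % (cold, warm))
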