On Wed, Sep 26, 2012 at 1:54 PM, Bryan K. Wright <bkw1a@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi Mark,
>
> Thanks for your help. Some answers to your questions
> are below.
>
> mark.nelson@xxxxxxxxxxx said:
>> On 09/26/2012 09:50 AM, Bryan K. Wright wrote:
>>> Hi folks,
>> Hi Bryan!
>>
>>> I'm seeing reasonable performance when I run rados
>>> benchmarks, but really slow I/O when reading or writing
>>> from a mounted ceph filesystem. The rados benchmarks
>>> show about 150 MB/s for both read and write, but when I
>>> go to a client machine with a mounted ceph filesystem
>>> and try to rsync a large (60 GB) directory tree onto
>>> the ceph fs, I'm getting rates of only 2-5 MB/s.
>>
>> Was the rados benchmark run from the same client machine that the
>> filesystem is being mounted on? Also, what object size did you use for
>> rados bench? Does the directory tree have a lot of small files or a few
>> very large ones?
>
> The rados benchmark was run on one of the OSD
> machines. Read and write results looked like this (the
> object size was just the default, which seems to be 4kB):

Actually, that's 4MB. ;)

Can you run

# rados bench -p pbench 900 write -t 256 -b 4096

and see what that gets? It'll run 256 simultaneous 4KB writes. (You can
also vary the number of simultaneous writes and see if that impacts it.)

However, my suspicion is that you're limited by metadata throughput here.
How large are your files? There might be some MDS or client tunables we
can adjust, but rsync's workload is a known weak spot for CephFS.
-Greg
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
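[Editorial note: Greg's suggestion to vary the number of simultaneous writes can be scripted as a small sweep. This is a sketch, not part of the thread; the pool name "pbench", the 4 KB block size, and the `-t` values are taken from or modeled on Greg's command above, with the duration shortened to 60 s for illustration. The `echo` makes it print the commands instead of running them; drop it to run against a live cluster.]

```shell
# Sweep rados bench over several concurrency levels (-t) to see how
# throughput scales with the number of simultaneous 4 KB writes.
# "echo" prints each command rather than executing it; remove it to
# actually run the benchmark against a live Ceph cluster.
for t in 16 64 256; do
    echo rados bench -p pbench 60 write -t "$t" -b 4096
done
```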