Re: Slow ceph fs performance

On Thu, Sep 27, 2012 at 11:47 AM, Bryan K. Wright
<bkw1a@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>
> greg@xxxxxxxxxxx said:
>> >         The rados benchmark was run on one of the OSD
>> > machines.  Read and write results looked like this (the
>> > object size was just the default, which seems to be 4kB):
>> Actually, that's 4MB. ;)
>
>         Oops! My plea is that I was the victim of a
> man page bug:
>
>        bench seconds mode [ -b objsize ] [ -t threads ]
>               Benchmark  for  seconds.  The  mode  can  be  write or read. The
>               default object size is 4 KB, and the default number of simulated
>               threads (parallel writes) is 16.

Whoops! I'd fix it, but the man page source is somewhat obfuscated now, so I've filed a ticket:
http://tracker.newdream.net/issues/3230


>
>
>> Can you run # rados bench -p pbench 900 write -t 256
>> -b 4096 and see what that gets? It'll run 256 simultaneous 4KB writes. (You
>> can also vary the number of simultaneous writes and see if that impacts it.)
>
>         Here's the new benchmark output:
>
>  Total time run:         900.880070
> Total writes made:      537187
> Write size:             4096
> Bandwidth (MB/sec):     2.329
>
> Stddev Bandwidth:       2.57691
> Max bandwidth (MB/sec): 12.6055
> Min bandwidth (MB/sec): 0
> Average Latency:        0.429315
> Stddev Latency:         0.891734
> Max latency:            19.7647
> Min latency:            0.016743

Hmm, that is significantly lower than I would have expected. Can you
check and see if you can get that number higher by increasing (or
decreasing) the number of in-flight ops? (-t param)
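Something like the following untested sketch would sweep a few values (the
60-second runs and the particular -t list are arbitrary; "pbench" is just the
pool from your earlier run):

    # Untested sketch: write 4 KB objects for 60 seconds at several
    # concurrency levels against the same "pbench" pool.
    for t in 16 32 64 128 256 512; do
        echo "=== rados bench with -t $t ==="
        rados bench -p pbench 60 write -t "$t" -b 4096
    done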

Given your size distribution, it could just be that your RAID arrays
aren't giving you the small random write throughput you expect.
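One way to sanity-check the arrays themselves (assuming fio is installed on an
OSD host; the file path, size, and queue depth below are only placeholders)
would be a direct 4 KB random-write run against the filesystem backing an OSD:

    # Rough 4 KB random-write test of the storage under an OSD; the path and
    # sizes are placeholders, and this creates a 1 GB test file.
    fio --name=randwrite-test --filename=/var/lib/ceph/osd/fio-testfile \
        --rw=randwrite --bs=4k --size=1g --direct=1 \
        --ioengine=libaio --iodepth=32 --runtime=60 --time_based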


>> However, my suspicion is that you're limited by metadata throughput here. How
>> large are your files? There might be some MDS or client tunables we can
>> adjust, but rsync's workload is a known weak spot for CephFS. -Greg
>
>         The file size is generally small.  Here's the distribution:
>
> http://ayesha.phys.virginia.edu/~bryan/filesize.png
>
> The mean is about 2.5 MB.

So that chart is measuring in KB? Anyway, it might be metadata — you
could see what the CPU usage on the MDS server looks like while
running the rsync.
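For example (assuming a single ceph-mds daemon on that host), something as
simple as this, run while the rsync is going, would show whether the MDS is
CPU-bound:

    # Sample the ceph-mds process's CPU usage every 5 seconds in batch mode.
    top -b -d 5 -p "$(pidof ceph-mds)"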

