On 04/19/2013 08:30 PM, James Harper wrote:
rados -p <pool> -b 4096 bench 300 seq -t 64
sec Cur ops started finished avg MB/s cur MB/s last lat avg lat
0 0 0 0 0 0 - 0
read got -2
error during benchmark: -5
error 5: (5) Input/output error
Not sure what that's about...
Oops... I typo'd --no-cleanup. Now I get:
sec Cur ops started finished avg MB/s cur MB/s last lat avg lat
0 0 0 0 0 0 - 0
Total time run: 0.243709
Total reads made: 1292
Read size: 4096
Bandwidth (MB/sec): 20.709
Average Latency: 0.0118838
Max latency: 0.031942
Min latency: 0.001445
So it finishes instantly without seeming to do much actual testing...
My bad. I forgot to tell you to do a sync/flush on the OSDs after the
write test. All of those reads are probably coming from pagecache. The
good news is that this demonstrates that reading 4k objects from
pagecache isn't insanely bad on your setup (for larger sustained loads I
see 4k object reads from pagecache hit up to around 100MB/s with
multiple clients on my test nodes).
On your OSD nodes try:
sync
echo 3 > /proc/sys/vm/drop_caches
right before you run the read test.
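Putting it together, the whole sequence would look roughly like this
(pool name, duration, and concurrency below are just placeholders):

# on the client: write 4k objects, keeping them around for the read test
rados -p <pool> -b 4096 bench 300 write -t 64 --no-cleanup

# on each OSD node: flush dirty data, then drop the pagecache
sync
echo 3 > /proc/sys/vm/drop_caches

# back on the client: the seq reads now have to come from the disks
rados -p <pool> bench 300 seq -t 64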
Whatever issue you are facing is probably down at the filestore level,
or possibly even lower.
How do your drives benchmark with something like fio doing random 4k
writes? Are your drives dedicated to Ceph? What filesystem are you
using? And what is the journal device?
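Something along these lines would exercise that path (just a sketch:
/dev/sdX, the runtime, and the queue depth are placeholders, and note
that writing to a raw device is destructive, so point it at a scratch
file or an unused disk):

# random 4k direct writes, bypassing the pagecache
fio --name=randwrite-4k --filename=/dev/sdX \
    --rw=randwrite --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 \
    --runtime=60 --time_based --group_reporting

--direct=1 matters here; without it fio's writes land in the pagecache
and you measure memory rather than the drive.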
Mark
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html