rados bench multiple clients error

Hi,

I was trying rados bench, and first wrote 250 objects from 14 hosts with --no-cleanup. Then I ran the read tests from the same 14 hosts and ran into this:

[root@osd007 test]# /usr/bin/rados -p ectest bench 100 seq
2015-07-31 17:52:51.027872 7f6c40de17c0 -1 WARNING: the following dangerous and experimental features are enabled: keyvaluestore

   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
read got -2
error during benchmark: -5
error 5: (5) Input/output error
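
(For reference, the write phase on each host was along these lines; the exact duration and other arguments may have differed, but it was the same ectest pool with --no-cleanup so the objects would stay around for the read test:)

[root@osd007 test]# /usr/bin/rados -p ectest bench 100 write --no-cleanup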

The objects are there:
...
benchmark_data_osd011.gigalith.os_39338_object2820
benchmark_data_osd004.gigalith.os_142795_object3059
benchmark_data_osd001.gigalith.os_98375_object1182
benchmark_data_osd007.gigalith.os_20502_object2226
benchmark_data_osd008.gigalith.os_3059_object2183
benchmark_data_osd001.gigalith.os_94812_object1390
benchmark_data_osd010.gigalith.os_37614_object253
benchmark_data_osd011.gigalith.os_41998_object1093
benchmark_data_osd009.gigalith.os_90933_object1270
benchmark_data_osd010.gigalith.os_35614_object393
benchmark_data_osd009.gigalith.os_90933_object2611
benchmark_data_osd010.gigalith.os_35614_object2114
benchmark_data_osd013.gigalith.os_29915_object976
benchmark_data_osd014.gigalith.os_45604_object2497
benchmark_data_osd003.gigalith.os_147071_object1775
...


This works when using only 1 host.
Is there a way to run the benchmarks with multiple instances?
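
(One thing I haven't checked yet: newer rados versions document a --run-name option for bench, intended for benchmarking a pool with multiple clients. If our version has it, I would expect each host to be able to read back the objects it wrote itself with something like the following, where "run-osd007" is just a per-host label:)

[root@osd007 test]# /usr/bin/rados -p ectest bench 100 write --no-cleanup --run-name run-osd007
[root@osd007 test]# /usr/bin/rados -p ectest bench 100 seq --run-name run-osd007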

I'm trying to find out what our performance problem is, and what the difference is between reading objects directly from the erasure-coded pool and reading them through the cache layer.

I tested reading large files that weren't in cache from 14 hosts through CephFS (cached files perform well enough) and got only 8 MB/s per stream, while our disks were hardly working (as seen in iostat). So my next step would be to run these tests through rados: first directly on the EC pool, and then on the cache pool, roughly as sketched below. Does anyone have an idea?
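
Roughly what I have in mind ("cachepool" is a placeholder for whatever our cache tier pool is actually called):

[root@osd007 test]# /usr/bin/rados -p ectest bench 100 write --no-cleanup
[root@osd007 test]# /usr/bin/rados -p ectest bench 100 seq
[root@osd007 test]# /usr/bin/rados -p cachepool bench 100 write --no-cleanup
[root@osd007 test]# /usr/bin/rados -p cachepool bench 100 seq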


Thank you!

Kenneth
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


