messaging/IO/radosbench results

*Disclaimer*: these results are an investigation into potential
bottlenecks in RADOS. The test setup is wholly unrealistic, and these
numbers SHOULD NOT be used as an indication of the performance of OSDs,
messaging, RADOS, or ceph in general.


Executive summary: rados bench has an internal bottleneck of its own.
Setting that aside, we still have trouble saturating a single
connection to an OSD. Having 2-3 connections in parallel alleviates
that (either by having > 1 OSD or by running multiple bencher clients).


I've run three separate tests: msbench, smalliobench, and rados bench.
In all cases I was trying to determine where the bottleneck(s) exist.
All the tests were run on a machine with 192 GB of RAM; the backing
stores for all OSDs and journals are RAM disks formatted with XFS.
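
For reference, here's roughly how such a setup can be put together. A
minimal sketch, assuming the brd ram-disk driver; the device count,
sizes, and mount point below are illustrative, not exactly what I used:

  # Load the ram-disk block driver: 8 devices of 4 GB each
  # (rd_size is in KB).
  modprobe brd rd_nr=8 rd_size=4194304

  # Format a RAM disk with XFS and mount it as an OSD data dir;
  # journals can point at other /dev/ramN devices.
  mkfs.xfs -f /dev/ram0
  mkdir -p /var/lib/ceph/osd/ceph-0
  mount /dev/ram0 /var/lib/ceph/osd/ceph-0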

smalliobench: I ran tests varying the number of OSDs and bencher
clients. In all cases, the number of PGs per OSD is 100.

OSD     Bencher     Throughput (mbyte/sec)
1       1           510
1       2           800
1       3           850
2       1           640
2       2           660
2       3           670
3       1           780
3       2           820
3       3           870
4       1           850
4       2           970
4       3           990

Note: these numbers are fairly fuzzy. I eyeballed them, and they're
only accurate to within about 10 mbyte/sec. The small IO bencher was
run with 100 ops in flight, 4 mbyte IOs, and 4 mbyte files.
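
For anyone who wants to reproduce this, a sketch of the setup. The pool
creation command is standard; the smalliobench flag names are
assumptions from memory, so check ceph_smalliobench --help before
trusting them:

  # Pool with 100 PGs (scale pg_num with the OSD count to keep
  # 100 PGs per OSD).
  ceph osd pool create smallio 100

  # Hypothetical invocation matching the parameters above: 100 ops
  # in flight, 4 mbyte IOs, 4 mbyte files. Flag names are assumptions.
  ceph_smalliobench --pool smallio \
      --num-concurrent-ops 100 \
      --io-size $((4 * 1024 * 1024)) \
      --file-size $((4 * 1024 * 1024))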

msbench: I ran tests to determine the maximum throughput of the raw
messaging layer, varying the number of concurrently connected msbench
clients and measuring aggregate throughput (a wrapper sketch follows
the table below). Take-away: a messaging client can very consistently
push 400-500 mbytes/sec through a single socket.

Clients     Throughput (mbyte/sec)
1           520
2           880
3           1300
4           1900
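
The multi-client runs were just N copies of the bencher in parallel. A
minimal wrapper sketch; the msbench invocation itself is a placeholder
since its flags are build-specific:

  #!/bin/bash
  # Launch N msbench clients in parallel and wait for all of them;
  # aggregate throughput is the sum of what each client reports.
  NCLIENTS=${1:-4}
  for i in $(seq 1 "$NCLIENTS"); do
      ./msbench &   # placeholder invocation; flags omitted
  done
  wait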

Finally, rados bench, which seems to have its own bottleneck. Running
varying numbers of clients, each seems to get 250 mbyte/sec until the
aggregate rate reaches around 1000 mbyte/sec (approximately line speed,
as measured by iperf). These were run on a pool with 100 PGs per OSD;
a sample invocation is sketched after the table.

Clients     Throughput (mbyte/sec)
1           250
2           500
3           750
4           1000 (very fuzzy, probably 1000 +/- 75)
5           1000, seems to level out here
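
For reference, a sketch of the client side. The pool name and flag
values are illustrative (rados bench defaults to 16 concurrent 4 mbyte
writes; -t and -b override that), and the iperf baseline assumes a
server started with "iperf -s" on the OSD host:

  # Baseline the network; the ~1000 mbyte/sec figure above is line
  # speed measured like this.
  iperf -c osd-host   # hypothetical hostname

  # One bench client: 60-second write test, 100 ops in flight,
  # 4 mbyte objects. Run several in parallel (ideally from separate
  # hosts) to get the multi-client rows above.
  rados -p bench bench 60 write -t 100 -b 4194304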