Nice job Patrick!
I wonder how tough it would be to wrap your scripts in some Python and
add this as a CephFS benchmark in CBT. Seems like a good real-world test.
It would be interesting to see some mdtest results for multi-active MDS
as well.
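For what it's worth, an mdtest run for this could look roughly like the
sketch below. The mount point and rank count are illustrative only; the
command is built as a string here rather than executed, since it needs an
MPI environment and a mounted CephFS:

```shell
#!/bin/sh
# Hypothetical mdtest invocation: 16 MPI ranks, 1000 items per rank,
# 3 iterations, each rank working in a unique directory (-u) so the
# metadata load can spread across multiple active MDS daemons.
RANKS=16
TARGET=/mnt/cephfs/mdtest   # assumed CephFS mount point on the clients

CMD="mpirun -np $RANKS mdtest -d $TARGET -n 1000 -i 3 -u"
echo "$CMD"   # would be run on a client node with CephFS mounted
```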
Mark
On 09/26/2016 10:28 PM, Patrick Donnelly wrote:
I wanted to share some of the initial performance results of CephFS
using single and multiple MDS. These results are currently hosted
here:
https://github.com/batrick/cephfs-perf
It has been some time since we've done an analysis of MDS performance
(especially with multiple active MDS). One of the goals of this study is
to obtain performance metrics that confirm multiple active metadata
servers (multimds) function as we expect. We are also looking to
establish baseline performance for a single active metadata server,
such as how many concurrent clients one MDS can handle.
The first experiment [3] evaluates the performance of one and three
active metadata servers under parallel kernel builds by 8 and 16
clients. This is a workload we would expect multiple metadata servers
to excel at, thanks to dynamic sub-tree partitioning [1,2].
Additionally, because clients operate in independent trees, lock
contention and forced journal flushes should be mostly eliminated.
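The shape of such a run can be sketched as below (host names, mount
point, and job count are hypothetical; the actual harness lives in the
repository linked above). Each client builds in its own subtree, which
is what lets the dynamic partitioner spread the metadata load. Shown as
a dry run that prints the per-client commands:

```shell
#!/bin/sh
# Sketch of a parallel kernel-build benchmark across CephFS clients.
# Dry run: prints the command each client would execute.
CLIENTS="client1 client2 client3"   # hypothetical client hosts
MOUNT=/mnt/cephfs                   # CephFS mount point on each client

cmd_for() {
    # Each client builds in a disjoint subtree (build-<host>), so no
    # two clients contend on the same directory's locks or journal.
    printf "ssh %s 'cd %s/build-%s && make -j8'\n" "$1" "$MOUNT" "$1"
}

for c in $CLIENTS; do
    cmd_for "$c"
done
```

In a real run the ssh commands would be launched in the background and
followed by a `wait`, with the wall-clock time of the whole loop taken
as the benchmark result.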
While there is still performance data to analyze, the initial results
indicate multimds can give impressive performance gains, especially as
the number of clients increases. For example, with 16 clients we saw a
74% improvement in client execution time and an 84% reduction in
client request latency (the time a client spends waiting for a result
from the MDS).
Future tests are being planned. If anyone has
comments/questions/suggestions, please do share.
[1] http://dl.acm.org/citation.cfm?id=1049948
[2] http://ceph.com/papers/weil-mds-sc04.pdf
[3] https://github.com/batrick/cephfs-perf/tree/master/kernel-build