Re: compare the performance between ceph monitors and etcd

Hi Yao,

This is interesting!

First, we should point out that there are no Ceph scenarios that I can 
think of where things are bound by the mon's qps... certainly not on 
config-key.

The other thing to keep in mind is that the commit behavior varies between 
different monitor requests.  Sometimes an operation triggers an immediate 
paxos round, while other times we queue it up and wait for more work (or a 
timer to expire).  I'm guessing that config-key is in the 'commit 
immediately' category, which means that there is basically no concurrency.

You could probably improve this config-key microbenchmark by putting it in 
the batching mode but with a very short timeout, so that multiple changes 
are applied in a single paxos round.  The config-key mon code is 
implemented in a slightly wonky way (not using PaxosService) for 
historical reasons, though, so a bit of a refactor might be needed first.

sage

On Fri, 20 Mar 2020, Yao Zongyou wrote:
> Hi, 
> 
> Recently, I have written some code to benchmark the performance of a ceph monitor cluster and an etcd cluster, and to compare the results.
> 
> I installed a ceph cluster (only three monitors; no osds, no mgr, no rgw, no mds) on my three virtualbox machines, and also installed an etcd cluster on the same three machines. The ceph version is luminous 12.2.8 and the etcd version is 3.3.11. Each virtualbox machine has 10GB of memory and 8 cpu cores.
> 
> The ceph mon cluster was benchmarked through librados's mon_command api, sending config-key commands to the monitors: `config-key set` for writing a key-value pair and `config-key get` for reading one. Keys range over [0, 1024) and each value is a random hex string of length 32. The bench tool is written in c++.
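> 
> Roughly, each operation in the tool is a single mon_command round trip,
> as in the sketch below (shown with the go-ceph bindings purely for
> illustration; the actual tool makes the same call through librados in
> c++, and the exact JSON argument names should be checked against
> MonCommands.h for the running release):
> 
>     package main
> 
>     import (
>         "encoding/json"
>         "fmt"
>         "math/rand"
> 
>         "github.com/ceph/go-ceph/rados"
>     )
> 
>     func main() {
>         conn, err := rados.NewConn()
>         if err != nil {
>             panic(err)
>         }
>         conn.ReadDefaultConfigFile() // monitor addresses from /etc/ceph/ceph.conf
>         if err := conn.Connect(); err != nil {
>             panic(err)
>         }
>         defer conn.Shutdown()
> 
>         key := fmt.Sprintf("bench/%d", rand.Intn(1024))
>         val := "0123456789abcdef0123456789abcdef" // random 32-char hex in the real tool
> 
>         // write path: one config-key set mon_command per operation
>         set, _ := json.Marshal(map[string]string{
>             "prefix": "config-key set", "key": key, "val": val,
>         })
>         if _, _, err := conn.MonCommand(set); err != nil {
>             panic(err)
>         }
> 
>         // read path: config-key get for the same key
>         get, _ := json.Marshal(map[string]string{
>             "prefix": "config-key get", "key": key,
>         })
>         out, _, err := conn.MonCommand(get)
>         if err != nil {
>             panic(err)
>         }
>         fmt.Printf("%s = %s\n", key, out)
>     }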
> 
> etcd was benchmarked with the v2 client api (go.etcd.io/etcd/client); for reads I set the Quorum flag in GetOptions to get linearizable consistency. Keys range over [0, 1024) and each value is a random hex string of length 32. The bench tool is written in golang.
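> 
> Roughly, the read/write primitives are the KeysAPI calls in the sketch
> below (the endpoint and key names are placeholders):
> 
>     package main
> 
>     import (
>         "context"
>         "fmt"
>         "math/rand"
> 
>         "go.etcd.io/etcd/client"
>     )
> 
>     func main() {
>         c, err := client.New(client.Config{
>             Endpoints: []string{"http://127.0.0.1:2379"},
>             Transport: client.DefaultTransport,
>         })
>         if err != nil {
>             panic(err)
>         }
>         kapi := client.NewKeysAPI(c)
> 
>         key := fmt.Sprintf("/bench/%d", rand.Intn(1024))
>         val := "0123456789abcdef0123456789abcdef" // random 32-char hex in the real tool
> 
>         // write
>         if _, err := kapi.Set(context.Background(), key, val, nil); err != nil {
>             panic(err)
>         }
> 
>         // read: Quorum=true makes the read linearizable rather than
>         // possibly returning a stale value from a member's local store
>         resp, err := kapi.Get(context.Background(), key, &client.GetOptions{Quorum: true})
>         if err != nil {
>             panic(err)
>         }
>         fmt.Println(resp.Node.Value)
>     }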
> 
> Both bench tools were run on the first virtualbox machine.
> 
> Here are the bench results:
> --------------------------------------------------------------------
>                         |  ceph monitors     |   etcd              |
> --------------------------------------------------------------------
>                         |  qps   latency     | qps     latency     |
>                         |      (max#min#avg) |       (max#min#avg) |
> --------------------------------------------------------------------
> single concurrent read  |  623   407#0#1     | 481     71#1#1      |
> single concurrent write |  116   454#3#8     | 483     26#1#1      |
> --------------------------------------------------------------------
> 16 concurrent read      |  1110  1220#0#14   | 3322    23#1#4      |
> 16 concurrent write     |  293   440#6#54    | 3280    29#1#4      |
> --------------------------------------------------------------------
> 32 concurrent read      |  1176  1161#0#27   | 4006    58#1#7      |
> 32 concurrent write     |  332   754#8#97    | 4297    32#1#6      |
> --------------------------------------------------------------------
> 64 concurrent read      |  1160  1623#0#55   | 4954    156#2#12    |
> 64 concurrent write     |  336   1738#8#192  | 5013    92#3#12     |
> --------------------------------------------------------------------
> 
> As the results show:
> 1. For ceph monitors, reads are about 4 times faster than writes, which may be because the monitors use leases, so every monitor can service read requests.
> 2. For the etcd cluster, there is no big difference between read and write performance.
> 3. Comparing ceph monitors and etcd, the ceph monitors are much slower than etcd, especially at higher concurrency.
> 
> Best wishes,
> Yao Zongyou
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


