RE: parallel transaction submit

Yeah, from the perf counters, we do reduce the time spent waiting in kv_queue, but we increase the rocksdb submission time. It seems better to do batch submission for rocksdb instead of parallel submission. Besides, parallel submission also makes bdev->flush more frequent.

From the test result, there is a small performance improvement when using memdb, and, compared with rocksdb, throughput is more stable.

-----Original Message-----
From: Haomai Wang [mailto:haomai@xxxxxxxx] 
Sent: Thursday, August 25, 2016 3:55 PM
To: Tang, Haodong <haodong.tang@xxxxxxxxx>
Cc: sweil@xxxxxxxxxx; varada.kari@xxxxxxxxxxx; ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: parallel transaction submit

Looks like very little improvement. The rocksdb result meets my expectation, because rocksdb internally takes a lock for concurrent sync writes. But the memdb improvement is a little confusing.

On Thu, Aug 25, 2016 at 3:45 PM, Tang, Haodong <haodong.tang@xxxxxxxxx> wrote:
> Hi Sage, Varada
>
> Noticed you are making parallel transaction submits, we also worked 
> out a prototype that looks similar, here is the link for the 
> implementation: https://github.com/ceph/ceph/pull/10856
>
> Background:
> From the perf counters we added, we found transactions spend a lot of time in kv_queue; that is, single-threaded transaction submission cannot keep up with the transactions coming from the OSD.
>
> Implementation:
> The key idea is to use multiple threads and assign each TransContext to one of the processing threads. To parallelize transaction submission, we add a separate kv_lock and kv_cond for each thread.
>
> Performance evaluation:
> Test ENV:
>         4 x server, 4 x client, 16 x Intel S3700 as block device, and 4 x Intel P3600 as Rocksdb/WAL device.
> Performance:
> We also did several quick tests to verify the performance benefit. The results showed that parallel transaction submission brings about a 10% performance improvement with memdb, but little improvement with rocksdb.
>
> What's more, even without parallel transaction submission, we see a performance boost just from switching to MemDB, though a small one.
>
> Test summary:
> QD Scaling Test - 4k Random Write (IOPS):
>
>                                   QD=1    QD=16    QD=32    QD=64   QD=128
> rocksdb                            682   173000   190000   203000   204000
> memdb                              704   180000   194000   206000   218000
> rocksdb + multiple kv threads        /   164243   167037   180961   201752
> memdb + multiple kv threads          /   176000   200000   221000   227000
>
>
> It seems a single transaction-submit thread becomes a bottleneck when using MemDB.