RE: Latency Improvement Report for ShardedOpWQ

Hi Dong,
I don't think there is much benefit in a single-client scenario; a single client is inherently limited. The benefit of the sharded TP is that a single OSD scales much better as the number of clients increases, since it increases parallelism (by reducing lock contention) at the filestore level. A quick check could look like this (a minimal contention sketch follows the list):

1. Create a single-node, single-OSD cluster and put load on it with an increasing number of clients, e.g. 1, 3, 5, 8, 10. A small workload served entirely from memory is ideal.
2. Compare the sharded-TP code against, say, firefly. You should see that firefly does not scale with the increasing number of clients.
3. Run top -H in both cases; you should see more threads working in parallel with the sharded TP than with firefly.
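
To make the contention point concrete, below is a toy, self-contained C++ sketch (my illustration only, not Ceph code): it pushes ops through either one mutex-protected queue (the traditional OpWQ model) or several sharded queues, from an increasing number of producer threads. With one shard every client serializes on the same lock, so wall-clock time grows with the client count; with several shards it stays much flatter. The file name and build line are assumptions.

// sharded_wq_contention.cc -- illustrative sketch, not Ceph code.
// Models the lock-contention difference between a single work queue
// and a sharded one as the number of producer threads grows.
// Build (assumption): g++ -O2 -std=c++11 -pthread sharded_wq_contention.cc

#include <chrono>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Shard {
  std::mutex lock;
  std::queue<int> q;
};

// Push and pop `ops` items per thread across `nshards` queues from
// `nthreads` producers; returns wall-clock seconds.  nshards == 1
// models the traditional OpWQ; nshards > 1 models the sharded one.
static double run(int nthreads, int nshards, int ops) {
  std::vector<Shard> shards(nshards);
  auto start = std::chrono::steady_clock::now();
  std::vector<std::thread> workers;
  for (int t = 0; t < nthreads; ++t) {
    workers.emplace_back([&shards, nshards, ops, t] {
      for (int i = 0; i < ops; ++i) {
        Shard &s = shards[(t + i) % nshards];  // spread ops over shards
        std::lock_guard<std::mutex> g(s.lock); // the contended lock
        s.q.push(i);
        s.q.pop();
      }
    });
  }
  for (auto &w : workers) w.join();
  std::chrono::duration<double> dt = std::chrono::steady_clock::now() - start;
  return dt.count();
}

int main() {
  const int ops = 200000;  // small, memory-only workload per client
  for (int clients : {1, 3, 5, 8, 10}) {
    printf("%2d clients: 1 shard %.3fs, 8 shards %.3fs\n",
           clients, run(clients, 1, ops), run(clients, 8, ops));
  }
  return 0;
}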

Also, I am sure this latency result will not hold under a heavier workload; there you should see more contention and, as a result, higher latency.

Thanks & Regards
Somnath

-----Original Message-----
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Dong Yuan
Sent: Saturday, September 27, 2014 8:45 PM
To: ceph-devel
Subject: Latency Improvement Report for ShardedOpWQ

===== Test Purpose =====

Measure whether, and by how much, the Sharded OpWQ outperforms the Traditional OpWQ in a random-write scenario.

===== Test Case =====

WriteFull of a 4 KB object, repeated 10,000 times (see the sketch below).
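
For reference, this is roughly what the test case looks like against librados (a hedged sketch: the pool name "test", the object name "bench_obj", and the client id "admin" are placeholders, and a reachable cluster with a default ceph.conf is assumed). Note that this measures end-to-end write latency at the client, not the OpWQ span the probes below isolate.

// writefull_latency.cc -- illustrative sketch of the test case.
// Build (assumption): g++ -O2 -std=c++11 writefull_latency.cc -lrados

#include <chrono>
#include <cstdio>
#include <string>
#include <rados/librados.hpp>

int main() {
  librados::Rados cluster;
  cluster.init("admin");            // client id: placeholder
  cluster.conf_read_file(nullptr);  // default ceph.conf search path
  if (cluster.connect() < 0) {
    fprintf(stderr, "connect failed\n");
    return 1;
  }

  librados::IoCtx io;
  if (cluster.ioctx_create("test", io) < 0) {  // pool name: placeholder
    fprintf(stderr, "no such pool\n");
    return 1;
  }

  librados::bufferlist bl;
  bl.append(std::string(4096, 'a'));  // 4 KB payload

  const int iters = 10000;            // 10,000 WriteFull ops, as in the report
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < iters; ++i)
    io.write_full("bench_obj", bl);   // full-object overwrite each time
  std::chrono::duration<double, std::micro> us =
      std::chrono::steady_clock::now() - start;
  printf("avg end-to-end write_full latency: %.1f us\n", us.count() / iters);

  io.close();
  cluster.shutdown();
  return 0;
}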

===== Test Method =====

The following static probes were inserted into the code while running the tests, to measure the time span between enqueue and dequeue in the OpWQ.

Start: PG::enqueue_op, just before the osd->op_wq.enqueue call
End: OSD::dequeue_op.entry
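
For readers who want the measurement idea without the tracing setup, here is a minimal self-contained C++ sketch of the same method: stamp each op with a steady_clock timestamp at enqueue and compute the span at dequeue. This only illustrates the technique; the numbers below come from the static probes named above, not from this sketch.

// probe_span.cc -- minimal sketch of the measurement idea, not Ceph code.
// Build (assumption): g++ -O2 -std=c++11 -pthread probe_span.cc

#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

using Clock = std::chrono::steady_clock;

struct Op {
  int id;
  Clock::time_point enqueued;  // stamped by the producer at enqueue
};

int main() {
  std::mutex m;
  std::condition_variable cv;
  std::queue<Op> wq;
  bool done = false;

  // Consumer: pops ops and reports how long each sat in the queue.
  std::thread consumer([&] {
    for (;;) {
      std::unique_lock<std::mutex> l(m);
      cv.wait(l, [&] { return !wq.empty() || done; });
      if (wq.empty()) return;  // done and drained
      Op op = wq.front();
      wq.pop();
      l.unlock();
      auto span = std::chrono::duration_cast<std::chrono::microseconds>(
          Clock::now() - op.enqueued);
      printf("op %d spent %lld us in the queue\n", op.id,
             (long long)span.count());
    }
  });

  // Producer: enqueue a few ops, each stamped at enqueue time.
  for (int i = 0; i < 5; ++i) {
    { std::lock_guard<std::mutex> g(m); wq.push({i, Clock::now()}); }
    cv.notify_one();
  }
  { std::lock_guard<std::mutex> g(m); done = true; }
  cv.notify_one();
  consumer.join();
  return 0;
}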

===== Test Result =====

Traditional OpWQ: 109 us (avg), 40 us (min)
ShardedOpWQ: 97 us (avg), 32 us (min)

===== Test Conclusion =====

No remarkable improvement in latency (the average dropped only from 109 us to 97 us, about 11%).


--
Dong Yuan
Email: yuandong1222@xxxxxxxxx