Re: Hammer vs Jewel librbd performance testing and git bisection results

On Wed, May 11, 2016 at 08:21:18AM -0500, Mark Nelson wrote:
> Hi Guys,
> 
> [..]
> The gist of this is that Jewel is faster than Hammer for many random
> workloads (Read, Write, and Mixed).  There is one specific case
> where performance degrades significantly: 64-128k sequential reads.
> We couldn't find anything obviously wrong with these tests, so we
> spent some time running git bisects between hammer and jewel with
> the NVMe test configuration (these tests were faster to setup/run
> than the HDD setup).  We tested about 45 different commits with
> anywhere from 1-5 samples depending on how confident the results
> looked:
> 
> https://docs.google.com/spreadsheets/d/1hbsyNM5pr-ZwBuR7lqnphEd-4kQUid0C9eRyta3ohOA/edit?usp=sharing
> 
> There are several commits of interest that have a noticeable effect
> on 128K sequential read performance:
> 
> [..]
> 2) https://github.com/ceph/ceph/commit/c474ee42
> 
> This commit had a very large impact, reducing performance by another 20-25%.

https://github.com/ceph/ceph/commit/c474ee42#diff-254555dde8dcfb7fb908791ab8214b92R318
I would check whether temporarily forcing unique_lock_name() to return its
argument (or some other constant) changes things. If so, a more efficient way
to construct unique lock names may be in order.

-- 
Piotr Dałek
branch@xxxxxxxxxxxxxxxx
http://blog.predictor.org.pl