On Fri, Aug 30, 2013 at 11:56 AM, Sedat Dilek <sedat.dilek@xxxxxxxxx> wrote:
> On Fri, Aug 30, 2013 at 11:48 AM, Ingo Molnar <mingo@xxxxxxxxxx> wrote:
>>
>> * Sedat Dilek <sedat.dilek@xxxxxxxxx> wrote:
>>
>>> On Fri, Aug 30, 2013 at 9:55 AM, Sedat Dilek <sedat.dilek@xxxxxxxxx> wrote:
>>> > On Fri, Aug 30, 2013 at 5:54 AM, Linus Torvalds
>>> > <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>>> >> On Thu, Aug 29, 2013 at 8:12 PM, Waiman Long <waiman.long@xxxxxx> wrote:
>>> >>> On 08/29/2013 07:42 PM, Linus Torvalds wrote:
>>> >>>>
>>> >>>> Waiman? Mind looking at this and testing?
>>> >>>>
>>> >>>>           Linus
>>> >>>
>>> >>> Sure, I will try out the patch tomorrow morning and see how it works out for
>>> >>> my test case.
>>> >>
>>> >> Ok, thanks, please use this slightly updated patch attached here.
>>> >>
>>> >> It improves on the previous version in actually handling the
>>> >> "unlazy_walk()" case with native lockref handling, which means that
>>> >> one other not entirely odd case (symlink traversal) avoids the d_lock
>>> >> contention.
>>> >>
>>> >> It also refactored the __d_rcu_to_refcount() to be more readable, and
>>> >> adds a big comment about what the heck is going on. The old code was
>>> >> clever, but I suspect not very many people could possibly understand
>>> >> what it actually did. Plus it used nested spinlocks because it wanted
>>> >> to avoid checking the sequence count twice. Which is stupid, since
>>> >> nesting locks is how you get really bad contention, and the sequence
>>> >> count check is really cheap anyway. Plus the nesting *really* didn't
>>> >> work with the whole lockref model.
>>> >>
>>> >> With this, my stupid thread-lookup thing doesn't show any spinlock
>>> >> contention even for the "look up symlink" case.
>>> >>
>>> >> It also avoids the unnecessary aligned u64 for when we don't actually
>>> >> use cmpxchg at all.
>>> >>
>>> >> It's still one single patch, since I was working on lots of small
>>> >> cleanups. I think it's pretty close to done now (assuming your testing
>>> >> shows it performs fine - the powerpc numbers are promising, though),
>>> >> so I'll split it up into proper chunks rather than random commit
>>> >> points. But I'm done for today at least.
>>> >>
>>> >> NOTE NOTE NOTE! My test coverage really has been pretty pitiful. You
>>> >> may hit cases I didn't test. I think it should be *stable*, but maybe
>>> >> there's some other d_lock case that your tuned waiting hid, and that
>>> >> my "fastpath only for unlocked case" version ends up having problems
>>> >> with.
>>> >>
>>> >
>>> > Following this thread with half an eye... Was that "unsigned" stuff
>>> > fixed (someone pointed to it)?
>>> > How do you call that test-patch (subject)?
>>> > I would like to test it on my SNB ultrabook with your test-case script.
>>> >
>>>
>>> Here on Ubuntu/precise v12.04.3 AMD64 I get these numbers for total loops:
>>>
>>> lockref: w/o patch | w/ patch
>>> ======================
>>> Run #1: 2.688.094 | 2.643.004
>>> Run #2: 2.678.884 | 2.652.787
>>> Run #3: 2.686.450 | 2.650.142
>>> Run #4: 2.688.435 | 2.648.409
>>> Run #5: 2.693.770 | 2.651.514
>>>
>>> Average: 2687126,6 VS. 2649171,2 ( -37955,4 )
>>
>> For precise stddev numbers you can run it like this:
>>
>>   perf stat --null --repeat 5 ./test
>>
>> and it will measure time only and print the stddev in percentage:
>>
>>   Performance counter stats for './test' (5 runs):
>>
>>       1.001008928 seconds time elapsed    ( +- 0.00% )
>>
>
> Hi Ingo,
>
> that sounds really good :-).
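
(For readers skimming the archive: the "lockref" scheme Linus describes in the
quoted mail above packs a spinlock and a reference count into one aligned
64-bit word, so the common "take a reference while nobody holds the lock" case
can be done with a single cmpxchg instead of taking d_lock. Below is only a
minimal, hypothetical user-space sketch of that idea, with made-up names; it is
not the kernel implementation and not the patch being tested here.)

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Lock and count share one naturally aligned 64-bit word. */
struct lockref_sketch {
        _Alignas(8) _Atomic uint64_t lock_count; /* low 32 bits: lock word, high 32 bits: count */
};

#define LOCK_MASK ((uint64_t)0xffffffffu)  /* low half nonzero means "spinlock held" */
#define COUNT_ONE ((uint64_t)1 << 32)      /* +1 in the count half */

/*
 * Fastpath: bump the refcount with one compare-and-swap as long as the
 * lock half is observed to be free.  Returns false when the caller has
 * to fall back to the slow path (take the spinlock, then increment).
 */
static bool lockref_get_fast(struct lockref_sketch *lr)
{
        uint64_t old = atomic_load_explicit(&lr->lock_count, memory_order_relaxed);

        for (int retry = 0; retry < 16; retry++) {
                if (old & LOCK_MASK)
                        return false;   /* someone holds the lock: slow path */
                /* lock still free: publish the incremented count atomically */
                if (atomic_compare_exchange_weak(&lr->lock_count, &old,
                                                 old + COUNT_ONE))
                        return true;
                /* failed cmpxchg reloaded 'old'; retry a few times */
        }
        return false;
}

(The mail above also notes that the aligned u64 is only needed when the
cmpxchg fastpath is actually compiled in; configurations without it simply
take the spinlock, which is what the slow-path fallback stands in for here.)
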
>
> AFAICS 'make deb-pkg' does not support building a linux-tools Debian
> package, which is where perf is shipped.
> Can I run an older version of perf, or do I have to try the one shipped
> in the Linux v3.11-rc7+ sources?
> How can I build perf standalone, out of my sources?
>

Hmm, I installed linux-tools-common (3.2.0-53.81):

$ perf stat --null --repeat 5 ./t_lockref_from-linus
perf_3.11.0-rc7 not found
You may need to install linux-tools-3.11.0-rc7

- Sedat -
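
(Side note on the question quoted above: perf can usually be built straight
out of the kernel source tree, independent of the distribution's linux-tools
packaging, e.g. with "cd tools/perf && make" inside the v3.11-rc7+ tree; the
resulting ./perf binary can then be run in place.)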