Re: [tip:core/locking] x86/smp: Move waiting on contended ticket lock out of line

On Wed, Feb 27, 2013 at 8:42 AM, Rik van Riel <riel@xxxxxxxxxx> wrote:
>
> To keep the results readable and relevant, I am reporting the
> plateau performance numbers. Comments are given where required.
>
>                 3.7.6 vanilla   3.7.6 w/ backoff
>
> all_utime               333000          333000
> alltests        300000-470000   180000-440000   large variability
> compute                 528000          528000
> custom          290000-320000   250000-330000   4 fast runs, 1 slow
> dbase                   920000          925000
> disk                    100000   90000-120000   similar plateau, wild
>                                                 swings with patches
> five_sec                140000          140000
> fserver         160000-300000   250000-430000   w/ patch drops off at
>                                                 higher number of users
> high_systime     80000-110000    30000-125000   w/ patch mostly 40k-70k,
>                                                 wild swings
> long            no performance plateau, equal performance for both
> new_dbase               960000          96000
> new_fserver     150000-300000   210000-420000   vanilla drops off,
>                                                 w/ patches wild swings
> shared          270000-440000   120000-440000   all runs ~equal to
>                                                 vanilla up to 1000
>                                                 users, one out of 5
>                                                 runs slows down past
>                                                 1100 users
> short                   120000          190000

Ugh. That really is rather random. "short" and fserver seem to
improve a lot (including the "new" version); the others look like they
are either unchanged or huge regressions.

Is there any way to get profiles for the improved versions vs the
regressed ones? It might well be that we have two different classes of
spinlocks. Maybe we could make the back-off version be *explicit* (ie
not part of the normal "spin_lock()", but you'd use a special
"spin_lock_backoff()" function for it) because it works well for some
cases but not for others?
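To make the idea concrete: a minimal userspace sketch of what an explicit-backoff ticket lock could look like. The names (`spin_lock_backoff`, `struct ticket_lock`) and the proportional-delay heuristic here are illustrative assumptions, not the kernel's actual implementation or Rik's patch; a real version would use `cpu_relax()` and the arch's ticket layout.

```c
#include <stdatomic.h>

/* Hypothetical explicit-backoff ticket lock (userspace sketch).
 * Not the kernel API; names and delay policy are assumptions. */
struct ticket_lock {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint owner;	/* ticket currently being served */
};

static void spin_lock_backoff(struct ticket_lock *lock)
{
	unsigned int ticket =
		atomic_fetch_add_explicit(&lock->next, 1,
					  memory_order_relaxed);

	for (;;) {
		unsigned int owner =
			atomic_load_explicit(&lock->owner,
					     memory_order_acquire);
		if (owner == ticket)
			return;
		/*
		 * Back off proportionally to our distance from the
		 * head of the queue, so waiters far back in line
		 * hammer the cache line less often.
		 */
		unsigned int delay = (ticket - owner) * 64;
		while (delay--)
			__asm__ volatile("" ::: "memory");
			/* cpu_relax()/PAUSE would go here in-kernel */
	}
}

static void spin_unlock_backoff(struct ticket_lock *lock)
{
	atomic_fetch_add_explicit(&lock->owner, 1, memory_order_release);
}
```

Callers that are known to suffer from this kind of contention would then opt in with `spin_lock_backoff()` while everything else keeps the plain `spin_lock()` fast path untouched.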

Hmm? At the very least, it would give us an idea of *which* spinlock
it is that causes the most pain. I think your earlier indication was
that it's the mutex->wait_lock or something?

                   Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-tip-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

