Re: rt mutex priority boost

On Wed, 28 Nov 2007, Peter W. Morreale wrote:
>
> Will do.  Didn't want to flood your mailbox, if that was the case... :-)

No prob with my Inbox ;-)

> >
> > Well, make does do a lot of I/O and syscalls, accessing the hard drive.
> > This in turn will kick off interrupts and softirqs. Which will all contend
> > for spinlocks, and since they are all working together, expect a lot of
> > contention.
> >
> > -- Steve
> >
>
> It does, and that was the point.
>
> Switching gears here a little bit...
>
> The real problem I see is under a moderate 'dbench' load (No laughing,
> you want VFS contention, use dbench :-) I can easily bump the cs/s
> (context-switch/sec) rate to 380k/s.

Right, and this has nothing to do with priority boosting. It has to do
with lock contention.

>
> This on a ramfs (no disk involved) partition.  The bad part is that
> top(1) reports 50-60% idle CPU time.  Which implies that 2 of my 4
> x86_64 intels are spinning while there is work to do.
>
> As an early experiment, I converted the dcache, inode, and vfsmount
> spins to raw, and performance jumped by 4x.  (I realized later that
> dbench does a lot of record locking and was still hammered by the BKL,
> otherwise I suspect it would have been significantly greater...)  This
> also reduced the cs/s rate to below 100k/s (from the high of ~380k/s).

After converting those locks to raw spinlocks, have you tried running
cyclictest alongside hackbench (or dbench) to see what latencies
cyclictest reports?

I bet you'll see extremely large latencies.

Those locks are among the biggest offenders when it comes to adding
latencies.

>
> It seems clear that a single point of contention (e.g: the dcache lock
> in the above workload) greatly impacts the throughput of the hardware
> platform.  There are similar points of contention with dev->_xmit_lock,
> and queue_lock in the networking stack.
>
> Obviously, this is an issue for real-world apps.  Those pesky thingies
> think they need data from various sources to do stuff.  That was humor.
>
> At the risk of being chastised: has any discussion on this been taking
> place?

Discussion of what?  Changing them to raw spinlocks?

The real solution is to find better ways to handle the filesystem with
less contention. That will take deep knowledge of the VFS, and it is no
trivial task.

Ideas are welcome.

-- Steve


