Re: Priority Ceiling for SMP?


 



Luís Henriques wrote:

[...] because in a single CPU system,
only the highest priority task will actually run!

Fully agree!  Single CPU systems are great :)

Basically agreed, but the situation is that the industry is going
multi-core because you get more bang for the buck - so it is
necessary to somehow cope with it ...


The only solution (i'm aware of) would be [...]

I believe this could actually be implemented _but_ it might not be desirable to do it. An in-depth analysis would be required to check this, but I believe the determinism of the system would suffer from it.

I came to the same conclusion - hence the question if there is a
better way to do it :-)


I know there is a fair amount of academic research (and probably some
implementations too, of course) on distributed-system locking mechanisms
that deal with these issues

Can you please give me a hint where to find papers on that topic?
A Google search didn't help very much - I guess I didn't use
the right keywords ...


The main goal of the PI and PC protocols is not deadlock avoidance but preventing priority inversion.

With respect to preventing priority inversion, I think PI is
superior to PC, but I have recently learned that a deadlock
avoidance mechanism can be very helpful when it comes to SIL
certification: stating "we follow good programming practices"
is certainly a weaker justification for a high safety integrity
level than saying "our OS avoids deadlocks by design".


When you have some kind of locking mechanism (a mutex, for example) that implements the PC protocol, you need to define a ceiling value for it. This ceiling value is derived from an analysis of your system and corresponds to the highest priority of any task that can access this locking mechanism. My question was: sometimes it may not be easy to define this value and, at runtime, the ceiling may be violated by a task that accesses this mutex.

Now I see the problem: e.g. if the *topmost* priority task *rarely*
accesses a *highly* contended lock, you may decide to lower the
ceiling, because otherwise the lock pretty much acts like a giant lock
(causing indeterminism). However, when the ceiling is lowered,
deadlocks may happen again, and the only potential advantage of PC
over PI is gone ...


Well, but if you dynamically change the ceiling value, then we are talking about PI, right? :)

Not necessarily: let's assume that we stick to the rule that the ceiling
value for a lock should always be equal to the highest priority amongst
all tasks using the lock. Now the situation is that the ceiling value
may change when new high priority tasks are created or old high priority
tasks get killed. If each task provides the OS with a list of the locks
it uses, then the OS could easily determine the new ceiling value for
each lock when a task is created or killed (from what I understand, this
might be tricky to implement for -rt). Changing the ceiling value at
runtime is IMHO no problem: either no task is currently accessing the
lock, so the new task can simply run at its priority, or the lock is in
use by another task, so that task's priority will be temporarily raised
to the new ceiling value. Did I miss anything?

regards

Bernhard
