Re: Priority Ceiling for SMP?

On Thursday 08 November 2007, you wrote:
> Luís Henriques wrote:
> > Maybe I didn't completely understand your question, but... if there is a
> > problem with SMP efficiency (I do not really know, but believe you :) )
> > with priority ceiling (PC) implementation, this problem should have been
> > already identified (and possibly solved) in the current PI
> > implementation, right?
>
> To make it clear: I'm not aware of any efficiency issues with
> the current PI implementation :-)
>
>
> The (potential) efficiency issue with PC on SMP derives from the
> deadlock-avoidance mechanism that you mentioned: if a task
> locks a resource, then its priority is raised to the ceiling
> (which is typically the highest priority of all tasks sharing
> that resource). Once that happens, no other task (sharing the
> same resource) can then lock another resource in a way that
> leads to a deadlock situation.
>
> Example:
>
> * A low priority Task A shall share resources X and Y
>    with a high priority Task B.
>
> * Task A shall execute the following fraction of code:
>
>      1. lock(X)
>      2. lock(Y)
>
> * Task B shall execute the following fraction of code:
>
>      1. lock(Y)
>      2. lock(X)
>
> With PI, this could easily lead to a race condition resulting
> in a deadlock. But with PC, no deadlock is possible: the very
> moment Task A locks X, Task B won't be scheduled any more
> (assuming a FIFO scheduling policy), because A would now run
> at the same priority as B. If B takes Y, then lower priority A
> wouldn't be scheduled anyway, because in a single CPU system,
> only the highest priority task will actually run!

Fully agree!  Single CPU systems are great :)

> On SMP systems, the situation is completely different, because
> you can have two CPUs to run the tasks on: even if Task A running
> on CPU 1 is boosted to the ceiling when it takes X, then Task B
> on CPU 2 is not affected and could easily lock Y -> deadlock!

Now I see what you mean.  However, I guess the problem here is that, in the 
example you provide, you are using nested locks, and this is _always_ an issue.  
It is a known problem with the PI protocol.  Good practice in RT 
application development says you should never do this.  Anyway, the main goal 
of the PI and PC protocols is to avoid priority inversion, not deadlocks.

> The only solution (I'm aware of) would be that when Task A locks
> Y, then CPU 1 needs to inform CPU 2 to un-schedule Task B (if it's
> currently running) or to mark Task B as un-schedulable (if it's
> not currently running), respectively. This way, Task B can't take
> any shared resources (X or Y) while Task A holds X or Y. When
> Task A frees the lock, CPU 1 has to tell CPU 2 to kick Task B again
> (if it was put to sleep) or to mark it as schedulable, respectively.

I believe this could actually be implemented _but_ it might not be desirable 
to do it.  An in-depth analysis would be required to check this, but I believe 
the determinism of the system would suffer from it.

> Effectively, all tasks using PC and sharing the same resources do
> not perform any better than on a single-CPU system, because they
> can't run in parallel on different CPUs. Even worse, the notable
> notification overhead could reduce performance dramatically for
> heavily contended locks!
>
> Now my actual question was if there is any other resource
> locking mechanism that inherently (by design) avoids dead-locks
> and is proven to work on SMP?

Well... apart from good RT application design and some schedulability analysis, 
no, I don't know of anything else.  I know there is a lot of academic research 
(and probably some implementations too, of course) on distributed-systems 
locking mechanisms that deal with these issues, but I am not familiar with those.

Note once again that the main goal of the PI and PC protocols is not deadlock 
avoidance but to prevent priority inversion.

> > (BTW, what do you think that should be the kernel behaviour when the
> > ceiling value is violated?  Typical solution in RTOS is to have a fatal
> > error)
>
> Here's where I'm not exactly sure what you mean:

When you have some kind of locking mechanism (a mutex, for example) that 
implements the PC protocol, you need to define a ceiling value for it.  This 
ceiling value is defined according to an analysis of your system and 
corresponds to the highest-priority task that can access this locking 
mechanism.  My question was: sometimes it may not be easy to define this 
value, and, at runtime, the ceiling may be violated by a task that accesses 
this mutex.

> You mean, if in the example above, a new yet higher priority Task C
> would come in (sharing X or Y) and thus the ceiling value would
> need to be (dynamically) changed? I don't think that would have
> any implications for the case, above.

Well, but if you dynamically change the ceiling value, then we are talking 
about PI, right? :)

Best regards
-- 
Luís Henriques
