On Thu, Apr 16, 2009 at 08:41:58PM +0200, Eric Dumazet wrote:
> Paul E. McKenney wrote:
> >
> > But if some other CPU holds the lock, this code would fail to wait for
> > that other CPU to release the lock, right?  It also might corrupt the
> > rl->count field due to two CPUs accessing it concurrently non-atomically.
>
> If another cpu holds the lock, this cpu will spin on its own lock.
>
> Remember other cpus don't touch rl->count. This is a private field, only
> touched by the cpu on its own per_cpu data. There is no possible 'corruption'.

Ah!!!  I must confess that I didn't make it that far into the code...

> So the owner of the per_cpu data does:
>
> 	/*
> 	 * disable preemption, get rl = &__get_cpu_var(arp_tables_lock);
> 	 * then :
> 	 */
> 	lock_time:
> 		if (rl->count++ == 0)
> 			spin_lock(&rl->lock);
>
> 	unlock_time:
> 		if (likely(--rl->count == 0))
> 			spin_unlock(&rl->lock);
>
> while other cpus only do:
>
> 	spin_lock(&rl->lock);
> 	/* work on data */
> 	spin_unlock(&rl->lock);
>
> So they cannot corrupt the 'count' stuff.

OK, that does function.  Hurts my head, though.  ;-)

> > I suggest the following, preferably in a function or macro or something:
> >
> > 	cur_cpu = smp_processor_id();
> > 	if (likely(rl->owner != cur_cpu)) {
> > 		spin_lock(&rl->lock);
> > 		rl->owner = smp_processor_id();
> > 		rl->count = 1;
> > 	} else {
> > 		rl->count++;
> > 	}
> >
> > And the inverse for unlock.
> >
> > Or am I missing something subtle?
>
> Apparently Linus missed it too, and reacted badly to my mail.
> I don't know why we discuss this stuff on lkml either...

Encapsulating them so that they appear in the same place might (or might
not) have gotten the fact that you were not doing a recursive lock through
my head.  Even so, the name "rec_lock" might have overwhelmed the code
structure in my mind.  ;-)

> I stop working on this subject and consider drinking some hard stuff and
> watching tv :)

That -is- extreme!  ;-)

							Thanx, Paul

> See you
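
As a rough illustration of the per-cpu count-plus-spinlock scheme Eric
describes above, a minimal sketch against the usual kernel per-cpu and
spinlock primitives of that era (__get_cpu_var() and friends) might look
like the following. The struct and function names (xt_rlock,
xt_local_lock(), xt_local_unlock(), xt_remote_lock()) are made up for this
sketch and are not the names used in the actual arp_tables patch; a real
implementation in the netfilter path would presumably also want
local_bh_disable() rather than plain preempt_disable().

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/cpumask.h>

struct xt_rlock {
	unsigned int count;	/* recursion depth; touched only by the owning cpu */
	spinlock_t lock;
};

static DEFINE_PER_CPU(struct xt_rlock, xt_rlocks);

static void xt_rlock_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct xt_rlock *rl = &per_cpu(xt_rlocks, cpu);

		rl->count = 0;
		spin_lock_init(&rl->lock);
	}
}

/* Fast path: only the cpu that owns the per-cpu data uses these. */
static void xt_local_lock(void)
{
	struct xt_rlock *rl;

	preempt_disable();
	rl = &__get_cpu_var(xt_rlocks);
	if (rl->count++ == 0)		/* outermost acquisition */
		spin_lock(&rl->lock);
}

static void xt_local_unlock(void)
{
	struct xt_rlock *rl = &__get_cpu_var(xt_rlocks);

	if (--rl->count == 0)		/* outermost release */
		spin_unlock(&rl->lock);
	preempt_enable();
}

/* Slow path: other cpus (e.g. a table replace) just take the spinlock. */
static void xt_remote_lock(int cpu)
{
	spin_lock(&per_cpu(xt_rlocks, cpu).lock);
}

static void xt_remote_unlock(int cpu)
{
	spin_unlock(&per_cpu(xt_rlocks, cpu).lock);
}

Because ->count is only ever read or written by the owning cpu with
preemption disabled, no atomics are needed on the fast path, which is the
point of the scheme.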
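The owner-field variant suggested in the quoted text (checking whether the
current cpu already holds the lock rather than relying purely on a
recursion counter) could be sketched as below. Again the names
(struct xt_olock, xt_owner_lock(), xt_owner_unlock()) and the use of -1 as
the "no owner" value are assumptions made for illustration, not taken from
any posted patch.

#include <linux/spinlock.h>
#include <linux/smp.h>

struct xt_olock {
	spinlock_t lock;
	int owner;		/* cpu id of the current owner, -1 if unowned */
	unsigned int count;	/* recursion depth, valid only while owned */
};

static void xt_owner_lock(struct xt_olock *ol)
{
	int cur_cpu;

	preempt_disable();
	cur_cpu = smp_processor_id();
	if (likely(ol->owner != cur_cpu)) {
		/* Not held by this cpu yet: take the lock and claim it. */
		spin_lock(&ol->lock);
		ol->owner = cur_cpu;
		ol->count = 1;
	} else {
		/* Recursive acquisition on the owning cpu. */
		ol->count++;
	}
}

static void xt_owner_unlock(struct xt_olock *ol)
{
	if (--ol->count == 0) {
		ol->owner = -1;		/* clear ownership before releasing */
		spin_unlock(&ol->lock);
	}
	preempt_enable();
}

The ->owner test is safe because only a cpu that holds the lock ever writes
its own id there, so observing owner == cur_cpu implies the current cpu
already holds it.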