Re: [PATCH] Fix Bug messages

On Thu, 2008-07-31 at 16:10 +0200, John Kacur wrote:
> On Thu, Jul 31, 2008 at 4:01 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > On Thu, 2008-07-31 at 15:49 +0200, John Kacur wrote:
> >> Signed-off-by: John Kacur <jkacur@xxxxxxxxx>
> >> Index: linux-2.6.26-rt1/net/core/sock.c
> >> ===================================================================
> >> --- linux-2.6.26-rt1.orig/net/core/sock.c
> >> +++ linux-2.6.26-rt1/net/core/sock.c
> >> @@ -1986,11 +1986,12 @@ static __init int net_inuse_init(void)
> >>
> >>  core_initcall(net_inuse_init);
> >>  #else
> >> -static DEFINE_PER_CPU(struct prot_inuse, prot_inuse);
> >> +static DEFINE_PER_CPU_LOCKED(struct prot_inuse, prot_inuse);
> >>
> >>  void sock_prot_inuse_add(struct net *net, struct proto *prot, int val)
> >>  {
> >> -       __get_cpu_var(prot_inuse).val[prot->inuse_idx] += val;
> >> +       int cpu = 0;
> >> +       __get_cpu_var_locked(prot_inuse, cpu).val[prot->inuse_idx] += val;
> >>  }
> >>  EXPORT_SYMBOL_GPL(sock_prot_inuse_add);
> >>
> >> @@ -2000,7 +2001,7 @@ int sock_prot_inuse_get(struct net *net,
> >>         int res = 0;
> >>
> >>         for_each_possible_cpu(cpu)
> >> -               res += per_cpu(prot_inuse, cpu).val[idx];
> >> +               res += per_cpu_var_locked(prot_inuse, cpu).val[idx];
> >>
> >>         return res >= 0 ? res : 0;
> >>  }
> >
> > This doesn't look good. You declare it as a PER_CPU_LOCKED, but then
> > never use the extra lock to synchronize data.
> >
> > Given that sock_prot_inuse_get() is a racy read anyway, the 'right' fix
> > would be to do something like:
> >
> > diff --git a/net/core/sock.c b/net/core/sock.c
> > index 91f8bbc..5a8ace4 100644
> > --- a/net/core/sock.c
> > +++ b/net/core/sock.c
> > @@ -1941,8 +1941,9 @@ static DECLARE_BITMAP(proto_inuse_idx, PROTO_INUSE_NR);
> >  #ifdef CONFIG_NET_NS
> >  void sock_prot_inuse_add(struct net *net, struct proto *prot, int val)
> >  {
> > -       int cpu = smp_processor_id();
> > +       int cpu = get_cpu();
> >        per_cpu_ptr(net->core.inuse, cpu)->val[prot->inuse_idx] += val;
> > +       put_cpu();
> >  }
> >  EXPORT_SYMBOL_GPL(sock_prot_inuse_add);
> >
> > @@ -1988,7 +1989,9 @@ static DEFINE_PER_CPU(struct prot_inuse, prot_inuse);
> >
> >  void sock_prot_inuse_add(struct net *net, struct proto *prot, int val)
> >  {
> > -       __get_cpu_var(prot_inuse).val[prot->inuse_idx] += val;
> > +       int cpu = get_cpu();
> > +       per_cpu(prot_inuse, cpu).val[prot->inuse_idx] += val;
> > +       put_cpu();
> >  }
> >  EXPORT_SYMBOL_GPL(sock_prot_inuse_add);
> >
> > This disables preemption, but only for a very short time - so it doesn't
> > hurt the preempt-latency.
> >
> > The alternative is to take a lock, do the inc, and drop the lock again,
> > which is much more expensive.
> >
> >
> 
> Cool, thanks for the quick feedback. What criteria are used to decide
> between disabling preemption for a short time and using the more
> expensive lock?

Basically, the total cost of the operation: in this case the cost of
taking the lock utterly dwarfs the cost of the increment it would
protect.

And since it's Real-Time we're talking about, it's the WCET (worst-case
execution time) of the operation that counts.
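
For illustration, the two approaches for the !CONFIG_NET_NS path look
roughly like this (a minimal, untested sketch -- the prot_inuse_locked
struct and the helper names below are made up for the example, they are
not the actual -rt per-CPU-locked API):

	/* Counter layout, as in net/core/sock.c. */
	struct prot_inuse {
		int val[PROTO_INUSE_NR];
	};

	/* Option 1: pin to a CPU by briefly disabling preemption. */
	static DEFINE_PER_CPU(struct prot_inuse, prot_inuse);

	static void inuse_add_preempt(int idx, int val)
	{
		int cpu = get_cpu();	/* disables preemption */

		per_cpu(prot_inuse, cpu).val[idx] += val;
		put_cpu();		/* re-enables preemption */
	}

	/* Option 2: serialize with a per-CPU lock instead.  On -rt the
	 * lock, not preempt-disable, provides the exclusion, so the CPU
	 * number is only a placement hint. */
	struct prot_inuse_locked {
		spinlock_t lock;	/* spin_lock_init() at boot */
		struct prot_inuse inuse;
	};
	static DEFINE_PER_CPU(struct prot_inuse_locked, prot_inuse_locked);

	static void inuse_add_locked(int idx, int val)
	{
		struct prot_inuse_locked *p =
			&per_cpu(prot_inuse_locked, raw_smp_processor_id());

		spin_lock(&p->lock);	/* this dwarfs the increment */
		p->inuse.val[idx] += val;
		spin_unlock(&p->lock);
	}

Both are correct; the locked variant just pays a lock/unlock round trip
on every increment, which is the extra cost weighed above.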
