On Mon, 2008-08-25 at 19:35 +0530, Aneesh Kumar K.V wrote:
> On Mon, Aug 25, 2008 at 01:27:19PM +0200, Peter Zijlstra wrote:
> > On Mon, 2008-08-25 at 16:50 +0530, Aneesh Kumar K.V wrote:
> > > @@ -53,10 +53,31 @@ static inline s64 percpu_counter_sum(struct percpu_counter *fbc)
> > >  	return __percpu_counter_sum(fbc);
> > >  }
> > >  
> > > -static inline s64 percpu_counter_read(struct percpu_counter *fbc)
> > > +#if BITS_PER_LONG == 64
> > > +static inline s64 fbc_count(struct percpu_counter *fbc)
> > >  {
> > >  	return fbc->count;
> > >  }
> > > +#else
> > > +/* doesn't have atomic 64 bit operation */
> > > +static inline s64 fbc_count(struct percpu_counter *fbc)
> > > +{
> > > +	s64 ret;
> > > +	unsigned seq;
> > > +	unsigned long flags;
> > > +	do {
> > > +		seq = read_seqbegin_irqsave(&fbc->lock, flags);
> > > +		ret = fbc->count;
> > > +	} while(read_seqretry_irqrestore(&fbc->lock, seq, flags));
> >
> > Do we really need to disable IRQs here? It seems to me the worst that
> > can happen is that the IRQ will change ->count and increase the sequence
> > number a bit - a case that is perfectly handled by the current retry
> > logic.
> >
> > And not doing the IRQ flags bit saves a lot of time on some archs.
>
> Will update in the next version. BTW does it make sense to do
> the above unconditionally now? i.e. to remove the #if? How much
> impact would it be to do read_seqbegin and read_seqretry on a 64bit
> machine too?

There are a few smp_rmb()s in there, so that will at the very least be a
compiler barrier and thus generate slightly worse code, along with the
few extra reads. But I'm not sure that's measurable.
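
For reference, a minimal sketch of what the suggested change would look like: the same retry loop, but using plain read_seqbegin()/read_seqretry() instead of the irqsave/irqrestore variants. This assumes the rest of the patch has already converted fbc->lock to a seqlock_t; it is only an illustration of the idea discussed above, not the patch that was eventually posted.

static inline s64 fbc_count(struct percpu_counter *fbc)
{
	s64 ret;
	unsigned seq;

	do {
		/* Snapshot the sequence counter before reading. */
		seq = read_seqbegin(&fbc->lock);
		ret = fbc->count;
		/*
		 * If a writer (possibly running in IRQ context) bumped
		 * the sequence while we were reading, simply retry;
		 * no need to disable interrupts around the read.
		 */
	} while (read_seqretry(&fbc->lock, seq));

	return ret;
}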