Re: [PATCH -V3 01/11] percpu_counters: make fbc->count read atomic on 32 bit architecture

On Wed, 2008-08-27 at 21:09 -0700, Andrew Morton wrote:
> On Thu, 28 Aug 2008 09:22:00 +0530 "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxxxxxxx> wrote:
> 
> > On Wed, Aug 27, 2008 at 02:22:50PM -0700, Andrew Morton wrote:
> > > On Wed, 27 Aug 2008 23:01:52 +0200
> > > Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> wrote:
> > > 
> > > > > 
> > > > > > +static inline s64 percpu_counter_read(struct percpu_counter *fbc)
> > > > > > +{
> > > > > > +	return fbc_count(fbc);
> > > > > > +}
> > > > > 
> > > > > This change means that a percpu_counter_read() from interrupt context
> > > > > on a 32-bit machine is now deadlockable, whereas it previously was not
> > > > > deadlockable on either 32-bit or 64-bit.
> > > > > 
> > > > > This flows on to lib/proportions.c, which uses
> > > > > percpu_counter_read() and also does spin_lock_irqsave() internally,
> > > > > indicating that it is (or was) designed to be used in IRQ contexts.
> > > > 
> > > > percpu_counter() never was irq safe, which is why the proportion stuff
> > > > does all the irq disabling bits by hand.
> > > 
> > > percpu_counter_read() was irq-safe.  That changes here.  Needs careful
> > > review, changelogging and, preferably, runtime checks.  But perhaps
> > > they should be inside some CONFIG_thing which won't normally be done in
> > > production.
> > > 
> > > otoh, percpu_counter_read() is in fact a rare operation, so a bit of
> > > overhead probably won't matter.
> > > 
> > > (write-often, read-rarely is the whole point.  This patch's changelog's
> > > assertion that "Since fbc->count is read more frequently and updated
> > > rarely" is probably wrong.  Most percpu_counters will have their
> > > fbc->count modified far more frequently than having it read from).
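
To make the deadlock Andrew describes above concrete: with this patch,
percpu_counter_read() on 32-bit has to take fbc->lock (an s64 load is
not atomic there), so it can spin on a lock its own CPU already holds.
A rough sketch (illustrative functions, not the patch itself):

	/* Process context: folding the per-cpu delta into the global
	 * count.  Note the lock is taken without disabling IRQs. */
	void add_path(struct percpu_counter *fbc, s64 amount)
	{
		spin_lock(&fbc->lock);
		fbc->count += amount;
		/* <-- an interrupt on this CPU here is fatal ... */
		spin_unlock(&fbc->lock);
	}

	/* Interrupt context: with the patch, the read now takes the
	 * same lock, which this CPU already holds -> deadlock. */
	s64 irq_path(struct percpu_counter *fbc)
	{
		return percpu_counter_read(fbc);
	}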
> > 
> > We may actually be doing percpu_counter_add() far more often, but
> > that doesn't update fbc->count: we only update fbc->count when the
> > local percpu value crosses FBC_BATCH. If we were modifying
> > fbc->count more frequently than reading it, I guess we would be
> > contending on fbc->lock more.
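
For reference, the add-side batching looks roughly like this (from
memory, simplified from lib/percpu_counter.c of this era):

	void __percpu_counter_add(struct percpu_counter *fbc,
				  s64 amount, s32 batch)
	{
		s32 *pcount = per_cpu_ptr(fbc->counters, get_cpu());
		s64 count = *pcount + amount;

		if (count >= batch || count <= -batch) {
			/* Only here are fbc->count/fbc->lock touched. */
			spin_lock(&fbc->lock);
			fbc->count += count;
			*pcount = 0;
			spin_unlock(&fbc->lock);
		} else {
			*pcount = count;
		}
		put_cpu();
	}

So fbc->lock is contended only about once per FBC_BATCH local updates.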
> > 
> > 
> 
> Yep.  The frequency of modification of fbc->count is of the order of a
> tenth or a hundredth of the frequency of
> percpu_counter_<modification>() calls.
> 
> But in many cases the frequency of percpu_counter_read() calls is far
> far less than this.  For example, the percpu_counter_read() may only
> happen when userspace polls a /proc file.
> 
> 

The global counter is much more frequently accessed with delalloc. :(

With delayed allocation, we have to read the free blocks counter at
each write_begin() to make sure there are enough free blocks for the
block reservation, so that a later writepages() doesn't return ENOSPC.
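
In other words, every buffered write now hits the counter read on the
hot path, something like this (hypothetical sketch, names invented --
the real check sits in ext4's delalloc reservation code):

	/* Called from write_begin() for each page being written. */
	static int check_free_blocks(struct percpu_counter *free_blocks,
				     s64 needed)
	{
		if (percpu_counter_read(free_blocks) < needed)
			return -ENOSPC;	/* fail now, not in writepages */
		return 0;
	}

So percpu_counter_read() here is anything but a rare, /proc-only
operation.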

Mingming

