On Aug 2, 2005, at 12:35 PM, Alasdair G Kergon wrote:
> On Tue, Aug 02, 2005 at 11:00:05AM -0500, Jon Brassow wrote:
> > The 'race.patch' fixes a couple race conditions that were a result
> > of the __rh_alloc() function releasing and regrabbing a lock when
> > called.
> >
> > -	read_unlock(&rh->hash_lock);
> > -	nreg = mempool_alloc(rh->region_pool, GFP_NOIO);
> > +	nreg = mempool_alloc(rh->region_pool, GFP_ATOMIC);
> >  	nreg->state = rh->log->type->in_sync(rh->log, region, 1) ?
> So mempool_alloc is now called while holding a spinlock.
> With GFP_ATOMIC, what happens if it returns NULL?
That's what the mempool is all about. :)  I suppose, if you were under
memory pressure long enough and the kernel never successfully reclaimed
memory, it *might* return NULL -- in which case the kernel oopses.  I
think this is so unlikely that I would be willing to put in a BUG()
call if nreg is found to be NULL.  BTW, the scenario is exactly the
same as if the gfp_flag were GFP_NOIO -- if the kernel can't get us
memory, it can't get us memory.
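
If it would help, I can make that failure loud instead of letting it
oops on the dereference.  Something along these lines (a sketch against
the hunk quoted above, not a tested patch):

	nreg = mempool_alloc(rh->region_pool, GFP_ATOMIC);
	/*
	 * GFP_ATOMIC never sleeps, so hash_lock can stay held across
	 * the allocation.  If even the mempool reserve is empty there
	 * is nowhere left to go, so fail loudly here rather than oops
	 * on a NULL dereference a few lines down.
	 */
	BUG_ON(!nreg);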
> Doesn't it need to be able to sleep to get the benefit of the mempool?
No. mempool will try to allocate memory honoring the flags you have
set (in this case GFP_ATOMIC). If unsuccessful, it will tap into the
memory reserve you have created when you did a mempool_create. So,
when there is no memory left to give us, we have 64 _regions_ left in
our repository.
Am I wrong?
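
To spell out what I mean, here is roughly how a pool like region_pool
gets set up; the names and callbacks are illustrative, not a verbatim
copy of the dm-raid1 code:

	#include <linux/mempool.h>
	#include <linux/slab.h>

	#define MIN_REGIONS 64

	static void *region_alloc(unsigned int gfp_mask, void *pool_data)
	{
		return kmalloc(sizeof(struct region), gfp_mask);
	}

	static void region_free(void *element, void *pool_data)
	{
		kfree(element);
	}

	/*
	 * In the region hash constructor: mempool_create() pre-allocates
	 * MIN_REGIONS elements as an emergency reserve.  mempool_alloc()
	 * first calls region_alloc() with the caller's flags; only if
	 * that fails does it hand out an element from the reserve -- and
	 * with GFP_ATOMIC it does all of this without sleeping.  NULL
	 * only comes back once the reserve itself is exhausted.
	 */
	rh->region_pool = mempool_create(MIN_REGIONS, region_alloc,
					 region_free, NULL);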
> > The 'deadlock.patch' switches all spin_lock calls on the region_lock
> > to spin_lock_irqsave.  Without this patch your machine will hang.
> I'd prefer it if race.patch used the correct versions (with or
> without _irq).
Are you asking me to "combine" the patches? I can do that.
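
For the record, the irq-safe locking in deadlock.patch amounts to this
pattern everywhere region_lock is taken (the critical section below,
with reg and clean_regions, is just an example):

	unsigned long flags;

	/*
	 * region_lock can also be taken from interrupt context (e.g. a
	 * bio end_io path), so a plain spin_lock() in process context
	 * lets an interrupt on the same CPU spin forever on a lock that
	 * CPU already holds.  Saving and disabling local interrupts
	 * closes that window.
	 */
	spin_lock_irqsave(&rh->region_lock, flags);
	list_add(&reg->list, &rh->clean_regions);
	spin_unlock_irqrestore(&rh->region_lock, flags);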
brassow