On Wed, Dec 06, 2006 at 02:57:22PM -0500, J. Bruce Fields wrote:
> On Wed, Dec 06, 2006 at 09:49:51AM -0600, David Teigland wrote:
> > The gfs side looks fine to me. Did you forget to call fl_notify from
> > gdlm_plock_callback() or am I missing something?
>
> Yes, looks like we missed something, thanks. This code's an rfc (not
> even tested), so don't apply it yet! What we should have there is
> something like:
>
> 	rv = op->info.rv;
>
> 	if (fl_notify(fl, NULL, rv)) {
> 		/* XXX: We need to cancel the lock here: */
> 		printk("gfs2 lock granted after lock request failed; dangling lock!\n");
> 	}
>
> 	if (!rv) {
> 		/* check if the following are still valid or make a copy */
> 		file = op->info.file;
> 		fl = op->info.fl;
>
> 		if (posix_lock_file_wait(file, fl) < 0)
> 			log_error("gdlm_plock: vfs lock error file %p fl %p",
> 				  file, fl);
> 	}
>
> Note there's a race condition--that calls fl_notify before actually
> getting the lock locally. I don't *think* that's a problem, as long as
> it's the filesystem and not the local lock list that's authoritative
> when it comes to who gets a posix lock.

agree

> The more annoying problem is the need to cancel the GFS lock when
> fl_notify fails; is that something that it's possible for GFS to do?
>
> It can fail because lockd has a timeout--it waits a few seconds for the
> callback, then gives up and returns a failure to the user. If that
> happens after your userspace posix lock manager acquires the lock (but
> before fl_notify is called) then you've got to cancel it somehow.

I'd think we could just send an unlock for it at that point. Set up an
op with GDLM_PLOCK_OP_UNLOCK and the same fields as the lock you're
removing and call send_op(). We probably need to flag this
internal-unlock op so that when the result arrives, device_write()
delists and frees it itself. (I wouldn't call this "canceling", I think
of cancel as trying to force a blocked request to return/fail
prematurely.)

Dave
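
For concreteness, here is a minimal, untested sketch of the internal-unlock
idea described above, written against the pieces the plock code already has
(struct plock_op, gdlm_plock_info, GDLM_PLOCK_OP_UNLOCK, send_op()). The
"internal" field is hypothetical; it only stands for whatever marker would
tell device_write() that nobody is sleeping on this op, so it should delist
and free the op itself when the result arrives:

	/*
	 * Sketch only: back out a posix lock that the userspace lock manager
	 * granted after lockd's fl_notify had already timed out and failed.
	 */
	static int send_internal_unlock(struct plock_op *orig)
	{
		struct plock_op *op;

		op = kzalloc(sizeof(*op), GFP_KERNEL);
		if (!op)
			return -ENOMEM;

		/* same fsid/number/owner/start/end as the lock being removed */
		op->info = orig->info;
		op->info.optype = GDLM_PLOCK_OP_UNLOCK;
		op->info.wait = 0;
		op->info.rv = 0;

		/* hypothetical marker: the result should not wake a waiter or
		   run a callback; device_write() just delists and frees it */
		op->internal = 1;

		send_op(op);
		return 0;
	}

gdlm_plock_callback() would call something like this when fl_notify() returns
an error for a lock that userspace has already granted.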