> OK. I see what you mean now. Do you agree with the following analysis?
>
> 1) We need 2 extra locks for the case where we upgrade/downgrade a
>    single existing lock and end up splitting it.
>
> 2) We need to use 1 extra lock in the case where we unlock and split
>    a single existing lock.
>
> 3) We also need to use 1 extra lock in the case where there is no
>    existing lock that is contiguous with the region to lock.

4) Also, 1 extra lock is needed if there's an existing lock that is
contiguous with or overlapping the new region but of a different type,
and no existing lock is completely covered.

> In all other cases, we resort to modifying existing locks instead of
> using new_fl/new_fl2.
>
> In cases (1) and (2) we do need to modify the existing lock. Since this
> is only done after we've set up the extra locks, we're safe.

And in case (4).

> Could I still suggest a couple of modifications to your patch? Firstly,
> we only need to test for 'added' once. Like this?
>
> +	/*
> +	 * The above code only modifies existing locks in case of
> +	 * merging or replacing.  If new lock(s) need to be inserted
> +	 * all modifications are done below this, so it's still safe
> +	 * to bail out.
> +	 */
> +	error = -ENOLCK; /* "no luck" */
> +	if (right && left == right && !new_fl2)
> +		goto out;
> +
> +	error = 0;
>  	if (!added) {
>  		if (request->fl_type == F_UNLCK)
>  			goto out;
> +
> +		if (!new_fl) {
> +			error = -ENOLCK;
> +			goto out;
> +		}
>  		locks_copy_lock(new_fl, request);
>  		locks_insert_lock(before, new_fl);
>  		new_fl = NULL;
>
> Secondly, in cases (2) and (3), we can still complete the lock
> despite one of new_fl/new_fl2 failing to be allocated.

I think it's highly unlikely that one of the allocations would succeed
and the other fail. If the machine is OOM, then it will very likely
fail all allocations.

But even if that did happen, it wouldn't be worth adding more
complexity just to squeeze the last drop out of the available memory
for file locking purposes.

Miklos
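
P.S. To make case (1) concrete: the split can be reproduced from user
space with nothing but standard fcntl(2) POSIX locks. The sketch below
takes a write lock over a range and then downgrades the middle of that
range to a read lock, forcing the kernel to turn one lock into three
(write/read/write) -- which is why two preallocated locks are needed
for this case. The file name and byte ranges are arbitrary choices for
the demonstration, not anything from the patch under discussion.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	struct flock fl;
	int fd = open("/tmp/locktest", O_RDWR | O_CREAT, 0600);

	if (fd == -1) {
		perror("open");
		exit(1);
	}

	/* Write-lock bytes 0..99. */
	fl.l_type = F_WRLCK;
	fl.l_whence = SEEK_SET;
	fl.l_start = 0;
	fl.l_len = 100;
	if (fcntl(fd, F_SETLK, &fl) == -1) {
		perror("F_SETLK (write lock)");
		exit(1);
	}

	/*
	 * Downgrade bytes 40..59 to a read lock.  The existing write
	 * lock is split into [0,39] and [60,99] with a new read lock
	 * covering [40,59]: three locks where there was one.  The
	 * original lock structure can be reused for one remainder,
	 * but the other remainder and the read lock each need a
	 * freshly allocated structure.
	 */
	fl.l_type = F_RDLCK;
	fl.l_start = 40;
	fl.l_len = 20;
	if (fcntl(fd, F_SETLK, &fl) == -1)
		perror("F_SETLK (downgrade)");

	close(fd);
	return 0;
}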