On Fri, 29 Nov 2013, Sebastian Andrzej Siewior wrote:
> * Sebastian Andrzej Siewior | 2013-11-29 16:14:01 [+0100]:
>
> >* Nicholas Mc Guire | 2013-11-23 01:51:58 [+0100]:
> >
> >>>From 5c9a0c1510ec29c1e148f66f3c111f52f7565df1 Mon Sep 17 00:00:00 2001
> >>From: Nicholas Mc Guire <der.herr@xxxxxxx>
> >>Date: Fri, 22 Nov 2013 02:41:48 -0500
> >>Subject: [PATCH] migrate_disable pushed down in rt_read_trylock
> >>
> >> No need to migrate_disable before requesting the lock, and no need to
> >> speculatively disable/enable on every recursive call. migrate_disable
> >> can be done at the latest point in the code, just before returning an
> >> acquired lock.
> >>
> >> patch is on top of 3.12-rt2
> >>
> >> No change of functionality
> >Applied without this line.
>
> and dropped, because there is a problem with this:
>
> - Now:
>   if you read_lock() and then read_trylock(), migrate_disable() is
>   called by each caller. Likewise, on read_unlock() migrate_enable()
>   is called by each caller, so the two stay balanced.
>
> - With the patch:
>   read_lock() calls migrate_disable() and read_trylock() does not.
>   Both get the lock, so on read_unlock() the read_trylock() owner
>   remains unbalanced.
>
> Disabling migration prior to incrementing read_depth should fix this.

yup - that one is broken. Interesting that the boxes have been running
happily for days with this bug applied :) - a 4-core i3 and a 4-core i7.

So the fix would be to push migrate_disable() down into rt_read_lock()
as well, and then balance it in rt_read_unlock(), conditioned on
read_depth reaching 0.

I have a few more cleanups pending - I'll give them another scan to
check whether I missed this in any of the others.

thx!
hofrat
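
For illustration, the balanced variant described above would look roughly
like this. This is a sketch modelled on the 3.12-rt shape of
rt_read_trylock()/rt_read_unlock(), not the actual patch: lockdep
annotations are omitted, and the exact field and helper names are
approximations of the -rt tree.

	int rt_read_trylock(rwlock_t *rwlock)
	{
		struct rt_mutex *lock = &rwlock->lock;
		int ret = 1;

		/*
		 * Recursive reads succeed when current already owns the
		 * lock; owner == current with read_depth == 0 means the
		 * lock is write-locked, so the trylock must fail.
		 */
		if (rt_mutex_owner(lock) != current)
			ret = rt_mutex_trylock(lock);
		else if (!rwlock->read_depth)
			ret = 0;

		if (ret) {
			/*
			 * Disable migration only on the outermost
			 * acquisition, i.e. on the read_depth 0 -> 1
			 * transition. rt_read_lock() would do the same.
			 */
			if (rwlock->read_depth++ == 0)
				migrate_disable();
		}

		return ret;
	}

	void rt_read_unlock(rwlock_t *rwlock)
	{
		/*
		 * Balance the single migrate_disable() on the outermost
		 * release, when read_depth drops back to 0.
		 */
		if (--rwlock->read_depth == 0) {
			rt_mutex_unlock(&rwlock->lock);
			migrate_enable();
		}
	}

The point is that migrate_disable() fires only on the 0 -> 1 transition
of read_depth, so the one conditional migrate_enable() in rt_read_unlock()
balances both the lock and trylock acquire paths.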