On Wed, 13 Jul 2016, Manfred Spraul wrote:
> -static void sem_wait_array(struct sem_array *sma)
> +static void complexmode_enter(struct sem_array *sma)
>  {
>  	int i;
>  	struct sem *sem;
>
> -	if (sma->complex_count) {
> -		/* The thread that increased sma->complex_count waited on
> -		 * all sem->lock locks. Thus we don't need to wait again.
> -		 */
> +	if (sma->complex_mode) {
> +		/* We are already in complex_mode. Nothing to do */
>  		return;
>  	}
> +	WRITE_ONCE(sma->complex_mode, true);
So we can actually save those READ/WRITE_ONCE calls for complex_mode, as it's a bool and therefore tearing is not an issue.
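IOW, plain accesses should do; something like this (untested sketch on top of this patch, leaving the barrier question below aside):

	if (sma->complex_mode) {
		/* We are already in complex_mode. Nothing to do */
		return;
	}
	sma->complex_mode = true;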
> +
> +	/* We need a full barrier:
> +	 * The write to complex_mode must be visible
> +	 * before we read the first sem->lock spinlock state.
> +	 */
> +	smp_mb();
smp_store_mb()?
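That would fold the store and the barrier into a single call; the generic smp_store_mb() is just WRITE_ONCE() followed by smp_mb(), so the ordering guarantee is unchanged. Untested:

	/* We need a full barrier:
	 * the write to complex_mode must be visible
	 * before we read the first sem->lock spinlock state.
	 */
	smp_store_mb(sma->complex_mode, true);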
>  	/*
> @@ -300,56 +338,40 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
>  		/* Complex operation - acquire a full lock */
>  		ipc_lock_object(&sma->sem_perm);
>
> -		/* And wait until all simple ops that are processed
> -		 * right now have dropped their locks.
> -		 */
> -		sem_wait_array(sma);
> +		/* Prevent parallel simple ops */
> +		complexmode_enter(sma);
>
>  		return -1;
nit and unrelated: we should probably use a better label here than a raw -1 (although I don't see it changing, just for nicer reading), e.g. SEM_OBJECT_LOCKED.
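Something like this, with the obvious caveat that SEM_OBJECT_LOCKED is a made-up name and nothing in the tree defines it today:

	/* sem_lock() took the global lock, the whole array is locked */
	#define SEM_OBJECT_LOCKED	(-1)

and then:

		/* Prevent parallel simple ops */
		complexmode_enter(sma);

		return SEM_OBJECT_LOCKED;

Thanks,
Davidlohr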