The race happens when two threads try to look up the iint entry for the
same inode at the same time.  The reason it happens is illustrated by
this pseudocode:

	read_lock
	lookup
	read_unlock

	write_lock
	alloc_and_add
	write_unlock

Because the lookup is only protected by a read_lock, both racing
threads can execute the lookup code in parallel and conclude there's no
entry.  Neither thread can get into the alloc_and_add code until the
other thread finishes and drops the read_lock, which then allows
promotion to the exclusive write lock.  Even though the alloc_and_add
will be serialized by the write lock, both threads will add an iint
entry for the same inode.

The fix for this is to do the lookup again under the exclusive lock, so
the last thread into the exclusive section will see the addition and
not do another alloc_and_add.

Signed-off-by: James Bottomley <jejb@xxxxxxxxxxxxx>
---
diff --git a/security/integrity/iint.c b/security/integrity/iint.c
index 8638976f7990..eadc5890f4ec 100644
--- a/security/integrity/iint.c
+++ b/security/integrity/iint.c
@@ -116,6 +116,21 @@ struct integrity_iint_cache *integrity_inode_get(struct inode *inode)
 
 	write_lock(&integrity_iint_lock);
 
+	/*
+	 * unlikely race caused by two threads executing the lookup and
+	 * add simultaneously. Because the lookup is under a read
+	 * lock, they can both execute that in parallel and both
+	 * conclude there's no entry for inode. To prevent them then
+	 * both adding separate entries for the same inode we need to
+	 * perform the lookup again under the exclusive lock.
+	 */
+	test_iint = __integrity_iint_find(inode);
+	if (unlikely(test_iint)) {
+		write_unlock(&integrity_iint_lock);
+		kmem_cache_free(iint_cache, iint);
+		return test_iint;
+	}
+
 	p = &integrity_iint_tree.rb_node;
 	while (*p) {
 		parent = *p;
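
As an aside, and purely for illustration (this is not part of the patch
and not kernel code), the same double-checked pattern can be sketched
in userspace with a pthread rwlock standing in for integrity_iint_lock
and a trivial linked list standing in for the iint rbtree.  All names
below (entry_get, table_lookup, etc.) are placeholders:

#include <pthread.h>
#include <stdlib.h>

struct entry {
	unsigned long key;
	struct entry *next;
};

static struct entry *table;	/* trivial stand-in for the rbtree */
static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;

/* must be called with table_lock held (shared or exclusive) */
static struct entry *table_lookup(unsigned long key)
{
	struct entry *e;

	for (e = table; e; e = e->next)
		if (e->key == key)
			return e;
	return NULL;
}

struct entry *entry_get(unsigned long key)
{
	struct entry *e, *found;

	/* fast path: shared lock, racing threads may both run this */
	pthread_rwlock_rdlock(&table_lock);
	found = table_lookup(key);
	pthread_rwlock_unlock(&table_lock);
	if (found)
		return found;

	e = calloc(1, sizeof(*e));
	if (!e)
		return NULL;
	e->key = key;

	pthread_rwlock_wrlock(&table_lock);
	/*
	 * Re-check under the exclusive lock: another thread may have
	 * raced through the read-locked lookup above and already
	 * added an entry for this key while we were allocating.
	 */
	found = table_lookup(key);
	if (found) {
		pthread_rwlock_unlock(&table_lock);
		free(e);
		return found;
	}
	e->next = table;
	table = e;
	pthread_rwlock_unlock(&table_lock);
	return e;
}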