Re: 307af6c879377 "mbcache: automatically delete entries from cache on freeing" ==> PREEMPT_RT grumble

Hi Mike!

[added ext4 list to CC]

On Tue 06-09-22 18:21:37, Mike Galbraith wrote:
> diff --git a/fs/mbcache.c b/fs/mbcache.c
> index d1ebb5df2856..96f1d49d30a5 100644
> --- a/fs/mbcache.c
> +++ b/fs/mbcache.c
> @@ -106,21 +106,28 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
>  		}
>  	}
>  	hlist_bl_add_head(&entry->e_hash_list, head);
> -	hlist_bl_unlock(head);
> -
> +	/*
> +	 * Add entry to LRU list before it can be found by
> +	 * mb_cache_entry_delete() to avoid races
> +	 */
>  	spin_lock(&cache->c_list_lock);
>  	list_add_tail(&entry->e_list, &cache->c_list);
> -	/* Grab ref for LRU list */
> -	atomic_inc(&entry->e_refcnt);
>  	cache->c_entry_count++;
>  	spin_unlock(&cache->c_list_lock);
> +	hlist_bl_unlock(head);
> 
>  	return 0;
>  }
>  EXPORT_SYMBOL(mb_cache_entry_create);
> 
> The above movement of hlist_bl_unlock() is a problem for RT with respect
> to both taking and releasing ->c_list_lock: on RT it becomes an rtmutex,
> and it is now acquired and released inside the preemption-blocking bit
> spinlock that hlist_bl_unlock() drops.
> 
> Is that scope increase necessary?  If so, it looks like ->c_list_lock
> could probably become a raw_spinlock_t without anyone noticing.
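
For the ext4 list's benefit, the nesting in question boils down to the
shape below. This is only a condensed sketch of the post-commit
mb_cache_entry_create() locking taken from the hunk above; it is not a
buildable unit and the function name is made up for illustration.

static int create_locking_shape(struct mb_cache *cache,
				struct mb_cache_entry *entry,
				struct hlist_bl_head *head)
{
	hlist_bl_lock(head);	/* bit spinlock: blocks preemption on RT */
	hlist_bl_add_head(&entry->e_hash_list, head);

	/*
	 * On PREEMPT_RT, spinlock_t is a sleeping rtmutex, so taking
	 * c_list_lock here means potentially sleeping while the bit
	 * spinlock above still keeps the task non-preemptible.
	 */
	spin_lock(&cache->c_list_lock);
	list_add_tail(&entry->e_list, &cache->c_list);
	cache->c_entry_count++;
	spin_unlock(&cache->c_list_lock);

	hlist_bl_unlock(head);
	return 0;
}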

Well, it was the easy solution, but there is a relatively simple workaround
that should remove the need for the nesting. I'll send a patch.
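
Roughly, the idea is to do the LRU insertion and the hash insertion back to
back instead of nested, something along these lines. Again only a sketch
with a made-up function name, under the assumption that putting the entry
on the LRU list before publishing it in the hash chain keeps the
mb_cache_entry_delete() race closed; the actual patch may well look
different.

static int create_without_nesting(struct mb_cache *cache,
				  struct mb_cache_entry *entry,
				  struct hlist_bl_head *head)
{
	/* LRU insertion first, with c_list_lock as the only lock held. */
	spin_lock(&cache->c_list_lock);
	list_add_tail(&entry->e_list, &cache->c_list);
	cache->c_entry_count++;
	spin_unlock(&cache->c_list_lock);

	/* Only then publish the entry in the hash chain under the bit lock. */
	hlist_bl_lock(head);
	hlist_bl_add_head(&entry->e_hash_list, head);
	hlist_bl_unlock(head);

	return 0;
}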

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR


