On Fri, Apr 07, 2017 at 03:06:00PM +0200, Andrea Arcangeli wrote:
> On Fri, Apr 07, 2017 at 11:02:11AM +0100, Chris Wilson wrote:
> > On Fri, Apr 07, 2017 at 01:23:44AM +0200, Andrea Arcangeli wrote:
> > > Waiting for an RCU grace period only guarantees the work gets
> > > queued, but until the queued work returns there's no guarantee
> > > the memory was actually freed. So flush the work to provide
> > > better guarantees to the reclaim code, in addition to waiting
> > > for an RCU grace period to pass.
> >
> > We are not allowed to call flush_work() from the shrinker; the
> > workqueue doesn't have, and can't have, the right reclaim flags.
>
> I figured the flush_work had to be conditional on "unlock" being true
> too in the i915 shrinker (not only synchronize_rcu_expedited()), and I
> already fixed that bit, but I didn't think it would be a problem to
> wait for the workqueue as long as reclaim didn't recurse on the
> struct_mutex (it is a problem if unlock is false, of course, as we
> would be back to square one). I didn't get further hangs, and I assume
> I've been running through a couple of synchronize_rcu_expedited() and
> flush_work() calls (I should add dynamic tracing to be sure).

Not getting hangs is a good sign, but lockdep doesn't like it:

[ 460.684901] WARNING: CPU: 1 PID: 172 at kernel/workqueue.c:2418 check_flush_dependency+0x92/0x130
[ 460.684924] workqueue: PF_MEMALLOC task 172(kworker/1:1H) is flushing !WQ_MEM_RECLAIM events:__i915_gem_free_work [i915]

If I allocate the workqueue with WQ_MEM_RECLAIM, it complains bitterly
as well.

> Also note, I didn't get any lockdep warning when I reproduced the
> workqueue hang in 4.11-rc5, so at least as far as lockdep is concerned
> there's no problem with calling synchronize_rcu_expedited(), and it
> didn't notice we were holding the struct_mutex while waiting for the
> new workqueue to run.

Yes, that is concerning. I think it's due to the coupling via the
struct completion not being picked up by lockdep, and I hope the
"crossrelease" patches would fix the lack of warnings.

> Also note that recursing on the lock (the unlock == false case) is
> something nothing else does; I'm not sure it's worth the risk, and
> whether you shouldn't just call mutex_trylock in the shrinker instead
> of mutex_trylock_recursive. It is one thing to recurse on the lock
> internally in the same context, but recursing through the whole of
> reclaim is much harder to show safe.

We know. We don't use plain trylock in order to reduce the frequency of
users hitting oom. Peter added mutex_trylock_recursive() because we
were already doing recursive locking in the shrinker, and although we
know we shouldn't, getting rid of the recursion is something we are
working on, but slowly.

> You could start dropping objects and wiping vmas and stuff in the
> middle of some kmalloc/alloc_pages that doesn't expect it and then
> crash for other reasons. So this reclaim recursion model of the
> shrinker is quite unique and quite challenging to prove safe if you
> keep using mutex_trylock_recursive in i915_gem_shrinker_scan.

I know. Trying to stay on top of all the kmallocs under the
struct_mutex, and being aware that the shrinker can and will undo your
objects as you work, is a continual battle, and it catches everyone
working on i915.ko by surprise. Our policy to avoid surprises is based
around pin before alloc.
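
Roughly, the recursion handling and the conditional wait being
discussed look like the sketch below. This is reconstructed from the
4.11 code rather than quoted from the actual patch, and the
mm.free_work name is an assumption:

static bool i915_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
{
	switch (mutex_trylock_recursive(&dev->struct_mutex)) {
	case MUTEX_TRYLOCK_FAILED:
		return false;

	case MUTEX_TRYLOCK_SUCCESS:
		*unlock = true;
		return true;

	case MUTEX_TRYLOCK_RECURSIVE:
		/* reclaim has recursed from under our own struct_mutex */
		*unlock = false;
		return true;
	}

	BUG();
}

	/* ... later, in i915_gem_shrinker_scan(): only wait for the
	 * deferred frees when we are not recursing on struct_mutex,
	 * since __i915_gem_free_work retakes it. This flush_work() is
	 * what trips the check_flush_dependency warning quoted above.
	 */
	if (unlock) {
		synchronize_rcu_expedited();
		flush_work(&dev_priv->mm.free_work);
	}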
> Lock recursion in all other places could be dropped without runtime
> downsides; the only place where mutex_trylock_recursive makes a design
> difference and makes sense to be used is in i915_gem_shrinker_scan.
> The rest are implementation issues, not fundamental shrinker design,
> and it'd be nice if those other mutex_trylock_recursive calls were all
> removed, so that the only one left is in i915_gem_shrinker_scan and
> nowhere else (or to drop it from i915_gem_shrinker_scan as well).

We do need it for shrinker_count as well. If we just report 0 objects,
the shrinker_scan callback will be skipped, iirc. We also need it for
direct calls to i915_gem_shrink(), which themselves may or may not be
underneath the struct_mutex at the time.

> mutex_trylock_recursive() should also be patched to use
> READ_ONCE(__mutex_owner(lock)) because currently it breaks C.

I don't follow:

static inline struct task_struct *__mutex_owner(struct mutex *lock)
{
	return (struct task_struct *)(atomic_long_read(&lock->owner) & ~0x07);
}

The atomic read is equivalent to READ_ONCE(). What's broken here? (I
guess strict aliasing and the pointer cast?)

> In the whole kernel, i915 and msm drm are in fact the only two users
> of that function.

Yes, and Peter will continue to remind us to fix our code and to
complain until it is fixed.

> Another thing is what value to return from i915_gem_shrinker_scan when
> unlock is false and we can't possibly wait for the memory to be freed,
> let alone for an RCU grace period. For various reasons I think it's
> safer to return the current "free" even if we could also return "0" in
> such a case. There are different tradeoffs: returning "free" is less
> likely to trigger an early OOM, as the VM thinks it's still making
> progress and in fact will get more free memory shortly, while
> returning SHRINK_STOP would also be an option and would push harder on
> the other slabs, so it would be more reliable at freeing memory in a
> timely fashion, but it would be more at risk of an early OOM. I think
> returning "free" is the better tradeoff of the two, but I suggest
> adding a comment, as it's not exactly obvious which is better.

Ah. The RCU freeing is only for the small fry, the slabs from which
requests and objects are allocated. It's the gigabytes of pages we have
pinned, which can be released by i915_gem_shrink(), that we count as
freed, even though we only return them to the system and hope the anon
LRU page scanner will make them available for reuse (if they are dirty,
they will need to be swapped out, but some will be returned to the
system directly by truncating the shmemfs filp backing an object).
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
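
P.S. Concretely, the tail of the scan callback with the comment Andrea
asks for would look something like this sketch (simplified; the exact
flags and field names may differ from the tree):

	freed = i915_gem_shrink(dev_priv, sc->nr_to_scan,
				I915_SHRINK_BOUND | I915_SHRINK_UNBOUND);

	if (unlock)
		mutex_unlock(&dev_priv->drm.struct_mutex);

	/* "freed" counts backing pages handed back to shmemfs/the system
	 * by i915_gem_shrink(), not the slab objects deferred to RCU.
	 * Reporting it (rather than 0 or SHRINK_STOP) tells the VM we are
	 * making progress and avoids leaning too hard on the other slabs,
	 * at the cost of slightly overstating what is immediately
	 * reusable when unlock is false.
	 */
	return freed;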