On Fri, Jul 28, 2017 at 7:05 AM, Vlastimil Babka <vbabka@xxxxxxx> wrote:
> On 07/28/2017 11:30 AM, Peter Zijlstra wrote:
>> On Fri, Jul 28, 2017 at 09:45:16AM +0200, Vlastimil Babka wrote:
>>> [+CC PeterZ]
>>>
>>> On 07/27/2017 06:46 PM, Dima Zavin wrote:
>>>> In codepaths that use the begin/retry interface for reading
>>>> mems_allowed_seq with irqs disabled, there exists a race condition that
>>>> stalls the patching process after only modifying a subset of the
>>>> static_branch call sites.
>>>>
>>>> This problem manifested itself as a deadlock in the slub
>>>> allocator, inside get_any_partial. The loop reads the
>>>> mems_allowed_seq value (via read_mems_allowed_begin),
>>>> performs the defrag operation, and then verifies the consistency
>>>> of mems_allowed via read_mems_allowed_retry and the cookie
>>>> returned by xxx_begin. The issue here is that both begin and retry
>>>> first check whether cpusets are enabled via the cpusets_enabled()
>>>> static branch. This branch can be rewritten dynamically (via cpuset_inc)
>>>> when a new cpuset is created. The x86 jump label code fully synchronizes
>>>> across all CPUs for every entry it rewrites. If it rewrites only one of
>>>> the call sites (specifically the one in read_mems_allowed_retry) and then
>>>> waits for the smp_call_function(do_sync_core) to complete while a CPU is
>>>> inside the begin/retry section with IRQs off and the mems_allowed value
>>>> is changed, we can hang. This is because begin() will always return 0
>>>> (since it wasn't patched yet) while retry() will test the 0 against
>>>> the actual value of the seq counter.
>>>
>>> Hm, I wonder if there are other static branch users potentially having
>>> a similar problem. Then it would be best to fix this at the static branch
>>> level. Any idea, Peter? An inelegant solution would be to have
>>> static_branch_(un)likely() call sites indicate an ordering for the
>>> patching. I.e. here we would make sure that read_mems_allowed_begin()
>>> call sites are patched before read_mems_allowed_retry() when enabling
>>> the static key, and in the opposite order when disabling the static key.
>>
>> I'm not aware of any other such ordering requirements. But you can
>> manually create this order by using 2 static keys. Then flip them in the
>> desired order.
>
> Right, thanks for the suggestion. I think that would be preferable to
> complicating the cookie handling. Add a new key next to
> cpusets_enabled_key, let's say "cpusets_enabled_pre_key". Make
> read_mems_allowed_begin() check this key instead of cpusets_enabled().
> Change cpuset_inc/dec to inc/dec also this new key in the right order
> and that should be it. Dima, can you try that or should I?

Yeah, I like that approach much better. I'll re-spin a new version in a bit.

--Dima
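
[Editor's note: below is a minimal sketch of the two-key approach Vlastimil
outlines above, in kernel-style C. The key name cpusets_enabled_pre_key is
taken from his suggestion; the helper names (cpuset_inc/dec,
read_mems_allowed_begin/retry, cpusets_enabled_key) are those discussed in
the thread, but the exact bodies here are illustrative assumptions and not
the patch Dima later posted.]

    /*
     * Sketch: a second static key that is enabled before, and disabled
     * after, cpusets_enabled_key.  begin() tests the "pre" key while
     * retry() keeps testing the main key, so the jump-label patcher can
     * never leave retry() patched while begin() is not.
     */
    #include <linux/jump_label.h>
    #include <linux/seqlock.h>
    #include <linux/sched.h>

    DEFINE_STATIC_KEY_FALSE(cpusets_enabled_pre_key);
    DEFINE_STATIC_KEY_FALSE(cpusets_enabled_key);

    static inline void cpuset_inc(void)
    {
            /* Enable the begin()-side key first ... */
            static_branch_inc(&cpusets_enabled_pre_key);
            /* ... then the key tested by retry() and everyone else. */
            static_branch_inc(&cpusets_enabled_key);
    }

    static inline void cpuset_dec(void)
    {
            /* Disable in the opposite order. */
            static_branch_dec(&cpusets_enabled_key);
            static_branch_dec(&cpusets_enabled_pre_key);
    }

    static inline unsigned int read_mems_allowed_begin(void)
    {
            if (!static_branch_unlikely(&cpusets_enabled_pre_key))
                    return 0;
            return read_seqcount_begin(&current->mems_allowed_seq);
    }

    static inline bool read_mems_allowed_retry(unsigned int seq)
    {
            if (!static_branch_unlikely(&cpusets_enabled_key))
                    return false;
            return read_seqcount_retry(&current->mems_allowed_seq, seq);
    }

With this ordering, a reader that saw begin() still unpatched (cookie 0)
can only reach a retry() that is also unpatched, so retry() returns false
instead of comparing 0 against the live seqcount; the disable path gives
the symmetric guarantee.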