On 26.07.19 10:31, Michal Hocko wrote:
> On Fri 26-07-19 10:05:58, David Hildenbrand wrote:
>> On 26.07.19 09:57, Michal Hocko wrote:
>>> On Thu 25-07-19 22:49:36, David Hildenbrand wrote:
>>>> On 25.07.19 21:19, Michal Hocko wrote:
>>> [...]
>>>>> We need to rationalize the locking here, not to add more hacks.
>>>>
>>>> No, sorry. The real hack is calling a function that is *documented* to
>>>> be called under lock without it. That is an optimization for a special
>>>> case. That is the black magic in the code.
>>>
>>> OK, let me ask differently. What does the device_hotplug_lock actually
>>> protect against in the add_memory path? (Which data structures?)
>>>
>>> This function is meant to be used when struct pages and node/zone data
>>> structures should be updated. Why should we even care about some device
>>> concept here? This should all be handled a layer up. Not all memory will
>>> have a user space API to control its online/offline state.
>>
>> Via add_memory()/__add_memory() we create memory block devices for all
>> memory. So all memory we create via this function (IOW, hotplug) will
>> have user space APIs.
>
> Oops, I mixed up add_memory with add_pages, which I had in mind while
> writing that. Sorry about the confusion.

No worries :)

> Anyway, my dislike of the device_hotplug_lock persists. I would really
> love to see it go rather than grow even further into the hotplug code.
> We should really be striving for mem-hotplug-internal, and ideally
> range-defined, locking long term.

Yes, and that is a different story, because it will require major
changes to all add_memory() users (esp. due to the documented race
conditions).

Having that said, memory hotplug locking is not ideal yet.

-- 
Thanks,

David / dhildenb