On Tue 06-10-20 10:40:23, David Hildenbrand wrote:
> On 06.10.20 10:34, Michal Hocko wrote:
> > On Tue 22-09-20 16:37:12, Vlastimil Babka wrote:
> >> Page isolation can race with processes freeing pages to pcplists in a way that
> >> a page from an isolated pageblock can end up on a pcplist. This can be fixed by
> >> repeated draining of pcplists, as done by patch "mm/memory_hotplug: drain
> >> per-cpu pages again during memory offline" in [1].
> >>
> >> David and Michal would prefer that this race was closed in a way that callers
> >> of page isolation who need stronger guarantees don't need to repeatedly drain.
> >> David suggested disabling pcplists usage completely during page isolation,
> >> instead of repeatedly draining them.
> >>
> >> To achieve this without adding special cases in the alloc/free fastpath, we can use
> >> the same approach as boot pagesets - when pcp->high is 0, any pcplist addition
> >> will be immediately flushed.
> >>
> >> The race can thus be closed by setting pcp->high to 0 and draining pcplists
> >> once, before calling start_isolate_page_range(). The draining will serialize
> >> after processes that have already disabled interrupts and read the old value of
> >> pcp->high in free_unref_page_commit(); processes that have not yet disabled
> >> interrupts will observe pcp->high == 0 when they are rescheduled, and skip
> >> pcplists. This guarantees no stray pages on pcplists in zones where isolation
> >> happens.
> >>
> >> This patch thus adds zone_pcplist_disable() and zone_pcplist_enable() functions
> >> that page isolation users can call before start_isolate_page_range() and after
> >> unisolating (or offlining) the isolated pages. A new zone->pcplist_disabled
> >> atomic variable makes sure we disable pcplists only once and don't enable
> >> them prematurely in case there are multiple users in parallel.
> >>
> >> We however have to avoid external updates to high and batch by taking
> >> pcp_batch_high_lock. To allow multiple isolations in parallel, change this lock
> >> from a mutex to an rwsem.
> >
> > The overall idea makes sense. I just suspect you are overcomplicating
> > the implementation a bit. Is there any reason that we cannot start with
> > a really dumb implementation first? The only user of this functionality
> > is memory offlining, and that is already strongly synchronized
> > (mem_hotplug_begin), so a lot of trickery can be dropped here. Should we
> > find a new user later on we can make the implementation finer grained,
> > but for now it would not serve any purpose. So can we simply update pcp->high
> > and drain all pcp in the given zone and wait for all remote pcp draining
> > in zone_pcplist_disable, and revert all that in zone_pcplist_enable?
> > We can stick to the existing pcp_batch_high_lock.
> >
> > What do you think?
>
> My two cents: we might want to make use of this in some cases of
> alloc_contig_range() soon ("try hard mode"). So I'd love to see a
> synchronized mechanism. However, that can be factored out into a
> separate patch, so this patch gets significantly simpler.

Exactly. And the incremental patch can be added along with the a-c-r
try harder mode.

--
Michal Hocko
SUSE Labs
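
[Editor's note: for readers unfamiliar with the mechanism being discussed, below is a minimal sketch of the simpler approach Michal suggests above - set pcp->high to 0 under pcp_batch_high_lock, drain once, and restore on enable. The helper names (__zone_set_pageset_high_and_batch, __drain_all_pages) and the cached zone->pageset_high/batch fields are assumptions for illustration, not the actual patch.]

```c
/*
 * Sketch only, assuming helpers that update pcp->high/batch on all CPUs
 * of a zone and a drain variant that waits for remote per-cpu drains.
 */
void zone_pcplist_disable(struct zone *zone)
{
	/* serialize against other high/batch updates (sysctl, onlining) */
	mutex_lock(&pcp_batch_high_lock);

	/*
	 * Hypothetical helper: with pcp->high == 0, every page freed to a
	 * pcplist is immediately flushed to the buddy allocator.
	 */
	__zone_set_pageset_high_and_batch(zone, 0, 1);

	/*
	 * Hypothetical drain that also waits for remote CPUs, so no stray
	 * pages remain on this zone's pcplists once we return.
	 */
	__drain_all_pages(zone, true);
}

void zone_pcplist_enable(struct zone *zone)
{
	/* restore the values previously computed from zone size / sysctls */
	__zone_set_pageset_high_and_batch(zone, zone->pageset_high,
					  zone->pageset_batch);
	mutex_unlock(&pcp_batch_high_lock);
}
```

Holding pcp_batch_high_lock across the whole disable/enable window is what keeps external high/batch updates out; a caller would bracket start_isolate_page_range() and the later undo/offline with these two calls.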