RE: [RFC] mm/memory.c: Optimizing THP zeroing routine for !HIGHMEM cases

> > +#else
> > +void clear_huge_page(struct page *page, unsigned long addr_hint,
> > +		     unsigned int pages_per_huge_page)
> > +{
> > +	void *addr;
> > +
> > +	addr = page_address(page);
> > +	memset(addr, 0, pages_per_huge_page * PAGE_SIZE);
> > +}
> > +#endif
> 
> This seems like a very simplistic solution to the problem, and I am worried
> something like this would introduce latency issues when pages_per_huge_page
> gets to be large. It might make more sense to just wrap the process_huge_page
> call in the original clear_huge_page and then add this code block as an #else
> case. That way you avoid potentially stalling a system for extended periods of
> time if you start trying to clear 1G pages with the function.
> 
> One interesting data point would be the cost of breaking this up into a loop
> that processes some fixed number of pages at a time and calls cond_resched()
> between batches, so you can avoid introducing latency spikes.

As per the patch above, kmap_atomic() is not used, so preemption and page faults
are not disabled. Do we still need to call cond_resched() explicitly in this case?
Just asking.



