On 15.05.2019 at 17:18, Matthew Wilcox wrote:
> On Wed, May 15, 2019 at 08:02:17AM -0700, Eric Dumazet wrote:
>> On Wed, May 15, 2019 at 7:43 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>>> You're seeing a race between page_address(page) being called twice.
>>> Between those two calls, something has caused the page to be removed from
>>> the page_address_map() list. Eric's patch avoids calling page_address(),
>>> so apply it and be happy.
>>
>> Hmm... wont the kmap_atomic() done later, after page_copy_sane() would
>> suffer from the race ?
>>
>> It seems there is a real bug somewhere to fix.
>
> No. page_address() called before the kmap_atomic() will look through
> the list of mappings and see if that page is mapped somewhere. We unmap
> lazily, so all it takes to trigger this race is that the page _has_
> been mapped before, and its mapping gets torn down during this call.
>
> While the page is kmapped, its mapping cannot be torn down.

And that's the answer I'm really glad to hear. In the meantime I've set up
a test run with CONFIG_HIGHMEM disabled to be extra sure; quite expectedly,
though, it robs the system of 256 MiB of accessible memory. Unfortunately,
a look through the 32-bit ARM defconfigs shows CONFIG_HIGHMEM enabled in
quite a few of them, i.MX included.

I'll pick up the patch then, and drop it once it gets included in 4.19.y.
Thanks!

--
With kind regards,
Lech
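
P.S. For anyone skimming the thread, here is a minimal sketch of the pattern
under discussion. It is not the actual lib/iov_iter.c code; the helper name,
arguments, and buffer handling are illustrative assumptions, only the
page_address()/kmap_atomic() behaviour follows what was described above.

    /* Sketch only -- not the real kernel code. */
    #include <linux/highmem.h>
    #include <linux/string.h>

    static void copy_from_page_sketch(struct page *page, size_t off,
                                      void *dst, size_t len)
    {
        /*
         * RACY on CONFIG_HIGHMEM: for a highmem page, page_address()
         * only reports an existing (lazily torn down) kmap, so two
         * calls are not guaranteed to agree -- the second one may see
         * NULL after the mapping has been reclaimed.
         */
        void *maybe_mapped = page_address(page);
        (void)maybe_mapped;

        /*
         * SAFE: kmap_atomic() creates or pins a mapping that cannot be
         * torn down until the matching kunmap_atomic().
         */
        void *kaddr = kmap_atomic(page);
        memcpy(dst, kaddr + off, len);
        kunmap_atomic(kaddr);
    }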