On 11/7/18 8:02 AM, Konstantin Khlebnikov wrote:
> On 06.11.2018 11:43, Arun KS wrote:
>> On 2018-11-06 14:07, Konstantin Khlebnikov wrote:
>>> On 06.11.2018 11:30, Arun KS wrote:
>>>> On 2018-11-06 13:47, Konstantin Khlebnikov wrote:
>>>>> On 06.11.2018 8:38, Arun KS wrote:
>>>>>> Any comments?
>>>>>
>>>>> Looks good.
>>>>> Except unclear motivation behind this change.
>>>>> This should be in the comment of one of the patches.
>>>>
>>>> totalram_pages, zone->managed_pages and totalhigh_pages are
>>>> sometimes modified outside managed_page_count_lock. Hence convert
>>>> these variables to atomic to avoid readers potentially seeing a
>>>> store tear.
>>>
>>> So, is this just a theoretical issue, or a splat from a sanitizer?
>>> After boot, memory online/offline is strictly serialized by a rw-semaphore.
>>
>> There are a few instances which can race with hot add. Please see below,
>> https://patchwork.kernel.org/patch/10627521/
>
> Could you point out what exactly you are fixing with this set?
>
> From v2:
>
> > totalram_pages, zone->managed_pages and totalhigh_pages updates
> > are protected by managed_page_count_lock, but readers never care
> > about it. Convert these variables to atomic to avoid readers
> > potentially seeing a store tear.
>
> This?
>
> > Aligned unsigned long almost always stored at once.

The point is "almost always", so it's better not to rely on it :) But the
main motivation was that the managed_page_count_lock handling was
complicating Arun's "memory_hotplug: Free pages as higher order" patch,
and it seemed a better idea to just remove the lock and convert these
counters to atomics, with preventing potential store-to-read tearing as a
bonus. It would be nice to mention that in the changelogs, though.

> To make it completely correct you could replace
>
> a += b;
>
> with
>
> WRITE_ONCE(a, a + b);

That wouldn't be enough to get rid of the locks.
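
To illustrate what I mean, a rough sketch of the three variants being
discussed (the names totalram_example* below are made up for this
example, not the identifiers used in the actual patches):

#include <linux/compiler.h>
#include <linux/atomic.h>

/*
 * 1) Plain update: the compiler may tear the store, and the
 *    read-modify-write still needs managed_page_count_lock.
 */
static unsigned long totalram_plain;

static void totalram_plain_add(long count)
{
	totalram_plain += count;	/* racy without the lock */
}

/*
 * 2) WRITE_ONCE() prevents store tearing seen by lockless readers,
 *    but the read-modify-write is still not atomic, so writers would
 *    still have to hold the lock.
 */
static void totalram_write_once_add(long count)
{
	WRITE_ONCE(totalram_plain, totalram_plain + count);
}

/*
 * 3) Atomic counter: the update itself is atomic, so the lock can be
 *    dropped entirely and readers never see a torn value.
 */
static atomic_long_t _totalram_example = ATOMIC_LONG_INIT(0);

static inline void totalram_example_add(long count)
{
	atomic_long_add(count, &_totalram_example);
}

static inline unsigned long totalram_example_read(void)
{
	return (unsigned long)atomic_long_read(&_totalram_example);
}

Only variant 3 lets us drop managed_page_count_lock, which is what
simplifies the "free pages as higher order" series; the tearing fix is
the bonus on top.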