On Tue, Aug 10, 2021 at 06:06:51PM -0700, Mingwei Zhang wrote:
> Regarding the pursuit for accuracy, I think there might be several
> reasons. One of the most critical reasons that I know is that we need
> to ensure dirty logging works correctly, i.e., when dirty logging is
> enabled, all huge pages (both 2MB and 1GB) _are_ gone. Hope that
> clarifies a little bit?

It's just for statistics, right?  I mean, dirty logging should work
correctly even without this change.  But I didn't read closely last
night: what we want is "how many huge pages we're mapping", not "how
many we've mapped in the history", and yes, that makes sense to keep
accurate (a small sketch of the two semantics is at the end of this
mail).  I should have looked more carefully, sorry.

PS: it turns out atomics are not as expensive as I thought, even on a
200-core system, where one op takes 7ns (though for sure it's still
more expensive than a normal memory op, due to the bus locking).  I
expected it to be bigger, since on a 40-core system I got 15ns, which
is 2x of my 8-core laptop, but the cost didn't keep growing with the
core count; it actually shrank.
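A ballpark per-op number like the above can be measured with a tight
loop of atomic increments; below is a rough single-threaded userspace
sketch (so it only shows the uncontended cost, and of course the exact
numbers will vary per machine):

#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000UL

int main(void)
{
        atomic_long counter = 0;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (unsigned long i = 0; i < ITERS; i++)
                atomic_fetch_add(&counter, 1);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        /* Total elapsed nanoseconds divided by the iteration count */
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("%.2f ns per atomic op (counter=%ld)\n",
               ns / ITERS, atomic_load(&counter));
        return 0;
}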
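Going back to the counter semantics: the difference is just whether
unmap decrements.  A minimal sketch with made-up names (not the actual
KVM code) of what I understand we want:

#include <stdatomic.h>
#include <stdio.h>

static atomic_long hugepages_live;  /* how many we're mapping right now */
static atomic_long hugepages_ever;  /* how many we've mapped in history */

static void account_hugepage_map(void)
{
        atomic_fetch_add(&hugepages_live, 1);
        atomic_fetch_add(&hugepages_ever, 1);
}

static void account_hugepage_unmap(void)
{
        /* Only the live count drops; the historical one never does. */
        atomic_fetch_sub(&hugepages_live, 1);
}

int main(void)
{
        account_hugepage_map();
        account_hugepage_map();
        account_hugepage_unmap();  /* e.g. a huge page split for dirty logging */
        /* With dirty logging enabled, "live" should reach zero even
         * though "ever" keeps its old value. */
        printf("live=%ld ever=%ld\n",
               atomic_load(&hugepages_live), atomic_load(&hugepages_ever));
        return 0;
}

Thanks,

-- 
Peter Xu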