On 9/11/23 21:59, Matthew Wilcox wrote:
> On Mon, Sep 11, 2023 at 01:22:51PM -0700, Dave Hansen wrote:
>> On 9/11/23 12:12, Matthew Wilcox wrote:
>>> On Mon, Sep 11, 2023 at 09:55:37AM -0700, Dave Hansen wrote:
>>>> On 9/11/23 09:44, Matthew Wilcox wrote:
>>>>> After fixing your two typos, this assembles to 176 bytes more code than
>>>>> my version.  Not sure that's great.
>>>> Maybe I'm a fool, but 176 bytes of text bloat isn't scaring me off too
>>>> much.  I'd much rather have that than another window into x86 goofiness
>>>> to maintain.
>>>>
>>>> Does that 176 bytes translate into meaningful performance, or is it just
>>>> a bunch of register bit twiddling that the CPU will sail through?
>>> I'm ... not sure how to tell.  It's 1120 bytes vs 944 bytes and crawling
>>> through that much x86 assembly isn't my idea of a great time.  I can
>>> send you objdump -dr for all three options if you like?  Maybe there's
>>> a quick way to compare them that I've never known about.
>> Working patches would be great if you've got 'em handy, plus your
>> .config and generally what compiler you're on.
> gcc (Debian 13.2.0-2) 13.2.0
>
> I don't think there's anything particularly strange about my .config
>
> If you compile this patch as-is, you'll get your preferred code.
> Remove the #define DH and you get mine.
>
> I would say that 176 bytes is 3 cachelines of I$, which isn't free,
> even if all the insns in it can be executed while the CPU is waiting
> for cache misses.  This ought to be a pretty tight loop anyway; we're
> just filling in adjacent PTEs.  There may not be many spare cycles
> for "free" uops to execute.

Thanks for that!  I went poking at it a bit.  One remarkable thing is
how many pv_ops calls there are.  Those are definitely keeping the
compiler from helping us out here too much.  Your version has 9 pv_ops
calls while mine has 6.
So mine may have more instructions in _this_ function, but that could
easily be made up for by call overhead and extra instructions in the
pv_ops.

Also, I went looking for a way to poke at set_ptes() and profile it a
bit to get some actual numbers.  It seems like in most cases it would
be limited to use via fault-around.  Is there some other way to poke
at it easily?

So, in the end, I see code which is not (as far as I can see) in a hot
path, and (again, to me) there's no compelling performance argument
one way or another.  I still like my version.  *Known* simplicity and
uniformity win out in my book over unknown performance benefits.

But fixing the bug is the most important thing.  I don't feel strongly
enough about it to NAK your version either.