On Tue, Jun 20, 2023 at 05:16:57PM -0400, Joel Fernandes wrote:
> Hi Lorenzo,
>
> > As far as I can tell, find_vma_prev() already does a walk? I mean this is
> > equivalent to find_vma() only retrieving the previous VMA right? I defer
> > to Liam, but I'm not sure this would be that much more involved? Perhaps
> > he can comment.
> >
> > An alternative is to create an iterator and use vma_prev(). I find it
> > extremely clunky that we search for a VMA we already possess (and its
> > previous one) while not needing the former.
> >
> > I'm not hugely familiar with the maple tree (perhaps Liam can comment)
> > but I suspect that'd be more performant if that's the concern. Either
> > way I would be surprised if this is the correct approach.
>
> I see your point. I am not sure myself, the maple tree functions for both
> APIs are indeed similar. We already have looked up the VMA being aligned
> down. If there is a way to get the previous VMA quickly, given an existing
> VMA, I can incorporate that change.
>
> Ideally, if I had access to the ma_state used for lookup of the VMA being
> aligned down, I could perhaps reuse that somehow. But when I checked,
> passing that state down to these align functions seemed a lot more
> invasive.
>
> But there is merit to your suggestion itself in the sense that it cuts
> down a few more lines of code.

Yeah, it's thorny; the maple tree seems to add a separation between the vmi
and vma that didn't exist previously, and you'd have to thread through the
mas or vmi that was used in the first instance or end up having to rewalk
the tree anyway. I have spent too much time staring at v6 code where it was
trivial :)

I think, given we have to walk the tree either way, we may as well do the
find_vma_intersection() [happy to stand corrected by Liam if this turns out
to be less efficient, but I don't think it is].
> > > Considering this, I would keep the code as-is and perhaps you/we could
> > > consider the replacement with another API in a subsequent patch as it
> > > does the job for this patch.
> >
> > See above. I don't think this kind of comment is helpful in code review.
> > Your disagreement above suffices, I've responded to it, and of course if
> > there is no other way this is fine.
> >
> > But I'd be surprised, and re-looking up a VMA we already have is just
> > horrid. It's not really a nitpick, it's a code quality issue in my view.
> >
> > In any case, let's please try to avoid 'if you are bothered, write a
> > follow-up patch' style responses. If you disagree with something just
> > say so, it's fine! :)
>
> I wasn't disagreeing :) Just saying that the find_vma_prev() suggested in
> a previous conversation with Liam fixes the issue (and has been tested a
> lot in this series, on my side), so I was hoping to stick with that and we
> could iterate more on it in the future.
>
> However, after taking a deeper look at the maple tree, I'd like to give
> the find_vma_intersection() option at least a try (with appropriate
> attribution to you).

Cheers, appreciate it. Sorry to be a pain and nitpicky, but I think if that
works as well it could be quite a nice solution.

> Apologies if the response style in my previous email came across badly.
> That wasn't my intent and I will try to improve myself.

No worries, text is a sucky medium and likely I misinterpreted you in any
case. We're having a productive discussion, which is what matters! :)

> [..]
> > > > > +		realign_addr(&old_addr, vma, &new_addr, new_vma, PMD_MASK);
> > > > > +	}
> > > > > +
> > > > >  	if (is_vm_hugetlb_page(vma))
> > > > >  		return move_hugetlb_page_tables(vma, new_vma, old_addr,
> > > > >  						new_addr, len);
> > > > > @@ -565,6 +619,13 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> > > > >
> > > > >  	mmu_notifier_invalidate_range_end(&range);
> > > > >
> > > > > +	/*
> > > > > +	 * Prevent negative return values when {old,new}_addr was realigned
> > > > > +	 * but we broke out of the above loop for the first PMD itself.
> > > > > +	 */
> > > > > +	if (len + old_addr < old_end)
> > > > > +		return 0;
> > > > > +
> > > >
> > > > I find this a little iffy, I mean I see that if you align [old,new]_addr
> > > > to PMD, then from then on you're relying on the fact that the loop is
> > > > just going from old_addr (now aligned) -> old_end and thus has the
> > > > correct length.
> > > >
> > > > Can't we just fix this issue by correcting len? If you take my review
> > > > above which checks len in [maybe_]realign_addr(), you could take that
> > > > as a pointer and equally update that.
> > > >
> > > > Then you can drop this check.
> > >
> > > The drawback of adjusting len is it changes what move_page_tables()
> > > users were previously expecting.
> > >
> > > I think we should look at the return value of move_page_tables() as
> > > well, not just len independently.
> > >
> > > len is what the user requested.
> > >
> > > "len + old_addr - old_end" is how much was actually copied and is the
> > > return value.
> > >
> > > If everything was copied, old_addr == old_end and len is unchanged.
> >
> > Ah yeah, I see. Sorry, I missed the fact that we're returning a value;
> > that does complicate things...
> >
> > If we retain the hugetlb logic, then we could work around the issue with
> > that instance of len by storing the 'actual length' of the range in a
> > new var actual_len and passing that.
> > If we choose to instead just not do this for hugetlb (I wonder if the
> > hugetlb handling code actually does the equivalent of this, since surely
> > these pages have to be handled a PMD at a time?) then we can drop the
> > whole actual_len idea [see below on response to hugetlb thing].
>
> Thanks. Yes, you are right. We should already be good with hugetlb
> handling, as it does appear that hugetlb_move_page_tables() does copy by
> huge_page_size(h), so old_addr should already be PMD-aligned for it to be
> able to do that.

Cool, in which case we can drop the actual_len idea, as this is really the
only place we'd need it.

> [..]
>
> > > > >  	return len + old_addr - old_end;	/* how much done */
> > > > >  }
> > > >
> > > > Also I am concerned in the hugetlb case -> len is passed to
> > > > move_hugetlb_page_tables(), which is now strictly incorrect. I wonder
> > > > if this could cause an issue?
> > > >
> > > > Correcting len seems the neat way of addressing this.
> > >
> > > That's a good point. I am wondering if we can just change that from:
> > >
> > > 	if (is_vm_hugetlb_page(vma))
> > > 		return move_hugetlb_page_tables(vma, new_vma, old_addr,
> > > 						new_addr, len);
> > >
> > > to:
> > >
> > > 	if (is_vm_hugetlb_page(vma))
> > > 		return move_hugetlb_page_tables(vma, new_vma, old_addr,
> > > 						new_addr, old_addr - new_addr);
> > >
> > > Or, another option is to turn it off for hugetlb by just moving:
> > >
> > > 	if (len >= PMD_SIZE - (old_addr & ~PMD_MASK))
> > > 		realign_addr(...);
> > >
> > > to after:
> > >
> > > 	if (is_vm_hugetlb_page(vma))
> > > 		return move_hugetlb_page_tables(...);
> > >
> > > thanks,
> >
> > I think the actual_len solution should sort this, right? If not, maybe
> > better to be conservative and disable it for the hugetlb case (I'm not
> > sure if this would help, given you'd need to be PMD-aligned anyway,
> > right?), so as not to hold up the series.
> >
> > If we do decide not to include hugetlb (the endless 'special case' for
> > so much code...)
> > in this, then we can drop the actual_len idea altogether.
> >
> > (Yes, I realise it's ironic I'm suggesting deferring to a later patch
> > here, but there you go ;)
>
> ;-). Considering our discussion above that hugetlb mremap addresses
> should always start at a PMD boundary, maybe I can just add a warning to
> the if() like so, to detect any potential issue?
>
> 	if (is_vm_hugetlb_page(vma)) {
> 		WARN_ON_ONCE(old_addr - old_end != len);
> 		return move_hugetlb_page_tables(vma, new_vma, old_addr,
> 						new_addr, len);
> 	}

Yeah, looks good [ack your follow-up that it should be old_end - old_addr];
maybe add a comment explaining why this is a problem here too.

> Thank you so much, and I learnt a lot from you and others in the -mm
> community.

No worries, thanks very much for this patch series. This is a nice fixup
for a quite stupid failure we were experiencing before, with a neat
solution. Your hard work is appreciated!

> - Joel