Re: Prerequisites for Large Anon Folios

On 24/07/2023 10:33, Yin, Fengwei wrote:
> 
> 
> On 7/24/2023 5:04 PM, Ryan Roberts wrote:
>> On 23/07/2023 13:33, Yin, Fengwei wrote:
>>>
>>>
>>> On 7/20/2023 5:41 PM, Ryan Roberts wrote:
>>>> Hi All,
>>>>
>>>> As discussed at Matthew's call yesterday evening, I've put together a list of
>>>> items that need to be done as prerequisites for merging large anonymous folios
>>>> support.
>>>>
>>>> It would be great to get some review and confirmation as to whether anything is
>>>> missing or incorrect. Most items have an assignee - in that case it would be
>>>> good to check that my understanding that you are working on the item is correct.
>>>>
>>>> I think most things are independent, with the exception of "shared vs exclusive
>>>> mappings", which I think becomes a dependency for a couple of things (marked in
>>>> depender description); again would be good to confirm.
>>>>
>>>> Finally, although I'm concentrating on the prerequisites to clear the path for
>>>> merging an MVP Large Anon Folios implementation, I've included one "enhancement"
>>>> item ("large folios in swap cache"), solely because we explicitly discussed it
>>>> last night. My view is that enhancements can come after the initial large anon
>>>> folios merge. Over time, I plan to add other enhancements (e.g. retain large
>>>> folios over COW, etc).
>>>>
>>>> I'm posting the table as yaml as that seemed easiest for email. You can convert
>>>> to csv with something like this in Python:
>>>>
>>>>   import yaml
>>>>   import pandas as pd
>>>>   pd.DataFrame(yaml.safe_load(open('work-items.yml'))).to_csv('work-items.csv')
>>>>
>>>> Thanks,
>>>> Ryan
>>> Should we add the mremap case to the list? For example, how do we handle the case
>>> where mremap() lands in the middle of a large anonymous folio and fails to split it?
>>
>> What's the issue that you see here? My opinion is that if we do nothing special
>> for mremap(), it neither breaks correctness nor performance when we enable large
>> anon folios. So on that basis, it's not a prerequisite and I'd rather leave it
>> off the list. We might want to do something later as an enhancement, though?
> The issue is related to the anonymous folio->index.
> 
> If mremap() happens in the middle of a large folio, the current code doesn't split it.
> So the large folio ends up in two parts: one in the original place and another in
> the new place. These two parts, now in different VMAs, have the same folio->index.
> Can rmap_walk_anon() cope with this situation? vma_address() is computed from the
> head page. Can it work for pages that are not in the same VMA as the head page?
> 
> I may be missing something here. I will try to build a test for it.

Hi Fengwei,

Did you ever reach a conclusion on this? Based on David's comment, I'm assuming
this is not a problem and already handled correctly for pte-mapped THP?

I guess vma->vm_pgoff is fixed up in the new vma representing the remapped
portion to take account of the offset? (just a guess).
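If that guess is right, the arithmetic works out. Here is a toy model of the index-to-address calculation (a simplified sketch; the names, the 4K page size, and the vm_pgoff fixup value are my assumptions, not the actual kernel code):

```python
# Toy model of the kernel's vma_address()-style arithmetic (simplified).
PAGE_SHIFT = 12  # assume 4K pages

def vma_address(page_index, vma_start, vm_pgoff):
    # Linear address of a page inside a VMA, derived from its index.
    return vma_start + ((page_index - vm_pgoff) << PAGE_SHIFT)

# A 16-page folio originally mapped at VA 0x100000 with vm_pgoff 0.
# Suppose mremap() moves the second half (pages 8..15) to VA 0x500000,
# and the new VMA's vm_pgoff is fixed up to 8. Then index -> address
# still resolves correctly for both halves without splitting the folio.
old_vma = dict(start=0x100000, pgoff=0)
new_vma = dict(start=0x500000, pgoff=8)

# Page index 3 stays in the old VMA:
assert vma_address(3, old_vma['start'], old_vma['pgoff']) == 0x103000
# Page index 10 (same folio, same index base) is found in the new VMA:
assert vma_address(10, new_vma['start'], new_vma['pgoff']) == 0x502000
```

So as long as the fixup happens, rmap walking each VMA independently should land on the right addresses for both halves.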

Thanks,
Ryan


> 
> 
> Regards
> Yin, Fengwei
> 
>>
>> If we could always guarantee that large anon folios were naturally aligned in
>> VA space, that would make many things simpler to implement, and in that case I
>> can see the argument for doing something special in mremap(). But since
>> splitting a folio may fail, I guess we have to live with non-naturally aligned
>> folios in the general case, and therefore the simplification argument goes out
>> of the window?
>>
>>
>>




