Re: [PATCH 6/7] arc: Implement the new page table range API

On 2/13/2023 11:16 PM, Matthew Wilcox wrote:
> On Mon, Feb 13, 2023 at 03:09:37AM +0000, Yin, Fengwei wrote:
>>> +++ b/arch/arc/include/asm/cacheflush.h
>>> @@ -25,17 +25,20 @@
>>>   * in update_mmu_cache()
>>>   */
>>>  #define flush_icache_page(vma, page)
>>> +#define flush_icache_pages(vma, page, nr)
>> Maybe just remove these two definitions, because the generic
>> implementation is just a no-op?
> 
> Then arc would have to include asm-generic/cacheflush.h and I don't
> particularly want to debug any issues that might cause.  This is
> easier.
> 
> Long term, asm-generic/cacheflush.h's contents should be moved into
> linux/cacheflush.h, but I've lacked the time to do that work.
> 
> To answer your question from the other email, the documentation says:
> 
>   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
> 
>         All the functionality of flush_icache_page can be implemented in
>         flush_dcache_page and update_mmu_cache_range. In the future, the hope
>         is to remove this interface completely.
> 
> I'm not planning on doing that to an architecture that I'm not set up
> to test ...
Thanks a lot for the detailed explanation.
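
Just to double-check my understanding: the empty macros above and the
asm-generic fallback are the same no-op either way; if I read the generic
header (with the new flush_icache_pages() fallback) correctly, it is roughly:

#ifndef flush_icache_pages
static inline void flush_icache_pages(struct vm_area_struct *vma,
				      struct page *page, unsigned int nr)
{
}
#endif

so keeping the empty macros in the arc header really is only about not
pulling in the extra include, as you said.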

> 
>>> +void flush_dcache_page(struct page *page)
>>> +{
>>> +       return flush_dcache_folio(page_folio(page));
>>> +}
>> I am wondering whether we should add flush_dcache_folio_range(),
>> since it's possible that only part of the folio needs to be flushed. Thanks.
> 
> We could.  I think it's up to the maintainers of architectures that
> need their caches flushing to let us know what would be good for them.
> Since I primarily work on x86, I have no personal desire to do this ;-)
> 
> One of the things that I've always found a little weird about
> flush_dcache_page() (and now flush_dcache_folio()) is that it's used both
> for flushing userspace writes (eg in filemap_read()) and for flushing
> kernel writes (eg in __iomap_write_end()).  Probably it was designed for
> an architecture that flushes by physical address rather than by virtual.
I noticed that copy_page_from_iter_atomic() uses kmap_atomic(page) as the
access address. So even with a VIVT cache, as long as there is no highmem
it should work with flush_dcache_page/folio(). arm is VIVT and there seem
to be no complaints about this. Maybe it's just very rare for it to have
no highmem?
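
In other words (a minimal sketch of my understanding, not the exact
iov_iter code; copy_then_flush() is just an illustrative name):

static void copy_then_flush(struct page *page, size_t offset,
			    const void *src, size_t len)
{
	void *kaddr = kmap_atomic(page);	/* == page_address(page) when
						 * there is no highmem */

	memcpy(kaddr + offset, src, len);	/* kernel-side write via that alias */
	kunmap_atomic(kaddr);

	flush_dcache_page(page);		/* flushes the same kernel alias,
						 * so a VIVT dcache stays coherent */
}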

> 
> Anyway, if we do have a flush_dcache_folio_kernel(), I'd like it
> to take byte offsets.  That would work well for __iomap_write_end();
> it could be:
> 
> 	flush_dcache_folio_kernel(folio, offset_in_folio(folio, pos), len);
> 
> But I'm not volunteering to do this work.
I'd like to give it a try. :)
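
Something roughly like this is what I have in mind (only a sketch:
flush_dcache_folio_kernel() doesn't exist yet, arch_flush_kernel_dcache_range()
is a placeholder for whatever range-flush primitive an architecture already
has, and it assumes the folio is not in highmem):

void flush_dcache_folio_kernel(struct folio *folio, size_t offset, size_t len)
{
	/* Flush only the cache lines covering the written byte range of the
	 * folio's kernel mapping instead of the whole folio. */
	unsigned long start = (unsigned long)folio_address(folio) + offset;

	arch_flush_kernel_dcache_range(start, start + len);
}

Then __iomap_write_end() could call it with byte offsets exactly as you
suggest above.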


Regards
Yin, Fengwei


