On Fri, Oct 16, 2009 at 03:39:29PM +0200, Michal Simek wrote:
> The second thing which I would like to check is the number of functions
> which are empty:
>
> flush_dcache_page, flush_dcache_mmap_lock, flush_dcache_mmap_unlock,
> flush_cache_dup_mm, flush_cache_vmap, flush_cache_vunmap, flush_cache_mm,
> flush_cache_page, flush_icache_page
>
There is no generic answer regarding whether you need these or not; it all
depends on your cache architecture, and you've left those details out. If
you are on a PIPT non-aliasing cache, then obviously you aren't going to
care about most of these, and they can simply be stubbed out as no-ops (a
sketch of what that looks like is at the end of this mail).

flush_icache_page() likewise is something that these days is split up and
handled through flush_dcache_page() and update_mmu_cache(). If you're doing
a from-scratch implementation, you probably want to avoid dealing with it
at all.

Having said that, ARM, MIPS, and SH all support a wide variety of cache
configurations, so it's fairly easy to see which strategies can be
undertaken with which cache types just by reading through those
implementations.

> The second part of this email is related: it is about tlb_start_vma and
> tlb_end_vma. arm, avr32, sh, sparc and xtensa implement both, while mips
> implements only tlb_start_vma. The implementations are almost the same.
> My question is whether there is any reason to implement (or not
> implement) them.
>
That would be an optimization to reduce expensive TLB flushing for large
mappings. With these implemented it's possible to track the range being
torn down and work out whether to do a partial or full MM flush, which can
offer sizeable performance gains, especially if your platform supports
ASID tags and your full MM flush is just bumping up the ASID (MIPS and SH
both do this, for example). A sketch of the idea also follows at the end
of this mail.

You can look at commit c20351846efcb755ba849d9fb701fbd9a1ffb7c2 to see the
SH change that implemented this, which in turn was derived from an earlier
ARM one. That commit has some more background information and links to the
earlier discussion about it.
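
To illustrate the PIPT case above, here is a minimal sketch of what the
empty definitions could look like in an architecture's asm/cacheflush.h.
This is not taken from Michal's port, just the usual no-op idiom for a
cache that needs no maintenance around address space operations:

	/*
	 * Sketch only: on a PIPT (physically indexed, physically tagged)
	 * non-aliasing cache there is no virtual aliasing to worry about,
	 * so all of these can compile away to nothing.
	 */
	#define flush_cache_mm(mm)			do { } while (0)
	#define flush_cache_dup_mm(mm)			do { } while (0)
	#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
	#define flush_dcache_page(page)			do { } while (0)
	#define flush_dcache_mmap_lock(mapping)		do { } while (0)
	#define flush_dcache_mmap_unlock(mapping)	do { } while (0)
	#define flush_icache_page(vma, page)		do { } while (0)
	#define flush_cache_vmap(start, end)		do { } while (0)
	#define flush_cache_vunmap(start, end)		do { } while (0)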
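
And to make the range-tracking optimization concrete, a rough sketch
modelled loosely on the SH change referenced above. The start/end fields
on the arch-private mmu_gather are an assumption of this sketch (the arch
has to add and maintain them itself); only the range tracking is shown,
a real implementation has more to do:

	/*
	 * Sketch only: grow tlb->start/tlb->end as PTEs are zapped, so
	 * that tlb_end_vma() can flush just the touched range. Would be
	 * called from the arch's tlb_gather_mmu() as well.
	 */
	static inline void init_tlb_gather(struct mmu_gather *tlb)
	{
		tlb->start = TASK_SIZE;
		tlb->end = 0;

		if (tlb->fullmm) {
			tlb->start = 0;
			tlb->end = TASK_SIZE;
		}
	}

	static inline void
	tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
	{
		if (!tlb->fullmm)
			flush_cache_range(vma, vma->vm_start, vma->vm_end);
	}

	static inline void
	tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
	{
		/*
		 * Only flush what was actually touched; the full MM case
		 * is handled elsewhere (e.g. by bumping the ASID).
		 */
		if (!tlb->fullmm && tlb->end) {
			flush_tlb_range(vma, tlb->start, tlb->end);
			init_tlb_gather(tlb);
		}
	}

	#define tlb_remove_tlb_entry(tlb, ptep, address)		\
	do {								\
		if ((tlb)->start > (address))				\
			(tlb)->start = (address);			\
		if ((tlb)->end < (address) + PAGE_SIZE)			\
			(tlb)->end = (address) + PAGE_SIZE;		\
	} while (0)

The payoff is in tlb_end_vma(): instead of unconditionally flushing the
whole VMA (or the whole MM), only the pages that were actually unmapped
get flushed, and the full-MM teardown path stays as cheap as an ASID bump.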