On Fri, Aug 05, 2016 at 07:25:03PM +1000, Michael Ellerman wrote:
> > One way to do that would be to walk through the different memory
> > reserved blocks and calculate the size. But Mel feels thats an
> > overhead (from his reply to the other thread) esp for just one use
> > case.
>
> OK. I think you're referring to this:
>
>   If fadump is reserving memory and alloc_large_system_hash(HASH_EARLY)
>   does not know about then then would an arch-specific callback for
>   arch_reserved_kernel_pages() be more appropriate?
>   ...
>   That approach would limit the impact to ppc64 and would be less costly
>   than doing a memblock walk instead of using nr_kernel_pages for
>   everyone else.
>
> That sounds more robust to me than this solution.

It would be the fastest with the least impact but not necessarily the
best. Ultimately that dma_reserve/memory_reserve is used for the sizing
calculation of the large system hashes but only the e820 map and fadump
is taken into account. That's a bit filthy even if it happens to work
out ok.

Conceptually it would be cleaner, if expensive, to calculate the real
memblock reserves if HASH_EARLY and ditch the dma_reserve,
memory_reserve and nr_kernel_pages entirely. Unfortunately, aside from
the calculation, there is a potential cost due to a smaller hash table
that affects everyone, not just ppc64. However, if the hash table is
meant to be sized on the number of available pages then it really
should be based on that and not just a made-up number.

-- 
Mel Gorman
SUSE Labs