On Thu, Sep 07, 2023 at 10:17:39AM +0800, Baoquan He wrote:
> Add Kazu and Lianbo to CC, and the kexec mailing list.
>
> On 08/29/23 at 10:11am, Uladzislau Rezki (Sony) wrote:
> > Store allocated objects in separate nodes. A va->va_start
> > address is converted into the correct node where it should
> > be placed and reside. An addr_to_node() function is used
> > to do the address conversion and determine the node that
> > contains a VA.
> >
> > Such an approach balances VAs across nodes, and as a result
> > access becomes scalable. The number of nodes in a system is the
> > number of CPUs divided by two, i.e. a density factor of 1/2.
> >
> > Please note:
> >
> > 1. As of now allocated VAs are bound to node-0. It means the
> >    patch does not make any difference compared with the current
> >    behavior;
> >
> > 2. The global vmap_area_lock and vmap_area_root are removed as
> >    there is no need for them anymore. The vmap_area_list is still
> >    kept and is _empty_. It is exported for kexec only;
>
> I haven't run a test, but accessing every node's busy tree to find
> the VA with the lowest address could severely impact kcore reading
> efficiency on a system with many vmap nodes. People doing live
> debugging via /proc/kcore will get a little surprise.
>
> An empty vmap_area_list will break the makedumpfile utility, and the
> Crash utility could be impacted too. I checked the makedumpfile code;
> it relies on vmap_area_list to deduce the vmalloc_start value.
>
That part is still left over, and I hope to fix it in v3. The problem
here is that we cannot keep giving outside tools access to vmap
internals. That is simply not correct, i.e. you are not allowed to
access the list directly.

--
Uladzislau Rezki

_______________________________________________
kexec mailing list
kexec@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/kexec
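
For reference, a minimal userspace sketch of the address-to-node mapping
described in the quoted changelog. The node count, the hashing step, and
the vmap_node layout below are illustrative assumptions, not the actual
patch code; only the general idea (hash va_start to one of nr_cpus/2
nodes via an addr_to_node()-style helper) is taken from the text above:

/* Sketch only: NR_CPUS, the stride, and struct vmap_node are assumed. */
#include <stdio.h>

#define NR_CPUS		8			/* assumed CPU count */
#define NR_VMAP_NODES	(NR_CPUS / 2)		/* density factor of 1/2 */

struct vmap_node {
	int id;		/* per-node busy tree and lock would live here */
};

static struct vmap_node vmap_nodes[NR_VMAP_NODES];

/*
 * Map a va_start address to one of the nodes so VAs are spread
 * across nodes and accesses scale with the node count.
 */
static struct vmap_node *addr_to_node(unsigned long va_start)
{
	return &vmap_nodes[(va_start / 4096) % NR_VMAP_NODES];
}

int main(void)
{
	unsigned long va_start = 0xffffc90000001000UL;	/* example address */

	printf("va 0x%lx -> node %td\n",
	       va_start, addr_to_node(va_start) - vmap_nodes);
	return 0;
}

With a scheme like this, finding the VA with the lowest address (as
/proc/kcore or makedumpfile effectively need) requires consulting every
node rather than a single global tree, which is the scalability concern
raised above.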