On Fri, May 24, 2019 at 06:33:16PM +0800, Hillf Danton wrote:
>
> On Wed, 22 May 2019 17:09:37 +0200 Uladzislau Rezki (Sony) wrote:
> >  /*
> > + * Preload this CPU with one extra vmap_area object to ensure
> > + * that we have it available when fit type of free area is
> > + * NE_FIT_TYPE.
> > + *
> > + * The preload is done in non-atomic context thus, it allows us
> > + * to use more permissive allocation masks, therefore to be more
> > + * stable under low memory condition and high memory pressure.
> > + *
> > + * If success, it returns zero with preemption disabled. In case
> > + * of error, (-ENOMEM) is returned with preemption not disabled.
> > + * Note it has to be paired with alloc_vmap_area_preload_end().
> > + */
> > +static void
> > +ne_fit_preload(int *preloaded)
> > +{
> > +	preempt_disable();
> > +
> > +	if (!__this_cpu_read(ne_fit_preload_node)) {
> > +		struct vmap_area *node;
> > +
> > +		preempt_enable();
> > +		node = kmem_cache_alloc(vmap_area_cachep, GFP_KERNEL);
>
> Alternatively, can you please take another look at the upside to use
> the memory node parameter in alloc_vmap_area() for allocating va slab,
> given that this preload, unlike adjust_va_to_fit_type(), is invoked
> with the vmap_area_lock not acquired?
>
Agree. That makes sense. I will upload the v2 where I fix all comments.

Thank you!

--
Vlad Rezki
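
For illustration only, a minimal sketch of what Hillf's suggestion might look
like: alloc_vmap_area() would pass its NUMA node down to the preload, so the
spare object is taken with kmem_cache_alloc_node() rather than plain
kmem_cache_alloc(). The changed signature (a nid argument instead of the
*preloaded one) and the cmpxchg handling of another task preloading the same
CPU while preemption was enabled are assumptions about how a v2 could look,
not the posted patch:

static void
ne_fit_preload(int nid)
{
	preempt_disable();

	if (!__this_cpu_read(ne_fit_preload_node)) {
		struct vmap_area *node;

		/*
		 * Re-enable preemption so the GFP_KERNEL allocation may
		 * sleep; the object is now taken from the requested node.
		 */
		preempt_enable();
		node = kmem_cache_alloc_node(vmap_area_cachep,
				GFP_KERNEL, nid);
		if (!node)
			return;

		preempt_disable();
		/*
		 * Another task may have preloaded this CPU while we slept:
		 * keep its object and release the spare one.
		 */
		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, node))
			kmem_cache_free(vmap_area_cachep, node);
	}
}

The caller would then keep preemption disabled across the fit/split step and
pair the call with a matching preload-end helper (or a plain preempt_enable())
once the per-CPU object is no longer needed.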