On Mon, Jan 27, 2025 at 04:59:30PM +0900, Sergey Senozhatsky wrote:
> Introduce new API to map/unmap zsmalloc handle/object. The key
> difference is that this API does not impose atomicity restrictions
> on its users, unlike zs_map_object() which returns with page-faults
> and preemption disabled

I think that's not entirely accurate, see below.

[..]

> @@ -1309,12 +1297,14 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>  		goto out;
>  	}
>
> -	/* this object spans two pages */
> -	zpdescs[0] = zpdesc;
> -	zpdescs[1] = get_next_zpdesc(zpdesc);
> -	BUG_ON(!zpdescs[1]);
> +	ret = area->vm_buf;
> +	/* disable page faults to match kmap_local_page() return conditions */
> +	pagefault_disable();

Is this accurate/necessary? I am looking at kmap_local_page() and I don't
see it. Maybe that's a remnant from the old code using kmap_atomic()?

> +	if (mm != ZS_MM_WO) {
> +		/* this object spans two pages */
> +		zs_obj_copyin(area->vm_buf, zpdesc, off, class->size);
> +	}
>
> -	ret = __zs_map_object(area, zpdescs, off, class->size);
>  out:
>  	if (likely(!ZsHugePage(zspage)))
>  		ret += ZS_HANDLE_SIZE;
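
For reference, here is my reading of the two mapping helpers, paraphrased
from include/linux/highmem-internal.h (a sketch of the semantics, not the
literal kernel source):

```c
/*
 * kmap_atomic() is the variant that disables page faults: it pins the
 * context (preemption, or migration on PREEMPT_RT) and then calls
 * pagefault_disable() before establishing the mapping.
 */
static inline void *kmap_atomic(struct page *page)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		migrate_disable();
	else
		preempt_disable();
	pagefault_disable();	/* only the atomic variant does this */
	return __kmap_local_page_prot(page, kmap_prot);
}

/*
 * kmap_local_page() does neither: taking page faults inside a local
 * kmap region is explicitly allowed per Documentation/mm/highmem.rst.
 */
static inline void *kmap_local_page(struct page *page)
{
	/* no pagefault_disable(), no preempt_disable() */
	return __kmap_local_page_prot(page, kmap_prot);
}
```

So if the goal is to "match kmap_local_page() return conditions", the
pagefault_disable() call looks unnecessary; it matches the old
kmap_atomic() conditions instead.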