On Wed, Mar 29, 2023 at 05:23:04PM +0100, Lorenzo Stoakes wrote:
> On Wed, Mar 29, 2023 at 05:01:11PM +0200, Uladzislau Rezki wrote:
> > Hello, Lorenzo!
> >
> > > >  /*
> > > > - * XArray of vmap blocks, indexed by address, to quickly find a vmap block
> > > > - * in the free path. Could get rid of this if we change the API to return a
> > > > - * "cookie" from alloc, to be passed to free. But no big deal yet.
> > > > + * In order to fast access to any "vmap_block" associated with a
> > > > + * specific address, we store them into a per-cpu xarray. A hash
> > > > + * function is addr_to_vbq() whereas a key is a vb->va->va_start
> > > > + * value.
> > > > + *
> > > > + * Please note, a vmap_block_queue, which is a per-cpu, is not
> > > > + * serialized by a raw_smp_processor_id() current CPU, instead
> > > > + * it is chosen based on a CPU-index it belongs to, i.e. it is
> > > > + * a hash-table.
> > > > + *
> > > > + * An example:
> > > > + *
> > > > + *  CPU_1  CPU_2  CPU_0
> > > > + *    |      |      |
> > > > + *    V      V      V
> > > > + * 0     10     20     30     40     50     60
> > > > + * |------|------|------|------|------|------|...<vmap address space>
> > > > + *   CPU0   CPU1   CPU2   CPU0   CPU1   CPU2
> > > > + *
> > > > + * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> > > > + *   it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> > > > + *
> > > > + * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> > > > + *   it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> > > > + *
> > > > + * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> > > > + *   it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> > > >  */
> > >
> > > OK so if I understand this correctly, you're overloading the per-CPU
> > > vmap_block_queue array to use as a simple hash based on the address and
> > > relying on the xa_lock() in xa_insert() to serialise in case of contention?
> > >
> > > I like the general heft of your comment but I feel this could be spelled
> > > out a little more clearly, something like:-
> > >
> > > In order to have fast access to any vmap_block object associated with a
> > > specific address, we use a hash.
> > >
> > > Rather than waste space on defining a new hash table we take advantage
> > > of the fact we already have a static per-cpu array vmap_block_queue.
> > >
> > > This is already used for per-CPU access to the block queue, however we
> > > overload this to _also_ act as a vmap_block hash. The hash function is
> > > addr_to_vbq() which hashes on vb->va->va_start.
> > >
> > > This then uses per_cpu() to lookup the _index_ rather than the
> > > _cpu_. Each vmap_block_queue contains an xarray of vmap blocks which are
> > > indexed on the same key as the hash (vb->va->va_start).
> > >
> > > xarray read accesses are protected by RCU lock and inserts are protected
> > > by a spin lock so there is no risk of a race here.
> > >
> > /*
> >  * In order to fast access to any "vmap_block" associated with a
> >  * specific address, we use a hash.
> >  *
> >  * A per-cpu vmap_block_queue is used in both ways, to serialize
> >  * an access to free block chains among CPUs(alloc path) and it
> >  * also acts as a vmap_block hash(alloc/free paths). It means we
> >  * overload it, since we already have the per-cpu array which is
> >  * used as a hash table.
>
> Nit - it may be worth highlighting that when used as a hash the 'cpu' is
> not in fact a cpu but rather a hash key.
>
> E.g. just add on the end of this something like:-
>
> When used as a hash table the 'cpu' passed to per_cpu is not actually a CPU
> but rather the hash key.
>
> >  *
> >  * A hash function is addr_to_vbq() which hashes any address to
> >  * a specific index(in a hash) it belongs to. This then uses a
> >  * per_cpu() macro to access the array with specific index.
>
> May need a tweak if you are happy with my review that we can simply have a
> helper that returns the xarray in which case we won't necessarily have this
> function :) but depends of course on how the respin looks!
>
> >  *
> >  * An example:
> >  *
> >  *  CPU_1  CPU_2  CPU_0
> >  *    |      |      |
> >  *    V      V      V
> >  * 0     10     20     30     40     50     60
> >  * |------|------|------|------|------|------|...<vmap address space>
> >  *   CPU0   CPU1   CPU2   CPU0   CPU1   CPU2
> >  *
> >  * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> >  *   it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> >  *
> >  * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> >  *   it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> >  *
> >  * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> >  *   it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> >  *
> >  * This technique allows almost remove a lock-contention in locking
> >  * primitives which protect insert/remove operations.
>
> This sentence is a little confusing, perhaps rephrase a little:-
>
> This technique almost always avoids lock contention on insert/remove,
> however the xarray spinlock protects against any contention that remains.
>
> >  */
> > Are you fine with it?
>
> Other than the small nits above (sorry!) it seems fine! Thanks for
> updating, much appreciated :)
>
Good. Made the changes. I will upload a new vX patch.

Everything that makes it clearer for readers is worth doing :)

--
Uladzislau Rezki
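[A sketch of the helper Lorenzo proposes above, for readers following along:
one that hashes an address straight to the backing xarray rather than
returning a vmap_block_queue. The name addr_to_vb_xa and the exact hash
formula are assumptions drawn from this thread, not necessarily what the
respin ends up using; VMAP_BLOCK_SIZE, addr_to_vb_idx() and the per-cpu
vmap_block_queue already exist in mm/vmalloc.c, while the vmap_blocks
xarray member is the one added by the patch under review.]

	static struct xarray *
	addr_to_vb_xa(unsigned long addr)
	{
		/*
		 * The "cpu" used to index the per-cpu array is a hash
		 * key derived from the address, not the calling CPU:
		 * every address within one VMAP_BLOCK_SIZE zone maps
		 * to the same slot, matching the zone diagram above,
		 * no matter which CPU performs the lookup.
		 */
		int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();

		return &per_cpu(vmap_block_queue, index).vmap_blocks;
	}

A free-path lookup then reduces to, for example:

	struct vmap_block *vb = xa_load(addr_to_vb_xa(addr),
					addr_to_vb_idx(addr));

xa_load() takes the RCU read lock internally, and xa_insert()/xa_erase()
take the xarray's internal xa_lock, which is what bounds the residual
contention mentioned in the thread.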