On 29.04.22 17:15, Christian Borntraeger wrote:
> There are cases that trigger a 2nd shadow event for the same
> vmaddr/raddr combination (prefix changes, reboots, some known races).
> This will increase memory usage and it will result in long latencies
> when cleaning up, e.g. on shutdown. To avoid cases with a list that has
> hundreds of identical raddrs, we check existing entries at insert time.
> As this measurably reduces the list length, this will be faster than
> traversing the list at shutdown time.
> 
> In the long run several places will be optimized to create fewer entries
> and a shrinker might be necessary.
> 
> Fixes: 4be130a08420 ("s390/mm: add shadow gmap support")
> Signed-off-by: Christian Borntraeger <borntraeger@xxxxxxxxxxxxx>
> ---
>  arch/s390/mm/gmap.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 69c08d966fda..0fc0c26a71f2 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -1185,12 +1185,19 @@ static inline void gmap_insert_rmap(struct gmap *sg, unsigned long vmaddr,
>  				    struct gmap_rmap *rmap)
>  {
>  	void __rcu **slot;
> +	struct gmap_rmap *temp;
>  
>  	BUG_ON(!gmap_is_shadow(sg));
>  	slot = radix_tree_lookup_slot(&sg->host_to_rmap, vmaddr >> PAGE_SHIFT);
>  	if (slot) {
>  		rmap->next = radix_tree_deref_slot_protected(slot,
>  							&sg->guest_table_lock);
> +		for (temp = rmap->next; temp; temp = temp->next) {
> +			if (temp->raddr == rmap->raddr) {
> +				kfree(rmap);
> +				return;
> +			}
> +		}
>  		radix_tree_replace_slot(&sg->host_to_rmap, slot, rmap);
>  	} else {
>  		rmap->next = NULL;

Acked-by: David Hildenbrand <david@xxxxxxxxxx>

-- 
Thanks,

David / dhildenb
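
[Editorial note, not part of the thread: the host_to_rmap radix tree keeps, per host
vmaddr page, a singly linked list of gmap_rmap entries, and the hunk above walks that
list before prepending. Below is a minimal userspace sketch of the same dedup idea,
assuming only the two-field gmap_rmap layout visible in the diff (next pointer plus
raddr); the helper name rmap_list_insert_unique() is hypothetical and not a kernel API.]

```c
#include <stdlib.h>

/* Simplified stand-in for the kernel's struct gmap_rmap (next + raddr). */
struct gmap_rmap {
	struct gmap_rmap *next;
	unsigned long raddr;
};

/*
 * Sketch of the duplicate check: before prepending 'rmap' to the per-vmaddr
 * list, walk the existing entries and drop the new one if its raddr is
 * already present, mirroring the loop added to gmap_insert_rmap().
 * Returns 1 if inserted, 0 if a duplicate was found and freed.
 */
int rmap_list_insert_unique(struct gmap_rmap **head, struct gmap_rmap *rmap)
{
	struct gmap_rmap *temp;

	for (temp = *head; temp; temp = temp->next) {
		if (temp->raddr == rmap->raddr) {
			free(rmap);	/* the kernel code uses kfree() */
			return 0;
		}
	}
	rmap->next = *head;
	*head = rmap;
	return 1;
}
```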