* Elliot Berman <quic_eberman@xxxxxxxxxxx> [2023-01-20 14:46:12]:

> +	/* Check for overlap */
> +	list_for_each_entry(tmp_mapping, &ghvm->memory_mappings, list) {
> +		if (!((mapping->guest_phys_addr + (mapping->npages << PAGE_SHIFT) <=
> +			tmp_mapping->guest_phys_addr) ||
> +			(mapping->guest_phys_addr >=
> +			tmp_mapping->guest_phys_addr + (tmp_mapping->npages << PAGE_SHIFT)))) {
> +			ret = -EEXIST;
> +			goto unlock;
> +		}
> +	}
> +
> +	list_add(&mapping->list, &ghvm->memory_mappings);

I think the potential race condition described last time is still possible.
Please check.

> +unlock:
> +	mutex_unlock(&ghvm->mm_lock);
> +	if (ret)
> +		goto free_mapping;
> +
> +	mapping->pages = kcalloc(mapping->npages, sizeof(*mapping->pages), GFP_KERNEL);
> +	if (!mapping->pages) {
> +		ret = -ENOMEM;
> +		goto reclaim;

Same comment as last time: can you check this error path? We seem to call
unpin_user_page() here, which is not correct since nothing has been pinned at
this point.

> +	}
> +
> +	pinned = pin_user_pages_fast(region->userspace_addr, mapping->npages,
> +				     FOLL_WRITE | FOLL_LONGTERM, mapping->pages);
> +	if (pinned < 0) {
> +		ret = pinned;
> +		goto reclaim;

Same comment as above: when pin_user_pages_fast() returns an error, no pages
have been pinned, so there is nothing to unpin on this path either.
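
Something along these lines is what I had in mind for those two paths -- a
rough, untested sketch only. It reuses mapping/ghvm/region and the
free_mapping label from your hunk; the free_pages/remove_list labels, the
short-pin handling and the -EFAULT choice are mine, name/handle them however
you prefer:

	mapping->pages = kcalloc(mapping->npages, sizeof(*mapping->pages), GFP_KERNEL);
	if (!mapping->pages) {
		ret = -ENOMEM;
		/* nothing pinned yet, so don't go anywhere near unpin */
		goto remove_list;
	}

	pinned = pin_user_pages_fast(region->userspace_addr, mapping->npages,
				     FOLL_WRITE | FOLL_LONGTERM, mapping->pages);
	if (pinned < 0) {
		/* pin failed outright: still nothing to unpin */
		ret = pinned;
		goto free_pages;
	}
	if (pinned != mapping->npages) {
		/* short pin: release exactly what we got, nothing more */
		unpin_user_pages(mapping->pages, pinned);
		ret = -EFAULT;	/* or whatever error you think fits */
		goto free_pages;
	}

	/* ... rest of the function, which returns on success, goes here ... */

free_pages:
	kfree(mapping->pages);
remove_list:
	mutex_lock(&ghvm->mm_lock);
	list_del(&mapping->list);
	mutex_unlock(&ghvm->mm_lock);
free_mapping:
	kfree(mapping);
	return ret;

The point is just that unpin_user_pages() only ever runs against pages that
were actually pinned; the kcalloc and pin failures back out the list entry and
the allocations without going through the full reclaim path.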