From: Tianyu Lan <ltykernel@xxxxxxxxx> Sent: Thursday, September 2, 2021 6:36 AM
>
> On 9/2/2021 8:23 AM, Michael Kelley wrote:
> >> +	} else {
> >> +		pages_wraparound = kcalloc(page_cnt * 2 - 1,
> >> +					   sizeof(struct page *),
> >> +					   GFP_KERNEL);
> >> +
> >> +		pages_wraparound[0] = pages;
> >> +		for (i = 0; i < 2 * (page_cnt - 1); i++)
> >> +			pages_wraparound[i + 1] =
> >> +				&pages[i % (page_cnt - 1) + 1];
> >> +
> >> +		ring_info->ring_buffer = (struct hv_ring_buffer *)
> >> +			vmap(pages_wraparound, page_cnt * 2 - 1, VM_MAP,
> >> +				PAGE_KERNEL);
> >> +
> >> +		kfree(pages_wraparound);
> >> +		if (!ring_info->ring_buffer)
> >> +			return -ENOMEM;
> >> +	}
> > With this patch, the code is a big "if" statement with two halves -- one
> > when SNP isolation is in effect, and the other when not. The SNP isolation
> > case does the work using PFNs with the shared_gpa_boundary added,
> > while the other case does the same work but using struct page. Perhaps
> > I'm missing something, but can both halves be combined and always
> > do the work using PFNs? The only difference is whether to add the
> > shared_gpa_boundary, and whether to zero the memory when done.
> > So get the starting PFN, then have an "if" statement for whether to
> > add the shared_gpa_boundary. Then everything else is the same.
> > At the end, use an "if" statement to decide whether to zero the
> > memory. It would really be better to have the logic in this algorithm
> > coded only once.
> >
>
> Hi Michael:
>      I have tried this before. But vmap_pfn() only works for PFNs that are
> outside of normal memory. Please see vmap_pfn_apply() for details; it
> returns an error when the PFN is valid.
>

Indeed. This ties into the discussion with Christoph about coming up
with generalized helper functions to assist in handling the
shared_gpa_boundary. Having a single implementation here in
hv_ringbuffer_init() would be a good goal as well.

Michael
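
For illustration, here is a rough sketch of the single PFN-based
implementation suggested above. The function name is hypothetical;
hv_isolation_type_snp() and ms_hyperv.shared_gpa_boundary are assumed from
the patch series under review. Per Tianyu's point, vmap_pfn() rejects PFNs
for which pfn_valid() is true (see vmap_pfn_apply()), so the non-isolated
branch could not actually go through vmap_pfn() without a change there or a
new helper:

/*
 * Hypothetical sketch only -- map the ring buffer once using PFNs for
 * both the isolated and non-isolated cases.  Caveat: vmap_pfn() fails
 * for PFNs where pfn_valid() is true, so the normal-memory case does
 * not work with this approach as-is.
 */
static int hv_ringbuffer_map_sketch(struct hv_ring_buffer_info *ring_info,
				    struct page *pages, u32 page_cnt)
{
	bool snp = hv_isolation_type_snp();
	unsigned long *pfns_wraparound;
	u64 pfn;
	int i;

	/* Starting PFN; add the shared_gpa_boundary only for SNP isolation. */
	pfn = page_to_pfn(pages);
	if (snp)
		pfn += ms_hyperv.shared_gpa_boundary >> PAGE_SHIFT;

	pfns_wraparound = kcalloc(page_cnt * 2 - 1, sizeof(unsigned long),
				  GFP_KERNEL);
	if (!pfns_wraparound)
		return -ENOMEM;

	/* Header page once, then the data pages twice for wraparound. */
	pfns_wraparound[0] = pfn;
	for (i = 0; i < 2 * (page_cnt - 1); i++)
		pfns_wraparound[i + 1] = pfn + i % (page_cnt - 1) + 1;

	ring_info->ring_buffer = (struct hv_ring_buffer *)
		vmap_pfn(pfns_wraparound, page_cnt * 2 - 1, PAGE_KERNEL);
	kfree(pfns_wraparound);

	if (!ring_info->ring_buffer)
		return -ENOMEM;

	/* Newly decrypted pages have unknown contents; zero them. */
	if (snp)
		memset(ring_info->ring_buffer, 0x00, page_cnt * PAGE_SIZE);

	return 0;
}

If the generalized helpers discussed with Christoph materialize, the PFN
computation above would presumably collapse into a single call to one of
them.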