On Mon, Aug 24, 2009 at 4:55 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:
> On 08/24/2009 12:59 AM, Stephen Donnelly wrote:
>> On Thu, Aug 20, 2009 at 12:14 AM, Avi Kivity <avi@xxxxxxxxxx> wrote:
>>> On 08/13/2009 07:07 AM, Stephen Donnelly wrote:
>>>> npages = get_user_pages_fast(addr, 1, 1, page); returns -EFAULT,
>>>> presumably because (vma->vm_flags & (VM_IO | VM_PFNMAP)).
>>>>
>>>> It then takes the unlikely branch and checks the vma, but I don't
>>>> understand what it is doing here:
>>>> pfn = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
>>>
>>> It's calculating the pfn according to pfnmap rules.
>>
>> From what I understand this will only work when remapping 'main
>> memory', i.e. where the pgoff is equal to the physical page offset.
>> VMAs that remap IO memory will usually set pgoff to 0 for the start
>> of the mapping.
>
> If so, how do they calculate the pfn when mapping pages? kvm needs to
> be able to do the same thing.

If vma->vm_file is /dev/mem, then the pgoff maps directly to a physical
address (at least on x86), and the calculation works. If the vma is
remapping IO memory from a driver, then vma->vm_file will point to the
device node for that driver. Perhaps we can at least check for this?

>>>> In my case addr == vma->vm_start, and vma->vm_pgoff == 0, so
>>>> pfn == 0.
>>>
>>> How did you set up that vma? It should point to the first pfn of
>>> your special memory area.
>>
>> The vma was created with a remap_pfn_range call from another driver.
>> Because this call sets VM_PFNMAP and VM_IO, any get_user_pages(_fast)
>> calls will fail.
>>
>> In this case the host driver was actually just remapping host memory,
>> so I replaced the remap_pfn_range call with a nopage/fault vm_op.
>> This allows the get_user_pages_fast call to succeed, and the mapping
>> now works as expected. This is sufficient for my work at the moment.
>
> Well if the fix is correct we need it too.

The change is to the external (host) driver. If I submit my device for
inclusion upstream, the changes for that driver will be needed as well,
but they would not be part of the qemu-kvm tree.

>> I'm still not sure how genuine IO memory (mapped from a driver to
>> userspace with remap_pfn_range or io_remap_page_range) could be
>> mapped into kvm though.
>
> If it can be mapped to userspace, it can be mapped to kvm. We just
> need to synchronize the rules.

We can definitely map it into userspace. The problem seems to be how
the kvm kernel module translates the guest pfn back to a host physical
address. Is there a kernel equivalent of mmap?

Stephen.
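
P.S. For anyone following along, here is a minimal sketch of the
fault-based approach I described above, replacing the driver's
remap_pfn_range() call. All names (my_dev, my_mmap, etc.) are
illustrative, not from the real driver, and it assumes the buffer is
ordinary kernel memory allocated with vmalloc_user(), not genuine IO
memory:

    #include <linux/mm.h>
    #include <linux/fs.h>
    #include <linux/vmalloc.h>

    struct my_dev {                 /* hypothetical driver state */
            void *buf;              /* vmalloc_user() allocation */
            size_t buf_size;        /* size in bytes, page aligned */
    };

    /*
     * Back the vma with struct pages via a .fault handler instead of
     * remap_pfn_range(), so VM_PFNMAP/VM_IO are never set on the vma
     * and get_user_pages_fast() can resolve the pages normally.
     */
    static int my_vma_fault(struct vm_area_struct *vma,
                            struct vm_fault *vmf)
    {
            struct my_dev *dev = vma->vm_private_data;
            unsigned long offset = vmf->pgoff << PAGE_SHIFT;
            struct page *page;

            if (offset >= dev->buf_size)
                    return VM_FAULT_SIGBUS;

            /* vmalloc_to_page() is valid for vmalloc_user() memory;
             * kmalloc/alloc_pages memory would use virt_to_page(). */
            page = vmalloc_to_page(dev->buf + offset);
            get_page(page);         /* core mm consumes this reference */
            vmf->page = page;
            return 0;
    }

    static const struct vm_operations_struct my_vm_ops = {
            .fault = my_vma_fault,
    };

    static int my_mmap(struct file *file, struct vm_area_struct *vma)
    {
            struct my_dev *dev = file->private_data;
            unsigned long size = vma->vm_end - vma->vm_start;

            if (size + (vma->vm_pgoff << PAGE_SHIFT) > dev->buf_size)
                    return -EINVAL;

            vma->vm_private_data = dev;
            vma->vm_ops = &my_vm_ops;
            /* No remap_pfn_range() here, so the vma stays GUP-able. */
            return 0;
    }

With something like this in place, get_user_pages_fast() succeeds on
the mapping and kvm's hva_to_pfn path never hits the pfnmap special
case. It obviously only helps when the driver is exporting real host
memory; the genuine IO memory case is still open.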