Re: [PATCH v3 1/6] iommu/core: split mapping to page sizes as supported by the hardware

On Tue, Sep 27, 2011 at 1:05 PM, Roedel, Joerg <Joerg.Roedel@xxxxxxx> wrote:
> On Fri, Sep 16, 2011 at 01:51:41PM -0400, Ohad Ben-Cohen wrote:
>>  int iommu_map(struct iommu_domain *domain, unsigned long iova,
>> -             phys_addr_t paddr, int gfp_order, int prot)
>> +             phys_addr_t paddr, size_t size, int prot)
>>  {
>> -       size_t size;
>> +       int ret = 0;
>> +
>> +       /*
>> +        * both the virtual address and the physical one, as well as
>> +        * the size of the mapping, must be aligned (at least) to the
>> +        * size of the smallest page supported by the hardware
>> +        */
>> +       if (!IS_ALIGNED(iova | paddr | size, iommu_min_pagesz)) {
>> +               pr_err("unaligned: iova 0x%lx pa 0x%lx size 0x%lx min_pagesz "
>> +                       "0x%x\n", iova, (unsigned long)paddr,
>> +                       (unsigned long)size, iommu_min_pagesz);
>> +               return -EINVAL;
>> +       }
>> +
>> +       pr_debug("map: iova 0x%lx pa 0x%lx size 0x%lx\n", iova,
>> +                               (unsigned long)paddr, (unsigned long)size);
>> +
>> +       while (size) {
>> +               unsigned long pgsize = iommu_min_pagesz;
>> +               unsigned long idx = iommu_min_page_idx;
>> +               unsigned long addr_merge = iova | paddr;
>> +               int order;
>> +
>> +               /* find the max page size with which iova, paddr are aligned */
>> +               for (;;) {
>> +                       unsigned long try_pgsize;
>> +
>> +                       idx = find_next_bit(iommu_pgsize_bitmap,
>> +                                               iommu_nr_page_bits, idx + 1);
>> +
>> +                       /* no more pages to check ? */
>> +                       if (idx >= iommu_nr_page_bits)
>> +                               break;
>> +
>> +                       try_pgsize = 1 << idx;
>>
>> -       size         = 0x1000UL << gfp_order;
>> +                       /* page too big ? addresses not aligned ? */
>> +                       if (size < try_pgsize ||
>> +                                       !IS_ALIGNED(addr_merge, try_pgsize))
>> +                               break;
>>
>> -       BUG_ON(!IS_ALIGNED(iova | paddr, size));
>> +                       pgsize = try_pgsize;
>> +               }
>
> With an unsigned long you can use plain and fast bit_ops instead of the
> full bitmap functions.

Not sure I follow; the only bit operation I'm using while mapping is
find_next_bit() (which is itself a bitops.h helper).

What other, faster variant are you referring to?

Thanks,
Ohad.
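
For context, one possible reading of Joerg's suggestion, assuming the supported
page sizes fit in a single unsigned long bitmap, is sketched below as plain
userspace C. This is only an illustration, not code from the thread: the helper
names are made up, and __builtin_clzl()/__builtin_ctzl() merely stand in for the
kernel's __fls()/__ffs().

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* index of the most / least significant set bit; x must be non-zero */
    static unsigned int msb(unsigned long x)
    {
            return (unsigned int)(sizeof(x) * 8 - 1) - (unsigned int)__builtin_clzl(x);
    }

    static unsigned int lsb(unsigned long x)
    {
            return (unsigned int)__builtin_ctzl(x);
    }

    /*
     * Pick the biggest page size that is supported by the hardware, does
     * not exceed the remaining 'size', and keeps iova and paddr aligned,
     * using a handful of bit operations instead of a find_next_bit() loop.
     */
    static size_t pick_pgsize(unsigned long pgsize_bitmap,
                              unsigned long iova, unsigned long paddr,
                              size_t size)
    {
            unsigned long addr_merge = iova | paddr;
            unsigned int limit = msb(size); /* biggest page fitting in size */
            unsigned long candidates;

            /* the alignment of iova/paddr may force something smaller */
            if (addr_merge && lsb(addr_merge) < limit)
                    limit = lsb(addr_merge);

            /* keep only supported page sizes up to that limit... */
            candidates = pgsize_bitmap & ((1UL << (limit + 1)) - 1);
            assert(candidates); /* caller already checked min page alignment */

            /* ...and take the biggest one */
            return 1UL << msb(candidates);
    }

    int main(void)
    {
            /* e.g. hardware supporting 4K, 64K, 1M and 16M pages */
            unsigned long pgsize_bitmap = (1UL << 12) | (1UL << 16) |
                                          (1UL << 20) | (1UL << 24);

            /* 1M-aligned addresses, 2M left to map: picks a 1M page */
            printf("%#zx\n", pick_pgsize(pgsize_bitmap, 0x100000, 0x2300000, 0x200000));

            /* only 4K alignment on the iova: falls back to a 4K page */
            printf("%#zx\n", pick_pgsize(pgsize_bitmap, 0x101000, 0x2300000, 0x200000));

            return 0;
    }

With both the alignment of iova|paddr and the remaining size reduced to a couple
of ffs/fls-style lookups, the per-iteration find_next_bit() scan over the
page-size bitmap goes away, which is presumably the kind of "plain bit_ops"
Joerg has in mind.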

