On 5/16/2020 4:03 AM, Alex Williamson wrote:
> On Sat, 16 May 2020 02:43:20 +0530
> Kirti Wankhede <kwankhede@xxxxxxxxxx> wrote:
>
> <snip>
> > +static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
> > +				  dma_addr_t iova, size_t size, size_t pgsize)
> > +{
> > +	struct vfio_dma *dma;
> > +	unsigned long pgshift = __ffs(pgsize);
> > +	int ret;
> > +
> > +	/*
> > +	 * GET_BITMAP request must fully cover vfio_dma mappings.  Multiple
> > +	 * vfio_dma mappings may be clubbed by specifying large ranges, but
> > +	 * there must not be any previous mappings bisected by the range.
> > +	 * An error will be returned if these conditions are not met.
> > +	 */
> > +	dma = vfio_find_dma(iommu, iova, 1);
> > +	if (dma && dma->iova != iova)
> > +		return -EINVAL;
> > +
> > +	dma = vfio_find_dma(iommu, iova + size - 1, 0);
> > +	if (dma && dma->iova + dma->size != iova + size)
> > +		return -EINVAL;
> > +
> > +	dma = vfio_find_dma(iommu, iova, size);
> > +
> > +	while (dma && (dma->iova >= iova) &&
> > +	       (dma->iova + dma->size <= iova + size)) {
> Thanks for doing this!  Unfortunately I think I've misled you :(
> But I think there was a bug here in the last version as well, so maybe
> it's all for the better ;)
>
> vfio_find_dma() does not guarantee to find the first dma in the range
> (i.e. the one with the lowest iova), it only guarantees to find a dma
> within the range.  Since we have a tree structure, as we descend the
> nodes we might find multiple nodes within the range.  vfio_find_dma()
> only returns the first occurrence it finds, so we can't assume that
> other matching nodes are next in the tree or that their iovas are
> greater than the iova of the node we found.
>
> All the other use cases of vfio_find_dma() look for specific pages or
> boundaries, check for the existence of a conflict, or remove all of
> the instances within the range.  The last is probably the example that
> was used in the v20 version of this patch, since it was quite similar
> to vfio_dma_do_unmap() but tried to adjust the size to get the next
> match rather than removing the entry.  That could potentially leave an
> entire half of the tree unexplored, making our bitmap incomplete.
>
> So I think my initial suggestion [1] on the previous version is
> probably the way we should go.  Sorry!  OTOH, it would have been a
> nasty bug to find later; it's a subtle semantic that's easy to
> overlook.  Thanks,
>
> Alex
>
> [1] https://lore.kernel.org/kvm/20200514212720.479cc3ba@xxxxxxx/
Ok. Got your point.
Replacing

	dma = vfio_find_dma(iommu, iova, size);

with the below should work:

	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
		struct vfio_dma *ldma = rb_entry(n, struct vfio_dma, node);

		if (ldma->iova >= iova)
			break;
	}

	dma = n ? rb_entry(n, struct vfio_dma, node) : NULL;
Should I send a v22 with all patches updated, or is it fine to update
only this patch on top of v21?
Thanks,
Kirti