On Thu, Oct 19, 2023 at 12:43:19AM +0100, Joao Martins wrote:
> On 18/10/2023 23:39, Jason Gunthorpe wrote:
> > On Wed, Oct 18, 2023 at 09:27:04PM +0100, Joao Martins wrote:
> > 
> >> +int iommufd_check_iova_range(struct iommufd_ioas *ioas,
> >> +				struct iommu_hwpt_get_dirty_iova *bitmap)
> >> +{
> >> +	unsigned long pgshift, npages;
> >> +	size_t iommu_pgsize;
> >> +	int rc = -EINVAL;
> >> +
> >> +	pgshift = __ffs(bitmap->page_size);
> >> +	npages = bitmap->length >> pgshift;
> > 
> > npages = bitmap->length / bitmap->page_size;
> > 
> > ? (if page_size is a bitmask it is badly named)
> > 
> 
> It was a way to avoid the divide by zero, but
> I can switch to the above, and check for bitmap->page_size
> being non-zero. should be less obscure

Why would we get so far along with a 0 page size? Reject that when
the bitmap is created?

> >> +static int __iommu_read_and_clear_dirty(struct iova_bitmap *bitmap,
> >> +					 unsigned long iova, size_t length,
> >> +					 void *opaque)
> >> +{
> >> +	struct iopt_area *area;
> >> +	struct iopt_area_contig_iter iter;
> >> +	struct iova_bitmap_fn_arg *arg = opaque;
> >> +	struct iommu_domain *domain = arg->domain;
> >> +	struct iommu_dirty_bitmap *dirty = arg->dirty;
> >> +	const struct iommu_dirty_ops *ops = domain->dirty_ops;
> >> +	unsigned long last_iova = iova + length - 1;
> >> +	int ret = -EINVAL;
> >> +
> >> +	iopt_for_each_contig_area(&iter, area, arg->iopt, iova, last_iova) {
> >> +		unsigned long last = min(last_iova, iopt_area_last_iova(area));
> >> +
> >> +		ret = ops->read_and_clear_dirty(domain, iter.cur_iova,
> >> +						last - iter.cur_iova + 1,
> >> +						0, dirty);
> > 
> > This seems like a lot of stuff going on with ret..
> 
> All to have a single return exit point, given no different cleanup is required.
> Thought it was the general best way (when possible)

Mm, at least I'm not a believer in the single exit point :)

Jason
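
To make the first suggestion concrete, a rough sketch of that hunk with a
zero page_size rejected up front so the plain divide is safe. The struct
and field names come from the quoted patch; the placement of the check
here (rather than at bitmap creation) and the trailing range-check comment
are assumptions, not something from the posted series:

int iommufd_check_iova_range(struct iommufd_ioas *ioas,
			     struct iommu_hwpt_get_dirty_iova *bitmap)
{
	unsigned long npages;

	/* Assumed: reject a zero page_size early instead of relying on __ffs() */
	if (!bitmap->page_size)
		return -EINVAL;

	/* With page_size known non-zero, a plain divide reads more clearly */
	npages = bitmap->length / bitmap->page_size;
	if (!npages)
		return -EINVAL;

	/* ... remaining checks of the range against the ioas go here ... */
	return 0;
}

For the second hunk, the early-return shape being hinted at could look
roughly like the following. Again a sketch against the quoted helpers, not
a tested change; note the quoted code seeds ret with -EINVAL, presumably so
an empty or uncovered range still fails, and that behaviour would need to
be preserved separately:

static int __iommu_read_and_clear_dirty(struct iova_bitmap *bitmap,
					unsigned long iova, size_t length,
					void *opaque)
{
	struct iova_bitmap_fn_arg *arg = opaque;
	struct iommu_domain *domain = arg->domain;
	struct iommu_dirty_bitmap *dirty = arg->dirty;
	const struct iommu_dirty_ops *ops = domain->dirty_ops;
	unsigned long last_iova = iova + length - 1;
	struct iopt_area_contig_iter iter;
	struct iopt_area *area;

	iopt_for_each_contig_area(&iter, area, arg->iopt, iova, last_iova) {
		unsigned long last = min(last_iova, iopt_area_last_iova(area));
		int ret;

		/* Bail out on the first failure instead of threading ret out */
		ret = ops->read_and_clear_dirty(domain, iter.cur_iova,
						last - iter.cur_iova + 1,
						0, dirty);
		if (ret)
			return ret;
	}

	return 0;
}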