[bug report] drm/vmwgfx: Implement an infrastructure for read-coherent resources

Hello Thomas Hellstrom,

The patch fb80edb0d766: "drm/vmwgfx: Implement an infrastructure for
read-coherent resources" from Mar 28, 2019, leads to the following
static checker warnings:

	drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c:461 vmw_bo_vm_fault()
	warn: missing conversion: 'page_offset + ((1) << 12)' 'page + byte'

	drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c:534 vmw_bo_vm_huge_fault()
	warn: missing conversion: 'page_offset + ((1) << 12)' 'page + byte'

drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
    435 vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf)
    436 {
    437 	struct vm_area_struct *vma = vmf->vma;
    438 	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
    439 	    vma->vm_private_data;
    440 	struct vmw_buffer_object *vbo =
    441 		container_of(bo, struct vmw_buffer_object, base);
    442 	pgoff_t num_prefault;
    443 	pgprot_t prot;
    444 	vm_fault_t ret;
    445 
    446 	ret = ttm_bo_vm_reserve(bo, vmf);
    447 	if (ret)
    448 		return ret;
    449 
    450 	num_prefault = (vma->vm_flags & VM_RAND_READ) ? 1 :
    451 		TTM_BO_VM_NUM_PREFAULT;
    452 
    453 	if (vbo->dirty) {
    454 		pgoff_t allowed_prefault;
    455 		unsigned long page_offset;
    456 
    457 		page_offset = vmf->pgoff -
    458 			drm_vma_node_start(&bo->base.vma_node);
    459 		if (page_offset >= bo->resource->num_pages ||
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
page_offset is in terms of pages

    460 		    vmw_resources_clean(vbo, page_offset,
--> 461 					page_offset + PAGE_SIZE,
                                                ^^^^^^^^^^^^^^^^^^^^^^^
It doesn't make sense to add PAGE_SIZE (which is in bytes) to page_offset
(which is in pages).  The code in vmw_bo_vm_huge_fault() has the same
bug.  A sketch of a possible fix follows the quoted function below.

    462 					&allowed_prefault)) {
    463 			ret = VM_FAULT_SIGBUS;
    464 			goto out_unlock;
    465 		}
    466 
    467 		num_prefault = min(num_prefault, allowed_prefault);
    468 	}
    469 
    470 	/*
    471 	 * If we don't track dirty using the MKWRITE method, make sure
    472 	 * the page protection is write-enabled so we don't get
    473 	 * a lot of unnecessary write faults.
    474 	 */
    475 	if (vbo->dirty && vbo->dirty->method == VMW_BO_DIRTY_MKWRITE)
    476 		prot = vm_get_page_prot(vma->vm_flags & ~VM_SHARED);
    477 	else
    478 		prot = vm_get_page_prot(vma->vm_flags);
    479 
    480 	ret = ttm_bo_vm_fault_reserved(vmf, prot, num_prefault, 1);
    481 	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
    482 		return ret;
    483 
    484 out_unlock:
    485 	dma_resv_unlock(bo->base.resv);
    486 
    487 	return ret;
    488 }
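
Presumably the intent here is a single-page range.  Assuming
vmw_resources_clean() takes its start and end arguments as page offsets
(pgoff_t) with an exclusive end, an untested sketch of what the call
probably wants to be is:

		if (page_offset >= bo->resource->num_pages ||
		    vmw_resources_clean(vbo, page_offset,
					page_offset + 1, /* one page, not PAGE_SIZE bytes */
					&allowed_prefault)) {
			ret = VM_FAULT_SIGBUS;
			goto out_unlock;
		}

The call in vmw_bo_vm_huge_fault() presumably wants the number of pages
covered by the huge fault rather than PAGE_SIZE as well.  I haven't
tested either change; that's just what the units suggest.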

regards,
dan carpenter


