Hello Xiaoguang Wang,

The patch bb9b9eb0ae2e: "scsi: target: tcmu: Fix possible data
corruption" from Apr 21, 2022, leads to the following Smatch static
checker warning:

	drivers/target/target_core_user.c:1689 tcmu_blocks_release()
	warn: sleeping in atomic context

drivers/target/target_core_user.c
  1661  static u32 tcmu_blocks_release(struct tcmu_dev *udev, unsigned long first,
  1662                                 unsigned long last)
  1663  {
  1664          XA_STATE(xas, &udev->data_pages, first * udev->data_pages_per_blk);
  1665          struct page *page;
  1666          u32 pages_freed = 0;
  1667  
  1668          xas_lock(&xas);
                ^^^^^^^^^^^^^^
We take a spinlock here.

  1669          xas_for_each(&xas, page, (last + 1) * udev->data_pages_per_blk - 1) {
  1670                  xas_store(&xas, NULL);
  1671                  /*
  1672                   * While reaching here there may be page faults occurring on
  1673                   * the to-be-released pages. A race condition may occur if
  1674                   * unmap_mapping_range() is called before page faults on these
  1675                   * pages have completed; a valid but stale map is created.
  1676                   *
  1677                   * If another command subsequently runs and needs to extend
  1678                   * dbi_thresh, it may reuse the slot corresponding to the
  1679                   * previous page in data_bitmap. Though we will allocate a new
  1680                   * page for the slot in data_area, no page fault will happen
  1681                   * because we have a valid map. Therefore the command's data
  1682                   * will be lost.
  1683                   *
  1684                   * We lock and unlock pages that are to be released to ensure
  1685                   * all page faults have completed. This way
  1686                   * unmap_mapping_range() can ensure stale maps are cleanly
  1687                   * removed.
  1688                   */
--> 1689                lock_page(page);
                        ^^^^^^^^^^^^^^^
The lock_page() function calls might_sleep() in its inline definition,
so it cannot safely be called while holding the xas spinlock.

  1690                  unlock_page(page);
  1691                  __free_page(page);
  1692                  pages_freed++;
  1693          }
  1694          xas_unlock(&xas);
                ^^^^^^^^^^^^^^^^^
Unlock.

  1695  
  1696          atomic_sub(pages_freed, &global_page_count);
  1697  
  1698          return pages_freed;
  1699  }
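
Presumably the fix is something like detaching the pages from the
XArray while holding the spinlock and deferring the sleeping
lock_page()/unlock_page()/__free_page() calls until after
xas_unlock(). Just an untested sketch, not a reviewed patch: re-using
page->lru for the temporary list is an assumption about these
driver-private pages, and someone would need to verify that deferring
the lock/unlock still closes the race described in the comment above.

static u32 tcmu_blocks_release(struct tcmu_dev *udev, unsigned long first,
			       unsigned long last)
{
	XA_STATE(xas, &udev->data_pages, first * udev->data_pages_per_blk);
	struct page *page, *tmp;
	LIST_HEAD(pages);
	u32 pages_freed = 0;

	/* Detach the pages from the XArray under the spinlock... */
	xas_lock(&xas);
	xas_for_each(&xas, page, (last + 1) * udev->data_pages_per_blk - 1) {
		xas_store(&xas, NULL);
		/* Assumption: ->lru is free for private linkage here. */
		list_add(&page->lru, &pages);
	}
	xas_unlock(&xas);

	/*
	 * ... then wait out in-flight page faults and free the pages
	 * in sleepable context.
	 */
	list_for_each_entry_safe(page, tmp, &pages, lru) {
		lock_page(page);
		unlock_page(page);
		list_del(&page->lru);
		__free_page(page);
		pages_freed++;
	}

	atomic_sub(pages_freed, &global_page_count);

	return pages_freed;
}

Keeping the xas_store() loop under the lock keeps the XArray update
atomic with respect to concurrent lookups; only the sleeping calls
move out.

regards,
dan carpenter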