Hi John,

I'm not really sure who to send this bug report to so you got picked a
bit at random...

The patch ea996974589e: "RDMA: Convert put_page() to put_user_page*()"
from May 24, 2019, leads to the following Smatch static checker
warning:

	./include/linux/pagemap.h:897 folio_lock()
	warn: sleeping in atomic context

./include/linux/pagemap.h
    895  static inline void folio_lock(struct folio *folio)
    896  {
--> 897          might_sleep();
    898          if (!folio_trylock(folio))
    899                  __folio_lock(folio);
    900  }

The problem is that unpin_user_pages_dirty_lock() calls folio_lock()
which can sleep.

Here is the raw Smatch preempt output.  As you can see, there are
several places which seem to call unpin_user_pages_dirty_lock() with
preemption disabled.

__usnic_uiom_reg_release() <- disables preempt
usnic_uiom_reg_get() <- disables preempt
-> usnic_uiom_put_pages()

rds_tcp_write_space() <- disables preempt
-> rds_send_path_drop_acked()
   -> rds_send_remove_from_sock()
      -> rds_message_put()
         -> rds_message_purge()
            -> rds_rdma_free_op()
rds_message_purge() <duplicate>
-> rds_atomic_free_op()

-> unpin_user_pages_dirty_lock()
   -> folio_lock()

Let's pull out the first example:

drivers/infiniband/hw/usnic/usnic_uiom.c
    228          spin_lock(&pd->lock);
    229          usnic_uiom_remove_interval(&pd->root, vpn_start,
    230                                     vpn_last, &rm_intervals);
    231          usnic_uiom_unmap_sorted_intervals(&rm_intervals, pd);
    232
    233          list_for_each_entry_safe(interval, tmp, &rm_intervals, link) {
    234                  if (interval->flags & IOMMU_WRITE)
    235                          writable = 1;
    236                  list_del(&interval->link);
    237                  kfree(interval);
    238          }
    239
    240          usnic_uiom_put_pages(&uiomr->chunk_list, dirty & writable);
                 ^^^^^^^^^^^^^^^^^^^^
We're holding a spin lock here, but _put_pages() calls
unpin_user_pages_dirty_lock(), which can sleep.

    241          spin_unlock(&pd->lock);
    242  }

regards,
dan carpenter