The quilt patch titled
     Subject: mm: apply __must_check to vmap_pages_range_noflush()
has been removed from the -mm tree.  Its filename was
     mm-apply-__must_check-to-vmap_pages_range_noflush.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Alexander Potapenko <glider@xxxxxxxxxx>
Subject: mm: apply __must_check to vmap_pages_range_noflush()
Date: Thu, 13 Apr 2023 15:12:23 +0200

To prevent errors when vmap_pages_range_noflush() or
__vmap_pages_range_noflush() silently fail (see the link below for an
example), annotate them with __must_check so that the callers do not
unconditionally assume the mapping succeeded.

Link: https://lkml.kernel.org/r/20230413131223.4135168-4-glider@xxxxxxxxxx
Signed-off-by: Alexander Potapenko <glider@xxxxxxxxxx>
Reported-by: Dipanjan Das <mail.dipanjan.das@xxxxxxxxx>
Link: https://lore.kernel.org/linux-mm/CANX2M5ZRrRA64k0hOif02TjmY9kbbO2aCBPyq79es34RXZ=cAw@xxxxxxxxxxxxxx/
Reviewed-by: Marco Elver <elver@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/internal.h |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/mm/internal.h~mm-apply-__must_check-to-vmap_pages_range_noflush
+++ a/mm/internal.h
@@ -885,7 +885,7 @@ size_t splice_folio_into_pipe(struct pip
  */
 #ifdef CONFIG_MMU
 void __init vmalloc_init(void);
-int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
+int __must_check vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, unsigned int page_shift);
 #else
 static inline void vmalloc_init(void)
@@ -893,16 +893,16 @@ static inline void vmalloc_init(void)
 }
 
 static inline
-int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
+int __must_check vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, unsigned int page_shift)
 {
 	return -EINVAL;
 }
 #endif
 
-int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
-		pgprot_t prot, struct page **pages,
-		unsigned int page_shift);
+int __must_check __vmap_pages_range_noflush(unsigned long addr,
+		unsigned long end, pgprot_t prot,
+		struct page **pages, unsigned int page_shift);
 
 void vunmap_range_noflush(unsigned long start, unsigned long end);
_

Patches currently in -mm which might be from glider@xxxxxxxxxx are
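
[Editor's note: for readers unfamiliar with the annotation, below is a
minimal, self-contained userspace sketch (not kernel code) of what
__must_check buys.  In the kernel, __must_check expands to
__attribute__((warn_unused_result)), so the compiler warns when a caller
discards the return value.  The map_pages() helper is a hypothetical
stand-in for vmap_pages_range_noflush(); the error value merely mimics
-EINVAL.]

#include <stdio.h>

#define __must_check __attribute__((warn_unused_result))

/* Hypothetical stand-in for vmap_pages_range_noflush(). */
static int __must_check map_pages(int nr_pages)
{
	if (nr_pages <= 0)
		return -22;	/* mimic -EINVAL */
	return 0;
}

int main(void)
{
	int err;

	/* map_pages(8); -- discarding the result now draws -Wunused-result */

	err = map_pages(8);	/* the pattern __must_check encourages */
	if (err) {
		fprintf(stderr, "mapping failed: %d\n", err);
		return 1;
	}
	return 0;
}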