The patch titled
     Subject: mm: unexport folio_memcg_{,un}lock
has been added to the -mm tree.  Its filename is
     mm-unexport-folio_memcg_unlock.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-unexport-folio_memcg_unlock.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-unexport-folio_memcg_unlock.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Christoph Hellwig <hch@xxxxxx>
Subject: mm: unexport folio_memcg_{,un}lock

Patch series "unexport memcg locking helpers".

Neither the old page-based nor the new folio-based memcg locking helpers
are used in modular code at all, so drop the exports.


This patch (of 2):

folio_memcg_{,un}lock are only used in built-in core mm code.

Link: https://lkml.kernel.org/r/20210820095815.445392-1-hch@xxxxxx
Link: https://lkml.kernel.org/r/20210820095815.445392-2-hch@xxxxxx
Signed-off-by: Christoph Hellwig <hch@xxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memcontrol.c |    2 --
 1 file changed, 2 deletions(-)

--- a/mm/memcontrol.c~mm-unexport-folio_memcg_unlock
+++ a/mm/memcontrol.c
@@ -2014,7 +2014,6 @@ again:
 	memcg->move_lock_task = current;
 	memcg->move_lock_flags = flags;
 }
-EXPORT_SYMBOL(folio_memcg_lock);
 
 void lock_page_memcg(struct page *page)
 {
@@ -2048,7 +2047,6 @@ void folio_memcg_unlock(struct folio *fo
 {
 	__folio_memcg_unlock(folio_memcg(folio));
 }
-EXPORT_SYMBOL(folio_memcg_unlock);
 
 void unlock_page_memcg(struct page *page)
 {
_

Patches currently in -mm which might be from hch@xxxxxx are

mmc-jz4740-remove-the-flush_kernel_dcache_page-call-in-jz4740_mmc_read_data.patch
mmc-mmc_spi-replace-flush_kernel_dcache_page-with-flush_dcache_page.patch
scatterlist-replace-flush_kernel_dcache_page-with-flush_dcache_page.patch
mm-remove-flush_kernel_dcache_page.patch
proc-stop-using-seq_get_buf-in-proc_task_name.patch
mm-unexport-folio_memcg_unlock.patch
mm-unexport-unlock_page_memcg.patch
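
For readers unfamiliar with EXPORT_SYMBOL: the practical effect of dropping the
two exports is that loadable modules can no longer resolve
folio_memcg_lock()/folio_memcg_unlock(); only built-in code can call them.  The
minimal, purely hypothetical out-of-tree module below (the file and module
names are illustrative and not part of this patch) is a sketch of what stops
working: with the exports removed, modpost flags the two symbols as undefined
and insmod fails with "Unknown symbol".

/*
 * memcg_lock_demo.c -- hypothetical out-of-tree module, illustrative only.
 *
 * Before this patch the two folio_memcg_*lock() calls below resolved
 * against the exported symbols; after it they are visible to built-in
 * code only, so this module no longer links/loads.
 */
#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/memcontrol.h>

static int __init memcg_lock_demo_init(void)
{
	struct page *page = alloc_page(GFP_KERNEL);
	struct folio *folio;

	if (!page)
		return -ENOMEM;

	folio = page_folio(page);
	folio_memcg_lock(folio);	/* no longer resolvable from a module */
	folio_memcg_unlock(folio);
	__free_page(page);
	return 0;
}

static void __exit memcg_lock_demo_exit(void)
{
}

module_init(memcg_lock_demo_init);
module_exit(memcg_lock_demo_exit);
MODULE_LICENSE("GPL");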