CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be removed.
Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: linux-mm@xxxxxxxxx
---
 include/linux/pagemap.h |    4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -168,9 +168,7 @@ void release_pages(struct page **pages,
 static inline int __page_cache_add_speculative(struct page *page, int count)
 {
 #ifdef CONFIG_TINY_RCU
-# ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
+	VM_BUG_ON(preemptible());
 	/*
 	 * Preempt must be disabled here - we rely on rcu_read_lock doing
 	 * this for us.
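
[Editor's note: for context, the replacement works because preemptible()
already encodes both halves of the removed check. Its definition in
include/linux/preempt.h is roughly the following (paraphrased; the exact
form varies by kernel version):

	/* From include/linux/preempt.h, paraphrased. */
	#ifdef CONFIG_PREEMPT_COUNT
	# define preemptible()	(preempt_count() == 0 && !irqs_disabled())
	#else
	# define preemptible()	0	/* no preempt count: cannot tell */
	#endif

Since in_atomic() tests preempt_count() != 0, VM_BUG_ON(preemptible())
asserts the same condition as the old
VM_BUG_ON(!in_atomic() && !irqs_disabled()), and once CONFIG_PREEMPT_COUNT
is unconditionally enabled the #ifdef guard around it is no longer needed.]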