On 2/4/19 10:15 AM, Alexander Duyck wrote:
> +#ifdef CONFIG_KVM_GUEST
> +#include <linux/jump_label.h>
> +extern struct static_key_false pv_free_page_hint_enabled;
> +
> +#define HAVE_ARCH_FREE_PAGE
> +void __arch_free_page(struct page *page, unsigned int order);
> +static inline void arch_free_page(struct page *page, unsigned int order)
> +{
> +	if (static_branch_unlikely(&pv_free_page_hint_enabled))
> +		__arch_free_page(page, order);
> +}
> +#endif

So, this ends up with at least a call, a branch, and a ret added to the
order-0 paths, including freeing pages to the per-cpu-pageset lists.
That seems worrisome. What performance testing has been done to measure
the overhead added to those paths?