[PATCH v2 RFC] eventpoll: try to reuse eppoll_entry allocations

Instead of unconditionally allocating and deallocating pwq objects,
try to reuse them: on deallocation the entry is stashed in the
eventpoll struct, and on the next allocation that stashed entry is
consumed. This way, every EPOLL_CTL_ADD operation immediately
following an EPOLL_CTL_DEL operation effectively cancels out its pwq
allocation against the preceding deallocation.
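
As an illustration of the idea only (the actual kernel helpers,
ep_alloc_pwq() and ep_free_pwq(), are in the diff below), a userspace
sketch of the single-slot reuse pattern could look like this. Here
"struct entry", slot_alloc() and slot_free() are made-up stand-ins,
malloc()/free() stand in for the pwq_cache, and callers are assumed to
serialize access to the slot the way ep->mtx does in the patch:

#include <stdlib.h>

struct entry { int payload; };

/* Consume the cached entry if present, otherwise fall back to the allocator. */
static struct entry *slot_alloc(struct entry **slot)
{
	struct entry *e = *slot;

	if (e) {
		*slot = NULL;	/* reuse the entry parked by the last free */
		return e;
	}
	return malloc(sizeof(*e));
}

/* Park the entry for the next allocation, or free it if the slot is taken. */
static void slot_free(struct entry **slot, struct entry *e)
{
	if (!*slot) {
		*slot = e;
		return;
	}
	free(e);
}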

With this patch applied I'm observing a ~13% overall speedup when
benchmarking the following scenario:
1. epoll_ctl(..., EPOLL_CTL_ADD, ...)
2. epoll_ctl(..., EPOLL_CTL_DEL, ...)
which should be a pretty common pattern both for applications dealing
with a lot of short-lived connections and for applications doing a
DEL + ADD dance on every level-triggered FD readiness notification.
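
For reference, a minimal sketch of such a benchmark loop is shown
below. It is illustrative only and not the exact harness behind the
numbers above; the eventfd descriptor, the iteration count and the
absence of timing code are assumptions:

#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>

int main(void)
{
	int epfd = epoll_create1(0);
	int fd = eventfd(0, 0);
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
	long i;

	if (epfd < 0 || fd < 0) {
		perror("setup");
		return EXIT_FAILURE;
	}

	/* Register and immediately unregister the same fd, over and over. */
	for (i = 0; i < 10000000; i++) {
		if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) ||
		    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL)) {
			perror("epoll_ctl");
			return EXIT_FAILURE;
		}
	}
	return EXIT_SUCCESS;
}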

This optimization comes with a sizeof(void *) + sizeof(struct eppoll_entry)
per-epoll-instance memory cost, which amounts to 72 bytes for 64-bit
builds.

Signed-off-by: Ivan Trofimov <i.trofimow@xxxxxxxxx>
---
The NULL check before kmem_cache_free() in ep_free() is left in place,
as an attempt to pass NULL to kmem_cache_free() leads to a BUG.

Changes in v2:
 - Fix the typo in ep_alloc_pwq docstring
 - Add a comment about why calling ep_pwq_alloc in the
   ep_ptable_queue_proc callback is safe

 fs/eventpoll.c | 43 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 41 insertions(+), 2 deletions(-)

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 882b89edc..c8fb9ec70 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -219,6 +219,9 @@ struct eventpoll {
 	u64 gen;
 	struct hlist_head refs;
 
+	/* a single-item cache used to reuse eppoll_entry allocations */
+	struct eppoll_entry *pwq_slot;
+
 	/*
 	 * usage count, used together with epitem->dying to
 	 * orchestrate the disposal of this struct
@@ -648,6 +651,36 @@ static void ep_remove_wait_queue(struct eppoll_entry *pwq)
 	rcu_read_unlock();
 }
 
+/*
+ * This function either consumes the pwq_slot, or allocates a new
+ * eppoll_entry if the slot is empty.
+ * Must be called with "mtx" held.
+ */
+static struct eppoll_entry *ep_alloc_pwq(struct eventpoll *ep)
+{
+	struct eppoll_entry *pwq = ep->pwq_slot;
+
+	if (pwq) {
+		ep->pwq_slot = NULL;
+		return pwq;
+	}
+	return kmem_cache_alloc(pwq_cache, GFP_KERNEL);
+}
+
+/*
+ * This function either fills the pwq_slot with the eppoll_entry,
+ * or deallocates the entry if the slot is already filled.
+ * Must be called with "mtx" held.
+ */
+static void ep_free_pwq(struct eventpoll *ep, struct eppoll_entry *pwq)
+{
+	if (!ep->pwq_slot) {
+		ep->pwq_slot = pwq;
+		return;
+	}
+	kmem_cache_free(pwq_cache, pwq);
+}
+
 /*
  * This function unregisters poll callbacks from the associated file
  * descriptor.  Must be called with "mtx" held.
@@ -660,7 +693,7 @@ static void ep_unregister_pollwait(struct eventpoll *ep, struct epitem *epi)
 	while ((pwq = *p) != NULL) {
 		*p = pwq->next;
 		ep_remove_wait_queue(pwq);
-		kmem_cache_free(pwq_cache, pwq);
+		ep_free_pwq(ep, pwq);
 	}
 }
 
@@ -789,6 +822,8 @@ static void ep_free(struct eventpoll *ep)
 	mutex_destroy(&ep->mtx);
 	free_uid(ep->user);
 	wakeup_source_unregister(ep->ws);
+	if (ep->pwq_slot)
+		kmem_cache_free(pwq_cache, ep->pwq_slot);
 	kfree(ep);
 }
 
@@ -1384,7 +1419,11 @@ static void ep_ptable_queue_proc(struct file *file, wait_queue_head_t *whead,
 	if (unlikely(!epi))	// an earlier allocation has failed
 		return;
 
-	pwq = kmem_cache_alloc(pwq_cache, GFP_KERNEL);
+	/*
+	 * The callback is invoked from within ep_insert, which is called
+	 * with the ep->mtx held, so this is safe.
+	 */
+	pwq = ep_alloc_pwq(epi->ep);
 	if (unlikely(!pwq)) {
 		epq->epi = NULL;
 		return;
-- 
2.34.1