Re: [RFC PATCH v2 1/7] mm: zswap: add pool shrinking mechanism

On Thu, Jun 08, 2023 at 01:05:00PM -0400, Johannes Weiner wrote:
> On Thu, Jun 08, 2023 at 12:52:51PM -0400, Johannes Weiner wrote:
> > On Tue, Jun 06, 2023 at 04:56:05PM +0200, Domenico Cerasuolo wrote:
> > > @@ -584,14 +601,70 @@ static struct zswap_pool *zswap_pool_find_get(char *type, char *compressor)
> > >  	return NULL;
> > >  }
> > >  
> > > +static int zswap_shrink(struct zswap_pool *pool)
> > > +{
> > > +	struct zswap_entry *lru_entry, *tree_entry = NULL;
> > > +	struct zswap_header *zhdr;
> > > +	struct zswap_tree *tree;
> > > +	int swpoffset;
> > > +	int ret;
> > > +
> > > +	/* get a reclaimable entry from LRU */
> > > +	spin_lock(&pool->lru_lock);
> > > +	if (list_empty(&pool->lru)) {
> > > +		spin_unlock(&pool->lru_lock);
> > > +		return -EINVAL;
> > > +	}
> > > +	lru_entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> > > +	list_del_init(&lru_entry->lru);
> > > +	zhdr = zpool_map_handle(pool->zpool, lru_entry->handle, ZPOOL_MM_RO);
> > > +	tree = zswap_trees[swp_type(zhdr->swpentry)];
> > > +	zpool_unmap_handle(pool->zpool, lru_entry->handle);
> > > +	/*
> > > +	 * Once the pool lock is dropped, the lru_entry might get freed. The
> > > +	 * swpoffset is copied to the stack, and lru_entry isn't deref'd again
> > > +	 * until the entry is verified to still be alive in the tree.
> > > +	 */
> > > +	swpoffset = swp_offset(zhdr->swpentry);
> > > +	spin_unlock(&pool->lru_lock);
> > > +
> > > +	/* hold a reference from tree so it won't be freed during writeback */
> > > +	spin_lock(&tree->lock);
> > > +	tree_entry = zswap_entry_find_get(&tree->rbroot, swpoffset);
> > > +	if (tree_entry != lru_entry) {
> > > +		if (tree_entry)
> > > +			zswap_entry_put(tree, tree_entry);
> > > +		spin_unlock(&tree->lock);
> > > +		return -EAGAIN;
> > > +	}
> > > +	spin_unlock(&tree->lock);
> > > +
> > > +	ret = zswap_writeback_entry(pool->zpool, lru_entry->handle);
> > > +
> > > +	spin_lock(&tree->lock);
> > > +	if (ret) {
> > > +		spin_lock(&pool->lru_lock);
> > > +		list_move(&lru_entry->lru, &pool->lru);
> > > +		spin_unlock(&pool->lru_lock);
> > > +	}
> > > +	zswap_entry_put(tree, tree_entry);
> > 
> > On re-reading this, I find the lru_entry vs tree_entry distinction
> > unnecessarily complicated. Once it's known that the thing coming off
> > the LRU is the same thing as in the tree, there is only "the entry".
> > 
> > How about 'entry' and 'tree_entry', and after validation use 'entry'
> > throughout the rest of the function?
> 
> Even better, skip the tree_entry lookup entirely by taking the reference
> on the LRU side already, and then just search the tree for a match:
> 
> 	/* Get an entry off the LRU */
> 	spin_lock(&pool->lru_lock);
> 	entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> 	list_del(&entry->lru);
> 	zswap_entry_get(entry);
> 	spin_unlock(&pool->lru_lock);
> 
> 	/* Check for invalidate() race */
> 	spin_lock(&tree->lock);
> 	if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> 		ret = -EAGAIN;
> 		goto put_unlock;
> 	}
> 	spin_unlock(&tree->lock);

Eh, brainfart. It needs the tree lock to bump the ref, of course.

But this should work, right?

 	/* Check for invalidate() race */
 	spin_lock(&tree->lock);
 	if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
 		ret = -EAGAIN;
 		goto unlock;
 	}
	zswap_entry_get(entry);
 	spin_unlock(&tree->lock);

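For clarity, here is a rough consolidated sketch of what zswap_shrink()
could look like with that change folded in. This is just a sketch, not the
actual patch; it assumes the helpers from the series (zswap_rb_search(),
zswap_entry_get(), zswap_entry_put(), zswap_writeback_entry()) keep the
signatures used above:

	static int zswap_shrink(struct zswap_pool *pool)
	{
		struct zswap_entry *entry;
		struct zswap_header *zhdr;
		struct zswap_tree *tree;
		pgoff_t swpoffset;
		int ret;

		/* Get an entry off the LRU */
		spin_lock(&pool->lru_lock);
		if (list_empty(&pool->lru)) {
			spin_unlock(&pool->lru_lock);
			return -EINVAL;
		}
		entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
		list_del_init(&entry->lru);
		zhdr = zpool_map_handle(pool->zpool, entry->handle, ZPOOL_MM_RO);
		tree = zswap_trees[swp_type(zhdr->swpentry)];
		/*
		 * Copy swpoffset to the stack before dropping the lock;
		 * entry must not be deref'd again until the tree lookup
		 * below confirms it is still alive.
		 */
		swpoffset = swp_offset(zhdr->swpentry);
		zpool_unmap_handle(pool->zpool, entry->handle);
		spin_unlock(&pool->lru_lock);

		/* Check for invalidate() race; bump the ref under the tree lock */
		spin_lock(&tree->lock);
		if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
			spin_unlock(&tree->lock);
			return -EAGAIN;
		}
		zswap_entry_get(entry);
		spin_unlock(&tree->lock);

		ret = zswap_writeback_entry(pool->zpool, entry->handle);

		spin_lock(&tree->lock);
		if (ret) {
			/* Writeback failed, put the entry back on the LRU */
			spin_lock(&pool->lru_lock);
			list_move(&entry->lru, &pool->lru);
			spin_unlock(&pool->lru_lock);
		}
		zswap_entry_put(tree, entry);
		spin_unlock(&tree->lock);

		return ret ? -EAGAIN : 0;
	}

With the reference taken only after the rb_search() match, there is a single
'entry' throughout, and the tree_entry variable goes away entirely.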


