Re: [PATCH v3] mm: add ztree - new allocator for use via zpool API


 



On Mon, 07/03/2022 at 15:08 +0000, Matthew Wilcox wrote:
> On Mon, Mar 07, 2022 at 05:27:24PM +0300, Ananda wrote:
> > +/*****************
> > + * Structures
> > + *****************/
> 
> You don't need this.  I can see they're structures.
> 
> > +/**
> > + * struct ztree_block - block metadata
> > + * Block consists of several (1/2/4/8) pages and contains fixed
> > + * integer number of slots for allocating compressed pages.
> > + * @block_node:      links block into the relevant tree in the pool
> > + * @slot_info:      contains data about free/occupied slots
> > + * @compressed_data: pointer to the memory block
> > + * @block_index:        unique for each ztree_block in the tree
> > + * @free_slots:                 number of free slots in the block
> > + * @coeff:           to switch between blocks
> > + * @under_reclaim:   if true shows that block is being evicted
> > + */
> 
> Earlier in the file you say this exposes no API and is to be used
> only
> through zpool.  So what's the point of marking this as kernel-doc?
> 
It will be removed in the next version.

> > +       /* 1 page blocks with 11 slots */
> > +       [1] = { PAGE_SIZE / (11 * sizeof(long)) * sizeof(long), 0xB, 0 },
> 
> Hm.  So 368 bytes on 64-bit, but 372 bytes on 32-bit?  Is that
> intentional?  Why 11?
> 
Yes, the 'slot_size' and 'slots_per_block' values are chosen so that the
range from 0 to PAGE_SIZE is split more or less evenly and the block size
is as close as possible to a power of 2. The 'slot_size' values are also
aligned to sizeof(long), which is why the result differs between 64-bit
and 32-bit.

> > +/*
> > + * allocate new block and add it to corresponding block tree
> > + */
> > +static struct ztree_block *alloc_block(struct ztree_pool *pool,
> > +                                      int block_type, gfp_t gfp)
> 
> You have some very strange indentation (throughout).
> 
I was trying to keep the lines within the length limit; I will clean up
the indentation.

> > +       block = kmem_cache_alloc(pool->block_cache,
> > +                               (gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE)));
> > +       if (!block)
> > +               return NULL;
> > +
> > +       block->compressed_data = (void *)__get_free_pages(gfp,
> > +                                       tree_desc[block_type].order);
> 
> It makes no sense to mask out __GFP_HIGHMEM and __GFP_MOVABLE for the
> call
> to slab and then not mask them out here.  Either they shouldn't ever
> be
> passed in, in which case that could either be asserted or made true
> in
> your own code.  Or they can be passed in, and should always be
> masked.
> Or you genuinely want to be able to use highmem & movable memory for
> these data blocks, in which case you're missing calls to kmap() and
> memory notifiers to let you move the memory around.
> 
> This smacks of "I tried something, and slab warned, so I fixed the
> warning" instead of thinking about what the warning meant.
> 
Agreed, these flags should be masked out once in alloc_block() so that
both the kmem_cache_alloc() and __get_free_pages() calls see the same
sanitized mask.

> > +       spin_lock(&tree->lock);
> > +       /* check if there are free slots in the current and the last added blocks */
> > +       if (tree->current_block && tree->current_block->free_slots > 0) {
> > +               block = tree->current_block;
> > +               goto found;
> > +       }
> > +       if (tree->last_block && tree->last_block->free_slots > 0) {
> > +               block = tree->last_block;
> > +               goto found;
> > +       }
> > +       spin_unlock(&tree->lock);
> > +
> > +       /* no block with free slots found, try to allocate new empty block */
> > +       block = alloc_block(pool, block_type, gfp);
> > +       spin_lock(&tree->lock);
> > +       if (block) {
> > +               tree->current_block = block;
> > +               goto found;
> > +       }
> 
> Another place that looks like "I fixed the warning instead of
> thinking
> about it".  What if you have two threads that execute this path
> concurrently?  Looks to me like you leak the memory allocated by the
> first thread.
> 
Probably I should pass GFP_ATOMIC to alloc_block() and execute this
entire section under a single spinlock; otherwise the unlocked
allocation needs a re-check under the lock, freeing the new block if
another thread has already installed one.

Copy of previous mail message, but in text/plain.





