Re: [PATCH v3] mm: add ztree - new allocator for use via zpool API

On Tue, Mar 08, 2022 at 06:16:35PM +0300, Ananda Badmaev wrote:
> > > +       /* 1 page blocks with 11 slots */
> > > +       [1] = { PAGE_SIZE / (11 * sizeof(long)) * sizeof(long), 0xB, 0 },
> > 
> > Hm.  So 368 bytes on 64-bit, but 372 bytes on 32-bit?  Is that
> > intentional?  Why 11?
> > 
> Yes, the 'slot_size' and 'slots_per_block' values are chosen so that
> the range from 0 to PAGE_SIZE is split more or less evenly and the
> size of each block is as close as possible to a power of 2.  Also,
> 'slot_size' values are aligned to sizeof(long).

Is it intrinsic to the design that the blocks are aligned to
sizeof(long)?
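
For reference, the 368/372 figures above can be checked directly.  A
standalone snippet (assuming 4K pages; not code from the patch):

	#include <assert.h>

	int main(void)
	{
		unsigned long page_size = 4096;

		/* 64-bit: sizeof(long) == 8, so 4096 / 88 = 46, 46 * 8 = 368 */
		assert(page_size / (11 * 8) * 8 == 368);
		/* 32-bit: sizeof(long) == 4, so 4096 / 44 = 93, 93 * 4 = 372 */
		assert(page_size / (11 * 4) * 4 == 372);
		return 0;
	}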

> > > +/*
> > > + * allocate new block and add it to corresponding block tree
> > > + */
> > > +static struct ztree_block *alloc_block(struct ztree_pool *pool,
> > > +                                                                       int block_type, gfp_t gfp)
> > 
> > You have some very strange indentation (throughout).
> > 
> I was trying to limit the length of lines.

But you didn't achieve that.  The _start_ of the line containing
block_type and gfp was _more indented_ than the end of the previous
line.  Do you have unusual tab settings?

> > > +       block = kmem_cache_alloc(pool->block_cache,
> > > +                                (gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE)));
> > > +       if (!block)
> > > +               return NULL;
> > > +
> > > +       block->compressed_data = (void *)__get_free_pages(gfp,
> > > +                                tree_desc[block_type].order);
> > 
> > It makes no sense to mask out __GFP_HIGHMEM and __GFP_MOVABLE for
> > the call to slab and then not mask them out here.  Either they
> > shouldn't ever be passed in, in which case that could either be
> > asserted or made true in your own code.  Or they can be passed in,
> > and should always be masked.  Or you genuinely want to be able to
> > use highmem & movable memory for these data blocks, in which case
> > you're missing calls to kmap() and memory notifiers to let you move
> > the memory around.
> > 
> > This smacks of "I tried something, and slab warned, so I fixed the
> > warning" instead of thinking about what the warning meant.
> > 
> It seems that these flags should be masked out in alloc_block().

Maybe?  I don't know the code well enough to say.
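
If highmem/movable pages are never actually wanted here, one way to
apply the masking consistently is to strip the flags once at the top
of alloc_block().  A sketch only, reusing the names visible in the
patch (the error path for __get_free_pages() is my addition):

	static struct ztree_block *alloc_block(struct ztree_pool *pool,
					       int block_type, gfp_t gfp)
	{
		struct ztree_block *block;

		/* mask once, so both allocations see the same flags */
		gfp &= ~(__GFP_HIGHMEM | __GFP_MOVABLE);

		block = kmem_cache_alloc(pool->block_cache, gfp);
		if (!block)
			return NULL;

		block->compressed_data = (void *)__get_free_pages(gfp,
					tree_desc[block_type].order);
		if (!block->compressed_data) {
			kmem_cache_free(pool->block_cache, block);
			return NULL;
		}
		return block;
	}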

> > > +       spin_lock(&tree->lock);
> > > +       /* check if there are free slots in the current and the last added blocks */
> > > +       if (tree->current_block && tree->current_block->free_slots > 0) {
> > > +               block = tree->current_block;
> > This smacks of "I wrote this code without thinking about it".  What
> > if you have two threads that execute this path concurrently?  Looks
> > to me like you leak the memory allocated by the first thread.
> > 
> > 
> Probably I should pass the GFP_ATOMIC flag to alloc_block() and
> execute this entire section of code under a single spinlock.

I think that's a bad approach.  Better would be:

	spin_lock();
	if (free_slot_block) {
		block = free_slot_block;
		goto found;
	}
	spin_unlock();
	block = alloc_block();
	spin_lock();
	if (free_slot_block) {
		free_block(block);
		block = free_slot_block;
	} else {
		free_slot_block = block;
	}
found:
	...
	spin_unlock();
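
Fleshed out with the names from the patch (tree->lock and
tree->current_block standing in for free_slot_block; free_block() here
is a hypothetical helper that undoes alloc_block()), that pattern
might look like:

	spin_lock(&tree->lock);
	if (tree->current_block && tree->current_block->free_slots > 0) {
		block = tree->current_block;
		goto found;
	}
	spin_unlock(&tree->lock);

	block = alloc_block(pool, block_type, gfp);
	if (!block)
		return NULL;

	spin_lock(&tree->lock);
	if (tree->current_block && tree->current_block->free_slots > 0) {
		/* another thread installed a block while we allocated */
		free_block(pool, block);
		block = tree->current_block;
	} else {
		tree->current_block = block;
	}
found:
	/* ... take a slot from block ... */
	spin_unlock(&tree->lock);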
