Re: [PATCH v3] mm: add ztree - new allocator for use via zpool API


This is unreadable.  Please fix your email client.

On Tue, Mar 08, 2022 at 08:32:54AM +0300, Ананда Бадмаев wrote:
> 07.03.2022, 18:08, "Matthew Wilcox" <willy@xxxxxxxxxxxxx>:
> > On Mon, Mar 07, 2022 at 05:27:24PM +0300, Ananda wrote:
> > > +/*****************
> > > + * Structures
> > > + *****************/
> >
> > You don't need this.  I can see they're structures.
> >
> > > +/**
> > > + * struct ztree_block - block metadata
> > > + * Block consists of several (1/2/4/8) pages and contains fixed
> > > + * integer number of slots for allocating compressed pages.
> > > + * @block_node: links block into the relevant tree in the pool
> > > + * @slot_info: contains data about free/occupied slots
> > > + * @compressed_data: pointer to the memory block
> > > + * @block_index: unique for each ztree_block in the tree
> > > + * @free_slots: number of free slots in the block
> > > + * @coeff: to switch between blocks
> > > + * @under_reclaim: if true shows that block is being evicted
> > > + */
> >
> > Earlier in the file you say this exposes no API and is to be used only
> > through zpool.  So what's the point of marking this as kernel-doc?
>
> It will be removed in the next version.
>
> > > + /* 1 page blocks with 11 slots */
> > > + [1] = { PAGE_SIZE / (11 * sizeof(long)) * sizeof(long), 0xB, 0 },
> >
> > Hm.  So 368 bytes on 64-bit, but 372 bytes on 32-bit?  Is that
> > intentional?  Why 11?
>
> Yes, the 'slot_size' and 'slots_per_block' values are chosen so that, in
> general, the range from 0 to PAGE_SIZE is split more or less evenly and
> the size of each block is as close as possible to a power of 2.  The
> 'slot_size' values are also aligned to the size of long.
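
For reference, that arithmetic works out as follows (a sketch only, assuming a
4 KiB PAGE_SIZE):

	/* 64-bit: sizeof(long) == 8 */
	PAGE_SIZE / (11 * sizeof(long)) * sizeof(long)
		= 4096 / 88 * 8 = 46 * 8 = 368	/* 11 * 368 = 4048, 48 bytes left over */

	/* 32-bit: sizeof(long) == 4 */
	PAGE_SIZE / (11 * sizeof(long)) * sizeof(long)
		= 4096 / 44 * 4 = 93 * 4 = 372	/* 11 * 372 = 4092, 4 bytes left over */

That is, the slot size is PAGE_SIZE divided into 11 parts and rounded down to
a multiple of sizeof(long), which is where the 368 vs. 372 difference between
64-bit and 32-bit comes from.
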
>
> > > +/*
> > > + * allocate new block and add it to corresponding block tree
> > > + */
> > > +static struct ztree_block *alloc_block(struct ztree_pool *pool,
> > > + int block_type, gfp_t gfp)
> >
> > You have some very strange indentation (throughout).
>
> I was trying to limit the length of lines.
>
> > > + block = kmem_cache_alloc(pool->block_cache,
> > > + (gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE)));
> > > + if (!block)
> > > + return NULL;
> > > +
> > > + block->compressed_data = (void *)__get_free_pages(gfp, tree_desc[block_type].order);
> >
> > It makes no sense to mask out __GFP_HIGHMEM and __GFP_MOVABLE for the call
> > to slab and then not mask them out here.  Either they shouldn't ever be
> > passed in, in which case that could either be asserted or made true in
> > your own code.  Or they can be passed in, and should always be masked.
> > Or you genuinely want to be able to use highmem & movable memory for
> > these data blocks, in which case you're missing calls to kmap() and
> > memory notifiers to let you move the memory around.
> >
> > This smacks of "I tried something, and slab warned, so I fixed the
> > warning" instead of thinking about what the warning meant.
>
> It seems that these flags should be masked out in alloc_block().
>
> > > + spin_lock(&tree->lock);
> > > + /* check if there are free slots in the current and the last added blocks */
> > > + if (tree->current_block && tree->current_block->free_slots > 0) {
> > > + block = tree->current_block;
> > > + goto found;
> > > + }
> > > + if (tree->last_block && tree->last_block->free_slots > 0) {
> > > + block = tree->last_block;
> > > + goto found;
> > > + }
> > > + spin_unlock(&tree->lock);
> > > +
> > > + /* not found block with free slots try to allocate new empty block */
> > > + block = alloc_block(pool, block_type, gfp);
> > > + spin_lock(&tree->lock);
> > > + if (block) {
> > > + tree->current_block = block;
> > > + goto found;
> > > + }
> >
> > Another place that looks like "I fixed the warning instead of thinking
> > about it".  What if you have two threads that execute this path
> > concurrently?  Looks to me like you leak the memory allocated by the
> > first thread.
>
> Probably I should pass the GFP_ATOMIC flag to alloc_block() and execute
> this entire section of code under a single spinlock.
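
One possible shape for that fix, covering both points above, might look like
the sketch below.  It follows the names in the quoted code and assumes that
alloc_block() can safely be called with tree->lock held, i.e. that the tree
insertion mentioned in its comment does not itself take the lock, and that an
atomic allocation is acceptable on this path:

	static struct ztree_block *alloc_block(struct ztree_pool *pool,
					       int block_type, gfp_t gfp)
	{
		struct ztree_block *block;

		/*
		 * Neither the block metadata nor the data pages may come
		 * from highmem or movable memory, so mask the flags for
		 * both allocations.
		 */
		gfp &= ~(__GFP_HIGHMEM | __GFP_MOVABLE);

		block = kmem_cache_alloc(pool->block_cache, gfp);
		if (!block)
			return NULL;

		block->compressed_data = (void *)__get_free_pages(gfp,
						tree_desc[block_type].order);
		if (!block->compressed_data) {
			kmem_cache_free(pool->block_cache, block);
			return NULL;
		}
		/* ... initialize the block and insert it into the tree ... */
		return block;
	}

and in the caller:

	spin_lock(&tree->lock);
	/* check if there are free slots in the current and the last added blocks */
	if (tree->current_block && tree->current_block->free_slots > 0) {
		block = tree->current_block;
		goto found;
	}
	if (tree->last_block && tree->last_block->free_slots > 0) {
		block = tree->last_block;
		goto found;
	}

	/*
	 * No block with free slots: allocate a new one without dropping the
	 * lock, so that two concurrent callers cannot both allocate a fresh
	 * block for the same tree.  GFP_ATOMIC because we must not sleep
	 * while holding the spinlock.
	 */
	block = alloc_block(pool, block_type, GFP_ATOMIC);
	if (block) {
		tree->current_block = block;
		goto found;
	}
	spin_unlock(&tree->lock);
	/* fall through to the existing failure path (not shown in the hunk above) */

An alternative that avoids GFP_ATOMIC is to keep the allocation outside the
lock but re-check tree->current_block after re-acquiring it, freeing the new
block if another thread installed one in the meantime; either way, the check
and the installation of the new block have to be serialized.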
