Re: [PATCH net-next v9 06/13] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'

On 2024/7/16 1:55, Alexander Duyck wrote:
> On Sat, Jul 13, 2024 at 9:52 PM Yunsheng Lin <yunshenglin0825@xxxxxxxxx> wrote:

...

>>
>> If the option 1 is not what you have in mind, it would be better to be
>> more specific about what you have in mind.
> 
> Option 1 was more or less what I had in mind.
> 
>> If the option 1 is what you have in mind, then as I understand it,
>> both option 1 and option 2 have the same semantics, right? The
>> question here seems to be which option you prefer and why?
>>
>>
>> I implemented both of them, and the option 1 seems to have a
>> bigger generated asm size as below:
>> ./scripts/bloat-o-meter vmlinux_non_neg vmlinux
>> add/remove: 0/0 grow/shrink: 1/0 up/down: 37/0 (37)
>> Function                                     old     new   delta
>> __page_frag_alloc_va_align                   414     451     +37
> 
> My big complaint is that it seems option 2 is harder for people to
> understand and more likely to not be done correctly. In some cases if
> the performance difference is negligible it is better to go with the
> more maintainable solution.

Option 1, which stores nc->remaining as a negative value, does not seem
any more maintainable than option 2. If using a negative value to enable
optimizations like LEA does not make a noticeable performance difference,
how about something like below?

struct page_frag_cache {
        /* encoded_va consists of the virtual address, pfmemalloc bit and order
         * of a page.
         */
        unsigned long encoded_va;

#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
        __u16 remaining;
        __u16 pagecnt_bias;
#else
        __u32 remaining;
        __u32 pagecnt_bias;
#endif
};
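For illustration, the idea behind 'encoded_va' is that a page-aligned
virtual address leaves its low PAGE_SHIFT bits free, so the page order and
the pfmemalloc bit can be packed into them. A standalone userspace sketch of
that encoding (helper names and bit layout are illustrative assumptions, not
necessarily what the patch series implements):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layout: bits 0-2 hold the page order, bit 3 the
 * pfmemalloc flag; the rest is the page-aligned virtual address.
 */
#define DEMO_PAGE_SHIFT     12
#define DEMO_PAGE_SIZE      (1UL << DEMO_PAGE_SHIFT)
#define DEMO_ORDER_MASK     0x7UL
#define DEMO_PFMEMALLOC_BIT 0x8UL

static unsigned long demo_encode_va(void *va, unsigned int order,
                                    int pfmemalloc)
{
        /* va must be page-aligned so the low bits are free */
        return (unsigned long)va | (order & DEMO_ORDER_MASK) |
               (pfmemalloc ? DEMO_PFMEMALLOC_BIT : 0);
}

static void *demo_encoded_page_address(unsigned long encoded_va)
{
        return (void *)(encoded_va & ~(DEMO_PAGE_SIZE - 1));
}

static unsigned int demo_encoded_page_order(unsigned long encoded_va)
{
        return (unsigned int)(encoded_va & DEMO_ORDER_MASK);
}

static int demo_encoded_page_pfmemalloc(unsigned long encoded_va)
{
        return !!(encoded_va & DEMO_PFMEMALLOC_BIT);
}
```

The round trip decodes all three fields back out of a single unsigned long,
which is what lets the struct drop the separate 'size' and 'pfmemalloc'
members.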

void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
                                 unsigned int fragsz, gfp_t gfp_mask,
                                 unsigned int align_mask)
{
        unsigned int size = page_frag_cache_page_size(nc->encoded_va);
        unsigned int remaining;

        remaining = nc->remaining & align_mask;
        if (unlikely(remaining < fragsz)) {
                if (unlikely(fragsz > PAGE_SIZE)) {
                        /*
                         * The caller is trying to allocate a fragment
                         * with fragsz > PAGE_SIZE but the cache isn't big
                         * enough to satisfy the request, this may
                         * happen in low memory conditions.
                         * We don't release the cache page because
                         * it could make memory pressure worse
                         * so we simply return NULL here.
                         */
                        return NULL;
                }

                if (!__page_frag_cache_refill(nc, gfp_mask))
                        return NULL;

                size = page_frag_cache_page_size(nc->encoded_va);
                remaining = size;
        }

        nc->pagecnt_bias--;
        nc->remaining = remaining - fragsz;

        return encoded_page_address(nc->encoded_va) + (size - remaining);
}
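To see why 'remaining &= align_mask' is enough for alignment: with a
power-of-two alignment, align_mask is ~(align - 1), so masking rounds the
remaining byte count down to a multiple of the alignment; since the cache
size is itself a multiple of the alignment, the returned offset
(size - remaining) stays aligned too. A minimal userspace sketch of just
that arithmetic (refill handling omitted, names made up for the example):

```c
#include <assert.h>

/* Compute the offset of the next fragment in a cache of 'size' bytes
 * with 'remaining' bytes left, rounding the remaining space down to the
 * alignment implied by 'align_mask' (~(align - 1) for power-of-two
 * align). The refill path for remaining < fragsz is deliberately
 * skipped here.
 */
static unsigned int demo_frag_offset(unsigned int size,
                                     unsigned int remaining,
                                     unsigned int fragsz,
                                     unsigned int align_mask,
                                     unsigned int *new_remaining)
{
        remaining &= align_mask;
        assert(remaining >= fragsz);    /* caller would refill otherwise */
        *new_remaining = remaining - fragsz;
        return size - remaining;
}
```

With a 4096-byte cache, the first allocation lands at offset 0, and a later
64-byte-aligned request is pushed to the next 64-byte boundary by the mask.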




