Re: [PATCH 17/17] io_uring/alloc_cache: switch to array based caching

On 3/21/24 9:59 AM, Gabriel Krisman Bertazi wrote:
> Jens Axboe <axboe@xxxxxxxxx> writes:
> 
>> Currently lists are being used to manage this, but a list isn't a very
>> good choice here, as extracting the current entry necessitates touching
>> the next entry as well to update the list head.
>>
>> Outside of that detail, games are also played with KASAN as the list
>> is inside the cached entry itself.
>>
>> Finally, all users of this need a struct io_cache_entry embedded in
>> their struct, which is union'ized with something else in there that
>> isn't used across the free -> realloc cycle.
>>
>> Get rid of all of that, and simply have it be an array. This will not
>> change the memory used, as we're just trading an 8-byte member entry
>> for the per-elem array size.
>>
>> This reduces the overhead of the recycled allocations, and it reduces
>> the code we have to support recycling.
> 
> Hi Jens,
> 
> I tried applying the entire series to your for-6.10/io_uring branch to
> test it, and only this last patch failed to apply. The tip of the branch
> I have is 22261e73e8d2 ("io_uring/alloc_cache: shrink default max
> entries from 512 to 128").

Yeah, it has some dependencies that need unraveling. The easiest approach
is to just pull:

git://git.kernel.dk/linux io_uring-recvsend-bundle

into current -git master, and then just test that. That gets you pretty
much everything that's being tested and played with.

Top of tree is d5653d2fcf1383c0fbe8b64545664aea36c7aca2 right now.

>> -static inline struct io_cache_entry *io_alloc_cache_get(struct io_alloc_cache *cache)
>> +static inline void *io_alloc_cache_get(struct io_alloc_cache *cache)
>>  {
>> -	if (cache->list.next) {
>> -		struct io_cache_entry *entry;
>> +	if (cache->nr_cached) {
>> +		void *entry = cache->entries[--cache->nr_cached];
>>  
>> -		entry = container_of(cache->list.next, struct io_cache_entry, node);
>>  		kasan_mempool_unpoison_object(entry, cache->elem_size);
>> -		cache->list.next = cache->list.next->next;
>> -		cache->nr_cached--;
>>  		return entry;
>>  	}
>>  
>>  	return NULL;
>>  }
>>  
>> -static inline void io_alloc_cache_init(struct io_alloc_cache *cache,
>> -				       unsigned max_nr, size_t size)
>> +static inline int io_alloc_cache_init(struct io_alloc_cache *cache,
>> +				      unsigned max_nr, size_t size)
>>  {
>> -	cache->list.next = NULL;
>> +	cache->entries = kvmalloc_array(max_nr, sizeof(void *), GFP_KERNEL);
>> +	if (!cache->entries)
>> +		return -ENOMEM;
>>  	cache->nr_cached = 0;
>>  	cache->max_cached = max_nr;
>>  	cache->elem_size = size;
>> +	return 0;
>>  }
>>  
>>  static inline void io_alloc_cache_free(struct io_alloc_cache *cache,
>> -					void (*free)(struct io_cache_entry *))
>> +				       void (*free)(const void *))
> 
> Minor, but since free is supposed to free the entry, const doesn't
> make sense here.  Also, you actually just cast it away immediately in
> every usage.

It's because then I can use kfree() directly for most cases; only two of
them have special freeing functions. And kfree() takes a const void *. I
should add a comment about that.

-- 
Jens Axboe
