RE: [PATCH v3 0/7] allocate cache entries from memory pool

> -----Original Message-----
> From: Junio C Hamano <jch2355@xxxxxxxxx> On Behalf Of Junio C Hamano
> Sent: Thursday, May 24, 2018 12:55 AM
> To: Jameson Miller <jamill@xxxxxxxxxxxxx>
> Cc: git@xxxxxxxxxxxxxxx; pclouds@xxxxxxxxx; jonathantanmy@xxxxxxxxxx;
> sbeller@xxxxxxxxxx; peartben@xxxxxxxxx
> Subject: Re: [PATCH v3 0/7] allocate cache entries from memory pool
> 
> Jameson Miller <jamill@xxxxxxxxxxxxx> writes:
> 
> > Changes from V2:
> >
> > 	- Tweak logic of finding available memory block for memory
> >           allocation
> >
> > 	  - Only search head block
> 
> Hmph.  Is that because we generally do not free() a lot so once a block is filled,
> there is not much chance that we have reclaimed space in the block later?
> 

The memory pool is designed so that once memory is claimed from
the pool, it is not reused until the containing pool is
discarded. Individual entries are not freed; only the entire
memory pool is freed, and only after we are sure that there are
no references to any of the entries in the pool.

The memory pool design makes some tradeoffs. It is not meant to
completely replace malloc / free as a general purpose allocator,
but rather to be used in scenarios where the benefits (faster
allocations, lower bookkeeping overhead) are worth the tradeoff
(not being able to free individual allocations). The access
patterns around cache entries match the memory pool well: the
majority of cache entries are allocated up front when reading the
index from disk, and are then discarded in bulk when the index is
freed (if the index is freed at all, rather than the process
simply exiting).
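
To illustrate the pattern, here is a minimal sketch of such a pool
(this is not the actual API from the series; the mem_pool / mp_block
names and fields below are just illustrative, and alignment handling
is omitted for brevity):

#include <stddef.h>
#include <stdlib.h>

/* One chunk of memory that entries are carved out of. */
struct mp_block {
	struct mp_block *next_block;
	char *next_free;
	char *end;
	/* entry storage follows the header */
};

struct mem_pool {
	struct mp_block *head;	/* only block considered for allocations */
	size_t block_alloc;	/* default size of a new block */
};

static struct mp_block *mem_pool_new_block(struct mem_pool *pool, size_t size)
{
	struct mp_block *block = malloc(sizeof(*block) + size);

	block->next_free = (char *)(block + 1);
	block->end = block->next_free + size;
	block->next_block = pool->head;
	pool->head = block;
	return block;
}

/* Carve 'len' bytes out of the head block, adding a new block if needed. */
void *mem_pool_alloc(struct mem_pool *pool, size_t len)
{
	struct mp_block *block = pool->head;
	void *ptr;

	if (!block || block->end - block->next_free < (ptrdiff_t)len)
		block = mem_pool_new_block(pool,
				len > pool->block_alloc ? len : pool->block_alloc);

	ptr = block->next_free;
	block->next_free += len;
	return ptr;
}

/* No per-entry free; the whole pool is released at once. */
void mem_pool_discard(struct mem_pool *pool)
{
	struct mp_block *block, *next;

	for (block = pool->head; block; block = next) {
		next = block->next_block;
		free(block);
	}
	pool->head = NULL;
}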

> > 	- Tweaked handling of large memory allocations.
> >
> > 	  - Large blocks now tracked in same manner as "regular"
> >             blocks
> >
> > 	  - Large blocks are placed at end of linked list of memory
> >             blocks
> 
> If we are only carving out of the most recently allocated block, it seems that
> there is no point looking for "the end", no?

Right. If we are not searching the list, then there isn't any
point in appending odd large items to the end vs sticking them
immediately past the head block. The tail pointer is not really
leveraged, so I will remove its usage in the next version.
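
For the oversized case, building on the illustrative sketch earlier
in this mail (same hypothetical mem_pool / mp_block types), placing
the dedicated block immediately past the head block, instead of at a
tail, might look roughly like this:

/*
 * Oversized requests get their own block. Since regular allocations
 * only ever look at the head block, the new block is linked in right
 * behind the head rather than appended at a tail pointer.
 */
static void *mem_pool_alloc_large(struct mem_pool *pool, size_t len)
{
	struct mp_block *block = malloc(sizeof(*block) + len);

	block->next_free = (char *)(block + 1) + len;	/* fully consumed */
	block->end = block->next_free;

	if (pool->head) {
		block->next_block = pool->head->next_block;
		pool->head->next_block = block;
	} else {
		block->next_block = NULL;
		pool->head = block;
	}
	return block + 1;
}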

> 
> 
> > 	- Cache_entry type gains notion of whether it was allocated
> >           from memory pool or not
> >
> > 	  - Collapsed cache_entry discard logic into single
> >             function. This should make code easier to maintain
> 
> That certainly should be safer to have a back-pointer pointing to which pool
> each entry came from, but doesn't it result in memory bloat?

Currently, entries claimed from a memory pool are not freed, so we
only need to know whether an entry came from a memory pool or not.
This has less memory impact than a full back-pointer, but is also a
bit more restrictive.

We debated several approaches here and landed on a single bit for
this rather than the full pointer. The current code uses a full
integer field, but we can convert that into a bit in a bit field.
The existing flags word is full, so this would require a second
flags field.
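
As a rough illustration of the collapsed discard logic (the field and
function names here are assumptions for the sketch, not necessarily
what the series uses):

#include <stdlib.h>

struct cache_entry {
	unsigned int mem_pool_allocated : 1; /* could be a bit in a (second) flags word */
	/* ... object name, stat data, flags, name[], etc. ... */
};

/* Single discard path for all cache entries. */
void discard_cache_entry(struct cache_entry *ce)
{
	if (!ce)
		return;
	if (ce->mem_pool_allocated)
		return;	/* pool-owned; reclaimed when the whole pool is discarded */
	free(ce);	/* heap-owned; free individually as before */
}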




