RE: [PATCH v3 0/7] allocate cache entries from memory pool

> 
> > We debated several approaches for what to do here
> 
> it would be awesome if the list could participate in the discussion even if only
> read-only.

A bit of a delay in my response here, but I like the suggestion. Here is a
summary of some approaches I considered:

1) Do not include any information about where the cache_entry was allocated.
     Pros: No extra memory overhead
     Cons: Caller is responsible for freeing the cache entry correctly

This was our initial approach. We were hoping that all cache
entries could be allocated from a memory pool, and we would not
have to worry about non-pool allocated entries. There are still a
couple of places that need "temporary" cache entries at the
moment, so we couldn't move completely to only memory pool
allocated cache_entries. This would have resulted in the code
allocating many "temporary" cache_entries from a pool, and the
memory would not be eligible to be reclaimed until the entire
memory pool was freed - and this was a tradeoff we didn't want to
make.

2) Include an extra field encoding whether the cache_entry was
   allocated from a memory pool

    Pro: the discard function can now make a decision regarding how
         to free the cache_entry
    Con: each cache entry grows by an extra int field.

The bits in the existing ce_flags field are all claimed, so we
need an extra field to track this bit of information. We could
claim just a single bit of the new field now; the cache_entry
would still grow by the same amount, but any future flags could
share the remaining bits without further growth. This defers the
work of defining a proper bit field until it is actually needed.
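A minimal sketch of what option 2 could look like. The struct and field names here are illustrative only; the real cache_entry in git carries many more fields, and the actual patch may name things differently:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified, hypothetical cache_entry: only the parts relevant to
 * option 2 are shown. */
struct cache_entry {
	unsigned int ce_flags;
	unsigned int mem_pool_allocated; /* extra int field: nonzero if pool-owned */
};

/* The discard function can now decide how to free the entry. */
static void discard_cache_entry(struct cache_entry *ce)
{
	if (!ce)
		return;
	if (ce->mem_pool_allocated)
		return; /* pool owns the memory; reclaimed when the pool is freed */
	free(ce);
}
```

Callers no longer need to know where an entry came from; they always call the discard function, and the flag makes the free decision locally.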

3) Encode whether the cache_entry was allocated from a memory
   pool as a bit in a field.

    Pro: only a single bit is required to encode this information.
    Con: All the bits in the existing ce_flags field are claimed.
         We need to add an extra bit field and take the same
         memory growth.

I considered this approach (and am still open to it), but I went
with a simpler initial approach to make sure the overall change
is acceptable. The memory footprint is identical to option #2;
the only gain would be making future changes easier.
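For comparison, a sketch of option 3. The flag name and the new ce_mem_pool_flags field are hypothetical; since ce_flags is full, a new word is needed either way, which is why the footprint matches option 2:

```c
#include <assert.h>

/* Hypothetical flag bit in a new flags word; all bits of the real
 * ce_flags are already taken, so this lives in a separate field. */
#define CE_MEM_POOL_ALLOCATED (1u << 0)

struct cache_entry {
	unsigned int ce_mem_pool_flags; /* new bit field; same growth as an int */
};

static int is_pool_allocated(const struct cache_entry *ce)
{
	return !!(ce->ce_mem_pool_flags & CE_MEM_POOL_ALLOCATED);
}
```

The remaining 31 bits stay free for future flags, which is the only structural difference from the plain int of option 2.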

4) Include a pointer to the memory pool from which this
   cache_entry was allocated

    Pro: Could (potentially) do some extra bookkeeping, such as
         automatically cleaning up the memory_pool when all
         allocated cache_entries are freed.
    Con: extra complexity, larger growth to cache_entry struct to
         accommodate mem_pool pointer

In the end, we didn't see a tangible benefit to this option at this point.
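For completeness, a sketch of the bookkeeping that option 4 would enable. This is purely illustrative; git's mem_pool does not track live entries like this, and all names here are hypothetical:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical pool that counts outstanding entries so it can
 * reclaim itself once the last one is discarded. */
struct mem_pool {
	size_t live_entries;
};

struct cache_entry {
	struct mem_pool *pool; /* NULL for heap-allocated entries */
};

static void discard_cache_entry(struct cache_entry *ce)
{
	if (!ce)
		return;
	if (ce->pool) {
		/* entry storage lives inside the pool; just drop the count */
		if (--ce->pool->live_entries == 0)
			free(ce->pool); /* last entry gone: reclaim the pool */
		return;
	}
	free(ce);
}
```

The cost is a pointer per entry plus the counter maintenance on every allocation and discard, which is the "extra complexity, larger growth" trade-off noted above.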

Given the tradeoffs, I went with option #2 for now.

