Re: [PATCH] mm: cache largest vma

On Sun, 2013-11-03 at 10:51 -0800, Linus Torvalds wrote:
> Ugh. This patch makes me angry. It looks way too ad-hoc.
> 
> I can well imagine that our current one-entry cache is crap and could
> be improved, but this looks too random. 

Indeed, my approach is ad-hoc *because* I wanted to keep things as
simple and low-overhead as possible. Caching the largest VMA is probably
about the least invasive and lowest-overhead thing you can add to
find_vma().

> Different code for the
> CONFIG_MMU case? Same name, but for non-MMU it's a single entry, for
> MMU it's an array? And the whole "largest" just looks odd. Plus why do
> you set LAST_USED if you also set LARGEST?
> 
> Did you try just a two- or four-entry pseudo-LRU instead, with a
> per-thread index for "last hit"? Or even possibly a small fixed-size
> hash table (say "idx = (add >> 10) & 3" or something)?
> 
> And what happens for threaded models? Maybe we'd be much better off
> making the cache be per-thread, and the flushing of the cache would be
> a sequence number that has to match (so "vma_clear_cache()" ends up
> just incrementing a 64-bit sequence number in the mm)?
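In toy form, the small fixed-size hash-table cache suggested above might look something like the following. This is a hedged, user-space sketch, not kernel code: the struct, the cache array, and the `slow_path` callback are all hypothetical stand-ins for `vm_area_struct` and the rb-tree walk in `find_vma()`.

```c
#include <stddef.h>

/* Toy stand-in for struct vm_area_struct. */
struct vma {
    unsigned long vm_start;   /* inclusive */
    unsigned long vm_end;     /* exclusive */
};

#define VMA_CACHE_SIZE 4
static struct vma *vma_cache[VMA_CACHE_SIZE];

/* Hash an address to a slot, as in "idx = (addr >> 10) & 3". */
static unsigned int vma_cache_idx(unsigned long addr)
{
    return (addr >> 10) & (VMA_CACHE_SIZE - 1);
}

/* Check the hashed slot first; fall back to the slow path on a miss
 * (in the kernel, the slow path would be the rb-tree walk). */
static struct vma *find_vma_cached(unsigned long addr,
                                   struct vma *(*slow_path)(unsigned long))
{
    unsigned int idx = vma_cache_idx(addr);
    struct vma *v = vma_cache[idx];

    if (v && addr >= v->vm_start && addr < v->vm_end)
        return v;            /* cache hit */

    v = slow_path(addr);
    if (v)
        vma_cache[idx] = v;  /* remember for the next fault nearby */
    return v;
}

/* Demo slow path: a linear scan over a tiny fixed table. */
static struct vma table[] = {
    { 0x1000, 0x2000 },
    { 0x8000, 0xA000 },
};

static struct vma *demo_slow_path(unsigned long addr)
{
    for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        if (addr >= table[i].vm_start && addr < table[i].vm_end)
            return &table[i];
    return NULL;
}
```

The point of the hash is that different address regions map to different slots, so a few hot VMAs can be cached at once without any LRU bookkeeping.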

I will look into doing the vma cache per thread instead of per mm (I
hadn't really looked at the problem that way), as well as Ingo's
suggestion of a weighted LRU approach. However, having seen that we can
cheaply and easily reach a ~70% hit rate in a lot of workloads, I do
wonder how good is good enough.
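The per-thread cache plus sequence-number flush quoted above could be modeled roughly as below. Again a hypothetical user-space sketch: the structs and names are invented for illustration, while in the kernel the cache would live in task_struct and the counter in mm_struct.

```c
#include <stddef.h>
#include <stdint.h>

struct vma { unsigned long vm_start, vm_end; };

struct mm {
    uint64_t vmacache_seqnum;   /* bumped on any change to the vma layout */
};

struct thread_cache {
    uint64_t seqnum;            /* snapshot of mm->vmacache_seqnum */
    struct vma *vma;            /* last vma this thread hit */
};

/* Flushing doesn't touch any thread: bumping the mm-wide counter makes
 * every thread's cached entry stop matching. */
static void vma_clear_cache(struct mm *mm)
{
    mm->vmacache_seqnum++;
}

static struct vma *find_vma_thread_cached(struct mm *mm,
                                          struct thread_cache *tc,
                                          unsigned long addr,
                                          struct vma *(*slow_path)(unsigned long))
{
    struct vma *v = tc->vma;

    if (tc->seqnum == mm->vmacache_seqnum &&
        v && addr >= v->vm_start && addr < v->vm_end)
        return v;                      /* valid, unflushed hit */

    v = slow_path(addr);               /* e.g. the rb-tree walk */
    tc->seqnum = mm->vmacache_seqnum;  /* revalidate this thread's cache */
    tc->vma = v;
    return v;
}

/* Demo slow path over a single-entry table. */
static struct vma table2[] = { { 0x1000, 0x2000 } };

static struct vma *demo_lookup(unsigned long addr)
{
    if (addr >= table2[0].vm_start && addr < table2[0].vm_end)
        return &table2[0];
    return NULL;
}
```

The attraction is that invalidation is O(1) regardless of thread count, and each thread's cache naturally tracks its own locality instead of being thrashed by siblings.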

> Basically, my complaints boil down to "too random" and "too
> specialized", and I can see (and you already comment on) this patch
> being grown with even *more* ad-hoc random new cases (LAST, LARGEST,
> MOST_USED - what's next?). And while I don't know if we should worry
> about the threaded case, I do get the feeling that this ad-hoc
> approach is guaranteed to never work for that, which makes me feel
> that it's not just ad-hoc, it's also fundamentally limited.
> 
> I can see us merging this patch, but I would really like to hear that
> we do so because other cleaner approaches don't work well. In
> particular, pseudo-LRU tends to be successful (and cheap) for caches.

OK, will report back with comparisons, hopefully I'll have a better
picture by then.

Thanks,
Davidlohr

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
