On Fri, Jan 13, 2012 at 04:22:36PM +0000, Al Viro wrote:
> On Fri, Jan 13, 2012 at 09:52:37AM -0600, Dimitri Sivanich wrote:
> > When the number of dentry cache hash table entries gets too high
> > (2147483648 entries), use of a signed integer in the initialization
> > loop prevents the dentry_hashtable from getting initialized, resulting
> > in a panic in __d_lookup.  Fixing this in dcache_init and a few other
> > spots for consistency.
>
> >  static void __init dcache_init(void)
> >  {
> > -	int loop;
> > +	long loop;
>
> You've got to be kidding.  Note that D_HASHMASK is at most 32bit.  Use
> of long here is an overkill and so's 2^31 hash buckets (that's what,
> 16Gb in hash list heads alone?  What kind of average chain length do
> you expect, BTW?)

Yes, long might be overkill right now, but the code is all __init time
code.  I don't have numbers showing average chain length at this point;
I was simply fixing this one end case.

> Can alloc_large_system_hash() produce the horrors that large, anyway?

On a 16TB system, alloc_large_system_hash() produces 2^31 hash buckets,
yes.

Would simply capping the value in alloc_large_system_hash() be more
palatable?  Something like the following?

Index: linux/mm/page_alloc.c
===================================================================
--- linux.orig/mm/page_alloc.c
+++ linux/mm/page_alloc.c
@@ -5257,6 +5257,7 @@ void *__init alloc_large_system_hash(con
 	if (max == 0) {
 		max = ((unsigned long long)nr_all_pages << PAGE_SHIFT) >> 4;
 		do_div(max, bucketsize);
+		max = min(max, 1ULL << 30);
 	}
 
 	if (numentries > max)
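
For anyone who wants to see the failure mode outside the kernel, here is a
minimal userspace sketch (not the kernel code; D_HASH_SHIFT and the main()
harness are just stand-ins) of why a signed loop counter never initializes a
2^31-bucket table:

#include <stdio.h>

/* Stand-in for the shift a 16TB box ends up with. */
#define D_HASH_SHIFT 31

int main(void)
{
	int loop;			/* the old, signed counter */
	unsigned long buckets_touched = 0;

	/*
	 * (1 << 31) overflows a signed int; formally undefined, but in
	 * practice it comes out negative, so the condition is false on
	 * the very first pass and no bucket is ever initialized.
	 */
	for (loop = 0; loop < (1 << D_HASH_SHIFT); loop++)
		buckets_touched++;

	printf("buckets touched: %lu of %lu\n",
	       buckets_touched, 1UL << D_HASH_SHIFT);
	return 0;
}

With the counter declared long (or the bound computed as 1UL << d_hash_shift),
the comparison stays in range and all 2^31 list heads get initialized.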
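
Rough arithmetic behind the 2^31 figure, assuming 4K pages, the dentry hash
scale of 13 (one bucket per 2^13 bytes of memory) and an 8-byte hlist_bl_head:
16TB is 2^44 bytes, and 2^44 / 2^13 = 2^31 buckets, i.e. the 16GB of hash
heads Al mentions.  The computed max (total bytes >> 4, divided by bucketsize)
comes out around 2^37 on such a machine, so it never constrains anything; the
min(max, 1ULL << 30) clamp above would hold the table at 2^30 entries there.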