On Tuesday, 22 April 2008, Jiri Slaby wrote:
> On 04/22/2008 01:02 AM, Jiri Slaby wrote:
> > On 04/22/2008 12:54 AM, Paul E. McKenney wrote:
> >> On Tue, Apr 22, 2008 at 12:26:04AM +0200, Jiri Slaby wrote:
> >>>> Having slub_debug enabled, tomorrow will be results, I guess...
>
> OK, methinks it's tomorrow now, at least here.
>
> >>> Sorry, one more entry:
> >>>
> >>> 00000000000000f0  dentry.d_op         (Zdenek, offset ? around 136)
> >
> > Zdenek's is at offset 184.
> >
> >>> 00f0000000000000  dentry.d_hash.next  (me, offset 24)
> >>> ffff81f02003f16c  dentry.d_name.name  (me, offset 56)
> >>>                   memory ORed by 000000f000000000
> >>> fffff0002004c1b0  file.f_mapping      (me, offset 176)
> >>>                   memory hole, it was something like
> >>>                   (ffff81002004c1b0 & ~00000f0000000000) | 0000f00000000000?
> >>> ffffffffffffffff  dentry.d_hash.next  (Rafael, offset ? around 24)
> >>>                   -1, ~0ULL
>
> The same place, dentry.d_hash.next is -1. No slub_debug clues... I think I'll
> give SLAB a try. Any other clues?

Well, SLUB uses some per-CPU data structures. Is it possible that they get
corrupted, and that this leads to the observed symptoms?

--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
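[Editor's note] The corrupted values listed above share one pattern: each looks like a valid (or NULL) pointer with a single nibble forced to 0xf. A minimal sketch checking that arithmetic follows; the "clean" values 0xffff81002003f16c and 0xffff81002004c1b0 are taken from the formulas in the report itself, not from an actual memory dump:

```c
/* Sanity-check the nibble-OR corruption pattern described in the thread.
 * Values are the ones quoted in the report; this only verifies the bit
 * arithmetic, it does not reproduce the corruption. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* dentry.d_name.name: ffff81002003f16c "memory ORed by"
	 * 000000f000000000 gives the observed ffff81f02003f16c. */
	uint64_t name = 0xffff81002003f16cULL | 0x000000f000000000ULL;
	assert(name == 0xffff81f02003f16cULL);

	/* file.f_mapping: the report's guess
	 * (ffff81002004c1b0 & ~00000f0000000000) | 0000f00000000000
	 * yields the observed fffff0002004c1b0. */
	uint64_t map = (0xffff81002004c1b0ULL & ~0x00000f0000000000ULL)
		       | 0x0000f00000000000ULL;
	assert(map == 0xfffff0002004c1b0ULL);

	/* The 00000000000000f0 and 00f0000000000000 entries fit the same
	 * pattern applied to a NULL pointer (one 0xf nibble ORed in). */
	printf("%016llx %016llx\n",
	       (unsigned long long)name, (unsigned long long)map);
	return 0;
}
```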
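[Editor's note] For readers wanting to repeat the slub_debug experiment mentioned above: SLUB debugging is normally enabled via the `slub_debug` kernel boot parameter (see Documentation/vm/slub.txt). The flag combination below is one common choice, not necessarily what the reporter used:

```shell
# Boot-time kernel command line fragment (not a command to run):
# F = sanity checks, Z = red zoning, P = poisoning, U = user tracking.
#   slub_debug=FZPU
# Or restrict it to one cache, e.g. the dentry cache implicated above:
#   slub_debug=FZPU,dentry
```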