* Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:

> lolz. Catastrophic meltdown. Thanks for doing all that work - at a guess
> I'd say it's mmap_sem. [...]

Looks like we don't need to guess, just look at the call graph profile
(a.k.a. the smoking gun):

> > I perf'ed on 2.6.32.9-70.fc12.x86_64 kernel
> >
> > [...]
> >
> > callgraph (top part only):
> >
> >     53.09%  dve22lts-mc  [kernel]  [k] _spin_lock_irqsave
> >             |
> >             |--49.90%-- __down_read_trylock
> >             |           down_read_trylock
> >             |           do_page_fault
> >             |           page_fault
> >             |           |
> >             |           |--99.99%-- __GI_memcpy
> >             |           |           |
> >             |           |           |--84.28%-- (nil)
> >             |           |           |
> >             |           |           |--9.78%-- 0x100000000
> >             |           |           |
> >             |           |            --5.94%-- 0x1
> >             |            --0.01%-- [...]
> >             |
> >             |--49.39%-- __up_read
> >             |           up_read
> >             |           |
> >             |           |--100.00%-- do_page_fault
> >             |           |            page_fault
> >             |           |            |
> >             |           |            |--99.99%-- __GI_memcpy
> >             |           |            |           |
> >             |           |            |           |--84.18%-- (nil)
> >             |           |            |           |
> >             |           |            |           |--10.13%-- 0x100000000
> >             |           |            |           |
> >             |           |            |            --5.69%-- 0x1
> >             |           |             --0.01%-- [...]

It shows a very brutal amount of page-fault-invoked mmap_sem spinning
overhead.

> Perhaps with some assist from the CPU scheduler.

Doesn't look like it - the perf stat numbers show that the scheduler is
only very lightly involved:

> >  129875.554435  task-clock-msecs  #  10.210 CPUs
> >           1883  context-switches  #   0.000 M/sec

i.e. a context switch only every ~69 milliseconds.

	Ingo

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .