On Wed, Jul 12, 2017 at 09:29:17PM -0700, Andi Kleen wrote:
> On Wed, Jul 12, 2017 at 05:47:59PM -0500, Josh Poimboeuf wrote:
> > On Wed, Jul 12, 2017 at 03:30:31PM -0700, Andi Kleen wrote:
> > > Josh Poimboeuf <jpoimboe@xxxxxxxxxx> writes:
> > > >
> > > > The ORC data format does have a few downsides compared to DWARF.  The
> > > > ORC unwind tables take up ~1MB more memory than DWARF eh_frame tables.
> > >
> > > Can we have an option to just use dwarf instead? For people
> > > who don't want to waste a MB+ to solve a problem that doesn't
> > > exist (as proven by many years of opensuse kernel experience)
> > >
> > > As far as I can tell this whole thing has only downsides compared
> > > to the dwarf unwinder that was earlier proposed. I don't see
> > > a single advantage.
> >
> > Improved speed, reliability, maintainability.  Are those not advantages?
>
> Ok. We'll see how it works out.
>
> The memory overhead is quite bad though. You're basically undoing many
> years of efforts to shrink kernel text. I hope this can be still
> done better.

If we're talking *text*, this further shrinks text size by 3%, because
frame pointers can be disabled.

As far as the data size goes, is anyone *truly* impacted by that extra
1MB or so?  If you're enabling a DWARF/ORC unwinder, you're already
signing up for a few extra megs anyway.

I do have a vague idea about how to reduce the data size, if/when the
size becomes a problem.  Basically there's a *lot* of duplication in
the ORC data:

$ tools/objtool/objtool orc dump vmlinux | wc -l
311095

$ tools/objtool/objtool orc dump vmlinux | cut -d' ' -f2- | sort | uniq | wc -l
345

So that's over 300,000 6-byte entries, only 345 of which are unique.
There should be a way to compress that.  However, it will probably
require sacrificing some combination of speed and simplicity.
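
To make that concrete, here is a minimal user-space sketch of one
possible scheme: keep a single small table of the unique entries and
replace each per-address 6-byte entry with a 2-byte index into it.
The struct layout and all the names here (orc_entry, orc_table,
orc_index, orc_lookup) are hypothetical stand-ins for illustration,
not the actual ORC format:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical 6-byte unwind entry, a stand-in for a real ORC entry. */
struct orc_entry {
	int16_t  sp_offset;	/* offset to the previous stack pointer */
	int16_t  bp_offset;	/* offset to the saved frame pointer */
	uint16_t flags;		/* register/type bits */
};

#define NR_IPS		311095	/* entries in the vmlinux dump above */
#define NR_UNIQUE	345	/* unique entries in the same dump */

/* Compressed layout: one small table holding only the unique entries... */
static struct orc_entry orc_table[NR_UNIQUE];

/* ...plus a 2-byte index per address range instead of a 6-byte entry. */
static uint16_t orc_index[NR_IPS];

/*
 * A lookup now costs one extra dereference compared to indexing a flat
 * array of entries directly -- the speed/simplicity trade-off above.
 */
static struct orc_entry *orc_lookup(unsigned int ip_slot)
{
	return &orc_table[orc_index[ip_slot]];
}

int main(void)
{
	unsigned long flat   = NR_IPS * sizeof(struct orc_entry);
	unsigned long packed = sizeof(orc_table) + sizeof(orc_index);

	/* Exercise the lookup path once so the example is self-contained. */
	struct orc_entry *e = orc_lookup(0);

	printf("slot 0 sp_offset: %d\n", e->sp_offset);
	printf("flat:   %lu bytes\n", flat);	/* ~1.8MB */
	printf("packed: %lu bytes\n", packed);	/* ~0.6MB */
	return 0;
}

With the numbers from the dump above, that would shrink the data from
roughly 1.8MB of flat entries to about 0.6MB of table plus indices, at
the cost of one extra dereference per unwind step and a more
complicated table layout for objtool to generate and sort.

--
Josh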