On Tue, 17 Jan 2012 15:58:52 -0800 Kees Cook <keescook@xxxxxxxxxxxx> wrote:

> Instead of using /dev/mem directly and forcing userspace to know (or
> extract) where the platform has defined persistent memory, how many
> slots it has, the sizes, etc, use the common pstore infrastructure to
> handle Oops gathering and extraction.  This presents a much easier to
> use filesystem-based view to the memory region.  This also means that
> any other tools that are written to understand pstore will
> automatically be able to process ramoops too.
>
> ...
>
> --- a/drivers/char/Kconfig
> +++ b/drivers/char/Kconfig
> @@ -603,6 +603,7 @@ source "drivers/s390/char/Kconfig"
>  config RAMOOPS
>  	tristate "Log panic/oops to a RAM buffer"
>  	depends on HAS_IOMEM
> +	depends on PSTORE
>  	default n
>  	help
>  	  This enables panic and oops messages to be logged to a circular

I think this is the right thing to do, but boy I hate it.  You sit
there wondering "wtf do I need to enable so I can use feature X".  I
sometimes use menuconfig's search feature, other times go trawl the
Kconfig files.

There should be a capability in Kconfig to just turn on the terminal
feature, enabling any precondition features on the way.  Everyone
loves Kconfig.

>
> ...
>
>  static int __exit ramoops_remove(struct platform_device *pdev)
>  {
> +#if 0
> +	/* TODO(kees): We cannot unload ramoops since pstore doesn't support
> +	 * unregistering yet.
> +	 */

Well that sucks.  Is pstore getting fixed?

>  	struct ramoops_context *cxt = &oops_cxt;
>
> -	if (kmsg_dump_unregister(&cxt->dump) < 0)
> -		pr_warn("could not unregister kmsg_dumper\n");
> -
>  	iounmap(cxt->virt_addr);
>  	release_mem_region(cxt->phys_addr, cxt->size);
> +	cxt->max_count = 0;
> +
> +	/* TODO(kees): When pstore supports unregistering, call it here. */
> +	kfree(cxt->pstore.buf);
> +	cxt->pstore.bufsize = 0;
> +
>  	return 0;
> +#endif
> +	return -EBUSY;
>  }
>
>  static struct platform_driver ramoops_driver = {
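
For what it's worth, Kconfig's "select" statement gets partway there:
it forces the selected symbol on whenever the selecting one is
enabled, with no prompting.  A sketch of what the RAMOOPS entry could
look like if it selected PSTORE instead of depending on it (just a
sketch; note that "select" ignores the selected symbol's own
dependencies, which is why it is normally reserved for symbols that
have none):

	config RAMOOPS
		tristate "Log panic/oops to a RAM buffer"
		depends on HAS_IOMEM
		select PSTORE
		default n

Failing that, the "/" search in menuconfig at least prints a symbol's
"Depends on:" line, so you can see what needs enabling by hand.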
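
And on the unload question: if and when pstore grows an unregister
hook, the remove path could presumably lose the #if 0 and become
something like the below.  This is only a sketch; pstore_unregister()
is hypothetical here, assumed to mirror the registration done with
&cxt->pstore at probe time:

	/* Sketch: assumes a hypothetical pstore_unregister() exists. */
	static int __exit ramoops_remove(struct platform_device *pdev)
	{
		struct ramoops_context *cxt = &oops_cxt;

		/* Tear down the pstore backend before freeing its buffer. */
		pstore_unregister(&cxt->pstore);	/* hypothetical */

		iounmap(cxt->virt_addr);
		release_mem_region(cxt->phys_addr, cxt->size);
		cxt->max_count = 0;

		kfree(cxt->pstore.buf);
		cxt->pstore.bufsize = 0;

		return 0;
	}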