On Thu, Sep 22, 2011 at 2:23 PM, Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> On Thu, Sep 22, 2011 at 01:57:06PM -0700, Colin Cross wrote:
>> seq_files are often used for debugging data.  When the system is under
>> memory pressure, and dumping debugging data starts trying to allocate
>> large physically contiguous buffers, it often makes the problem worse.
>
> Please fix the instances that you see issues with by using the full
> seq_file interface, which was designed for this, instead of the
> simplified "single" interface that is only designed for small amounts
> of data.

You're probably right, but it's not always easy.  For files that need to
show an atomic snapshot of some data, the data has to go somewhere.  It
would be possible to allocate a smaller data structure, atomically copy a
snapshot into it, and then use the iterator interface to push the data
out to userspace, but that's a lot harder than a vmalloc and a few
seq_printfs.

If seq_file used a list of buffers, it could allocate much smaller chunks
(a page?) and add new buffers to the list instead of reallocating the
whole buffer and calling the read function again.

--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html