On Thu, 14 Jun 2012, Dave Chinner wrote:

> Oh, please. I have been hearing this for years, and are we any
> closer to it? No, we are further away from ever being able to
> achieve this than ever. Face it, filesystems require memory
> allocation to write dirty data to disk, and the amount is almost
> impossible to define. Hence mempools can't be used because we can't
> give any guarantees of forward progress. And for vmalloc?
>
> Filesystems widely use vmalloc/vm_map_ram because kmalloc fails on
> large contiguous allocations. This renders kmalloc unfit for
> purpose, so we have to fall back to single page allocation and
> vm_map_ram or vmalloc so that the filesystem can function properly.
> And to avoid deadlocks, all memory allocation must be able to
> specify GFP_NOFS to prevent the MM subsystem from recursing into the
> filesystem. Therefore, vmalloc needs to support GFP_NOFS.
>
> I don't care how you make it happen, just fix it. Trying to place
> the blame on the filesystem folk for using vmalloc in GFP_NOFS
> contexts is a total and utter cop-out, because mm folk of all people
> should know that non-zero order kmalloc is not a reliable
> alternative....

I'd actually like to see a demonstrated problem (i.e. not a theoretical
one) where vmalloc() stalls indefinitely because it's passed GFP_NOFS;
I've never seen one reported. The theoretical concern is that the
per-arch pte allocators have GFP_KERNEL hardwired, but then again they
also set __GFP_REPEAT, which would cause them to loop indefinitely in
the page allocator if a page could not be reclaimed, and reclaim has
little chance of success without __GFP_FS. But nobody has ever reported
a livelock that was triaged back to passing !__GFP_FS to vmalloc().
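
For concreteness, the per-arch pte allocation in question looks roughly
like the x86 version below (paraphrased from memory of
arch/x86/mm/pgtable.c of that era, so treat the exact flag set as
approximate):

/*
 * Paraphrased sketch of the x86 pte allocator: the gfp mask is fixed
 * at compile time, so a GFP_NOFS caller of vmalloc() cannot influence
 * it, and __GFP_REPEAT keeps the page allocator retrying until a page
 * is freed or reclaimed.
 */
#define PGALLOC_GFP	(GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)

pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address)
{
	return (pte_t *)__get_free_page(PGALLOC_GFP);
}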
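
And the filesystem-side fallback Dave describes is, in sketch form,
something like this (the helper is made up for illustration, not lifted
from any particular filesystem):

#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Illustrative only: a large allocation made from a GFP_NOFS context. */
static void *fs_alloc_large(size_t size)
{
	void *p;

	/* High-order kmalloc is unreliable; let it fail quietly. */
	p = kmalloc(size, GFP_NOFS | __GFP_NOWARN);
	if (p)
		return p;

	/*
	 * Fall back to single pages mapped virtually contiguous.  The
	 * caller asks for GFP_NOFS here, but the page table pages that
	 * __vmalloc() populates internally are still allocated with the
	 * hardwired GFP_KERNEL mask shown above.
	 */
	return __vmalloc(size, GFP_NOFS | __GFP_NOWARN, PAGE_KERNEL);
}

That mismatch between the caller's GFP_NOFS and the hardwired
GFP_KERNEL is what the theoretical deadlock argument rests on.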