On Thu, 26 Aug 2010 18:36:24 +0900
Minchan Kim <minchan.kim@xxxxxxxxx> wrote:

> On Thu, Aug 26, 2010 at 1:30 PM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > On Thu, 26 Aug 2010 13:06:28 +0900
> > Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
> >
> >> On Thu, Aug 26, 2010 at 12:44 PM, KAMEZAWA Hiroyuki
> >> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >> > On Thu, 26 Aug 2010 11:50:17 +0900
> >> > KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >> >
> >> >> 128MB... too big? But it depends on the config.
> >> >>
> >> >> IBM's ppc guys used 16MB sections, and recently a new interface to shrink
> >> >> the number of /sys files was added, which may be usable.
> >> >>
> >> >> Something good about this approach is that you can create "cma" memory
> >> >> before installing the driver.
> >> >>
> >> >> But yes, it's complicated and needs some work.
> >> >>
> >> > Ah, I need to clarify what I want to say.
> >> >
> >> > Compaction is helpful, but with it you can't get contiguous memory larger
> >> > than MAX_ORDER, I think. To get memory larger than MAX_ORDER on demand,
> >> > the memory hotplug code already has almost everything necessary.
> >>
> >> True. Doesn't Christoph's patch idea help with this?
> >> http://lwn.net/Articles/200699/
> >>
> >
> > Yes, I think so. But, IIRC, the real purpose of Christoph's work is
> > removing zones. Please be careful about what's really necessary.
>
> Ahh. Sorry for missing the point.
> You're right. That patch can't help with our problem.
>
> How about the following change instead?
> The thing is that MAX_ORDER is static, but we want to avoid a too-big
> MAX_ORDER for all zones just to support devices which require a big
> allocation chunk.
> So let's add MAX_ORDER to each zone; then each zone can have a
> different max order.
> For example, while DMA[32], NORMAL and HIGHMEM can keep the normal size 11,
> the MOVABLE zone could have 15.
>
> Does this approach have a big side effect?

Hm... we'd need to check the hard-coded MAX_ORDER usages... I don't think
the side effect is big.

Hmm. But I think enlarging MAX_ORDER isn't the important thing. Code which
strips contiguous chunks of pages from the buddy allocator is the necessary
thing here. What I can think of at first is...
==
int steal_pages(unsigned long start_pfn, unsigned long end_pfn)
{
        /* Be careful about mutual exclusion with memory hotplug, because we reuse its code */

        split [start_pfn, end_pfn) into pageblock_order-sized blocks

        for each pageblock in the range {
                mark this block as MIGRATE_ISOLATE
                try to free pages in the range, or migrate pages in the range somewhere else.
                /*
                 * Here all pages in the range are on the buddy allocator, free,
                 * and can never be allocated by anyone else.
                 */
        }

        /*
         * Please see __rmqueue_fallback(); it selects the migration type first.
         * So, if you can pass a start_migratetype of MIGRATE_ISOLATE, you can
         * automatically strip all MIGRATE_ISOLATE pages from free_area[].
         */
        return the chunk of pages.
}
==

Thanks,
-Kame
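
[Editorial addendum] For concreteness, below is a rough C sketch of the steal_pages() idea above, built on the page-isolation helpers that memory hotplug already uses (start_isolate_page_range(), test_pages_isolated(), undo_isolate_page_range() from mm/page_isolation.c). This is only a sketch: try_to_free_or_migrate_range() is a hypothetical placeholder for per-range reclaim/migration logic like do_migrate_range() in mm/memory_hotplug.c, and the exact signatures and return conventions of the isolation helpers may differ between kernel versions.
==
#include <linux/mm.h>
#include <linux/page-isolation.h>

/*
 * Sketch only: grab a physically contiguous [start_pfn, end_pfn) range
 * from the buddy allocator by isolating its pageblocks, then freeing or
 * migrating whatever still lives there.  The caller must serialize
 * against memory hotplug, since the same machinery is reused.
 */
int steal_pages(unsigned long start_pfn, unsigned long end_pfn)
{
        int ret;

        /*
         * Mark every pageblock in the range MIGRATE_ISOLATE so the buddy
         * allocator stops handing out pages from it.
         */
        ret = start_isolate_page_range(start_pfn, end_pfn);
        if (ret)
                return ret;

        /*
         * Hypothetical helper: free or migrate the pages still in use in
         * the range (cf. do_migrate_range() in mm/memory_hotplug.c).
         */
        ret = try_to_free_or_migrate_range(start_pfn, end_pfn);
        if (ret)
                goto undo;

        /*
         * All pages should now sit free on the MIGRATE_ISOLATE free lists,
         * where nobody else can allocate them.
         */
        ret = test_pages_isolated(start_pfn, end_pfn);
        if (ret)
                goto undo;

        /*
         * At this point the caller owns the range; the isolated pages can
         * be stripped off free_area[] (cf. how __rmqueue_fallback() walks
         * the fallback migratetypes) and handed out as one contiguous chunk.
         */
        return 0;

undo:
        undo_isolate_page_range(start_pfn, end_pfn);
        return ret;
}
==
This isolate / migrate / test / take sequence is essentially the shape that the later alloc_contig_range()/CMA work ended up taking.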