On Thu, Aug 15, 2013 at 11:46:07AM +0800, Xishi Qiu wrote:
> On 2013/8/15 10:44, Minchan Kim wrote:
>
>> Hi Xishi,
>>
>> On Thu, Aug 15, 2013 at 10:32:50AM +0800, Xishi Qiu wrote:
>>> On 2013/8/15 2:00, Mel Gorman wrote:
>>>
>>>>>> Even if the page is still page buddy, there is no guarantee that it's
>>>>>> the same page order as the first read. It could be currently merging
>>>>>> with adjacent buddies, for example. There is also a really small race
>>>>>> that a page was freed, allocated with some number stuffed into
>>>>>> page->private and freed again before the second PageBuddy check.
>>>>>> It's a bit of a hand grenade. How much of a performance benefit is there
>>>>>
>>>>> 1. The worst case is just skipping pageblock_nr_pages.
>>>>
>>>> No, the worst case is that page_order returns a number that is
>>>> completely garbage and low_pfn goes off the end of the zone.
>>>>
>>>>> 2. The race is really small.
>>>>> 3. Higher-order page allocation customers always have a graceful fallback.
>>>>>
>>>
>>> Hi Minchan,
>>> I think in this case we may get a wrong value from page_order(page):
>>>
>>> 1. the page is in the page buddy
>>>
>>>> 	if (PageBuddy(page)) {
>>>
>>> 2. someone allocates the page and sets page->private to another value
>>>
>>>> 		int nr_pages = (1 << page_order(page)) - 1;
>>>
>>> 3. someone frees the page
>>>
>>>> 		if (PageBuddy(page)) {
>>>
>>> 4. we will skip the wrong number of pages
>>
>> So, what's the result of that?
>> As I said, it's just skipping (pageblock_nr_pages - 1) at worst.
>
> Hi Minchan,
> I mean that if private is set to a large number, it will skip 2^private
> pages, not (pageblock_nr_pages - 1). Some places do use page->private,
> such as fs code. Here is the comment about private; note the PG_buddy case:
>
> /* Mapping-private opaque data:
>  * usually used for buffer_heads
>  * if PagePrivate set; used for
>  * swp_entry_t if PageSwapCache;
>  * indicates order in the buddy
>  * system if PG_buddy is set.
>  */
>
> Thanks,
> Xishi Qiu
>
>> and the case you mentioned is right academically, and Mel and I already
>> pointed that out. But how often could that happen in real practice?
>> I believe such a case is REALLY REALLY rare.
>> So, as Mel said, if you have some workloads that see a benefit from this
>> patch, I think we could accept it.
>> Could you try it and respin with the numbers?
>> I guess a big contiguous memory range, or memory-hotplug ranges which are
>> full of free pages, on an embedded CPU that is rather slower than the
>> server or desktop side, could see a benefit.
>>
>> Thanks.
>>
>>>
>>>> 			nr_pages = min(nr_pages, MAX_ORDER_NR_PAGES - 1);
>>>> 			low_pfn += nr_pages;
>>>> 			continue;
>>>> 		}
>>>> 	}
>>>>
>>>> It's still race-prone, meaning that it really should be backed by some
>>>> performance data justifying it.
>>>
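
To make the worst case concrete, here is a minimal userspace sketch (not the
kernel code) of the lockless skip discussed above. racy_read_order() is a
hypothetical stand-in for page_order(): because page->private can be reused
by an allocation between the two PageBuddy checks, the value read may be
garbage, and the clamp to MAX_ORDER_NR_PAGES - 1 is what bounds the damage:

#include <stdio.h>

#define MAX_ORDER		11
#define MAX_ORDER_NR_PAGES	(1UL << (MAX_ORDER - 1))	/* 1024 pages */

static unsigned long racy_read_order(unsigned long raw_private)
{
	/*
	 * Hypothetical stand-in for page_order(): it simply returns
	 * page->private, which may be garbage if the page was reallocated
	 * between the two PageBuddy checks.
	 */
	return raw_private;
}

static unsigned long pages_to_skip(unsigned long raw_private)
{
	unsigned long order = racy_read_order(raw_private);
	unsigned long nr_pages;

	/*
	 * Unclamped, (1UL << order) - 1 can be enormous (or undefined when
	 * order >= the width of unsigned long), so low_pfn could run far
	 * past the end of the zone. The clamp mirrors the
	 * min(nr_pages, MAX_ORDER_NR_PAGES - 1) in the snippet quoted above
	 * and bounds the worst case.
	 */
	nr_pages = (1UL << order) - 1;
	if (nr_pages > MAX_ORDER_NR_PAGES - 1)
		nr_pages = MAX_ORDER_NR_PAGES - 1;
	return nr_pages;
}

int main(void)
{
	printf("sane order 3     -> skip %lu pages\n", pages_to_skip(3));
	printf("garbage order 57 -> skip %lu pages\n", pages_to_skip(57));
	return 0;
}

With the clamp, a garbage "order" of 57 costs at most MAX_ORDER_NR_PAGES - 1
(1023) skipped pages; without it, the computed skip would be astronomically
large and low_pfn could jump past the end of the zone, which is the scenario
Mel is pointing at.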