Adrian McMenamin wrote:
> On 14/12/2007, Tejun Heo <htejun@xxxxxxxxx> wrote:
>> Hello,
>>
>> There just isn't much room for maneuver with just one segment. Large
>> contiguous memory regions aren't too common these days. That said, there
>> was a bug recently spotted by Mark Lord which made contiguous memory
>> regions even rarer. Which kernel version are you using?
>
> Bang up to date: the latest git, i.e. -rc5-gitX.
..
Not in -git yet, but it is in -mm.
Attached here for your convenience.
>>> Is that right? What is the best way to go here?
>>
>> If you can spare some memory and CPU cycles, preparing a contiguous
>> buffer and staging data there might help. It will eat up some CPU
>> cycles, but it won't be too much compared to PIO cycles.
>
> OK, I'll try it.
..
That's probably your best bet, even though it will mean copying
to/from your big bounce buffer with each I/O.
The code could be clever, I suppose, and only bounce when the supplied
I/O region is smaller than XXX pages.
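
To make the staging idea concrete, here is a minimal sketch of a bounce-buffer read path. This is illustration only, not the actual gdrom/libata code: do_hw_read() is a made-up stand-in for the driver's real PIO/DMA transfer routine, and the buffer is assumed to be allocated early (e.g. at probe time), while a large contiguous kmalloc() is still likely to succeed.

/*
 * Sketch only: stage every transfer through one physically contiguous
 * buffer so the controller never sees more than a single segment.
 */
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Hypothetical stand-in for the driver's real PIO/DMA transfer routine. */
extern int do_hw_read(void *buf, size_t nbytes);

struct bounce_buf {
	void	*vaddr;		/* physically contiguous staging area */
	size_t	size;		/* bytes available in the staging area */
};

/* Allocate once, sized for the largest I/O the driver will accept. */
static int bounce_alloc(struct bounce_buf *bb, size_t max_io)
{
	bb->vaddr = kmalloc(max_io, GFP_KERNEL);
	if (!bb->vaddr)
		return -ENOMEM;
	bb->size = max_io;
	return 0;
}

/*
 * Read path: the hardware fills the contiguous buffer as one segment,
 * then we pay the extra copy into the caller's (possibly fragmented)
 * destination, which is the cost discussed above.
 */
static int bounce_read(struct bounce_buf *bb, void *dst, size_t nbytes)
{
	int ret;

	if (nbytes > bb->size)
		return -EINVAL;
	ret = do_hw_read(bb->vaddr, nbytes);
	if (ret)
		return ret;
	memcpy(dst, bb->vaddr, nbytes);
	return 0;
}

static void bounce_free(struct bounce_buf *bb)
{
	kfree(bb->vaddr);
	bb->vaddr = NULL;
	bb->size = 0;
}

Writes would go the other way round (copy into the staging buffer, then transfer out), and as noted above the copy could be skipped whenever the caller's buffer already happens to be a single, large-enough segment.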
Cheers
"Improved version", more similar to the 2.6.23 code:
Fix the page allocator to give a better chance of larger contiguous segments (again).
Signed-off-by: Mark Lord <mlord@xxxxxxxxx>
---
--- old/mm/page_alloc.c 2007-12-13 19:25:15.000000000 -0500
+++ linux-2.6/mm/page_alloc.c 2007-12-13 19:43:07.000000000 -0500
@@ -760,7 +760,7 @@
 		struct page *page = __rmqueue(zone, order, migratetype);
 		if (unlikely(page == NULL))
 			break;
-		list_add(&page->lru, list);
+		list_add_tail(&page->lru, list);
 		set_page_private(page, migratetype);
 	}
 	spin_unlock(&zone->lock);
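
For what it's worth, the reason the one-liner helps: rmqueue_bulk() pulls pages off the buddy allocator in roughly ascending physical order, but list_add() puts each page at the head of the per-cpu list, so consumers drain them in reverse and back-to-back allocations rarely end up physically adjacent. list_add_tail() keeps the ascending order, so successive allocations have a much better chance of being contiguous and of merging into one large sg segment. Below is a userspace toy, not kernel code (struct node and its helpers are made up purely for illustration), that shows the ordering difference:

/*
 * Toy illustration of head vs. tail insertion.  "Pages" 0..3 arrive in
 * ascending order; head insertion hands them back reversed, tail
 * insertion preserves the order they arrived in.
 */
#include <stdio.h>

struct node { int pfn; struct node *next; };

/* head insertion, analogous to list_add() */
static struct node *add_head(struct node *list, struct node *n)
{
	n->next = list;
	return n;
}

/* tail insertion, analogous to list_add_tail() */
static struct node *add_tail(struct node *list, struct node *n)
{
	struct node *p = list;

	n->next = NULL;
	if (!list)
		return n;
	while (p->next)
		p = p->next;
	p->next = n;
	return list;
}

static void dump(const char *tag, struct node *p)
{
	printf("%s:", tag);
	for (; p; p = p->next)
		printf(" %d", p->pfn);
	printf("\n");
}

int main(void)
{
	struct node a[4] = { {0}, {1}, {2}, {3} };
	struct node b[4] = { {0}, {1}, {2}, {3} };
	struct node *head = NULL, *tail = NULL;
	int i;

	for (i = 0; i < 4; i++) {
		head = add_head(head, &a[i]);
		tail = add_tail(tail, &b[i]);
	}
	dump("list_add     ", head);	/* 3 2 1 0 */
	dump("list_add_tail", tail);	/* 0 1 2 3 */
	return 0;
}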