On 2 May 2012 01:38, Jeff Moyer <jmoyer@xxxxxxxxxx> wrote:
> KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxx> writes:
>
>> On Tue, May 1, 2012 at 11:11 AM, Jeff Moyer <jmoyer@xxxxxxxxxx> wrote:
>>> KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxx> writes:
>>>
>>>>> Hello,
>>>>>
>>>>> Thank you for revisiting this. But as far as I remember, this issue is
>>>>> NOT an unaligned-access issue. It's just a get_user_pages(_fast) vs.
>>>>> fork race, i.e. a multi-threaded process doing DIRECT_IO should not
>>>>> use fork().
>>>>
>>>> The problem is that fork (and its COW logic) assumes a new access breaks
>>>> COW, but page table protection can't detect a DMA write. Therefore DIO
>>>> may overwrite shared page data.
>>>
>>> Hm, I've only seen this with misaligned or multiple sub-page-sized reads
>>> in the same page. AFAIR, aligned, page-sized I/O does not get split.
>>> But, I could be wrong...
>>
>> If I remember correctly, the reproducer from the earlier thread is
>> misleading.
>>
>> dma_thread.c in
>> http://lkml.indiana.edu/hypermail/linux/kernel/0903.1/01498.html has an
>> align parameter. But it doesn't only change the alignment, because every
>> worker thread reads 4K (the page size), so:
>>  - when the offset is page aligned
>>    -> every page is accessed by only one worker
>>  - when the offset is not page aligned
>>    -> every page is accessed by two workers
>>
>> But I don't remember why two threads are important. Hmm.. I'm still
>> looking into the code. Please don't trust me 100%.
>
> I bet Andrea or Larry would remember the details.

KOSAKI-san is correct, I think. The race is something like this:

  DIO-read
    page = get_user_pages()
    fork()
    COW(page)
    touch(page)
    DMA(page)
    page_cache_release(page);

So whether the parent or the child touches the page determines who gets the
actual DMA target and who gets the copy. Two threads are not required, but
they make the race easier to code and the window larger, I suspect. It can
also be hit with a single thread, using AIO.
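
To make the ordering concrete, here is a minimal userspace sketch of the
scenario. It is NOT the original dma_thread.c reproducer: the file name
("testfile"), sizes, offsets, and the assumption of a 512-byte logical block
size (so that a 512-byte O_DIRECT read at a 512-byte-aligned buffer address is
accepted) are all illustrative, and the race is timing dependent, so a real
reproducer loops many times with several worker threads.

/*
 * Sketch of the DIO-vs-fork COW ordering described above.
 * Build: gcc -O2 -pthread -o dio-fork-race dio-fork-race.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define PAGE_SIZE   4096
#define READ_SIZE   512          /* sub-page read: shares its page with buf[0] */
#define READ_OFFSET 1024         /* 512-aligned offset inside that page */

static char *buf;                /* page-aligned buffer, COW-shared after fork() */
static int fd;

/* Reader thread: the kernel pins the buffer page with get_user_pages()
 * and the device DMAs the file data into it. */
static void *dio_reader(void *arg)
{
	(void)arg;
	if (pread(fd, buf + READ_OFFSET, READ_SIZE, 0) != READ_SIZE)
		perror("pread");
	return NULL;
}

int main(void)
{
	pthread_t tid;
	pid_t pid;

	fd = open("testfile", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open O_DIRECT");
		return 1;
	}
	if (posix_memalign((void **)&buf, PAGE_SIZE, PAGE_SIZE)) {
		fprintf(stderr, "posix_memalign failed\n");
		return 1;
	}
	memset(buf, 0xAA, PAGE_SIZE);

	pthread_create(&tid, NULL, dio_reader, NULL);

	/* fork() while the DMA may still be in flight: the pinned page is
	 * now mapped COW in both parent and child. */
	pid = fork();
	if (pid == 0) {
		usleep(100 * 1000);  /* keep the page COW-shared during the window */
		_exit(0);
	}

	/* Parent touches another part of the same page.  If this write
	 * faults while the page is still COW-shared, the parent gets a
	 * private copy and the in-flight DMA lands in the original page,
	 * so the read data never shows up in the parent's buffer. */
	buf[0] = 0x55;

	pthread_join(tid, NULL);
	waitpid(pid, NULL, 0);

	/* Assuming the file's first 512 bytes are not themselves 0xAA: */
	if (buf[READ_OFFSET] == (char)0xAA)
		printf("read data lost: the race window was hit\n");
	else
		printf("read data arrived in the parent's buffer\n");

	close(fd);
	return 0;
}

In dma_thread.c, the second worker's read into the same page presumably plays
the role of the parent-side touch here (its get_user_pages() breaks COW in the
parent while the first worker's DMA is still in flight), which would be why
misaligned, sub-page reads sharing a page widen the window so much.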