On Tue, Jul 16, 2013 at 11:17:23AM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim <iamjoonsoo.kim@xxxxxxx> writes:
>
> > On Mon, Jul 15, 2013 at 08:41:12PM +0530, Aneesh Kumar K.V wrote:
> >> Joonsoo Kim <iamjoonsoo.kim@xxxxxxx> writes:
> >>
> >> > If we map a region with MAP_NORESERVE and MAP_SHARED, we skip the
> >> > reservation accounting, so a huge page is not guaranteed to be
> >> > available at fault time. The following example code easily
> >> > reproduces this situation.
> >> >
> >> > Assume a 2MB huge page size and nr_hugepages = 100:
> >> >
> >> > fd = hugetlbfs_unlinked_fd();
> >> > if (fd < 0)
> >> >         return 1;
> >> >
> >> > size = 200 * MB;
> >> > flag = MAP_SHARED;
> >> > p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, fd, 0);
> >> > if (p == MAP_FAILED) {
> >> >         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> >> >         return -1;
> >> > }
> >> >
> >> > size = 2 * MB;
> >> > flag = MAP_ANONYMOUS | MAP_SHARED | MAP_HUGETLB | MAP_NORESERVE;
> >> > p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, -1, 0);
> >> > if (p == MAP_FAILED) {
> >> >         fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
> >> > }
> >> > p[0] = '0';
> >> > sleep(10);
> >> >
> >> > While sleep(10) is running, run 'cat /proc/meminfo' from another
> >> > process. You will see the problem mentioned above.
> >> >
> >> > The solution is simple: check VM_NORESERVE in vma_has_reserves().
> >> > This prevents a pre-allocated huge page from being used when the
> >> > free count is below the reserve count.
> >>
> >> You have a problem with this patch, which I guess you are fixing in
> >> patch 9. Consider two processes:
> >>
> >> a) MAP_SHARED on fd
> >> b) MAP_SHARED | MAP_NORESERVE on fd
> >>
> >> We should allow (b) to access the page even if VM_NORESERVE is set
> >> and we are out of reserve space.
> >
> > I don't get your point.
> > Please elaborate on this.
>
> One process mmaps with MAP_SHARED and another one with
> MAP_SHARED | MAP_NORESERVE. The first process will reserve the pages
> from the hugetlb pool. Now, if the second process tries to dequeue a
> huge page and we don't have free space, we will fail, because
> vma_has_reserves() now returns zero when VM_NORESERVE is set and we
> can have (h->free_huge_pages - h->resv_huge_pages) == 0.

I think that this behavior is correct, because a user who maps with
VM_NORESERVE should not assume that the allocation will always succeed.
With patch 9 it is guaranteed to succeed, but I consider that a side
effect.

> The below hunk in your patch 9 handles that:
>
> +	if (vma->vm_flags & VM_NORESERVE) {
> +		/*
> +		 * This address is already reserved by another process
> +		 * (chg == 0), so we should decrement the reserve count.
> +		 * Without decrementing, the reserve count remains after
> +		 * releasing the inode, because this allocated page will go
> +		 * into the page cache and is regarded as coming from the
> +		 * reserved pool in the releasing step. Currently, we don't
> +		 * have any other solution to deal with this situation
> +		 * properly, so add a work-around here.
> +		 */
> +		if (vma->vm_flags & VM_MAYSHARE && chg == 0)
> +			return 1;
> +		else
> +			return 0;
> +	}
>
> so maybe both of these should be folded?

I think that these patches should not be folded, because they handle
two separate issues. The reserve count mismatch issue mentioned in
patch 9 is not introduced by patch 7.
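To make the two-process case concrete, here is a minimal userspace
sketch (my illustration, not part of the series). It assumes hugetlbfs
is mounted at /mnt/huge, a 2MB huge page size, and nr_hugepages = 1,
so that process A's reservation consumes the whole pool:

#define _GNU_SOURCE		/* for MAP_NORESERVE */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define MB	(1024UL * 1024UL)
#define SIZE	(2 * MB)	/* one 2MB huge page */

int main(void)
{
	/* "/mnt/huge/demo" is an assumed hugetlbfs path. */
	int fd = open("/mnt/huge/demo", O_CREAT | O_RDWR, 0600);
	char *p;
	pid_t pid;

	if (fd < 0) {
		fprintf(stderr, "open() failed: %s\n", strerror(errno));
		return 1;
	}
	unlink("/mnt/huge/demo");

	pid = fork();
	if (pid == 0) {
		/* Process A: plain MAP_SHARED reserves the huge page. */
		p = mmap(NULL, SIZE, PROT_READ|PROT_WRITE,
			 MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			fprintf(stderr, "A: mmap() failed: %s\n",
				strerror(errno));
		sleep(5);	/* hold the reservation */
		_exit(0);
	}

	sleep(1);		/* let A establish the reservation */

	/* Process B: same file, but with MAP_NORESERVE. */
	p = mmap(NULL, SIZE, PROT_READ|PROT_WRITE,
		 MAP_SHARED|MAP_NORESERVE, fd, 0);
	if (p == MAP_FAILED) {
		fprintf(stderr, "B: mmap() failed: %s\n", strerror(errno));
		return 1;
	}

	/*
	 * Fault the page in. With patch 7 alone, vma_has_reserves()
	 * returns 0 here and free_huge_pages - resv_huge_pages == 0,
	 * so this write may SIGBUS even though A's reservation covers
	 * this file offset. The chg == 0 case in the patch 9 hunk is
	 * what lets B consume that reservation instead.
	 */
	p[0] = '0';
	printf("B: fault succeeded\n");

	waitpid(pid, NULL, 0);
	return 0;
}

This is exactly the case where the patch 9 hunk changes the outcome,
which is why I see the two patches as addressing separate issues.

Thanks.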