On 07/15/2013 03:24 AM, David Gibson wrote:
On Sun, Jul 14, 2013 at 08:16:44PM -0700, Davidlohr Bueso wrote:
Reading the existing comment, this change looks very suspicious to me.
A per-vma mutex is just not going to provide the necessary exclusion, is
it? (But I recall next to nothing about these regions and
reservations.)
A per-VMA lock is definitely wrong. I think it handles one form of
the race, between threads sharing a VM on a MAP_PRIVATE mapping.
However, another form of the race can and does occur between different
MAP_SHARED VMAs in the same or different processes. I think there may
be edge cases involving mremap() and MAP_PRIVATE that will also be
missed by a per-VMA lock.
Note that the libhugetlbfs testsuite contains tests for both PRIVATE
and SHARED variants of the race.
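For reference, here is a minimal user-space sketch of the SHARED
variant, roughly the access pattern the libhugetlbfs tests exercise
(it assumes a hugetlbfs mount at /mnt/huge and 2MB huge pages, and it
only recreates the concurrent-fault pattern; it is not one of the
actual tests):

/*
 * Two processes fault pages in the same hugetlbfs file at the same
 * time. Each has its own VMA, so a per-VMA mutex gives them nothing
 * to contend on.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumed huge page size */
#define NR_HPAGES	4

static void touch_pages(void *addr)
{
	for (unsigned long off = 0; off < NR_HPAGES * HPAGE_SIZE; off += HPAGE_SIZE)
		((volatile char *)addr)[off] = 1;	/* trigger hugetlb_fault() */
}

int main(void)
{
	int fd = open("/mnt/huge/racefile", O_CREAT | O_RDWR, 0600);

	if (fd < 0) { perror("open"); return 1; }
	if (ftruncate(fd, NR_HPAGES * HPAGE_SIZE) < 0) { perror("ftruncate"); return 1; }

	for (int i = 0; i < 2; i++) {
		if (fork() == 0) {
			/* Each child maps the same file with its own VMA. */
			void *p = mmap(NULL, NR_HPAGES * HPAGE_SIZE,
				       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
			if (p == MAP_FAILED) { perror("mmap"); _exit(1); }
			touch_pages(p);
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;
	unlink("/mnt/huge/racefile");
	return 0;
}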
Can we get away with simply using a mutex in the file?
Say vma->vm_file->mapping->i_mmap_mutex?
That might help with multiple processes initializing
multiple shared memory segments at the same time, and
should not hurt the case of a process mapping its own
hugetlbfs area.
It might hurt the case of making private copies of pages
in a MAP_PRIVATE area, though. I have no idea how common
it is for multiple processes to MAP_PRIVATE the same
hugetlbfs file...
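To illustrate the shape of that, a rough sketch against the 3.10-era
hugetlb_fault() entry point (not a posted patch; __do_hugetlb_fault()
is a hypothetical helper standing in for the existing fault body):

int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
		  unsigned long address, unsigned int flags)
{
	struct address_space *mapping = vma->vm_file->f_mapping;
	int ret;

	/*
	 * Serialize instantiation on the file's i_mmap_mutex instead of
	 * a per-VMA mutex (or the old global
	 * hugetlb_instantiation_mutex).
	 */
	mutex_lock(&mapping->i_mmap_mutex);
	ret = __do_hugetlb_fault(mm, vma, address, flags);	/* hypothetical */
	mutex_unlock(&mapping->i_mmap_mutex);

	return ret;
}

Since the lock keys off the address_space, two VMAs of the same
hugetlbfs file contend on the same mutex even across processes,
which is exactly the exclusion a per-VMA lock cannot give.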
--
All rights reversed