On Thu, Dec 31, 2009 at 02:02:38AM +0800, Andi Kleen wrote:
> Wu Fengguang <fengguang.wu@xxxxxxxxx> writes:
> > * the ra fields can be accessed concurrently in a racy way.
> > --- linux.orig/mm/fadvise.c	2009-12-30 13:02:03.000000000 +0800
> > +++ linux/mm/fadvise.c	2009-12-30 13:23:05.000000000 +0800
> > @@ -77,12 +77,14 @@ SYSCALL_DEFINE(fadvise64_64)(int fd, lof
> >  	switch (advice) {
> >  	case POSIX_FADV_NORMAL:
> >  		file->f_ra.ra_pages = bdi->ra_pages;
> > +		file->f_ra.flags &= ~RA_FLAG_RANDOM;
> >  		break;
> >  	case POSIX_FADV_RANDOM:
> > -		file->f_ra.ra_pages = 0;
> > +		file->f_ra.flags |= RA_FLAG_RANDOM;
>
> What prevents this from racing with a parallel readahead
> state modification, losing the bits?

Oh, I pretended that the problem doesn't exist..

To be serious, the race only exists inside a multithreaded application,
where a single fd is shared between two threads: one doing fadvise, the
other doing readahead.

A sane application won't call fadvise(POSIX_FADV_RANDOM) while reads are
actively going on concurrently: that is indeterminate behavior by itself
-- from which read request onward does the random hint take effect?
fadvise() should always be serialized with the reads on the same stream.

In real workloads, maybe 1% of applications use POSIX_FADV_RANDOM, among
which maybe 1% may be broken in this way. And if the race does happen,
the impact is very small. So I chose to just ignore the race and use
non-atomic operations..

Thanks,
Fengguang
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html