On Tue, Jun 05, 2012 at 10:57:30PM -0400, Vivek Goyal wrote:
> On Tue, Jun 05, 2012 at 04:10:45PM -0400, Vivek Goyal wrote:
> > On Tue, Jun 05, 2012 at 02:48:53PM -0400, Vivek Goyal wrote:
> > 
> > [..]
> > > So the sync_file_range() test keeps fewer in-flight requests on average,
> > > hence better latencies. It might not produce a throughput drop on SATA
> > > disks but might have some effect on storage array LUNs. Will give it
> > > a try.
> > 
> > Well, I ran the dd and sync_file_range tests on a storage array LUN. Wrote
> > a file of size 4G on ext4. Got about 300MB/s write speed. In fact, when I
> > measured time using "time", the sync_file_range test finished a little
> > faster.
> > 
> > Then I started looking at the blktrace output. The sync_file_range() test
> > initially (for about 8 seconds) drives a shallow queue depth (about 16),
> > but after 8 seconds somehow the flusher gets involved and starts
> > submitting lots of requests, and we start driving a much higher queue
> > depth (up to more than 100). Not sure why the flusher should get involved.
> > Is everything working as expected? I thought that as we wait for the last
> > 8MB of IO to finish before we start a new one, we should have at most
> > 16MB of IO in flight. Fengguang?
> 
> Ok, found it. I am using "int index", which in turn causes sign extension
> of (i*BUFSIZE). Once "i" crosses 255, integer overflow happens, the 64-bit
> offset is sign extended, and the offsets are screwed up. So after 2G of
> file size, sync_file_range() effectively stops working, leaving dirty
> pages which are cleaned up by the flusher. That explains why the flusher
> was kicking in during my tests. Change "int" to "unsigned int" and the
> problem is fixed.

Good catch! Besides that, I do see a small chance for the flusher thread to
kick in: at the time the inode's dirty timestamp expires, after 30s. Just a
kind reminder, since I don't see how it could impact this workload in any
noticeable way.

Thanks,
Fengguang
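For reference, a minimal sketch of the kind of sync_file_range() test loop
being discussed (a reconstruction, not the actual test program from the
thread; the 8MB chunk size matches the "at most 16MB in flight" description,
but the file name, error handling, and chunk count are placeholders). The
offset is computed in a 64-bit off_t, so i*BUFSIZE cannot overflow and go
negative past 2G the way the "int index" version did; making the index
unsigned, as in the mail above, also happens to be enough for a 4G file.

#define _GNU_SOURCE            /* for sync_file_range() and its flags */
#define _FILE_OFFSET_BITS 64   /* 64-bit off_t even on 32-bit builds */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUFSIZE   (8 * 1024 * 1024)           /* 8MB chunks */
#define FILESIZE  (4ULL * 1024 * 1024 * 1024) /* 4G file */

int main(void)
{
        int fd, i, chunks = FILESIZE / BUFSIZE;
        char *buf;

        buf = malloc(BUFSIZE);
        if (!buf)
                return 1;
        memset(buf, 'a', BUFSIZE);

        fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
                return 1;

        for (i = 0; i < chunks; i++) {
                /* 64-bit arithmetic: no overflow/sign extension past 2G */
                off_t off = (off_t)i * BUFSIZE;

                if (write(fd, buf, BUFSIZE) != BUFSIZE)
                        return 1;

                /* kick off writeback of the chunk just written */
                sync_file_range(fd, off, BUFSIZE, SYNC_FILE_RANGE_WRITE);

                /* wait for the previous chunk, keeping ~16MB in flight */
                if (i)
                        sync_file_range(fd, off - BUFSIZE, BUFSIZE,
                                        SYNC_FILE_RANGE_WAIT_BEFORE |
                                        SYNC_FILE_RANGE_WRITE |
                                        SYNC_FILE_RANGE_WAIT_AFTER);
        }

        close(fd);
        free(buf);
        return 0;
}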