From: Mikulas Patocka <mpatocka@xxxxxxxxxx>

Removing the REQ_SYNC flag roughly doubles write throughput when writing
to the origin with a snapshot on the same device (using the CFQ I/O
scheduler).

Sequential write throughput (chunksize of 4k, 32k, 512k):
  unpatched:  8.5,  8.6,  9.3 MB/s
  patched:   15.2, 18.5, 17.5 MB/s

Snapshot exception reallocations are triggered by writes that are
usually async, so mark the associated dm_io_request as async as well.
This helps when using the CFQ I/O scheduler, because it maintains
separate queues for sync and async I/O: the async queue is optimized
for throughput, the sync queue for latency.  With this change we are
consciously favoring throughput over latency.

Signed-off-by: Mikulas Patocka <mpatocka@xxxxxxxxxx>
Signed-off-by: Mike Snitzer <snitzer@xxxxxxxxxx>
---
 drivers/md/dm-kcopyd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

v2: updated patch header

Index: linux-2.6.36-rc7-fast/drivers/md/dm-kcopyd.c
===================================================================
--- linux-2.6.36-rc7-fast.orig/drivers/md/dm-kcopyd.c	2010-10-07 16:09:13.000000000 +0200
+++ linux-2.6.36-rc7-fast/drivers/md/dm-kcopyd.c	2010-10-15 03:18:52.000000000 +0200
@@ -345,7 +345,7 @@ static int run_io_job(struct kcopyd_job
 {
 	int r;
 	struct dm_io_request io_req = {
-		.bi_rw = job->rw | REQ_SYNC | REQ_UNPLUG,
+		.bi_rw = job->rw | REQ_UNPLUG,
 		.mem.type = DM_IO_PAGE_LIST,
 		.mem.ptr.pl = job->pages,
 		.mem.offset = job->offset,

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel