Re: [PATCH 6/8] dm: don't start current request if it would've merged with the previous

On 03/09/15 12:30, Merla, ShivaKrishna wrote:
>> Secondly, for this comment from Merla ShivaKrishna:
>>
>>> Yes, indeed this is the exact issue we saw at NetApp. While running
>>> sequential 4K write I/O with a large thread count, 2 paths yield better
>>> performance than 4 paths, and performance drops drastically with 4 paths.
>>> The device queue_depth was 32, and with blktrace we could see better I/O
>>> merging happening; the average request size was > 8K according to iostat.
>>> With 4 paths none of the I/O gets merged and the average request size is
>>> always 4K. The scheduler used was noop as we are using SSD-based storage.
>>> We could get I/O merging to happen even with 4 paths, but only with a
>>> lower device queue_depth of 16. Even then performance lagged behind 2 paths.
>>
>> Have you tried increasing nr_requests of the dm device?
>> E.g. setting nr_requests to 256.
>>
>> 4 paths, each with queue depth 32, means there can be 128 I/Os in flight.
>> With the default nr_requests of 128, the request queue is almost always
>> empty and I/O merging cannot happen.
>> Increasing nr_requests of the dm device allows more requests to be queued,
>> so the chance of merging may increase.
>> Reducing the lower devices' queue depth could be another solution, but if
>> the depth is too low, you might not be able to sustain optimal throughput.
>>
> Yes, we have tried this as well, but it didn't help. Indeed, we also tested
> with a queue_depth of 16 on each path, giving 64 I/Os in flight, and saw the
> same issue. We did try reducing the queue_depth with 4 paths, but couldn't
> achieve performance comparable to 2 paths. With Mike's patch, we see a
> tremendous improvement with just a small delay of ~20us with 4 paths. This
> might vary across configurations, but it has certainly shown that a tunable
> to delay dispatches for sequential workloads helps a lot.

Hi,

Did you try increasing nr_requests of the dm request queue?
If so, what value of nr_requests did you use in the case of
device queue_depth 32?
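
Just to be concrete about what I had in mind, something like the
following (dm-0 and sdb are only placeholder names for the multipath
device and one of its path devices; adjust for your setup):

  # enlarge the request queue of the dm device
  cat /sys/block/dm-0/queue/nr_requests
  echo 256 > /sys/block/dm-0/queue/nr_requests

  # optionally lower the queue depth of each path device
  cat /sys/block/sdb/device/queue_depth
  echo 16 > /sys/block/sdb/device/queue_depth

  # then watch whether merging improves (rrqm/s, wrqm/s, avgrq-sz)
  iostat -x 1 /dev/dm-0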

-- 
Jun'ichi Nomura, NEC Corporation

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel



