Re: IO scheduler, queue depth, nr_requests

Miquel van Smoorenburg wrote:


> No, I'm actually referring to a struct request. I'm logging this in the
> SCSI layer, in scsi_request_fn(), just after elv_next_request(). I have
> in fact logged all the bios submitted to __make_request, and the output
> of the elevator from elv_next_request(). The bios are submitted
> sequentially; the resulting requests aren't. But this is because
> nr_requests is 128, while the 3ware device has a queue of 254 entries
> (no tagging though). Upping nr_requests to 512 makes this go away.
>
> That shouldn't be necessary, though. I only see this with LVM over
> 3ware RAID5, not on the 3ware RAID5 array directly (/dev/sda1). And it
> gets less troublesome with a lot of debugging (unless I set nr_requests
> lower again), which points to a timing issue.
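
[For reference, a minimal sketch of the kind of logging described above;
the hook point in scsi_request_fn() just after elv_next_request() is as
stated, but the helper name and printk format are illustrative, and the
field names assume a 2.6-era struct request:]

    /* Hypothetical debug helper (not the actual instrumentation): log
     * each request the elevator hands to the SCSI layer, i.e. in
     * scsi_request_fn() just after elv_next_request(). Field names
     * assume a 2.6-era struct request. */
    static inline void log_rq(struct request *rq)
    {
            printk(KERN_DEBUG "rq: sector=%llu nr_sectors=%lu\n",
                   (unsigned long long)rq->sector, rq->nr_sectors);
    }

[Out-of-order sector numbers in such a log, against sequentially
submitted bios, would show exactly the reordering being described.]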



So the problem you are seeing is due to "unlucky" timing between two processes submitting IO. And the very efficient mechanisms we have to improve exactly this situation (merging, sorting) are effectively disabled. And to make it worse, it appears that your controller shits itself on this trivially simple pattern.
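
[As an illustration of the mismatch, here is a hedged userspace sketch,
not from the thread: "sda" is a placeholder device, the paths are the
standard sysfs locations on 2.6 kernels, and device/queue_depth is only
exposed where the low-level driver supports it:]

    /* Sketch: compare the block layer's nr_requests with the device's
     * queue depth, the mismatch discussed above. */
    #include <stdio.h>

    static long read_long(const char *path)
    {
            FILE *f = fopen(path, "r");
            long v = -1;

            if (f) {
                    if (fscanf(f, "%ld", &v) != 1)
                            v = -1;
                    fclose(f);
            }
            return v;
    }

    int main(void)
    {
            long nr = read_long("/sys/block/sda/queue/nr_requests");
            long qd = read_long("/sys/block/sda/device/queue_depth");

            printf("nr_requests = %ld, device queue_depth = %ld\n", nr, qd);
            if (nr >= 0 && qd >= 0 && nr < qd)
                    printf("nr_requests < queue_depth: the elevator can be "
                           "drained faster than it can merge and sort\n");
            return 0;
    }

[Raising nr_requests above the device's depth, as Miquel did, gives the
elevator room to work again; on 2.6 kernels it is tunable at runtime via
/sys/block/<dev>/queue/nr_requests.]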

Your hack takes a baby step in the direction of per *process*
request limits, which I happen to be an advocate of. As it stands,
though, I don't like it.

Jens has the final say when it comes to the block layer though.

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
