Re: poor read performance on rbd+LVM, LVM overload

On Mon, Oct 21 2013 at  2:06pm -0400,
Christoph Hellwig <hch@infradead.org> wrote:

> On Mon, Oct 21, 2013 at 11:01:29AM -0400, Mike Snitzer wrote:
> > It isn't DM that splits the IO into 4K chunks; it is the VM subsystem
> > no?
> 
> Well, it's the block layer based on what DM tells it.  Take a look at
> dm_merge_bvec
> 
> From dm_merge_bvec:
> 
> 	/*
>          * If the target doesn't support merge method and some of the devices
>          * provided their merge_bvec method (we know this by looking at
>          * queue_max_hw_sectors), then we can't allow bios with multiple vector
>          * entries.  So always set max_size to 0, and the code below allows
>          * just one page.
>          */
> 	
> Although it's not the general case, just if the driver has a
> merge_bvec method.  But this happens if you're using DM on top of MD,
> where I saw it as well as on rbd, which is why it's correct in this
> context, too.

Right, but only if the DM target that is being used doesn't have a
.merge method.  I don't think it was ever shared which DM target is in
use here, but both the linear and stripe DM targets provide a .merge
method.
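
For reference, here is the logic surrounding that quoted comment,
condensed from dm_merge_bvec() in drivers/md/dm.c of the 3.x kernels
around that time (a sketch for illustration; exact details vary by
kernel version):

static int dm_merge_bvec(struct request_queue *q,
			 struct bvec_merge_data *bvm,
			 struct bio_vec *biovec)
{
	struct mapped_device *md = q->queuedata;
	struct dm_table *map = dm_get_live_table_fast(md);
	struct dm_target *ti;
	sector_t max_sectors;
	int max_size = 0;

	if (unlikely(!map))
		goto out;

	ti = dm_table_find_target(map, bvm->bi_sector);
	if (!dm_target_is_valid(ti))
		goto out;

	/* Largest I/O at this offset that won't need splitting. */
	max_sectors = min(max_io_len(bvm->bi_sector, ti),
			  (sector_t) BIO_MAX_SECTORS);
	max_size = (max_sectors << SECTOR_SHIFT) - bvm->bi_size;
	if (max_size < 0)
		max_size = 0;

	if (max_size && ti->type->merge)
		/* Target has a .merge method: let it decide. */
		max_size = ti->type->merge(ti, bvm, biovec, max_size);
	else if (queue_max_hw_sectors(q) <= PAGE_SIZE >> 9)
		/*
		 * No .merge method, but an underlying device supplied a
		 * merge_bvec_fn (detectable via queue_max_hw_sectors):
		 * disallow multi-vector bios entirely.
		 */
		max_size = 0;

out:
	dm_put_live_table_fast(md);
	/* Always allow an entire first page. */
	if (max_size <= biovec->bv_len && !(bvm->bi_size >> SECTOR_SHIFT))
		max_size = biovec->bv_len;

	return max_size;
}

So when the target lacks .merge over a device like rbd or MD that
provides a merge_bvec_fn, max_size collapses to 0 and the caller falls
back to one-page bios, which is where the 4K chunks come from.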
 
> Sorry for over generalizing a bit.

No problem.
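
For completeness, the linear target's .merge method just remaps the
merge query onto the underlying device and forwards it to that device's
merge_bvec_fn, roughly like this (lightly condensed from
drivers/md/dm-linear.c of the same era):

static int linear_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
			struct bio_vec *biovec, int max_size)
{
	struct linear_c *lc = ti->private;
	struct request_queue *q = bdev_get_queue(lc->dev->bdev);

	/* Underlying device imposes no bvec merge restriction. */
	if (!q->merge_bvec_fn)
		return max_size;

	/* Remap the query into the underlying device's address space. */
	bvm->bi_bdev = lc->dev->bdev;
	bvm->bi_sector = linear_map_sector(ti, bvm->bi_sector);

	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
}

With a .merge like this in place, DM can keep building multi-page bios
instead of falling back to one page at a time.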
