Re: best base / worst case RAID 5,6 write speeds

Robert,

The "FastWrite" code requires a complete stripe all contained inside
of a single BIO.  Depending on your drive count and stripe size, you
could end up above the BIO size limit (1MB for most kernels).  You
might try a smaller chunk size.
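
For example (made-up geometry, just to show the arithmetic): a
12-drive RAID 6 has 10 data disks, so a 128KB chunk means a full
stripe is 10 * 128KB = 1280KB, over the 1MB limit.  Dropping to a
64KB chunk gives a 640KB stripe, which fits.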

Our "application" calls this from a kernel thread, so there may be
other issue that happen if you drive this from user space.  I would
have thought that O_DIRECT would work OK from user space, but have not
tried it.
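
If you want to experiment, something like this sketch is what I have
in mind (untested against the patch; the device path, stripe size,
and fill pattern are placeholders you would need to adjust):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        size_t stripe = 640 * 1024;   /* chunk size * data disks */
        void *buf;
        int fd;

        /* O_DIRECT wants an aligned buffer; page alignment is safe. */
        if (posix_memalign(&buf, 4096, stripe))
            return 1;
        memset(buf, 0xab, stripe);

        fd = open("/dev/md0", O_WRONLY | O_DIRECT);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* One write(), sized and aligned to a full stripe, so it
         * has a chance of reaching make_request as a single BIO. */
        if (pwrite(fd, buf, stripe, 0) != (ssize_t)stripe)
            perror("pwrite");

        close(fd);
        free(buf);
        return 0;
    }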

There is a huge if statement that tests for the FastWrite leg in the
patch.  Pretty much all of the planets need to align for the code to
get used.  printk statements are your friend ;)
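
Something along these lines (the names here are made up -- match
them to whatever the patch actually tests; splitting the big if
apart and printing each failing leg shows which planet is out of
line):

    if (bio_sectors(bio) != stripe_sectors)
        printk(KERN_DEBUG "fastwrite: bio is %u sectors, want %u\n",
               bio_sectors(bio), stripe_sectors);
    /* use sector_div() here instead if you need to run on 32-bit */
    if (bio->bi_iter.bi_sector % stripe_sectors)
        printk(KERN_DEBUG "fastwrite: sector %llu not stripe-aligned\n",
               (unsigned long long)bio->bi_iter.bi_sector);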

Your code changes were probably mostly "bi_iter" stuff with the new
bio iterator structure.  If the code compiles, you probably made the
correct changes.  Plus, it did not crash (this is where a serial
console is helpful ;) ).
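
For reference, the rename is mostly mechanical -- the 3.14
immutable-biovec rework moved the bio cursor fields into
bio->bi_iter:

    /* 3.10 and earlier:
     *   sector_t start = bio->bi_sector;
     *   unsigned int bytes = bio->bi_size;
     */
    sector_t start = bio->bi_iter.bi_sector;   /* 3.14+, incl. 3.18.4 */
    unsigned int bytes = bio->bi_iter.bi_size;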

Doug

On Mon, Jan 4, 2016 at 10:56 AM, Robert Kierski <rkierski@xxxxxxxx> wrote:
> Hey Doug,
>
> I'm trying to get the patch to work and not having much luck.
>
> I'm guessing that you're using the generic CentOS 7 kernel (3.10).  I'm using a 3.18.4 kernel, so there were a few changes I had to make to get the patch to apply.  But they weren't significant as far as I can tell, and shouldn't have caused the FastWrite code to be ignored.
>
> There must be additional changes in the kernel that are necessary.  When I create the special case MDRaid, and then use the special case IO pattern, I see only the debug messages indicating that the FastWrite code was ignored.
>
> I've added more debugging code to try to understand what's going on.  It turns out that no matter what I do, no matter how I configure the MDRaid, and no matter what IO pattern I use, the size of the IO never conforms to the criteria required to call FastWrite.  It seems that in the 3.18.4 kernel, IOs are broken up before calling MDRaid's make_request function.  There doesn't seem to be an obvious relationship between the size being passed in and the chunk size that's configured by mdadm.
>
> At first, it appeared that the minimum IO size was being used.  But then I tried setting the minimum IO and optimal IO to the same value.  The resulting IOs are now half the size of the minimum IO, which is not the size of the optimal IO (the chunk size * number of data disks).
>
> Bob Kierski
> Senior Storage Performance Engineer
> Cray Inc.
> 380 Jackson Street
> Suite 210
> St. Paul, MN 55101
> Tele: 651-967-9590
> Fax:  651-605-9001
> Cell: 651-890-7461
>



-- 
Doug Dumitru
EasyCo LLC


