Re: Shell Scripts or Arbitrary Priority Callouts?

On Fri, Mar 27, 2009 at 03:03:35AM -0400, John A. Sullivan III wrote:
> > > 
> > > > The one piece which is still a mystery is why using four targets on
> > > > four separate interfaces, striped with mdadm RAID0, does not produce an
> > > > aggregate of slightly less than four times the IOPS of a single target
> > > > on a single interface. This would not seem to be the out-of-order SCSI
> > > > command problem of multipath.  One of life's great mysteries yet to be
> > > > revealed.  Thanks again, all - John
> > > 
> > > Hmm.. maybe the out-of-order problem happens at the target? It gets IO
> > > requests to nearby offsets from 4 different sessions and there's some kind
> > > of locking or so going on? 
> > Ross pointed out a flaw in my test methodology.  By running one I/O at a
> > time, the test was literally doing just that - issuing not one full-stripe
> > RAID0 I/O but apparently a single-disk I/O.  He said that to truly test
> > it, I would need to run as many concurrent I/Os as there are disks in the
> > array.  Thanks - John
> > ><snip>
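
For anyone following along, a rough sketch of that kind of setup and test
might look like the below, assuming open-iscsi, mdadm, and LTP's disktest;
the device and portal names are hypothetical and disktest's exact flags
vary between versions:

    # log in to the four targets, one per interface (portals hypothetical)
    for ip in 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4; do
        iscsiadm -m node -p $ip --login
    done

    # stripe the four iSCSI disks with mdadm RAID0
    mdadm --create /dev/md0 --level=0 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # per Ross's point: one thread per member disk,
    # sequential 4K reads against the whole array
    disktest -B 4k -p l -K 4 -r /dev/md0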
> Argh!!! This turned out to be alarmingly untrue.  This time, we were
> doing some light testing on a different server with two bonded
> interfaces in a single bridge (KVM environment) going to the same SAN we
> used in our four-port test.
> 

Is the SAN also using bonded interfaces?
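
With most bonding modes a single iSCSI TCP connection hashes onto one
slave NIC, so bonding on either end can quietly cap one session at a
single link's bandwidth.  A quick check of the mode in use (bond name
assumed to be bond0):

    grep -i mode /proc/net/bonding/bond0

Only balance-rr spreads one connection across the slaves; the xor and
802.3ad hash modes keep each connection on a single link.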

> For kicks, and to prove to ourselves that RAID0 scaled with multiple
> concurrent I/Os (as opposed to limiting the test to a single I/O), we
> tried some actual file transfers to the SAN mounted in sync mode.  We
> found that concurrently transferring two identical files to the RAID0
> array composed of two iSCSI-attached drives was 57% slower than
> concurrently transferring the files to the drives separately.  In other
> words, copying file1 and file2 concurrently to RAID0 took 57% longer
> than concurrently copying file1 to disk1 and file2 to disk2.
> 

Hmm.. I wonder why that happens..
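
One thing that might be worth ruling out is the page cache muddying the
timings.  A rough way to run the two cases back to back, with mount
points hypothetical and the filesystems mounted sync as you describe:

    # two files to the RAID0 array concurrently
    time ( cp file1 /mnt/raid0/ & cp file2 /mnt/raid0/ & wait )

    # the same files to the member disks separately
    time ( cp file1 /mnt/disk1/ & cp file2 /mnt/disk2/ & wait )

Dropping caches between runs (echo 3 > /proc/sys/vm/drop_caches) keeps
the comparison honest.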

> We then took a slightly different approach and used disktest.  We ran
> two concurrent sessions with -K1.  In one case, we ran both sessions
> against the 2-disk RAID0 array.  The performance was again significantly
> lower than running the two concurrent tests against two separate iSCSI
> disks.  Just to be clear, these were the same disks that composed the
> array, just not grouped in the array.
> 

There has to be some logical explanation for this..
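
One way to narrow it down might be to watch the member disks during both
runs; if the md layer is splitting or serializing requests, you would
expect the per-disk request sizes and queue depths to differ between the
two cases.  Something like this, with the device names hypothetical:

    # extended per-device stats, refreshed every second
    iostat -x sdb sdc 1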

> Even more alarmingly, we did the same test using multipath multibus,
> i.e., two concurrent disktest runs with -K1 (both reads and writes, all
> sequential with 4K block sizes).  The first session completely starved
> the second.  The first one continued at only slightly reduced speed
> while the second one (kicked off just as fast as we could hit the Enter
> key) received only roughly 50 IOPS.  Yes, that's fifty.
> 
> Frightening, but I thought I had better pass along such extreme results
> to the multipath team.  Thanks - John

Hmm, so you had mpath0 and mpath1, and you ran disktest against both, at the
same time? 
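
If so, it might also be worth posting your multipath -ll output and the
rr_min_io value in use.  With multibus all paths sit in one priority
group and dm-multipath round-robins every rr_min_io requests, so a value
far from the default could conceivably let one stream monopolize the
paths.  A minimal multipath.conf fragment of the sort I mean, with the
value purely illustrative:

    defaults {
        path_grouping_policy  multibus
        rr_min_io             100
    }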

-- Pasi
