Re: (was: Re: md: raid5 vs raid10 (f2,n2,o2) benchmarks [w/10 raptors])

On Wed, Jun 25, 2008 at 02:09:32PM -0500, David Lethe wrote:
> -----Original Message-----
> From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-owner@xxxxxxxxxxxxxxx] On Behalf Of Keld Jørn Simonsen
> Sent: Wednesday, June 25, 2008 1:49 PM
> To: Justin Piszcz
> Cc: Conway S. Smith; linux-raid@xxxxxxxxxxxxxxx
> Subject: Re: (was: Re: md: raid5 vs raid10 (f2,n2,o2) benchmarks [w/10 raptors])
> 
> > On Thu, Apr 03, 2008 at 12:02:46PM +0200, Keld Jørn Simonsen wrote:
> > > On Wed, Apr 02, 2008 at 01:49:44PM -0400, Justin Piszcz wrote:
> > > > 
> > > > 
> > > > On Wed, 2 Apr 2008, Justin Piszcz wrote:
> > > > 
> > > > >
> > > > >
> > > > >On Wed, 2 Apr 2008, Conway S. Smith wrote:
> > > 
> > > I have referenced both of your benchmarks in the wiki on performance. So
> > > now I just hope that your URLs will live forever. I also took down some
> > > of your recommendations there.
> > > 
> > > I note that raid10,f2 has a much higher CPU load than raid10,n2 or
> > > raid10,o2. How come? It is 31-38% for f2, whereas n2 and o2 are around 15%.
> >
> > I found a reason for this: CPU usage and IO speed seem to be closely
> > related, so because raid10,f2 has about double the IO performance for
> > sequential reads, it also has about double the CPU use.
> >
> > Justin's benchmark is on http://home.comcast.net/~jpiszcz/20080329-raid/
> >
> > Another of Justin's benchmarks also reveals the relation between
> > IO rate and CPU use:
> > http://home.comcast.net/~jpiszcz/raid/20080528/raid-levels.html
> >
> > Why does IO use that much CPU? Is it mostly moving the data from
> > kernel space to user space? Does it matter here whether one is running
> > a 32-bit or a 64-bit system?
> 
> > It seems the RAM bus can be a bottleneck. I read that DDR-400 has a
> > peak bandwidth of 1600 MB/s. If this is halved on a 32-bit OS, that
> > leaves 800 MB/s. And every copy both reads and writes the data as it
> > moves around, so that is 400 MB/s - while you still need to be able to
> > read in from the disk controller at 330 MB/s. For 64-bit systems the
> > maximum is around 400 MB/s, given that there is a flow from the disk
> > controller to kernel disk buffers, then from kernel buffers to user
> > buffers, and then from user buffers to some processing. Or am I wrong
> > here?
> >
> > Best regards
> > Keld
> 
> These chunk sizes are profoundly meaningless if you plan on using them to estimate performance in the real world. The relationship between IO rate, IO throughput, and CPU overhead will be dramatically different with default md settings. Also consider that your kernel was recompiled with little or no CPU-specific optimization, so you are wasting CPU cycles... and don't get me started on multicore vs. single-core for such benchmarks.

I think something is wrong with the chunk sizes. 16 G and 7 G are most
likely erroneous. Justin?
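One quick way to sanity-check what md actually reports is to parse the chunk size out of a /proc/mdstat status line. The snippet below is only a sketch; the sample line is illustrative and not taken from Justin's setup:

```python
import re

def chunk_size_kib(mdstat_line):
    """Extract the chunk size in KiB from a /proc/mdstat status line.

    md prints e.g. '... 512k chunks ...'; returns None if no chunk
    size is present on the line.
    """
    m = re.search(r"(\d+)k chunk", mdstat_line)
    return int(m.group(1)) if m else None

# Illustrative line in the style of /proc/mdstat (hypothetical array):
sample = ("md0 : active raid10 sda1[0] sdb1[1] 976512 blocks "
          "512k chunks 2 far-copies [2/2] [UU]")
print(chunk_size_kib(sample))  # -> 512
```

A value like 16 G parsed out of a benchmark table would stand out immediately against what the kernel itself reports.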

David, if you have some benchmarks then please feel free to report them.
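The back-of-the-envelope memory-bandwidth estimate quoted above can be written out explicitly. This is just a restatement of the figures in the message (1600 MB/s DDR-400 peak, halved on 32-bit, halved again because each in-memory copy both reads and writes the data), not a measured result:

```python
# All figures in MB/s, taken from the estimate in the message above.
ddr400_peak = 1600                    # quoted DDR-400 peak bandwidth

# 32-bit case: peak assumed halved, and each memory copy both reads
# and writes the data, halving the usable rate again.
usable_32bit = ddr400_peak / 2 / 2    # -> 400.0

# 64-bit case: full peak, but the data is copied on its way from
# kernel disk buffers to the application (kernel -> user -> processing),
# each copy costing a read plus a write.
usable_64bit = ddr400_peak / (2 * 2)  # -> 400.0

print(usable_32bit, usable_64bit)
```

Under these assumptions both cases land near 400 MB/s, barely above the 330 MB/s coming in from the disk controller, which would explain why the RAM bus shows up as a bottleneck at raid10,f2's higher sequential read rates.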

Best regards
keld
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
