Re: raid5 read performance

----- Original Message ----- 
From: "Raz Ben-Jehuda(caro)" <raziebe@xxxxxxxxx>
To: "JaniD++" <djani22@xxxxxxxxxxxxx>
Cc: <linux-raid@xxxxxxxxxxxxxxx>
Sent: Tuesday, January 10, 2006 9:05 PM
Subject: Re: raid5 read performance


> NBD as in network block device?

Yes. :-)

> Why do you use it?

I only need one big block device.
In the beginning I tried almost every tool for transporting the block devices
to the concentrator, and the best choice (for speed and stability) looked
like RedHat's GNBD.
But GNBD has the same problem as NBD, the old deadlock problem on
heavy write.
The only difference is that GNBD hits it more rarely than NBD.
A couple of months ago Herbert Xu fixed the NBD deadlock problem (with
my help :-), and now the fixed NBD is the best choice!

Do you have a better idea? :-)
Please let me know!

> What type of elevator do you use?

Elevator?
What exactly do you mean?
My system's current performance is thanks to good readahead settings on the
block devices. (At every layer, including nbd.)
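
If it is useful, here is a minimal sketch of how readahead can be checked and
set per device with the BLKRAGET/BLKRASET ioctls (values are in 512-byte
sectors). This is not my exact setup; the device path and the 1MB value are
only examples. It has to be run against every layer separately (member disks,
nbd devices, md arrays), because each layer keeps its own setting:

/* Minimal sketch: query and set a block device's readahead.
 * Values are in 512-byte sectors, so 2048 sectors = 1MB.
 * The device path is only an example. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/nbd0"; /* example device */
	long ra;
	int fd = open(dev, O_RDONLY);

	if (fd < 0) {
		perror(dev);
		return 1;
	}
	if (ioctl(fd, BLKRAGET, &ra) == 0) /* current readahead, in sectors */
		printf("%s: readahead %ld sectors (%ld KB)\n", dev, ra, ra / 2);
	if (ioctl(fd, BLKRASET, 2048UL) != 0) /* example: set 1MB readahead */
		perror("BLKRASET");
	close(fd);
	return 0;
}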

Cheers,
Janos

>
>
> On 1/10/06, JaniD++ <djani22@xxxxxxxxxxxxx> wrote:
> >
> > ----- Original Message -----
> > From: "Raz Ben-Jehuda(caro)" <raziebe@xxxxxxxxx>
> > To: "JaniD++" <djani22@xxxxxxxxxxxxx>
> > Cc: "Linux RAID Mailing List" <linux-raid@xxxxxxxxxxxxxxx>
> > Sent: Tuesday, January 10, 2006 12:25 AM
> > Subject: Re: raid5 read performance
> >
> >
> > > 1. It is not good to use so many disks in one raid. This means that in
> > >    degraded mode, 10 disks would be needed to reconstruct one slice
> > >    of data.
> > > 2. I did not understand what the purpose of the raid is.
> >
> > Yes, I know that.
> > In my system this was the best choice.
> >
> > I have 4 disk nodes with 4x12 Maxtor 200GB disks (exactly 10xIDE+2xSATA
> > per node).
> > The disk nodes serve nbd.
> > The concentrator joins the nodes with sw-raid0.
> >
> > The system is basically free web storage.
> >
> > > 3. 10 MB/s is very slow. What sort of disks do you have?
> >
> > 4x(2xSATA+10xIDE) Maxtor 200GB
> >
> > The system sometimes has 500-1000 downloaders at the same time.
> > Under this load, the per-node traffic is only 10MB/s (~100Mbit/s).
> >
> > At first I suspected the sync/async IO problem.
> > Now I think the bottleneck on the nodes is the PCI-32 bus with
> > 8 HDDs. :(
> >
> > > 4. What is the raid stripe size?
> >
> > Currently all raid layers use 32KB chunks.
> >
> > Cheers,
> > Janos
> >
> > >
> > > On 1/4/06, JaniD++ <djani22@xxxxxxxxxxxxx> wrote:
> > > >
> > > > ----- Original Message -----
> > > > From: "Raz Ben-Jehuda(caro)" <raziebe@xxxxxxxxx>
> > > > To: "JaniD++" <djani22@xxxxxxxxxxxxx>
> > > > Cc: "Linux RAID Mailing List" <linux-raid@xxxxxxxxxxxxxxx>
> > > > Sent: Wednesday, January 04, 2006 2:49 PM
> > > > Subject: Re: raid5 read performance
> > > >
> > > >
> > > > > 1. Do you want the code?
> > > >
> > > > Yes.
> > > > If it is not too much trouble.
> > > > I use 4 big raid5 arrays (4 disk nodes), and the performance is not
> > > > too good.
> > > > A standalone disk can do ~50MB/s, but 11 disks in one raid array
> > > > do only ~150Mbit/s.
> > > > (With linear read, using dd.)
> > > > At this point I think this is my system's PCI-bus bottleneck.
> > > > But in normal use, with random seeks, I am happy if one disk node
> > > > can do 10MB/s! :-(
> > > >
> > > > That's why I am guessing about this...
> > > >
> > > > > 2. I managed to gain linear performance with raid5.
> > > > >     It seems that both raid5 and raid0 are caching readahead
> > > > >     buffers.
> > > > >     raid5 cached a small amount of readahead while raid0 did not.
> > > >
> > > > Aha.
> > > > But...
> > > > I don't understand...
> > > > You wrote that RAID5 is slower than RAID0.
> > > > Is the readahead buffering/caching bad for performance?
> > > >
> > > > Cheers,
> > > > Janos
> > > >
> > > >
> > > > >
> > > > >
> > > > > On 1/4/06, JaniD++ <djani22@xxxxxxxxxxxxx> wrote:
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > From: "Raz Ben-Jehuda(caro)" <raziebe@xxxxxxxxx>
> > > > > > To: "Mark Hahn" <hahn@xxxxxxxxxxxxxxxxxxx>
> > > > > > Cc: "Linux RAID Mailing List" <linux-raid@xxxxxxxxxxxxxxx>
> > > > > > Sent: Wednesday, January 04, 2006 9:14 AM
> > > > > > Subject: Re: raid5 read performance
> > > > > >
> > > > > >
> > > > > > > I guess I was not clear enough.
> > > > > > >
> > > > > > > I am using raid5 over 3 Maxtor disks. The chunk size is 1MB.
> > > > > > > I measured the IO coming from one disk alone when I read
> > > > > > > from it with 1MB buffers, and I know that it is ~32MB/s.
> > > > > > >
> > > > > > > I created raid0 over two disks and my throughput grew to
> > > > > > > 64MB/s.
> > > > > > >
> > > > > > > Doing the same thing with raid5 ended up at 32MB/s.
> > > > > > >
> > > > > > > I am using async IO since I do not want to wait for several
> > > > > > > disks when I send an IO. By sending a buffer which is
> > > > > > > stripe-aligned, I am supposed to have a one-to-one relation
> > > > > > > between a disk and an IO.
> > > > > > >
> > > > > > > iostat shows that all three disks work, but not fully.
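
(Roughly, the approach described above might look like the sketch below:
Linux AIO via libaio, with O_DIRECT and chunk-sized aligned buffers. This is
not the actual code from the test above; the device path, queue depth, and
alignment are example values.)

/* Sketch of chunk-aligned async reads: keep several 1MB O_DIRECT reads
 * in flight so the submitter never blocks on one slow member disk.
 * Build with: gcc -O2 -o aioread aioread.c -laio */
#define _GNU_SOURCE /* for O_DIRECT */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <libaio.h>

#define CHUNK (1024 * 1024) /* matches the 1MB raid chunk size */
#define DEPTH 8             /* requests kept in flight */

int main(void)
{
	io_context_t ctx = 0;
	struct iocb cbs[DEPTH], *cbp[DEPTH];
	struct io_event ev[DEPTH];
	void *buf[DEPTH];
	long long off = 0;
	int i, fd = open("/dev/md0", O_RDONLY | O_DIRECT); /* example device */

	if (fd < 0 || io_setup(DEPTH, &ctx) < 0)
		return 1;
	for (i = 0; i < DEPTH; i++) /* O_DIRECT needs aligned buffers */
		if (posix_memalign(&buf[i], 4096, CHUNK))
			return 1;
	for (;;) {
		/* submit DEPTH chunk-sized, chunk-aligned reads; each one
		 * should map onto a single member disk */
		for (i = 0; i < DEPTH; i++) {
			io_prep_pread(&cbs[i], fd, buf[i], CHUNK, off);
			cbp[i] = &cbs[i];
			off += CHUNK;
		}
		if (io_submit(ctx, DEPTH, cbp) != DEPTH)
			break;
		/* reap all completions; a short or failed read means stop */
		if (io_getevents(ctx, DEPTH, DEPTH, ev, NULL) != DEPTH)
			break;
		for (i = 0; i < DEPTH; i++)
			if ((long)ev[i].res < CHUNK)
				goto out;
	}
out:
	io_destroy(ctx);
	close(fd);
	return 0;
}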
> > > > > >
> > > > > > Hello,
> > > > > >
> > > > > > How do you set up sync/async IO?
> > > > > > Please let me know! :-)
> > > > > >
> > > > > > Thanks,
> > > > > > Janos
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Raz
> > > >
> > > >
> > >
> > >
> > > --
> > > Raz
> >
> >
>
>
> --
> Raz

