Re: Problems with software RAID + iSCSI or GNBD

This discussion intrigues me.  I think there is a lot of merit to running
a RAID in this manner.  However, if I understand correctly, under normal
circumstances reads from a RAID1 md device will round-robin between the
components to increase performance.  Is there a way, or what would need
to be done, to make a single component in the array the primary, i.e.,
don't read from the other device unless the first one fails?
I think there was some discussion about this a month or so ago concerning
ramdisks (which I don't know would be quite as useful), but the idea
applies to any pair of block devices with significantly different speeds,
latencies, etc.
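I've seen mention of a "write-mostly" flag in newer md/mdadm that sounds
like it does this; a sketch of what I mean (untested, the device names
are just examples, and it presumably needs a recent kernel and mdadm):

  # mark the slow/remote component write-mostly so reads are served
  # from the local disk unless it fails
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/hda3 --write-mostly /dev/nbd0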
Please advise.
--David Dougall


On Wed, 29 Jun 2005, Bill Davidsen wrote:

> On Wed, 29 Jun 2005, Christopher Smith wrote:
>
> > > Personally, I wouldn't mess with iSCSI or GNBD. You don't need GNBD in
> > > this scenario anyway; simple nbd (which is in the mainline kernel...get
> > > the userland tools at sourceforge.net/projects/nbd) will do just fine,
> > > and I'd be willing to bet that it is more stable and faster...
> >
> > I'd briefly tried nbd, but decided to look elsewhere since it needed
> > too much manual configuration (no included rc script, and the /dev
> > nodes apparently have to be created by hand - yes, I'm lazy).
>
> Based on one test of nbd, it seems to be stable. I did about what you are
> trying: a RAID1 to create an md device, then an encrypted filesystem on
> top of it. My test was minor: throw a lot of data at it, check that it's
> all there (md5sum), reboot and verify everything still works, then drop
> the local drive and rebuild. I did NOT try a rebuild of the nbd drive.
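>
> For the record, roughly what I ran, from memory (hostnames and device
> names are examples, and the encryption step assumes dm-crypt/cryptsetup):
>
>   # on the remote box: export a spare device on TCP port 2000
>   nbd-server 2000 /dev/hdb1
>
>   # on the local box: create the device node if nothing did it for
>   # you (NBD's block major is 43), then attach the import
>   mknod /dev/nbd0 b 43 0
>   nbd-client remotebox 2000 /dev/nbd0
>
>   # RAID1 of a local partition and the nbd import
>   mdadm --create /dev/md0 --level=1 --raid-devices=2 \
>         /dev/hda3 /dev/nbd0
>
>   # plain dm-crypt mapping on top, then a filesystem
>   cryptsetup create cryptmd /dev/md0
>   mke2fs -j /dev/mapper/cryptmd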
>
> >
> > I've just finished trying NBD now and it seems to solve both my problems
> > - rebuild speed is a healthy 40MB/sec+, and the failures are dealt with
> > "properly" (i.e., the md device goes into degraded mode if a component nbd
> > suddenly disappears).  This looks like it might be a goer for the disk
> > node/RAID-over-network back-end.
>
> I failed it manually, so I can't say what pulling the plug will do. Glad
> it's working; I may be doing this on a WAN in the fall.
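>
> ("Failed it manually" meaning roughly the following; device names are
> examples again:)
>
>   mdadm /dev/md0 --fail /dev/nbd0      # mark the nbd half faulty
>   mdadm /dev/md0 --remove /dev/nbd0    # pull it out of the array
>   cat /proc/mdstat                     # should show the array degraded
>   mdadm /dev/md0 --add /dev/nbd0       # re-add it and watch the rebuild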
>
> --
> bill davidsen <davidsen@xxxxxxx>
>
