RE: CentOS 4.4 lvm and drbd 0.8?




> -----Original Message-----
> From: centos-bounces@xxxxxxxxxx 
> [mailto:centos-bounces@xxxxxxxxxx] On Behalf Of Ross S. W. Walker
> Sent: Sunday, March 04, 2007 10:44 AM
> To: CentOS mailing list
> Subject: RE:  CentOS 4.4 lvm and drbd 0.8?
> 
> > -----Original Message-----
> > From: centos-bounces@xxxxxxxxxx 
> > [mailto:centos-bounces@xxxxxxxxxx] On Behalf Of Johnny Hughes
> > Sent: Sunday, March 04, 2007 7:16 AM
> > To: CentOS ML
> > Subject: Re:  CentOS 4.4 lvm and drbd 0.8?
> > 
> > On Fri, 2007-03-02 at 15:35 -0600, Les Mikesell wrote:
> > > Johnny Hughes wrote:
> > > > On Tue, 2007-02-27 at 00:20 -0600, Les Mikesell wrote:
> > > >> Johnny Hughes wrote:
> > > >>> I am not quite sure that drbd-8 is totally ready yet for
> > > >>> prime time. Not that I don't trust them (I use drbd in
> > > >>> production and I love it), but I want to wait for an 8.0.1 or
> > > >>> 8.0.2 level before I move the enterprise CentOS RPMS to that
> > > >>> version.
> > > >>>
> > > >>> I would be open to producing some 8.0.0 rpms for testing ...
> > > >>> though that will probably need to wait until after CentOS 5
> > > >>> Beta is released.
> > > >> Could you get the same effect by running software RAID1 with
> > > >> one of the drives connected via iscsi?
> > > > 
> > > > Provides the same effect as DRBD? ... not really ... as DRBD
> > > > provides a second machine in hot standby mode with a totally
> > > > synced partition that is ready to take over on a failure of the
> > > > first machine. If the first computer blows up (power supply,
> > > > hard drive crash, etc.), the second one starts up and takes
> > > > over with no down time (except the time it takes to mount the
> > > > partition and start the services on the new machine).
> > > 
> > > How is the mirror/sync different than RAID1, and how is DRBD's
> > > version different than you would have if you exported the 2nd
> > > machine's partitions via iscsi and mirrored the live machine
> > > using md devices with one local, one iscsi member for each? If
> > > that is actually possible, I'd expect those general-purpose
> > > components to be much better tested and more reliable than
> > > little-used code like DRBD. Does DRBD have special handling for
> > > updating slightly out-of-sync copies or does it have to rebuild
> > > the whole thing if not taken down cleanly also?
> > 
> > I have no idea how it works, other than it uses the md device and
> > raid 1 kernel code to mirror the drive/partition to a second
> > machine ... and does so in real time. It uses heartbeat to create
> > a cluster and does real-time failover.
> > 
> > It does not require rebuilding the whole device if shut down
> > uncleanly ... it syncs from the last updated point.
> > 
> > My point was that the 0.8 (actually renamed 8.0.0) code was just
> > released. The 0.6 and 0.7 code has been out and stable for quite
> > some time, and I have been using it for more than 2 years.
> 
> If you were running a later kernel with a newer version of MD, it
> is conceivable that you could create a mirror with a remote storage
> drive over iSCSI.
> 
> It would be up to you, though, to figure out how to fail over to it
> and to limit the bandwidth MD consumes for that remote mirror, and
> to realize that it will always be fully synchronous, so performance
> may not be the best over a WAN.
> 
> You can also use a pair of vise-grip pliers to do the job of an
> adjustable wrench, but it will probably strip the bolt in the
> process.
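
For what it's worth, here is a minimal sketch of that RAID1-over-iSCSI
idea, assuming the open-iscsi tools and a bitmap-capable mdadm, with
the remote LUN imported as /dev/sdc (all device names here are
hypothetical):

    # Mark the iSCSI-backed member write-mostly so reads stay on the
    # local disk, and allow up to 256 write-behind requests (needs a
    # bitmap) so the remote leg can lag instead of stalling every write
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          --bitmap=internal --write-behind=256 \
          /dev/sda3 --write-mostly /dev/sdc

Write-behind makes the remote member somewhat asynchronous, which
softens the WAN penalty mentioned above at the cost of the remote
copy trailing slightly behind.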

If you do plan on using MD over iSCSI, why not try something
interesting like a RAID level other than 1, say RAID 4, 5 or 6, and
get some increased performance over drbd and regular iscsi.
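
A minimal sketch, assuming LUNs imported from three exporting servers
show up locally as /dev/sdb through /dev/sdd (target names, addresses
and devices below are hypothetical):

    # Discover and log in to one exporter (repeat for each server)
    iscsiadm -m discovery -t sendtargets -p 192.168.1.11
    iscsiadm -m node -T iqn.2007-03.com.example:store1 \
             -p 192.168.1.11 --login

    # Parity RAID across the imported LUNs; the internal write-intent
    # bitmap keeps a dropped member to a partial re-sync
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
          --bitmap=internal /dev/sdb /dev/sdc /dev/sdd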

You need a later kernel that supports MD bitmaps to prevent a
complete re-sync on disconnect, and each server's storage would have
to be local to it. But say you have a bunch of servers, all with
direct-attached storage, and you wish to consolidate that storage
while leveraging the existing direct-attached disks. You can have
each server export its storage via iSCSI, then have a central server
that mounts all of this storage, creates a fault-tolerant MD RAID
out of it, creates an LVM VG on top, and then re-exports it via
iSCSI to different platforms.
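
A rough sketch of that consolidation stack, assuming the LUNs are
already imported as above and using iSCSI Enterprise Target for the
re-export (all names below are hypothetical):

    # Build a fault-tolerant array across four imported LUNs
    mdadm --create /dev/md0 --level=6 --raid-devices=4 \
          --bitmap=internal /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Layer LVM on top and carve out a volume to re-export
    pvcreate /dev/md0
    vgcreate consolidated /dev/md0
    lvcreate -L 100G -n export1 consolidated

    # /etc/ietd.conf entry re-exporting the LV as a new target
    Target iqn.2007-03.com.example:central.export1
        Lun 0 Path=/dev/consolidated/export1,Type=blockio

Keep in mind the central box and its network links become the choke
point, and an exporting server going down now looks like a failed
RAID member to the array.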

-Ross



