RE: Re: If I have 5 GNBD server?

Hello all,

I tried creating an md device from the GNBDs and was able to create one: a RAID-1 device (it is still going through its sync process, even though the volume is from shared storage, so there should be no difference between the halves). The problem is that I cannot mount the device, as shown below:

Aug 31 14:46:27 dell-1650-31 kernel: GFS: Trying to join cluster "lock_dlm", "sansilvercash:gfs2"
Aug 31 14:46:27 dell-1650-31 kernel: lock_dlm: fence domain not found; check fenced
Aug 31 14:46:27 dell-1650-31 kernel: GFS: can't mount proto = lock_dlm, table = sansilvercash:gfs2, hostdata =
Aug 31 14:46:39 dell-1650-31 hal.hotplug[5236]: timout(10000 ms) waiting for /block/diapered_dm-1
Aug 31 14:50:01 dell-1650-31 crond(pam_unix)[7698]: session opened for user root by (uid=0)
Aug 31 14:50:01 dell-1650-31 crond(pam_unix)[7698]: session closed for user root
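
The "fence domain not found; check fenced" line suggests this node never joined the fence domain before the mount. A quick sanity check would be something like the following (just a sketch; I'm assuming the stock cman/fenced tools are installed):

  # is this node a cluster member, and does the fence service show up?
  cman_tool status
  cman_tool services

  # if the default fence domain is missing, join it and wait for it to settle
  fence_tool join
  fence_tool wait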

Yet I can mount the gnbd devices individually.

pvscan shows:

[root@dell-1650-31 ~]# pvscan
  PV /dev/sda3   VG VolGroupData   lvm2 [31.81 GB / 32.00 MB free]
  PV /dev/md0    VG space          lvm2 [1.07 TB / 0    free]
  Total: 2 [1.10 TB] / in use: 2 [1.10 TB] / in no VG: 0 [0   ]
[root@dell-1650-31 ~]#


Any clues as to why I can't mount the device? This is the same behavior I saw when I tried to mount a multipathed device (which I may or may not have configured correctly).

-Brian
-----Original Message-----
From: brianu [mailto:brianu@xxxxxxxxxxxxxx] 
Sent: Wednesday, August 31, 2005 10:26 AM
To: linux-cluster@xxxxxxxxxx
Cc: 'brianu'
Subject: FW:  Re: If I have 5 GNBD server?

Ok, so I tried using heartbeat to create a virtual IP that floats between the gnbd servers, and it worked out okay: I can actually mount using that virtual gnbd server IP. If I want to manually fail one gnbd server over to another, that is no problem as long as everything is done cleanly, i.e. vgchange -aln and gnbd_import -R, followed by a re-import with gnbd_import -i gnbd_vip.

That gives the unique gnbd in /dev/gnbd; I then run vgchange -aly, which brings it back into LVM and device-mapper.
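
For reference, the clean manual failover boils down to something like this (a sketch of my own steps; the umount/mount lines and the <vg>/<lv> placeholder are additions of mine, and gnbd_vip is the heartbeat-managed virtual IP):

  # on the client, before switching gnbd servers
  umount /mnt/gfs1                 # stop using the filesystem
  vgchange -aln                    # deactivate the logical volumes locally
  gnbd_import -R                   # remove all imported gnbd devices

  # after heartbeat has moved the virtual IP to the other server
  gnbd_import -i gnbd_vip          # re-import the exports via the virtual IP
  vgchange -aly                    # reactivate the logical volumes
  mount -t gfs /dev/<vg>/<lv> /mnt/gfs1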

However, a manual failover test hangs at "vgchange -aln", because the old "unique gnbd" device is still being accessed; even killing the process with killall or kill -11 doesn't let clvmd return to a clean state.


As for multipath, as Ben wrote:

> If the gnbds are exported uncached (the default), the client will fail back IO
> if it can no longer talk to the server after a specified timeout.  However,
> the userspace tools for dm-multipath are still too SCSI-centric to allow you
> to run on top of gnbd.  You can manually run dmsetup commands to build
> the appropriate multipath map, scan the map to check if a path has failed,
> remove the failed gnbd from the map (so the device can close and gnbd can
> start trying to reconnect), and then manually add the gnbd device back into
> the map when it has reconnected.  That's pretty much all the dm-multipath
> userspace tools do.  Someone could even write a pretty simple daemon that did
> this, and become the personal hero of many people on this list.
>
> The only problem is that if you manually execute the commands, or write the
> daemon in bash or some other scripting language, you can run into a memory
> deadlock.  If you are in a very low memory situation, and you need to complete
> gnbd IO requests to free up memory, the daemon can't allocate any memory in
> doing its job.
>
> If you have the gnbd exported in caching mode, each server will maintain its
> own cache.  So if you write a block to one server, and then the server crashes,
> when you read the block from the second server, if it was already cached
> before the read, you will get invalid data, so that won't work.  If you
> set the gnbd to uncached mode, the client will fail the IO back, and something
> (a multipath driver) needs to be there to reissue the request.
>
> -Ben
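
In concrete terms, I read Ben's manual procedure as roughly the following cycle (just a sketch; "gnbd_mpath", the <sectors> length and the 251:x major:minor pairs are placeholders for whatever your imported gnbds actually are):

  # build a two-path multipath map over two imported gnbds
  echo "0 <sectors> multipath 0 0 1 1 round-robin 0 2 1 251:1 1000 251:3 1000" \
      | dmsetup create gnbd_mpath

  # periodically scan the map; a path state of F means that path has failed
  dmsetup status gnbd_mpath

  # drop the failed path (251:1 here) by loading a one-path table, so the
  # gnbd device can close and start trying to reconnect
  dmsetup suspend gnbd_mpath
  echo "0 <sectors> multipath 0 0 1 1 round-robin 0 1 1 251:3 1000" | dmsetup load gnbd_mpath
  dmsetup resume gnbd_mpath

  # once the gnbd has reconnected, load the original two-path table again
  dmsetup suspend gnbd_mpath
  echo "0 <sectors> multipath 0 0 1 1 round-robin 0 2 1 251:1 1000 251:3 1000" \
      | dmsetup load gnbd_mpath
  dmsetup resume gnbd_mpath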

I have tried to get the dm-multipath setup working correctly, but with little success; I started an earlier thread on it and didn't get any response.

My test was based on 
https://www.redhat.com/archives/linux-cluster/2005-April/msg00062.html

I posted my issues here:

https://www.redhat.com/archives/linux-cluster/2005-August/msg00080.html

The initial command used was:

echo "0 1146990592 multipath 0 0 1 1 round-robin 0 2 1 251:1 1000 251:3 1000" \
    | dmsetup create dm-1

(251:1 and 251:3 are the major:minor ids of the gnbds, obtained from cat /proc/partitions; 1146990592 is, I believe, the size of the block device.)
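
For what it's worth, my reading of the fields in that table (going by the device-mapper multipath target format, so take it with a grain of salt):

  0            start sector of the mapping
  1146990592   length of the mapping, in 512-byte sectors
  multipath    target type
  0            number of feature arguments
  0            number of hardware-handler arguments
  1            number of path groups
  1            path group to try first
  round-robin  path selector for this group
  0            number of selector arguments
  2            number of paths in the group
  1            number of arguments per path
  251:1 1000   first path (major:minor) and its repeat count
  251:3 1000   second path (major:minor) and its repeat count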
This resulted in a block device which I still could not mount. I tried multipath -ll (after installing multipath and creating a rudimentary multipath.conf), and the result was:

dm-1
[size=546 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [active][first]
  \_ 0:0:0:0      251:0   [undef ][active]
\_ round-robin 0 [enabled]
  \_ 0:0:0:0      251:4   [undef ][active]

"notice that the size was 1/2 the actual size!?! (I have no idea what this
means "somebody enlighten me, please!)

When I attempted to mount:
 
[root@dell-1650-31 ~]# mount -t gfs /dev/mapper/dm-1 /mnt/gfs1

mount: /dev/dm-1: can't read superblock

This was tried previously against a multipathed device, for which dmsetup status gives the output below:

dm-1: 0 1146990592 multipath 1 0 0 2 1 A 0 1 0 251:1 A 0 E 0 1 0 251:3 A 0

dmsetup deps gives
dm-1: 2 dependencies    : (251, 3) (251, 1)

and dmsetup info gives
Name:              dm-1
State:             ACTIVE
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 1
Number of targets: 1

I have managed to get an nfs/gnbd failover-type scenario working, in which the gnbd servers export the shared storage via NFS and the clients mount via a heartbeat VIP. I then created a script, which I will rewrite into a mini daemon soon, that checks the status of the servers and, when the IP is taken over, stops apache, unmounts NFS via "umount -l -t nfs $mountpoint", runs "mount $mountpoint", and starts apache again. I have tested it and it works (it checks for stale handles and remounts cleanly); a sketch of the handler is below. Some of the same principles could go into a daemon for gnbd, but my stumbling block right now is LVM.
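
For the curious, the remount part of that handler amounts to something like this (a simplified sketch; the server status-checking part is omitted, and $MOUNTPOINT and the httpd init script path are placeholders for my actual values):

  #!/bin/bash
  # Sketch of the NFS remount handler run after heartbeat takes over the VIP.
  MOUNTPOINT=/mnt/nfs                  # placeholder for the real mount point

  /etc/init.d/httpd stop               # stop apache while the mount is swapped

  umount -l -t nfs "$MOUNTPOINT"       # lazily detach the stale NFS mount
  mount "$MOUNTPOINT"                  # remount; fstab points at the heartbeat VIP

  /etc/init.d/httpd start              # bring apache back up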


Can GNBD be used without LVM? Or does anyone know how to enable failover
correctly on dm-multipath?


Any help would be appreciated.

Brian Urrutia 


-----Original Message-----
From: brian urrutia [mailto:brianu@xxxxxxxxxxxxxxxxxxx] 
Sent: Monday, August 29, 2005 12:56 AM
To: linux-cluster@xxxxxxxxxx
Cc: mikore.li@xxxxxxxxx; brianu
Subject: Re:  Re: If I have 5 GNBD server?

> > If using LVM to make a volume of imported gnbds is not the answer for
> > redundancy, can anyone suggest a method that is? I'm not opposed to using
> > any other resource of cluster or GFS, but I would really like to implement
> > a redundant solution (gnbd, gulm, etc.).
> >
> Hi, Brianu, maybe LVM + md + gnbd should be one of the solutions for
> redundancy. For example, you have 2 gnbd servers, each one exports 1
> disk. Then, the steps should be:
> 1. create a RAID-1 /dev/md0 on the GFS client with the 2 imported gnbd
>    block devices.
> 2. use LVM to create /dev/vg0 on top of them.
> 3. mkfs_gfs on /dev/vg0.
> I haven't tried this configuration; theoretically, it should work.
>
> Thanks,
>
> Michael
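
If I follow that correctly, the recipe would be roughly as follows (an untested sketch: vg0/lv0, the gnbd names, the LV size, the journal count and the filesystem name are placeholders, "sansilvercash" is my cluster name, and I added an lvcreate step since mkfs needs a logical volume to work on):

  # on the GFS client: import one gnbd from each server
  gnbd_import -i gnbd_server1
  gnbd_import -i gnbd_server2

  # mirror the two imported devices into /dev/md0
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/gnbd/disk1 /dev/gnbd/disk2

  # layer LVM on top of the mirror
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 500G -n lv0 vg0          # size is a placeholder

  # make a GFS filesystem and mount it (lock_dlm, 2 journals as an example)
  gfs_mkfs -p lock_dlm -t sansilvercash:gfs1 -j 2 /dev/vg0/lv0
  mount -t gfs /dev/vg0/lv0 /mnt/gfs1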

I will look into trying an md & LVM combo. As for keepalived or rgmanager to fail over an IP, I haven't seen a clear example of how to use rgmanager, but I have tried heartbeat (Linux-HA) to fail over the IP, and the problem is that the gnbd clients still seem to lock onto the former server even though the IP has failed over (and they continually try to reconnect, as Fajar mentioned).


The shared storage I have is an HP MSA 100 SAN. It might be a config error on my part as far as rgmanager is concerned; I will have to post my cluster.conf tomorrow.

-Brian




