Comments inline.
On 03/12/15 01:08, Surya K Ghatty wrote:
Hi Soumya, Kaleb, all:
Thanks for the response!
Quick follow-up to this question - We tried running ganesha and
gluster on two separate machines and the configuration seems to
be working without issues.
A follow-up question I have is this: what changes do I need to
make to put Ganesha in active-active HA mode, where the backend
gluster and ganesha will be on different nodes? I am using the
instructions here for putting Ganesha in HA mode: http://www.slideshare.net/SoumyaKoduri/high-49117846.
This presentation refers to commands like gluster
cluster.enable-shared-storage to enable HA.
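For reference, that shared-storage option is a volume-set option run
from one of the gluster nodes in the trusted pool, roughly:

    # creates the gluster_shared_storage volume and mounts it on all pool nodes
    gluster volume set all cluster.enable-shared-storage enable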
1. Here is the config I am hoping to achieve:
glusterA and glusterB on individual bare metals - both in
Trusted pool, with volume gvol0 up and running.
Ganesha 1 and 2 on machines ganesha1 and ganesha2, with my
gluster storage on a third machine gluster1 (and a peer on
another machine, gluster2). Concretely:
Ganesha node1: on a VM, ganeshaA.
Ganesha node2: on another VM, GaneshaB.
I would like to know what it takes to put ganeshaA and GaneshaB
in active-active HA mode. Is it technically possible?
It is technically possible, but difficult to do: you must manually
follow the steps which are performed internally by "gluster nfs-ganesha
enable".
(Kaleb will have a clearer idea about it.)
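For comparison, when the ganesha nodes are inside the trusted pool, the
supported path is driven entirely from the gluster CLI, roughly (using the
volume name from this thread):

    # run on one of the trusted-pool nodes, after /etc/ganesha/ganesha-ha.conf
    # has been filled in on the participating nodes
    gluster nfs-ganesha enable                    # sets up the pacemaker/corosync HA cluster
    gluster volume set gvol0 ganesha.enable on    # exports gvol0 via nfs-ganesha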
a. How do commands like cluster.enable-shared-storage work in
this case?
You should manually configure shared storage (an export which both
GaneshaA and GaneshaB can access).
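One way to do that when the ganesha VMs are outside the pool is to carve out
a small replicated gluster volume and mount it on both ganesha nodes; the
volume name, brick paths and mount point below are only examples:

    # on the gluster nodes: a small replica-2 volume to act as shared storage
    gluster volume create ganesha_shared replica 2 \
        glusterA:/data/brick0/shared glusterB:/data/brick0/shared
    gluster volume start ganesha_shared

    # on GaneshaA and GaneshaB: mount it where the HA scripts keep shared state
    mkdir -p /var/run/gluster/shared_storage
    mount -t glusterfs glusterA:/ganesha_shared /var/run/gluster/shared_storage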
b. Where does this command need to be run? On the ganesha node,
or on the gluster nodes?
As I mentioned before, you cannot do this with the help of the gluster CLI
if the ganesha cluster is outside the trusted pool.
I may not have understood your requirement correctly; if it falls under
either of the following, I have answered to the best of my knowledge.
1.) "ganesha should run on nodes in which gluster volume(/bricks) is
created"
i. created trust pool using glusterA, glusterB, GaneshaA, GaneshaB
ii. create volume using glusterA and glusterB
iii. add GaneshaA and GaneshaB on server list in ganesha-ha.conf
file
iv then follow remaining the steps for exporting volume via
nfs-ganesha
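For step iii, a minimal ganesha-ha.conf sketch might look like the following;
the HA name and the VIPs are placeholders you would pick for your own network:

    # /etc/ganesha/ganesha-ha.conf on GaneshaA and GaneshaB
    HA_NAME="ganesha-ha-demo"
    HA_VOL_SERVER="glusterA"
    HA_CLUSTER_NODES="GaneshaA,GaneshaB"
    VIP_GaneshaA="10.0.0.101"
    VIP_GaneshaB="10.0.0.102"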
2.) "ganesha cluster(vms) should not be part of gluster trusted
pool"
(hacky way)
i.) created trusted pool using glusterA and glusterB.
ii.) create and start volume gvol0 using it
iii.) created trusted pool using GaneshaA and GaneshaB
iv.) before enabling nfs-ganesha option, add EXPORT{} for gvol0 in
/etc/ganesha/ganesha.conf
in both GaneshaA and GaneshaB
Note : The value for hostname in EXPORT{ FSAL {} } should be
glusterA or glusterB.
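For step iv, a sketch of such an EXPORT{} block; the export id, paths and
access options here are just example values:

    EXPORT {
        Export_Id = 2;
        Path = "/gvol0";
        Pseudo = "/gvol0";
        Access_Type = RW;
        Squash = No_root_squash;
        Disable_ACL = TRUE;
        SecType = "sys";
        FSAL {
            Name = GLUSTER;
            hostname = "glusterA";   # a node of the gluster trusted pool, per the note above
            volume = "gvol0";
        }
    }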
2. Also, is it possible to have multiple ganesha servers point
to the same gluster volume in the back end? Say, in
configuration #1, I have another ganesha server, GaneshaC, that is
not clustered with ganeshaA or ganeshaB. Can it export the
volume gvol0 that ganeshaA and ganeshaB are also exporting?
Yes, it is possible, but you may need to start GaneshaC manually
(running two different ganesha clusters in a trusted pool via the CLI is
not supported).
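Starting GaneshaC by hand would look much like the command already used
further down in this thread, for example:

    # on GaneshaC, with its own /etc/ganesha/ganesha.conf containing the gvol0 EXPORT{}
    ganesha.nfsd -f /etc/ganesha/ganesha.conf -L /var/log/ganesha.log -N NIV_EVENT
    showmount -e localhost    # verify that the export shows up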
thank you!
Regards,
Jiffin
Surya.
Regards,
Surya Ghatty
"This too shall pass"
________________________________________________________________________________________________________
Surya Ghatty | Software Engineer | IBM Cloud Infrastructure
Services Development | tel: (507) 316-0559 | ghatty@xxxxxxxxxx
From: Soumya Koduri <skoduri@xxxxxxxxxx>
To: Surya K Ghatty/Rochester/IBM@IBMUS, gluster-users@xxxxxxxxxxx
Date: 11/18/2015 05:08 AM
Subject: Re: Configuring Ganesha and gluster on separate nodes?
On 11/17/2015 10:21 PM, Surya K Ghatty wrote:
> Hi:
>
> I am trying to understand if it is technically feasible to have gluster
> nodes on one machine, and export a volume from one of these nodes using
> an nfs-ganesha server installed on a totally different machine? I tried
> the below and showmount -e does not show my volume exported. Any
> suggestions will be appreciated.
>
> 1. Here is my configuration:
>
> Gluster nodes: glusterA and glusterB on individual bare metals - both in
> the trusted pool, with volume gvol0 up and running.
> Ganesha node: on bare metal ganeshaA.
>
> 2. My ganesha.conf looks like this, with the IP address of glusterA in the FSAL.
>
> FSAL {
> Name = GLUSTER;
>
> # IP of one of the nodes in the trusted pool
> *hostname = "WW.ZZ.XX.YY" --> IP address of GlusterA.*
>
> # Volume name. Eg: "test_volume"
> volume = "gvol0";
> }
>
> 3. I disabled nfs on gvol0. As you can see, *nfs.disable is set to on.*
>
> [root@glusterA ~]# gluster vol info
>
> Volume Name: gvol0
> Type: Distribute
> Volume ID: 16015bcc-1d17-4ef1-bb8b-01b7fdf6efa0
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: glusterA:/data/brick0/gvol0
> Options Reconfigured:
> *nfs.disable: on*
> nfs.export-volumes: off
> features.quota-deem-statfs: on
> features.inode-quota: on
> features.quota: on
> performance.readdir-ahead: on
>
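(For reference, those options would have been set per volume with the usual
volume-set commands, something like:

    gluster volume set gvol0 nfs.disable on
    gluster volume set gvol0 nfs.export-volumes off

so that gluster's in-built NFS server does not get in the way of nfs-ganesha.)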
> 4. I then ran ganesha.nfsd -f /etc/ganesha/ganesha.conf -L /var/log/ganesha.log -N NIV_FULL_DEBUG
> Ganesha server was put in grace, no errors.
>
> 17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA : nfs-ganesha-26426[reaper] fridgethr_freeze :RW LOCK :F_DBG :Released mutex 0x7f21a92818d0 (&fr->mtx) at /builddir/build/BUILD/nfs-ganesha-2.2.0/src/support/fridgethr.c:484
> 17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA : nfs-ganesha-26426[reaper] nfs_in_grace :RW LOCK :F_DBG :Acquired mutex 0x7f21ad1f18e0 (&grace.g_mutex) at /builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:129
> *17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA : nfs-ganesha-26426[reaper] nfs_in_grace :STATE :DEBUG :NFS Server IN GRACE*
> 17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA : nfs-ganesha-26426[reaper] nfs_in_grace :RW LOCK :F_DBG :Released mutex 0x7f21ad1f18e0 (&grace.g_mutex) at /builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:141
>
You will still need the gluster-client bits on the machine where the
nfs-ganesha server is installed to export a gluster volume. Check if you
have libgfapi.so installed on that machine.
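A quick way to check, assuming an RPM-based distribution (the package name
may differ elsewhere):

    # confirm the gfapi client library is present on the ganesha machine
    ldconfig -p | grep libgfapi
    rpm -q glusterfs-api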
Also, the ganesha server does log warnings if it is unable to process the
EXPORT/FSAL block. Please recheck the logs if you have any.
Thanks,
Soumya
> 5. [root@ganeshaA glusterfs]# showmount -e
> Export list for ganeshaA:
> <empty>
>
> Any suggestions on what I am missing?
>
> Regards,
>
> Surya Ghatty
>
> "This too shall pass"
>
> ________________________________________________________________________________________________________
> Surya Ghatty | Software Engineer | IBM Cloud Infrastructure Services
> Development | tel: (507) 316-0559 | ghatty@xxxxxxxxxx
>
>
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users