Because this was a lab, I could quickly remove the Gluster setup, recreate it using the FQDNs, and it picked up the new names right away. Exactly as expected per this thread.
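For anyone wanting to do the same, the sequence amounted to roughly the following. This is a sketch only: the volume and brick names are taken from the output below, and anything else (options, replica handling) will differ per setup.

# Tear down the volume and peering created under the old names.
gluster volume stop mdsgv01
gluster volume delete mdsgv01
gluster peer detach mdskvm-p01        # detach using whatever name was originally probed

# Re-probe by FQDN so glusterd records the fully qualified name.
gluster peer probe mdskvm-p01.nix.mds.xyz

# Recreate the volume, referencing bricks by FQDN. Reusing the old brick
# directories may require clearing their gluster xattrs or appending 'force'.
gluster volume create mdsgv01 replica 2 \
    mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 \
    mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02
gluster volume start mdsgv01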
[root@mdskvm-p02 network-scripts]# gluster volume status
Status of volume: mdsgv01
Gluster process                                         TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------------------------
Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01    49152     0          Y       4375
Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02    49152     0          Y       4376
NFS Server on localhost                                 N/A       N/A        N       N/A
Self-heal Daemon on localhost                           N/A       N/A        Y       4402
NFS Server on mdskvm-p01.nix.mds.xyz                    N/A       N/A        N       N/A
Self-heal Daemon on mdskvm-p01.nix.mds.xyz              N/A       N/A        Y       4384

Task Status of Volume mdsgv01
-----------------------------------------------------------------------------------------
There are no active volume tasks

[root@mdskvm-p02 network-scripts]#
Would be handy to have a rename function in future releases.
Cheers,
TK
On 9/25/2019 7:47 AM, TomK wrote:
Thanks Thorgeir. Since then I upgraded to Gluster 6, though the issue remained the same. Is there anything in the way of new options to change what's displayed?
The reason for the ask is that this gets inherited by oVirt when doing discovery of existing gluster volumes. So now I have an IP for one host, a short name for another, and FQDNs for the rest.
[root@mdskvm-p02 glusterfs]# gluster volume status
Status of volume: mdsgv01
Gluster process                                         TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------------------------
Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02    49152     0          Y       22368
Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01    49152     0          Y       24487
NFS Server on localhost                                 N/A       N/A        N       N/A
Self-heal Daemon on localhost                           N/A       N/A        Y       22406
NFS Server on 192.168.0.60                              N/A       N/A        N       N/A
Self-heal Daemon on 192.168.0.60                        N/A       N/A        Y       25867

Task Status of Volume mdsgv01
-----------------------------------------------------------------------------------------
There are no active volume tasks

[root@mdskvm-p02 glusterfs]#
Cheers,
TK
On 9/24/2019 2:58 AM, Thorgeir Marthinussen wrote:
In an effort to answer the actual question: in my experience the Gluster internals capture the address the first time you probe another node.
So if you're logged into the first node and probe the second using an
IP-address, that is what will "forever" be displayed by gluster
status, and if you use a hostname that's what will be shown.
Brick paths are captured when the brick is registered, so using a path
with IP will always show the IP as part of the path, and hostname will
show that, etc.
I haven't verified, but I believe the second node will attempt a reverse lookup of the first node (when probing first->second) and record that name (if any) as the "primary" name of the first node.
Also good to know: nodes can have multiple names. The primary name is the one "configured" during setup, and secondary names can be added by probing them afterwards.
All IP/hostname/FQDN parts of the brick path have to be known to the
cluster, by probing that IP/hostname/FQDN.
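A quick way to see which names glusterd has actually recorded for each peer (a sketch; the on-disk layout can vary slightly between versions):

gluster peer status               # shows Hostname:, Uuid:, State:, plus 'Other names:' when a peer has several
cat /var/lib/glusterd/peers/*     # raw peer records: uuid=, state=, hostname1=, ...

Whichever name landed in those records at probe time should be the one volume status keeps printing.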
Best regards
*THORGEIR MARTHINUSSEN*
-----Original Message-----
*From*: TomK <tomkcpr@xxxxxxxxxxx>
*Reply-To*: tomkcpr@xxxxxxxxxxx
*To*: gluster-users@xxxxxxxxxxx
*Subject*: Re: Where does Gluster capture the hostnames from?
*Date*: Mon, 23 Sep 2019 21:31:19 -0400
Hey All,
My hosts below:
[root@mdskvm-p01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
[root@mdskvm-p01 ~]# hostname
mdskvm-p01.nix.mds.xyz
[root@mdskvm-p01 ~]# hostname -f
mdskvm-p01.nix.mds.xyz
[root@mdskvm-p01 ~]#
[root@mdskvm-p02 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
[root@mdskvm-p02 ~]# hostname
mdskvm-p02.nix.mds.xyz
[root@mdskvm-p02 ~]# hostname -f
mdskvm-p02.nix.mds.xyz
[root@mdskvm-p02 ~]#
My take on the /etc/hosts file discussion:
1) If hostname / hostname -f return valid values, the software should capture them.
2) There is no benefit or need to use /etc/hosts in a small setup. Larger setups resolving hosts against an enterprise DNS behind many switches can be a problem: lookups land in the connection-tracking tables, which can fill up, and network response times can vary. Managing our /etc/hosts files with Ansible helped reduce some of these problems ("semi-static" might describe the approach best). The entries are refreshed, when changes are needed, via a DNS lookup once a day (see the sketch after this list). Invariably, though, managing /etc/hosts is time-consuming and messy.
3) Running a good DNS cluster, like the two-node IPA cluster I run for a small setup, prevents such outages, particularly when you also place a VIP across the nodes and locate them on different hardware and in different locations.
4) Point 2) is no reason why an application cannot obtain or resolve the proper DNS entries from point 1).
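The daily refresh from point 2) can be a small cron job along these lines. A minimal sketch, assuming a marker-delimited block in /etc/hosts and using the two hosts from this thread; adjust the host list and error handling to taste:

#!/bin/bash
# Rebuild a managed block in /etc/hosts from DNS (run daily from cron).
HOSTS="mdskvm-p01.nix.mds.xyz mdskvm-p02.nix.mds.xyz"
TMP=$(mktemp)

# Keep everything outside the managed block untouched.
sed '/^# BEGIN managed-hosts/,/^# END managed-hosts/d' /etc/hosts > "$TMP"

echo "# BEGIN managed-hosts" >> "$TMP"
for h in $HOSTS; do
    ip=$(dig +short "$h" A | grep -m1 -E '^[0-9.]+$')
    # Only write an entry when DNS answered, so a failed lookup
    # never replaces a good address with an empty one.
    [ -n "$ip" ] && echo "$ip $h ${h%%.*}" >> "$TMP"
done
echo "# END managed-hosts" >> "$TMP"

cat "$TMP" > /etc/hosts && rm -f "$TMP"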
Having said that, I decided to check whether there's any benefit to having entries in /etc/hosts:
[root@mdskvm-p01 ~]# time $(dig mdskvm-p01.nix.mds.xyz >/dev/null)
real 0m0.092s
user 0m0.087s
sys 0m0.005s
[root@mdskvm-p01 ~]# time $(dig mdskvm-p02.nix.mds.xyz >/dev/null)
real 0m0.092s
user 0m0.084s
sys 0m0.008s
[root@mdskvm-p01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
192.168.0.60 mdskvm-p01.nix.mds.xyz mdskvm-p01
192.168.0.39 mdskvm-p02.nix.mds.xyz mdskvm-p02
[root@mdskvm-p01 ~]# vi /etc/hosts
[root@mdskvm-p01 ~]# time $(dig mdskvm-p01.nix.mds.xyz >/dev/null)
real 0m0.093s
user 0m0.082s
sys 0m0.010s
[root@mdskvm-p01 ~]# time $(dig mdskvm-p02.nix.mds.xyz >/dev/null)
real 0m0.093s
user 0m0.085s
sys 0m0.007s
[root@mdskvm-p01 ~]# time $(dig mdskvm-p01.nix.mds.xyz >/dev/null)
real 0m0.094s
user 0m0.084s
sys 0m0.010s
[root@mdskvm-p01 ~]# time $(dig mdskvm-p02.nix.mds.xyz >/dev/null)
real 0m0.092s
user 0m0.081s
sys 0m0.011s
[root@mdskvm-p01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
[root@mdskvm-p01 ~]#
So having entries present in /etc/hosts makes little difference in a small setup where resolution is governed by /etc/nsswitch.conf.
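One caveat on the timing test above: dig talks to the DNS server directly and never consults /etc/hosts or nsswitch, so by construction it can't show any /etc/hosts effect. To time the lookup path applications (glusterd included) actually take through glibc, getent is the more representative probe:

grep '^hosts' /etc/nsswitch.conf            # the resolution order glibc follows
time getent hosts mdskvm-p01.nix.mds.xyz
time getent hosts mdskvm-p02.nix.mds.xyz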
Either way, having entries in /etc/hosts doesn't affect how gluster displays the entries when calling gluster volume status.
Cheers,
TK
On 9/23/2019 11:36 AM, Joe Julian wrote:
Perhaps I misread the intent; I apologize if I did. I read "static entries" as "IP addresses", which I've seen suggested far too often (from my perspective). /etc/hosts is a valid solution that can still adapt if the network needs to evolve.
On 9/23/19 8:29 AM, ROUVRAIS Cedric wrote:
Hello,
I guess everyone sort of has his own perspective on this topic.
I don't want to take this thread into an off-topic conversation (discussing the merits of a local hosts file), but I do dissent, and therefore had to respond, on the shortcut that using a local /etc/hosts file creates a fixed network configuration that can never adapt as business needs change. I'm running a k8s infrastructure and actually have local conf files, FWIW.
Regards,
Cédric
-----Original Message-----
From: gluster-users-bounces@xxxxxxxxxxx On Behalf Of Joe Julian
Sent: Monday, 23 September 2019 17:06
To: gluster-users@xxxxxxxxxxx
Subject: Re: Where does Gluster capture the hostnames from?
I disagree about it being "best practice" to lock yourself into a fixed network configuration that can never adapt as business needs change.
There are other resilient ways of ensuring your hostnames resolve
consistently (so that your cluster doesn't run loose ;-)).
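For what it's worth, one such approach is simply giving the resolver more than one path to an answer. A sketch only, with placeholder addresses standing in for the IPA replicas mentioned elsewhere in this thread:

# /etc/resolv.conf
search nix.mds.xyz
nameserver 192.168.0.224     # first IPA replica (placeholder address)
nameserver 192.168.0.45      # second IPA replica (placeholder address)
options timeout:1 attempts:2 rotate

A local caching daemon (nscd, sssd, or dnsmasq) in front of that rides out short DNS outages as well.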
On 9/23/19 7:38 AM, Strahil wrote:
Also,
It's safer to have static entries for your cluster. After all, if DNS fails for some reason, you don't want to lose your cluster. A kind of "Best Practice".
Best Regards,
Strahil Nikolov

On Sep 23, 2019 15:01, TomK <tomkcpr@xxxxxxxxxxx> wrote:
Do I *really* need specific /etc/hosts entries when I have IPA?
[root@mdskvm-p01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
[root@mdskvm-p01 ~]#
I really shouldn't need to. (Ref below; everything resolves fine.)
Cheers,
TK
On 9/23/2019 1:32 AM, Strahil wrote:
Check your /etc/hosts for an entry like:
192.168.0.60 mdskvm-p01.nix.mds.xyz mdskvm-p01
Best Regards,
Strahil Nikolov

On Sep 23, 2019 06:58, TomK <tomkcpr@xxxxxxxxxxx> wrote:
Hey All,
Take the two hosts below as an example. One host shows NFS Server on 192.168.0.60 (FQDN is mdskvm-p01.nix.mds.xyz). The other shows mdskvm-p02 (FQDN is mdskvm-p02.nix.mds.xyz).
Why is there no consistency or correct hostname resolution? Where does gluster get the hostnames from?
[root@mdskvm-p02 glusterfs]# gluster volume status
Status of volume: mdsgv01
Gluster process                                         TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------------------------
Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02    49153     0          Y       17503
Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01    49153     0          Y       15044
NFS Server on localhost                                 N/A       N/A        N       N/A
Self-heal Daemon on localhost                           N/A       N/A        Y       17531
NFS Server on 192.168.0.60                              N/A       N/A        N       N/A
Self-heal Daemon on 192.168.0.60                        N/A       N/A        Y       15073

Task Status of Volume mdsgv01
-----------------------------------------------------------------------------------------
There are no active volume tasks

[root@mdskvm-p02 glusterfs]#
[root@mdskvm-p01 ~]# gluster volume status
Status of volume: mdsgv01
Gluster process                                         TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------------------------
Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02    49153     0          Y       17503
Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01    49153     0          Y       15044
NFS Server on localhost                                 N/A       N/A        N       N/A
Self-heal Daemon on localhost                           N/A       N/A        Y       15073
NFS Server on mdskvm-p02                                N/A       N/A        N       N/A
Self-heal Daemon on mdskvm-p02                          N/A       N/A        Y       17531

Task Status of Volume mdsgv01
-----------------------------------------------------------------------------------------
There are no active volume tasks

[root@mdskvm-p01 ~]#
But when verifying, everything seems fine:
(1):
[root@mdskvm-p01 glusterfs]# dig -x 192.168.0.39
;; QUESTION SECTION:
;39.0.168.192.in-addr.arpa.     IN      PTR

;; ANSWER SECTION:
39.0.168.192.in-addr.arpa. 1200 IN      PTR     mdskvm-p02.nix.mds.xyz.

[root@mdskvm-p01 glusterfs]# hostname -f
mdskvm-p01.nix.mds.xyz
[root@mdskvm-p01 glusterfs]# hostname -s
mdskvm-p01
[root@mdskvm-p01 glusterfs]# hostname
mdskvm-p01.nix.mds.xyz
[root@mdskvm-p01 glusterfs]#
(2):
[root@mdskvm-p02 glusterfs]# dig -x 192.168.0.60
;; QUESTION SECTION:
;60.0.168.192.in-addr.arpa.     IN      PTR

;; ANSWER SECTION:
60.0.168.192.in-addr.arpa. 1200 IN      PTR     mdskvm-p01.nix.mds.xyz.

[root@mdskvm-p02 glusterfs]# hostname -s
mdskvm-p02
[root@mdskvm-p02 glusterfs]# hostname -f
mdskvm-p02.nix.mds.xyz
[root@mdskvm-p02 glusterfs]# hostname
mdskvm-p02.nix.mds.xyz
[root@mdskvm-p02 glusterfs]#
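The other thing worth checking is what glusterd recorded at probe time, since that, not a live DNS lookup, is what volume status prints. The shape below is illustrative, not captured from these hosts:

[root@mdskvm-p01 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.0.39                  <- whatever name was used when probing
Uuid: <peer-uuid>
State: Peer in Cluster (Connected)
Other names:
mdskvm-p02.nix.mds.xyz                  <- listed only if the peer is known by extra names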
Gluster version used is:
[root@mdskvm-p01 glusterfs]# rpm -aq|grep -Ei gluster
glusterfs-server-3.12.15-1.el7.x86_64
glusterfs-client-xlators-3.12.15-1.el7.x86_64
glusterfs-rdma-3.12.15-1.el7.x86_64
glusterfs-3.12.15-1.el7.x86_64
glusterfs-events-3.12.15-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.12.x86_64
glusterfs-libs-3.12.15-1.el7.x86_64
glusterfs-fuse-3.12.15-1.el7.x86_64
glusterfs-geo-replication-3.12.15-1.el7.x86_64
python2-gluster-3.12.15-1.el7.x86_64
--
Thx,
TK.
________
Community Meeting Calendar:
APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314
NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users