Hello everyone,
I am using the following /etc/fstab entries on a CentOS server.
gluster01.home:/videos /data2/plex/videos glusterfs _netdev 0 0
gluster01.home:/photos /data2/plex/photos glusterfs _netdev 0 0
I can mount the volumes with sudo mount -a without any problems, but when I reboot the server, nothing is mounted.
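In case it helps with debugging, this is how I check the boot-time state of the mounts (assuming systemd derives the mount unit name from the mount point, which should hold for simple paths like these):

```shell
# systemd names a mount unit after the mount point, with "/" turned into "-"
# (valid for simple paths without special characters)
mountpoint=/data2/plex/videos
unit="$(echo "${mountpoint#/}" | tr '/' '-').mount"
echo "$unit"
# After a reboot I can then look at:
#   systemctl status "$unit"
#   journalctl -b -u "$unit"
```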
I can see errors in /var/log/glusterfs/data2-plex-photos.log:
...
[2020-01-24 01:24:18.302191] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 3679
[2020-01-24 01:24:18.310017] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)
[2020-01-24 01:24:18.310046] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host gluster01.home
[2020-01-24 01:24:18.310187] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
...
I can run nslookup on gluster01 and gluster01.home without problems, so "DNS resolution failed" is confusing to me. What is happening here?
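As far as I understand, nslookup queries the DNS server directly, while glusterfs calls getaddrinfo(), which goes through the libc resolver and /etc/nsswitch.conf, so the two can disagree, especially early during boot. This is the kind of check I mean (shown with localhost as a stand-in for gluster01.home):

```shell
# getent resolves via the same libc/nsswitch path that getaddrinfo() uses;
# localhost is only a stand-in here for gluster01.home
getent hosts localhost
```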
Here is the status output of my volumes.
sudo gluster volume status
Status of volume: documents
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster01.home:/data/documents 49152 0 Y 5658
Brick gluster02.home:/data/documents 49152 0 Y 5340
Brick gluster03.home:/data/documents 49152 0 Y 5305
Self-heal Daemon on localhost N/A N/A Y 5679
Self-heal Daemon on gluster03.home N/A N/A Y 5326
Self-heal Daemon on gluster02.home N/A N/A Y 5361
Task Status of Volume documents
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: photos
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster01.home:/data/photos 49153 0 Y 5779
Brick gluster02.home:/data/photos 49153 0 Y 5401
Brick gluster03.home:/data/photos 49153 0 Y 5366
Self-heal Daemon on localhost N/A N/A Y 5679
Self-heal Daemon on gluster03.home N/A N/A Y 5326
Self-heal Daemon on gluster02.home N/A N/A Y 5361
Task Status of Volume photos
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: videos
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster01.home:/data/videos 49154 0 Y 5883
Brick gluster02.home:/data/videos 49154 0 Y 5452
Brick gluster03.home:/data/videos 49154 0 Y 5416
Self-heal Daemon on localhost N/A N/A Y 5679
Self-heal Daemon on gluster03.home N/A N/A Y 5326
Self-heal Daemon on gluster02.home N/A N/A Y 5361
Task Status of Volume videos
------------------------------------------------------------------------------
There are no active volume tasks
On the servers (Ubuntu), the following versions are installed.
glusterfs-client/bionic,now 7.2-ubuntu1~bionic1 armhf [installed,automatic]
glusterfs-common/bionic,now 7.2-ubuntu1~bionic1 armhf [installed,automatic]
glusterfs-server/bionic,now 7.2-ubuntu1~bionic1 armhf [installed]
On the client (CentOS), the following versions are installed.
sudo rpm -qa | grep gluster
glusterfs-client-xlators-7.2-1.el7.x86_64
glusterfs-cli-7.2-1.el7.x86_64
glusterfs-libs-7.2-1.el7.x86_64
glusterfs-7.2-1.el7.x86_64
glusterfs-api-7.2-1.el7.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-23.el7_7.3.x86_64
centos-release-gluster7-1.0-1.el7.centos.noarch
glusterfs-fuse-7.2-1.el7.x86_64
I tried to disable IPv6 on the client via sysctl with the following parameters.
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
That did not help.
The volumes are configured with the inet address family.
sudo gluster volume info videos
Volume Name: videos
Type: Replicate
Volume ID: 8fddde82-66b3-447f-8860-ed3768c51876
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster01.home:/data/videos
Brick2: gluster02.home:/data/videos
Brick3: gluster03.home:/data/videos
Options Reconfigured:
features.ctime: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
I tried turning off ctime but that did not work either.
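One thing I am wondering about: would deferring the mount until first access via a systemd automount unit be the correct fix, i.e. something like this in /etc/fstab (option names taken from systemd.mount(5); untested on my side)?

```
gluster01.home:/videos /data2/plex/videos glusterfs defaults,_netdev,noauto,x-systemd.automount 0 0
```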
Any ideas? How do I do this correctly?
Cheers
Sherry
________
Community Meeting Calendar:
APAC Schedule - Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968
NA/EMEA Schedule - Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users