Re: Not possible to mount a gluster volume via /etc/fstab?

On January 24, 2020 4:08:12 PM GMT+02:00, Sherry Reese <s.reese4u@xxxxxxxxx> wrote:
>Hi Strahil.
>
>Yes, I know, but I already tried that and failed at implementing it.
>I'm now even suspecting that Gluster has some kind of bug.
>
>Could you show me how to do it correctly? Which services go into
>After=?
>Do you have example unit files for mounting gluster volumes?
>
>Cheers
>Sherry
>
>On Fri, 24 Jan 2020 at 14:03, Strahil Nikolov <hunter86_bg@xxxxxxxxx>
>wrote:
>
>> On January 24, 2020 10:20:50 AM GMT+02:00, Sherry Reese <
>> s.reese4u@xxxxxxxxx> wrote:
>> >Hello Hubert,
>> >
>> >That would be an easy fix. I already tried that.
>> >I additionally tried a service like the following one. It does not
>> >work either.
>> >
>> >I'm lost here. Even a workaround would be a relief.
>> >
>> >[Unit]
>> >Description=Gluster Mounting
>> >After=network.target
>> >After=systemd-user-sessions.service
>> >After=network-online.target
>> >
>> >[Service]
>> >Type=simple
>> >RemainAfterExit=true
>> >ExecStart=/bin/mount -a -t glusterfs
>> >TimeoutSec=30
>> >Restart=on-failure
>> >RestartSec=30
>> >StartLimitInterval=350
>> >StartLimitBurst=10
>> >
>> >[Install]
>> >WantedBy=multi-user.target
>> >
>> >Cheers
>> >Sherry
>> >
>> >On Fri, 24 Jan 2020 at 06:50, Hu Bert <revirii@xxxxxxxxxxxxxx>
>wrote:
>> >
>> >> Hi Sherry,
>> >>
>> >> maybe at the time when the mount from /etc/fstab should take
>> >> place, name resolution is not yet working? In your case I'd try to
>> >> place proper entries in /etc/hosts and test it with a reboot.
>> >>
>> >>
>> >> regards
>> >> Hubert
>> >>
>> >> On Fri, 24 Jan 2020 at 02:37, Sherry Reese <
>> >> s.reese4u@xxxxxxxxx> wrote:
>> >> >
>> >> > Hello everyone,
>> >> >
>> >> > I am using the following entries on a CentOS server.
>> >> >
>> >> > gluster01.home:/videos /data2/plex/videos glusterfs _netdev 0 0
>> >> > gluster01.home:/photos /data2/plex/photos glusterfs _netdev 0 0
>> >> >
>> >> > I am able to use sudo mount -a to mount the volumes without any
>> >> problems. When I reboot my server, nothing is mounted.
>> >> >
>> >> > I can see errors in /var/log/glusterfs/data2-plex-photos.log:
>> >> >
>> >> > ...
>> >> > [2020-01-24 01:24:18.302191] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 3679
>> >> > [2020-01-24 01:24:18.310017] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)
>> >> > [2020-01-24 01:24:18.310046] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host gluster01.home
>> >> > [2020-01-24 01:24:18.310187] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
>> >> > ...
>> >> >
>> >> > I am able to do nslookup on gluster01 and gluster01.home without
>> >> > problems, so "DNS resolution failed" is confusing to me. What
>> >> > happens here?
>> >> >
>> >> > Output of my volumes.
>> >> >
>> >> > sudo gluster volume status
>> >> > Status of volume: documents
>> >> > Gluster process                             TCP Port  RDMA Port  Online  Pid
>> >> > ------------------------------------------------------------------------------
>> >> > Brick gluster01.home:/data/documents        49152     0          Y       5658
>> >> > Brick gluster02.home:/data/documents        49152     0          Y       5340
>> >> > Brick gluster03.home:/data/documents        49152     0          Y       5305
>> >> > Self-heal Daemon on localhost               N/A       N/A        Y       5679
>> >> > Self-heal Daemon on gluster03.home          N/A       N/A        Y       5326
>> >> > Self-heal Daemon on gluster02.home          N/A       N/A        Y       5361
>> >> >
>> >> > Task Status of Volume documents
>> >> > ------------------------------------------------------------------------------
>> >> > There are no active volume tasks
>> >> >
>> >> > Status of volume: photos
>> >> > Gluster process                             TCP Port  RDMA Port  Online  Pid
>> >> > ------------------------------------------------------------------------------
>> >> > Brick gluster01.home:/data/photos           49153     0          Y       5779
>> >> > Brick gluster02.home:/data/photos           49153     0          Y       5401
>> >> > Brick gluster03.home:/data/photos           49153     0          Y       5366
>> >> > Self-heal Daemon on localhost               N/A       N/A        Y       5679
>> >> > Self-heal Daemon on gluster03.home          N/A       N/A        Y       5326
>> >> > Self-heal Daemon on gluster02.home          N/A       N/A        Y       5361
>> >> >
>> >> > Task Status of Volume photos
>> >> > ------------------------------------------------------------------------------
>> >> > There are no active volume tasks
>> >> >
>> >> > Status of volume: videos
>> >> > Gluster process                             TCP Port  RDMA Port  Online  Pid
>> >> > ------------------------------------------------------------------------------
>> >> > Brick gluster01.home:/data/videos           49154     0          Y       5883
>> >> > Brick gluster02.home:/data/videos           49154     0          Y       5452
>> >> > Brick gluster03.home:/data/videos           49154     0          Y       5416
>> >> > Self-heal Daemon on localhost               N/A       N/A        Y       5679
>> >> > Self-heal Daemon on gluster03.home          N/A       N/A        Y       5326
>> >> > Self-heal Daemon on gluster02.home          N/A       N/A        Y       5361
>> >> >
>> >> > Task Status of Volume videos
>> >> > ------------------------------------------------------------------------------
>> >> > There are no active volume tasks
>> >> >
>> >> > On the server (Ubuntu), the following versions are installed.
>> >> >
>> >> > glusterfs-client/bionic,now 7.2-ubuntu1~bionic1 armhf
>> >> [installed,automatic]
>> >> > glusterfs-common/bionic,now 7.2-ubuntu1~bionic1 armhf
>> >> [installed,automatic]
>> >> > glusterfs-server/bionic,now 7.2-ubuntu1~bionic1 armhf
>[installed]
>> >> >
>> >> > On the client (CentOS), the following versions are installed.
>> >> >
>> >> > sudo rpm -qa | grep gluster
>> >> > glusterfs-client-xlators-7.2-1.el7.x86_64
>> >> > glusterfs-cli-7.2-1.el7.x86_64
>> >> > glusterfs-libs-7.2-1.el7.x86_64
>> >> > glusterfs-7.2-1.el7.x86_64
>> >> > glusterfs-api-7.2-1.el7.x86_64
>> >> > libvirt-daemon-driver-storage-gluster-4.5.0-23.el7_7.3.x86_64
>> >> > centos-release-gluster7-1.0-1.el7.centos.noarch
>> >> > glusterfs-fuse-7.2-1.el7.x86_64
>> >> >
>> >> > I tried to disable IPv6 on the client via sysctl with the
>> >> > following parameters.
>> >> >
>> >> > net.ipv6.conf.all.disable_ipv6 = 1
>> >> > net.ipv6.conf.default.disable_ipv6 = 1
>> >> >
>> >> > That did not help.
>> >> >
>> >> > Volumes are configured with inet.
>> >> >
>> >> > sudo gluster volume info videos
>> >> >
>> >> > Volume Name: videos
>> >> > Type: Replicate
>> >> > Volume ID: 8fddde82-66b3-447f-8860-ed3768c51876
>> >> > Status: Started
>> >> > Snapshot Count: 0
>> >> > Number of Bricks: 1 x 3 = 3
>> >> > Transport-type: tcp
>> >> > Bricks:
>> >> > Brick1: gluster01.home:/data/videos
>> >> > Brick2: gluster02.home:/data/videos
>> >> > Brick3: gluster03.home:/data/videos
>> >> > Options Reconfigured:
>> >> > features.ctime: on
>> >> > transport.address-family: inet
>> >> > nfs.disable: on
>> >> > performance.client-io-threads: off
>> >> >
>> >> > I tried turning off ctime but that did not work either.
>> >> >
>> >> > Any ideas? How do I do this correctly?
>> >> >
>> >> > Cheers
>> >> > Sherry
>> >>
>>
>> A systemd service is a bad approach for defining a mount.
>> Use a systemd '.mount' unit instead.
>> There you can define 'Before=' & 'After=' to control when it should be started.
>>
>> Best Regards,
>> Strahil Nikolov
>>

Hi,

Here are my customizations:

Glusterd service:

[root@ovirt1 ~]# systemctl cat glusterd
# /etc/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
Requires=rpcbind.service gluster_bricks-engine.mount ...
After=network.target rpcbind.service gluster_bricks-engine.mount ...
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
Environment="LOG_LEVEL=INFO"
EnvironmentFile=-/etc/sysconfig/glusterd
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS
KillMode=process
SuccessExitStatus=15

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/glusterd.service.d/99-cpu.conf
[Service]
CPUAccounting=yes
Slice=glusterfs.slice

Note: glusterd starts only after all necessary brick mounts, as I use a lot of volumes in oVirt (multiple 1 GbE NICs, and in order to utilize them all, bonding/teaming needs different ports for each source:destination pair).

One of the bricks:

[root@ovirt1 ~]# systemctl cat gluster_bricks-data.mount
# /etc/systemd/system/gluster_bricks-data.mount
[Unit]
Description=Mount glusterfs brick - DATA
Requires = vdo.service
After = vdo.service
Before = glusterd.service
Conflicts = umount.target

[Mount]
What=/dev/mapper/gluster_vg_md0-gluster_lv_data
Where=/gluster_bricks/data
Type=xfs
Options=inode64,noatime,nodiratime

[Install]
WantedBy=glusterd.service

This one starts only when the VDO service has started. If you don't use VDO, then it's even easier for you.
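
To check that the ordering came out the way you intended, you can ask systemd directly (the unit name below is just my example brick mount - adapt it to yours):

# what this mount waits for, and what waits for it
systemctl list-dependencies --after gluster_bricks-data.mount
systemctl list-dependencies --before gluster_bricks-data.mount

# catch mistakes in a custom unit file before rebooting
systemd-analyze verify /etc/systemd/system/gluster_bricks-data.mount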

In my case I have:
- MDADM -> software RAID 0
- VDO -> compression and deduplication (or just deduplication)
- Thin LVM -> for Gluster snapshots
- Bricks
- Gluster
- CTDB (already removed due to changes in the lab - long story)
- NFS Ganesha (already removed due to changes in the lab - long story)


You just need to tell systemd what comes after what, and it will make the whole chain work :)
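
For your client, a minimal sketch of such a mount unit (assuming the /data2/plex/videos mount point and the gluster01.home volume from your fstab - adjust names and options to your setup) could look like this:

# /etc/systemd/system/data2-plex-videos.mount
# The file name must match the mount point
# (/data2/plex/videos -> data2-plex-videos.mount).
[Unit]
Description=Mount gluster volume videos
Wants=network-online.target
After=network-online.target

[Mount]
What=gluster01.home:/videos
Where=/data2/plex/videos
Type=glusterfs
Options=defaults,_netdev,backup-volfile-servers=gluster02.home:gluster03.home

[Install]
WantedBy=multi-user.target

Then run 'systemctl daemon-reload' and 'systemctl enable --now data2-plex-videos.mount', and remove the corresponding line from /etc/fstab so the fstab-generated unit does not conflict with it. The 'backup-volfile-servers' option is optional; it lets the client fetch the volfile from another node when gluster01.home is down.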

If you need extra examples, do not hesitate to contact me.

P.S.: Instead of automatically restarting the mount process, you can use systemd's automount function, as long as you don't use gluster snapshots (a bug is already opened for that -> https://bugzilla.redhat.com/show_bug.cgi?id=1699309 ).
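
A matching automount unit for the same hypothetical mount point would be a small sketch like this (enable the .automount unit instead of the .mount one; the mount is then triggered on first access):

# /etc/systemd/system/data2-plex-videos.automount
[Unit]
Description=Automount gluster volume videos

[Automount]
Where=/data2/plex/videos
# 0 = never unmount when idle
TimeoutIdleSec=0

[Install]
WantedBy=multi-user.target

If you prefer to stay with /etc/fstab, adding 'x-systemd.automount' to the mount options there has the same effect.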

Best Regards,
Strahil Nikolov
________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


