> yes I know but I already tried that and failed at implementing it.
> I'm now even suspecting gluster to have some kind of bug.
>
> Could you show me how to do it correctly? Which services go into After=?
> Do you have example unit files for mounting gluster volumes?

I have had some struggles with this, in the depths of systemd. I ended up
making a oneshot systemd service and a helper script.

I have one helper script for my gluster server/nfs server nodes that
carefully avoids mounting gluster paths until gluster is actually started.
It also ensures ctdb is started only after the gluster lock is actually
available. Your case seems to be more like gluster-client-only, which I have
a simpler helper script for.

Note that the ideas for this came from this very mailing list as I recall,
so I'm not taking credit for the whole idea.

Now, this is very specific to my situation, but maybe you can get some ideas
from it. Otherwise, trash this email :)

systemd service:

# This cluster manager service ensures...
# - shared storage is mounted
# - bind mounts are mounted
# - Works around distro problems (like RHEL 8.0) that ignore _netdev
#   and try to mount network filesystems before the network is up
# - Also helps handle the case where the whole cluster is powered up and
#   the admin won't be able to mount shared storage until SU leaders are up.

[Unit]
Description=CM ADMIN Service to ensure mounts are good
# After= alone does not pull network-online.target into the transaction;
# Wants= makes sure it is actually reached before we are started.
Wants=network-online.target
After=network-online.target time-sync.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=root
ExecStart=/opt/clmgr/lib/cm-admin-mounts

[Install]
WantedBy=multi-user.target

And the helper:

#! /bin/bash
#
# Copyright (c) 2019 Hewlett Packard Enterprise Development LP
# All rights reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# This script ensures:
#  * Shared storage is actually mounted
#  * Bind mounts are sourced from shared storage and not from local directories
#
# This script solves two problems. One is a bug in RHEL 8.0 where systemd
# ignores _netdev in fstab and tries to mount network storage before the
# network is up. Additionally, this script is useful in all scenarios to
# handle the data-center-power-outage use case. In that case, SU leaders may
# take a while to get up and running -- longer than systemd might wait for
# mounts.
#
# In all cases, if systemd fails to mount the shared storage, it may
# ignore the dependencies and do the bind mounts anyway, which could
# incorrectly point to local directories instead of shared storage.
#

me=$(basename $0)

#
# Safety. Don't run on the wrong node type.
#
if ! grep -q -P '^NODETYPE="admin"' /etc/opt/sgi/cminfo; then
    echo "$me: Error: This script is only to be run on admin nodes." >&2
    logger "$me: Error: This script is only to be run on admin nodes."
    exit 1
fi

if [ ! -r /opt/clmgr/lib/su-leader-functions.sh ]; then
    echo "$me: Error: /opt/clmgr/lib/su-leader-functions.sh not found." >&2
    logger "$me: Error: /opt/clmgr/lib/su-leader-functions.sh not found."
    exit 1
fi

source /opt/clmgr/lib/su-leader-functions.sh

#
# enable-su-leader would have placed a shared_storage entry in fstab.
# If that is not present, this admin may have been de-coupled from the
# leaders. Exit in that case.
#
if ! grep -P -q "\d+\.\d+\.\d+\.\d+:/cm_shared\s+" /etc/fstab; then
    logger "$me: Shared storage not enabled. Exiting."
    exit 0
fi

logger "$me: Temporarily unmounting any bind mounts"
umount_bind_mounts_local

logger "$me: Keep trying to mount shared storage..."
while true; do
    umount /opt/clmgr/shared_storage &> /dev/null
    if ! mount /opt/clmgr/shared_storage/; then
        logger "$me: /opt/clmgr/shared_storage mount failed. Will re-try."
        umount /opt/clmgr/shared_storage/ &> /dev/null
        sleep 3
        continue
    fi

    logger "$me: Mount command reports gluster mount success. Verifying."
    if ! grep -q -P "\d+\.\d+\.\d+\.\d+:\S+\s+/opt/clmgr/shared_storage\s+fuse\.glusterfs" /proc/mounts; then
        logger "$me: Verification failed: /opt/clmgr/shared_storage not in /proc/mounts as glusterfs. Retrying."
        sleep 3
        continue
    fi

    logger "$me: Gluster mounts look correct in /proc/mounts."
    break
done

# Now it is safe to do the bind mounts.
logger "$me: Unmounting bind mounts, then mounting bind mounts..."
umount_bind_mounts_local
mount_bind_mounts_local
logger "$me: done"

And these are the sourced functions. I keep them separate because I do
something similar, but not the same, on the gluster servers. I cut most of
the content not related to this use case.

#
# This is like umount_bind_mounts, but for use on leaders by the systemd
# helper. No ssh.
#
function umount_bind_mounts_local {
    local bind_mounts=$(grep "bind," /etc/fstab | awk '{print $2}')
    local b
    local umount_list=""

    for b in $bind_mounts; do
        if awk '{print $2}' /proc/mounts | grep -q "^${b}$"; then
            umount_list="$umount_list umount $b;"
        fi
    done
    if [ -n "$umount_list" ]; then
        eval $umount_list
    fi
    rm -f $tfile   # $tfile is set in content cut from this excerpt
}

#
# Mount bind mounts and nothing else from /etc/fstab
# hook script. No ssh.
#
function mount_bind_mounts_local {
    local bind_mounts=$(grep "bind," /etc/fstab | awk '{print $2}')
    local b

    for b in $bind_mounts; do
        mount $b
    done
}

________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
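P.S. The retry loop in the helper really hinges on one check: does /proc/mounts show the mount point as fuse.glusterfs, backed by an IP:volume source? Here is that check pulled out as a standalone sketch you can exercise in isolation. The function name is my own, not from the script, and I use grep -E rather than grep -P so it also works where PCRE grep is unavailable; the mounts file is a parameter only so the check can be tested against a sample file.

```shell
#!/bin/bash
# Sketch: return 0 only when MOUNTPOINT appears in the mounts file as a
# fuse.glusterfs mount with an IP:volume source. Assumes the mount point
# path contains no regex metacharacters (true for paths like
# /opt/clmgr/shared_storage).
is_gluster_mounted() {
    local mnt="$1"
    local mounts_file="${2:-/proc/mounts}"   # parameter only for testability
    grep -q -E "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+:[^ ]+ +${mnt} +fuse\.glusterfs" \
        "$mounts_file"
}
```

With this, the body of the while loop reduces to "mount, then is_gluster_mounted || retry", which is easier to reason about than an inline grep.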
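Similarly, umount_bind_mounts_local decides whether each bind target needs unmounting by matching field 2 of /proc/mounts exactly. That anchored match matters: a plain grep for the path would also hit /boot2 or submounts under the path. Here is that test as a separate sketch; is_mounted is my name for it, not the script's, and again the mounts file is a parameter only so it can be tried against a sample file.

```shell
#!/bin/bash
# Sketch: is PATH currently a mount point? Matches the second field of
# the mounts file exactly (anchored with ^ and $), the same way
# umount_bind_mounts_local does.
is_mounted() {
    local path="$1"
    local mounts_file="${2:-/proc/mounts}"   # parameter only for testability
    awk '{print $2}' "$mounts_file" | grep -q "^${path}$"
}
```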