Strahil:
I just tested it with TCP. That failed as well.

[root@node1 user]# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gv0: failed: Volume gv0 is not in the started state
[root@node1 user]# gluster volume delete gv0
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gv0: success
[root@node1 user]# gluster volume create gv0 node{1..4}:/mnt/ramdisk/gv0
volume create: gv0: failed: /mnt/ramdisk/gv0 is already part of a volume
[root@node1 user]# gluster volume create gv0 node{1..4}:/mnt/ramdisk/gv0 force
volume create: gv0: success: please start the volume to access data
[root@node1 user]# gluster volume start gv0
volume start: gv0: failed: Commit failed on localhost. Please check log file for details.
[root@node1 user]# gluster volume start gv0 force
volume start: gv0: success
[root@node1 home]# mount -t glusterfs node1:/gv0 /home/glusterfs
Mount failed. Check the log file for more details.
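(For the failed commit and the failed mount above, a minimal sketch of where to look next on node1, assuming the standard log locations under /var/log/glusterfs — the brick log path matches the glusterfsd command line quoted further down, and the client log name is only an assumption based on the /home/glusterfs mount point:)

# gluster volume status gv0
  (does every brick process actually show as online after the forced start?)
# tail -n 50 /var/log/glusterfs/glusterd.log
  (management daemon log; should record why the commit failed)
# tail -n 50 /var/log/glusterfs/bricks/mnt-ramdisk-gv0.log
  (brick log for /mnt/ramdisk/gv0)
# tail -n 50 /var/log/glusterfs/home-glusterfs.log
  (FUSE client log for the mount attempt on /home/glusterfs)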
I have attached the glusterd.log and cli.log files in case they help figure out why the commit is failing.
(Once a brick has been associated with a volume, I don't know how to dissociate it from that volume. I know I am supposed to be able to use "gluster volume remove-brick" to remove bricks, but it won't let me remove the very last brick from the volume. That is why I keep having to use force; otherwise gluster thinks that node1:/mnt/ramdisk/gv0 is already part of a volume.)
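(If the goal is just to reuse the brick directory after deleting the volume, a minimal sketch of the usual cleanup — assuming the /mnt/ramdisk/gv0 path above, run on every node, and only after "gluster volume delete" has succeeded — is to strip gluster's metadata from the brick instead of relying on force:)

# setfattr -x trusted.glusterfs.volume-id /mnt/ramdisk/gv0
  (remove the volume-id extended attribute gluster stamps on the brick root)
# setfattr -x trusted.gfid /mnt/ramdisk/gv0
  (remove the gfid extended attribute on the brick root)
# rm -rf /mnt/ramdisk/gv0/.glusterfs
  (remove gluster's internal metadata directory)

(Since these bricks sit on tmpfs, unmounting the ramdisk and recreating /mnt/ramdisk/gv0 would clear them just as well.)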
Your help is greatly appreciated.
Thank you.

Sincerely,
Ewen
From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
Sent: July 11, 2021 2:49 AM
To: gluster-users@xxxxxxxxxxx <gluster-users@xxxxxxxxxxx>; Ewen Chan <alpha754293@xxxxxxxxxxx>
Subject: Re: distributed glusterfs volume of four ramdisks problems

Does it crash with tcp?
What happens when you mount on one of the hosts?

Best Regards,
Strahil Nikolov

On Saturday, July 10, 2021, at 18:55:40 GMT+3, Ewen Chan <alpha754293@xxxxxxxxxxx> wrote:

Hello everybody.

I have a cluster with four nodes and I am trying to create a distributed glusterfs volume consisting of four RAM drives, each being 115 GB in size. I am running CentOS 7.7.1908.

I created the RAM drives on each of the four nodes with the following command:

# mount -t tmpfs -o size=115g tmpfs /mnt/ramdisk

I then created the mount point for the gluster volume on each of the nodes:

# mkdir -p /mnt/ramdisk/gv0

And then I tried to create the glusterfs distributed volume:

# gluster volume create gv0 transport tcp,rdma node{1..4}:/mnt/ramdisk/gv0

And that came back with:

volume create: gv0: success: please start the volume to access data

When I tried to start the volume with:

# gluster volume start gv0

gluster responds with:

volume start: gv0: failed: Commit failed on localhost. Please check log file for details.

So I tried forcing the start with:

# gluster volume start gv0 force

gluster responds with:

volume start: gv0: success

I then created the mount point for the gluster volume:

# mkdir -p /home/gluster

And tried to mount the gluster gv0 volume:

# mount -t glusterfs -o transport=rdma,direct-io-mode=enable node1:/gv0 /home/gluster

and the system crashes.

After rebooting the system and switching users back to root, I get this:

ABRT has detected 1 problem(s). For more info run: abrt-cli list --since 1625929899

# abrt-cli list --since 1625929899
id 2a8ae7a1207acc48a6fc4a6cd8c3c88ffcf431be
reason: glusterfsd killed by SIGSEGV
time: Sat 10 Jul 2021 10:56:13 AM EDT
cmdline: /usr/sbin/glusterfsd -s aes1 --volfile-id gv0.aes1.mnt-ramdisk-gv0 -p /var/run/gluster/vols/gv0/aes1-mnt-ramdisk-gv0.pid -S /var/run/gluster/5c2a19a097c93ac6.socket --brick-name /mnt/ramdisk/gv0 -l /var/log/glusterfs/bricks/mnt-ramdisk-gv0.log --xlator-option *-posix.glusterd-uuid=0a569353-5991-4bc1-a61f-4ca6950f313d --process-name brick --brick-port 49152 49153 --xlator-option gv0-server.transport.rdma.listen-port=49153 --xlator-option gv0-server.listen-port=49152 --volfile-server-transport=socket,rdma
package: glusterfs-fuse-9.3-1.el7
uid: 0 (root)
count: 4
Directory: /var/spool/abrt/ccpp-2021-07-10-10:56:13-4935
The Autoreporting feature is disabled. Please consider enabling it by issuing 'abrt-auto-reporting enabled' as a user with root privileges

Where do I begin to even remotely try and fix this, and to get this up and running?

Any help in regards to this is greatly appreciated.

Thank you.

Sincerely,
Ewen
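(Since abrt caught glusterfsd dying with SIGSEGV, a minimal sketch for pulling a backtrace out of the crash directory listed above — assuming the core file is still stored as "coredump" in that directory, as abrt's ccpp hook normally keeps it, and that the glusterfs debuginfo packages can be installed — would be:)

# abrt-cli info /var/spool/abrt/ccpp-2021-07-10-10:56:13-4935
  (summary of what abrt recorded for the crash)
# gdb /usr/sbin/glusterfsd /var/spool/abrt/ccpp-2021-07-10-10:56:13-4935/coredump -batch -ex "thread apply all bt"
  (raw backtrace of the crashed brick process; installing the glusterfs debuginfo packages makes it readable)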
Attachment: glusterd.log
Attachment: cli.log
________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users