Hello,
I'm new here. I tried googling for an answer but had no luck.
To give it a try I set up a single-node GlusterFS with two volumes on CentOS 7.5.1804. After a server reboot the volumes are started, but their bricks stay offline. Is there a way to fix that so they come online after a reboot?
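For reference, this is roughly how the volumes were created (a sketch from memory, so the exact commands may differ slightly; both are single-brick distribute volumes with bricks under /opt, and the owner uid/gid options are set for oVirt):
# mkdir -p /opt/ovirt/brick /opt/os_images/brick
# gluster volume create ovirtvol server.example.com:/opt/ovirt/brick
# gluster volume create os_images server.example.com:/opt/os_images/brick
# gluster volume set ovirtvol storage.owner-uid 36
# gluster volume set ovirtvol storage.owner-gid 36
# gluster volume set os_images storage.owner-uid 36
# gluster volume set os_images storage.owner-gid 36
# gluster volume start ovirtvol
# gluster volume start os_images
Right after a reboot the status looks like this: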
# gluster volume status
Status of volume: os_images
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick server.example.com:/opt/os_images/brick    N/A       N/A        N       N/A
Task Status of Volume os_images
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: ovirtvol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick server.example.com:/opt/ovirt/brick N/A N/A N N/A
Task Status of Volume ovirtvol
------------------------------------------------------------------------------
There are no active volume tasks
After running:
gluster volume start <volume_name> force
the bricks come online.
# gluster volume start os_images force
volume start: os_images: success
# gluster volume start ovirtvol force
volume start: ovirtvol: success
# gluster volume status
Status of volume: os_images
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick server.example.com:/opt/os_images/brick    49154     0          Y       4354
Task Status of Volume os_images
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: ovirtvol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick server.example.com:/opt/ovirt/brick 49155 0 Y 4846
Task Status of Volume ovirtvol
------------------------------------------------------------------------------
There are no active volume tasks
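In case it helps, I can also send the brick logs from right after a reboot; I assume they are the files under /var/log/glusterfs/bricks/ named after the brick paths, e.g.:
# less /var/log/glusterfs/bricks/opt-ovirt-brick.log
# less /var/log/glusterfs/bricks/opt-os_images-brick.log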
Here are details of the volumes:
# gluster volume status ovirtvol detail
Status of volume: ovirtvol
------------------------------------------------------------------------------
Brick : Brick server.example.com:/opt/ovirt/brick
TCP Port : 49155
RDMA Port : 0
Online : Y
Pid : 4846
File System : xfs
Device : /dev/mapper/centos-home
Mount Options : rw,seclabel,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 405.6GB
Total Disk Space : 421.7GB
Inode Count : 221216768
Free Inodes : 221215567
# gluster volume info ovirtvol
Volume Name: ovirtvol
Type: Distribute
Volume ID: 82b93589-0197-4ed5-a996-ffdda8d661d1
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: server.example.com:/opt/ovirt/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
# gluster volume status os_images detail
Status of volume: os_images
------------------------------------------------------------------------------
Brick : Brick server.example.com:/opt/os_images/brick
TCP Port : 49154
RDMA Port : 0
Online : Y
Pid : 4354
File System : xfs
Device : /dev/mapper/centos-home
Mount Options : rw,seclabel,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 405.6GB
Total Disk Space : 421.7GB
Inode Count : 221216768
Free Inodes : 221215567
# gluster volume info os_images
Volume Name: os_images
Type: Distribute
Volume ID: 85b5c5e6-def6-4df3-a3ab-fcd17f105713
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: server.example.com:/opt/os_images/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
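For now I could work around it by force-starting the volumes after every boot, for example with a small script along these lines (just a stopgap idea, run once glusterd is up, e.g. from rc.local or a systemd unit), but I would prefer to fix whatever keeps the bricks from starting on their own:
#!/bin/bash
# Workaround only: force-start both volumes once glusterd is running
for vol in os_images ovirtvol; do
    gluster volume start "$vol" force
done
Any hints would be appreciated.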
Regards!
Jarek