Re: gluster forcing IPv6 on our IPv4 servers, glusterd fails (was: gluster update question regarding new DNS resolution requirement)

On Tue, Sep 21, 2021 at 04:18:10PM +0000, Strahil Nikolov wrote:
> As far as I know, a fix was introduced recently, so even forgetting to
> run the script won't be critical - you can run it afterwards.
> I would use Ansible to roll out such updates on a set of nodes - this
> will prevent human error and gives you the opportunity to handle small
> details like the geo-rep modifying script.
> 
> P.S.: Out of curiosity, are you using distributed-replicated or
> distributed-dispersed volumes?


Distributed-Replicated, with different volume configurations per use
case, including one sharded volume.
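
On the Ansible point: the serialized rollout Strahil describes can even
be done ad-hoc from a management host. A minimal sketch, assuming a
hypothetical inventory group "gluster" and a placeholder path for the
geo-rep script (a real playbook would also restart glusterd and wait
for self-heal to finish between nodes):

# --forks 1 makes Ansible walk the nodes one at a time
ansible gluster --forks 1 -m yum -a "name=glusterfs-server state=latest"
# run the geo-rep helper afterwards; the path below is a placeholder
ansible gluster --forks 1 -m command -a "/path/to/georep-upgrade-script"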

PS: I am HOPING to take another crack at Ganesha tomorrow to try to get
off our dependence on gnfs, but we'll see how things go, with the crisis
of the day always blocking progress. I hope to deprecate the use of
expanded NFS trees (i.e., compute node root filesystems served
file-by-file by the NFS server) in favor of image objects (squashfs
images sitting in sharded volumes). I think what caused us trouble with
Ganesha a couple of years ago was the huge metadata load, which should
now be greatly reduced. We will see!
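
For anyone curious what that looks like in practice, a rough sketch
with made-up paths (not our actual tooling):

# build one compressed image of a compute-node root tree
mksquashfs /srv/roots/node1 /mnt/cm_obj_sharded/node1-root.squashfs -comp xz

# on the compute node: mount the share, then loop-mount the image
# read-only - metadata traffic now hits one big sharded file instead
# of millions of small ones
mount -t nfs server:/cm_obj_sharded /images
mount -o loop,ro /images/node1-root.squashfs /newroot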




Output from one test system, if you're curious:


[root@leader1 ~]# gluster volume info

Volume Name: cm_logs
Type: Distributed-Replicate
Volume ID: 27ffa15b-9fed-4322-b591-225270ca9de5
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x 3 = 18
Transport-type: tcp
Bricks:
Brick1: 172.23.0.3:/data/brick_cm_logs
Brick2: 172.23.0.2:/data/brick_cm_logs
Brick3: 172.23.0.4:/data/brick_cm_logs
Brick4: 172.23.0.5:/data/brick_cm_logs
Brick5: 172.23.0.6:/data/brick_cm_logs
Brick6: 172.23.0.7:/data/brick_cm_logs
Brick7: 172.23.0.8:/data/brick_cm_logs
Brick8: 172.23.0.9:/data/brick_cm_logs
Brick9: 172.23.0.10:/data/brick_cm_logs
Brick10: 172.23.0.11:/data/brick_cm_logs
Brick11: 172.23.0.12:/data/brick_cm_logs
Brick12: 172.23.0.13:/data/brick_cm_logs
Brick13: 172.23.0.14:/data/brick_cm_logs
Brick14: 172.23.0.15:/data/brick_cm_logs
Brick15: 172.23.0.16:/data/brick_cm_logs
Brick16: 172.23.0.17:/data/brick_cm_logs
Brick17: 172.23.0.18:/data/brick_cm_logs
Brick18: 172.23.0.19:/data/brick_cm_logs
Options Reconfigured:
nfs.auth-cache-ttl-sec: 360
nfs.auth-refresh-interval-sec: 360
nfs.mount-rmtab: /-
nfs.exports-auth-enable: on
nfs.export-dirs: on
nfs.export-volumes: on
nfs.nlm: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

Volume Name: cm_obj_sharded
Type: Distributed-Replicate
Volume ID: 311bee36-09af-4d68-9180-b34b45e3c10b
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x 3 = 18
Transport-type: tcp
Bricks:
Brick1: 172.23.0.3:/data/brick_cm_obj_sharded
Brick2: 172.23.0.2:/data/brick_cm_obj_sharded
Brick3: 172.23.0.4:/data/brick_cm_obj_sharded
Brick4: 172.23.0.5:/data/brick_cm_obj_sharded
Brick5: 172.23.0.6:/data/brick_cm_obj_sharded
Brick6: 172.23.0.7:/data/brick_cm_obj_sharded
Brick7: 172.23.0.8:/data/brick_cm_obj_sharded
Brick8: 172.23.0.9:/data/brick_cm_obj_sharded
Brick9: 172.23.0.10:/data/brick_cm_obj_sharded
Brick10: 172.23.0.11:/data/brick_cm_obj_sharded
Brick11: 172.23.0.12:/data/brick_cm_obj_sharded
Brick12: 172.23.0.13:/data/brick_cm_obj_sharded
Brick13: 172.23.0.14:/data/brick_cm_obj_sharded
Brick14: 172.23.0.15:/data/brick_cm_obj_sharded
Brick15: 172.23.0.16:/data/brick_cm_obj_sharded
Brick16: 172.23.0.17:/data/brick_cm_obj_sharded
Brick17: 172.23.0.18:/data/brick_cm_obj_sharded
Brick18: 172.23.0.19:/data/brick_cm_obj_sharded
Options Reconfigured:
features.shard: on
nfs.auth-cache-ttl-sec: 360
nfs.auth-refresh-interval-sec: 360
server.event-threads: 32
performance.io-thread-count: 32
nfs.mount-rmtab: /-
transport.listen-backlog: 16384
nfs.exports-auth-enable: on
nfs.export-dirs: on
nfs.export-volumes: on
nfs.nlm: off
performance.nfs.io-cache: on
performance.cache-refresh-timeout: 60
performance.flush-behind: on
performance.cache-size: 8GB
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: off
performance.client-io-threads: on

Volume Name: cm_shared
Type: Distributed-Replicate
Volume ID: 38093b8e-e668-4542-bc5e-34ffc491311a
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x 3 = 18
Transport-type: tcp
Bricks:
Brick1: 172.23.0.3:/data/brick_cm_shared
Brick2: 172.23.0.2:/data/brick_cm_shared
Brick3: 172.23.0.4:/data/brick_cm_shared
Brick4: 172.23.0.5:/data/brick_cm_shared
Brick5: 172.23.0.6:/data/brick_cm_shared
Brick6: 172.23.0.7:/data/brick_cm_shared
Brick7: 172.23.0.8:/data/brick_cm_shared
Brick8: 172.23.0.9:/data/brick_cm_shared
Brick9: 172.23.0.10:/data/brick_cm_shared
Brick10: 172.23.0.11:/data/brick_cm_shared
Brick11: 172.23.0.12:/data/brick_cm_shared
Brick12: 172.23.0.13:/data/brick_cm_shared
Brick13: 172.23.0.14:/data/brick_cm_shared
Brick14: 172.23.0.15:/data/brick_cm_shared
Brick15: 172.23.0.16:/data/brick_cm_shared
Brick16: 172.23.0.17:/data/brick_cm_shared
Brick17: 172.23.0.18:/data/brick_cm_shared
Brick18: 172.23.0.19:/data/brick_cm_shared
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
cluster.lookup-optimize: on
client.event-threads: 32
server.event-threads: 32
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 1000000
performance.io-thread-count: 32
performance.cache-size: 8GB
performance.parallel-readdir: off
cluster.lookup-unhashed: auto
performance.flush-behind: on
performance.aggregate-size: 2048KB
performance.write-behind-trickling-writes: off
transport.listen-backlog: 16384
performance.write-behind-window-size: 1024MB
server.outstanding-rpc-limit: 1024
nfs.outstanding-rpc-limit: 1024
nfs.acl: on
storage.max-hardlinks: 0
performance.cache-refresh-timeout: 60
performance.md-cache-statfs: off
performance.nfs.io-cache: on
nfs.mount-rmtab: /-
nfs.nlm: off
nfs.export-volumes: on
nfs.export-dirs: on
nfs.exports-auth-enable: on
nfs.auth-refresh-interval-sec: 360
nfs.auth-cache-ttl-sec: 360

Volume Name: ctdb
Type: Distributed-Replicate
Volume ID: cb229583-d25a-4d85-b567-421cdf526b3b
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x 3 = 18
Transport-type: tcp
Bricks:
Brick1: 172.23.0.3:/data/brick_ctdb
Brick2: 172.23.0.2:/data/brick_ctdb
Brick3: 172.23.0.4:/data/brick_ctdb
Brick4: 172.23.0.5:/data/brick_ctdb
Brick5: 172.23.0.6:/data/brick_ctdb
Brick6: 172.23.0.7:/data/brick_ctdb
Brick7: 172.23.0.8:/data/brick_ctdb
Brick8: 172.23.0.9:/data/brick_ctdb
Brick9: 172.23.0.10:/data/brick_ctdb
Brick10: 172.23.0.11:/data/brick_ctdb
Brick11: 172.23.0.12:/data/brick_ctdb
Brick12: 172.23.0.13:/data/brick_ctdb
Brick13: 172.23.0.14:/data/brick_ctdb
Brick14: 172.23.0.15:/data/brick_ctdb
Brick15: 172.23.0.16:/data/brick_ctdb
Brick16: 172.23.0.17:/data/brick_ctdb
Brick17: 172.23.0.18:/data/brick_ctdb
Brick18: 172.23.0.19:/data/brick_ctdb
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.nlm: off
nfs.export-volumes: on
nfs.export-dirs: on
nfs.exports-auth-enable: on
nfs.mount-rmtab: /-
nfs.auth-refresh-interval-sec: 360
nfs.auth-cache-ttl-sec: 360
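
All of the "Options Reconfigured" entries above were applied with the
usual "gluster volume set"; for example, the IPv4 pin that this
thread's subject is about:

gluster volume set cm_obj_sharded transport.address-family inet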
