Gluster Users - Date Index
- Re: not support so called “structured data”
- From: sankarshan <sankarshan@xxxxxxxxx>
- Re: Sharding on 7.4 - filesizes may be wrong
- From: Dmitry Antipov <dmantipov@xxxxxxxxx>
- Re: Re: Re: not support so called “structured data”
- From: Yaniv Kaul <ykaul@xxxxxxxxxx>
- Release 5.13: Expected tagging on 6th April
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: Re: not support so called “structured data”
- From: "sz_cuitao@xxxxxxx" <sz_cuitao@xxxxxxx>
- Re: not support so called “structured data”
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- not support so called “structured data”
- From: "sz_cuitao@xxxxxxx" <sz_cuitao@xxxxxxx>
- Re: Re: Cann't mount NFS,please help!
- From: "sz_cuitao@xxxxxxx" <sz_cuitao@xxxxxxx>
- Re: Re: Re: Cann't mount NFS,please help!
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: Re: Cann't mount NFS,please help!
- From: "sz_cuitao@xxxxxxx" <sz_cuitao@xxxxxxx>
- GlusterFS geo-replication progress question
- From: Alexander Iliev <ailiev+gluster@xxxxxxxxx>
- Gluster Volume Rebalance inode modify change times
- From: Matthew Benstead <matthewb@xxxxxxx>
- Re: Cann't mount NFS,please help!
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Sharding on 7.4 - filesizes may be wrong
- From: Claus Jeppesen <cjeppesen@xxxxxxxxx>
- fuse Stale file handle error
- From: Eli V <eliventer@xxxxxxxxx>
- Re: Cann't mount NFS,please help!
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Gluster 6.8: some error messages during op-version-update
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Gluster 6.8 & debian
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Cann't mount NFS,please help!
- From: Olivier <Olivier.Nicole@xxxxxxxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Cann't mount NFS,please help!
- From: "sz_cuitao@xxxxxxx" <sz_cuitao@xxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: [rhgs-devel] Announcing Gluster release 5.12
- From: Alan Orth <alan.orth@xxxxxxxxx>
- Re: [rhgs-devel] Announcing Gluster release 5.12
- From: Kaleb Keithley <kkeithle@xxxxxxxxxx>
- Re: Gluster 6.8 & debian
- From: Sheetal Pamecha <spamecha@xxxxxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Announcing Gluster release 5.12
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Announcing Gluster release 5.12
- From: Alan Orth <alan.orth@xxxxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Repository down ?
- From: Renaud Fortier <Renaud.Fortier@xxxxxxxxxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: Gluster 6.8 & debian
- From: Sheetal Pamecha <spamecha@xxxxxxxxxx>
- Re: Gluster 6.8 & debian
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Gluster 6.8 & debian
- From: Sheetal Pamecha <spamecha@xxxxxxxxxx>
- Re: Gluster 6.8 & debian
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- gnfs split brain when 1 server in 3x1 down (high load) - help request
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Minutes of Gluster Community Meeting - 24-03-2020
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Gluster 6.8 & debian
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: cannot remove empty directory on gluster file system
- From: Felix Kölzow <felix.koelzow@xxxxxx>
- Re: cannot remove empty directory on gluster file system
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: cannot remove empty directory on gluster file system
- From: Mauro Tridici <mauro.tridici@xxxxxxx>
- Re: [EXT] cannot remove empty directory on gluster file system
- From: Stefan Solbrig <stefan.solbrig@xxxxx>
- Re: cannot remove empty directory on gluster file system
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: cannot remove empty directory on gluster file system
- From: Mauro Tridici <mauro.tridici@xxxxxxx>
- Re: Geo-Replication File not Found on /.glusterfs/XX/XX/XXXXXXXXXXXX
- From: Senén Vidal Blanco <senenvidal@xxxxxxxxxxx>
- Re: Geo-Replication File not Found on /.glusterfs/XX/XX/XXXXXXXXXXXX
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: Georeplication questions
- From: wodel youchi <wodel.youchi@xxxxxxxxx>
- Re: cannot remove empty directory on gluster file system
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- cannot remove empty directory on gluster file system
- From: Mauro Tridici <mauro.tridici@xxxxxxx>
- Geo-Replication File not Found on /.glusterfs/XX/XX/XXXXXXXXXXXX
- From: Senén Vidal Blanco <senenvidal@xxxxxxxxxxx>
- Re: [Gluster-devel] Announcing Gluster release 7.4
- From: Vijay Bellur <vbellur@xxxxxxxxxx>
- Failed to synchronize cache for repo 'centos-gluster6'
- From: Felix Kölzow <felix.koelzow@xxxxxx>
- Re: Memory and CPU
- From: Olivier <Olivier.Nicole@xxxxxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: [Gluster-devel] Announcing Gluster release 7.4
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Memory and CPU
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- Re: mount of 2 volumes fails at boot (/etc/fstab)
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Convert to Sharding
- From: Gionatan Danti <g.danti@xxxxxxxxxx>
- Re: Convert to Sharding
- From: Amar Tumballi <amar@xxxxxxxxx>
- Re: Convert to Sharding
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Convert to Sharding
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Convert to Sharding
- From: Amar Tumballi <amar@xxxxxxxxx>
- Re: Convert to Sharding
- From: Gionatan Danti <g.danti@xxxxxxxxxx>
- Re: mount of 2 volumes fails at boot (/etc/fstab)
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Convert to Sharding
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Convert to Sharding
- From: Christian Reiss <email@xxxxxxxxxxxxxxxxxx>
- Re: Memory and CPU
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Memory and CPU
- From: Olivier <Olivier.Nicole@xxxxxxxxxxxx>
- Gluster datacenter internet link was down
- From: Michael Scherer <mscherer@xxxxxxxxxx>
- Invitation: Gluster community meeting @ Tue Mar 24, 2020 2:30pm - 3:30pm (IST) (gluster-users@xxxxxxxxxxx)
- From: hgowtham@xxxxxxxxxx
- Geo-Replication - Does not finish syncing - Codification Error
- From: Senén Vidal Blanco <senenvidal@xxxxxxxxxxx>
- [Gluster-devel] Announcing Gluster release 7.4
- From: Rinku Kothiya <rkothiya@xxxxxxxxxx>
- mount of 2 volumes fails at boot (/etc/fstab)
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Storage really slow on k8s
- From: Gionatan Danti <g.danti@xxxxxxxxxx>
- Re: just discovered that OpenShift/OKD dropped GlusterFS storage support...
- From: Arman Khalatyan <arm2arm@xxxxxxxxx>
- Re: just discovered that OpenShift/OKD dropped GlusterFS storage support...
- From: Aravinda VK <aravinda@xxxxxxxxx>
- Re: [Gluster-devel] Announcing Gluster release 7.3
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: just discovered that OpenShift/OKD dropped GlusterFS storage support...
- From: Arman Khalatyan <arm2arm@xxxxxxxxx>
- Re: just discovered that OpenShift/OKD dropped GlusterFS storage support...
- From: Yaniv Kaul <ykaul@xxxxxxxxxx>
- Re: just discovered that OpenShift/OKD dropped GlusterFS storage support...
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Storage really slow on k8s
- From: Rene Bon Ciric <renich@xxxxxxxxxxxxxx>
- just discovered that OpenShift/OKD dropped GlusterFS storage support...
- From: Arman Khalatyan <arm2arm@xxxxxxxxx>
- Re: geo-replication sync issue
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: geo-replication sync issue
- From: Etem Bayoğlu <etembayoglu@xxxxxxxxx>
- Re: geo-replication sync issue
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Re: [Gluster-devel] Announcing Gluster release 7.3
- From: Sheetal Pamecha <spamecha@xxxxxxxxxx>
- Re: Trying out changelog xlator - human readable output?
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Trying out changelog xlator - human readable output?
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: Trying out changelog xlator - human readable output?
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Trying out changelog xlator - human readable output?
- From: David Spisla <spisla80@xxxxxxxxx>
- Another transaction is in progress
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: Is rebalance in progress or not?
- From: Alexander Iliev <ailiev+gluster@xxxxxxxxx>
- Re: Is rebalance in progress or not?
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: boot auto mount NFS-Ganesha exports failed
- From: Soumya Koduri <skoduri@xxxxxxxxxx>
- Re: Is rebalance in progress or not?
- From: Alexander Iliev <ailiev+gluster@xxxxxxxxx>
- Re: Is rebalance in progress or not?
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Is rebalance in progress or not?
- From: Alexander Iliev <ailiev+gluster@xxxxxxxxx>
- Re: Stale file handle
- From: Amar Tumballi <amar@xxxxxxxxx>
- Re: Image File Owner change Situation. (root:root)
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: Stale file handle
- From: Pat Haley <phaley@xxxxxxx>
- Image File Owner change Situation. (root:root)
- From: "Robert O'Kane" <okane@xxxxxx>
- boot auto mount NFS-Ganesha exports failed
- From: Renaud Fortier <Renaud.Fortier@xxxxxxxxxxxxxx>
- Re: Stale file handle
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: geo-replication sync issue
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Stale file handle
- From: Pat Haley <phaley@xxxxxxx>
- Re: geo-replication sync issue
- From: Etem Bayoğlu <etembayoglu@xxxxxxxxx>
- Re: geo-replication sync issue
- From: Etem Bayoğlu <etembayoglu@xxxxxxxxx>
- Re: Bricks going offline
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: geo-replication sync issue
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Bricks going offline
- From: John Burt <johnburt.jab@xxxxxxxxx>
- Re: geo-replication sync issue
- From: Etem Bayoğlu <etembayoglu@xxxxxxxxx>
- Re: Erroneous "No space left on device." messages
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Erroneous "No space left on device." messages
- From: Pat Haley <phaley@xxxxxxx>
- Re: Faulty status in Geo-replication
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: geo-replication sync issue
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- geo-replication sync issue
- From: Etem Bayoğlu <etembayoglu@xxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Gluster Performance Issues
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Erroneous "No space left on device." messages
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Gluster Performance Issues
- From: Felix Kölzow <felix.koelzow@xxxxxx>
- Re: dispersed volume + cifs export does not work (replicated + cifs works fine)
- From: Felix Kölzow <felix.koelzow@xxxxxx>
- Re: Erroneous "No space left on device." messages
- From: Pat Haley <phaley@xxxxxxx>
- Re: Erroneous "No space left on device." messages
- From: Pat Haley <phaley@xxxxxxx>
- Re: Erroneous "No space left on device." messages
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Erroneous "No space left on device." messages
- From: Pat Haley <phaley@xxxxxxx>
- Erroneous "No space left on device." messages
- From: Pat Haley <phaley@xxxxxxx>
- Re: Upgrade from 4.1
- From: Michael Böhm <dudleyperkins@xxxxxxxxx>
- Upgrade from 4.1
- From: Marcus Pedersén <marcus.pedersen@xxxxxx>
- Re: Faulty status in Geo-replication
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Dispersed volumes won't heal on ARM
- From: Fox <foxxz.net@xxxxxxxxx>
- Re: set larger field width for status command
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Disk use with GlusterFS
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: Disk use with GlusterFS
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Disk use with GlusterFS
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Disk use with GlusterFS
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: Disk use with GlusterFS
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: What command can check the default value of all options?
- From: gil han Choi <ghchoi.choi@xxxxxxxxx>
- Re: Disk use with GlusterFS
- From: Aravinda VK <aravinda@xxxxxxxxx>
- Re: What command can check the default value of all options?
- From: Aravinda VK <aravinda@xxxxxxxxx>
- Re: Disk use with GlusterFS
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: Disk use with GlusterFS
- From: Aravinda VK <aravinda@xxxxxxxxx>
- What command can check the default value of all options?
- From: gil han Choi <ghchoi.choi@xxxxxxxxx>
- Re: Disk use with GlusterFS
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: volume add-brick: failed: Pre Validation failed
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: Disk use with GlusterFS
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: volume add-brick: failed: Pre Validation failed
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: volume add-brick: failed: Pre Validation failed
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: Disk use with GlusterFS
- From: Aravinda VK <aravinda@xxxxxxxxx>
- Disk use with GlusterFS
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: Geo-replication
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: Danny Lee <dannyl@xxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Amar Tumballi <amar@xxxxxxxxx>
- Re: Geo-replication
- From: Aravinda VK <aravinda@xxxxxxxxx>
- Re: Geo-replication
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: Danny Lee <dannyl@xxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: Danny Lee <dannyl@xxxxxx>
- Re: volume add-brick: failed: Pre Validation failed
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- volume add-brick: failed: Pre Validation failed
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: Felix Kölzow <felix.koelzow@xxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Geo-replication
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Geo-replication
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: Danny Lee <dannyl@xxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: writing to fuse device failed: No such file or directory
- From: Felix Kölzow <felix.koelzow@xxxxxx>
- set larger field width for status command
- From: Brian Andrus <toomuchit@xxxxxxxxx>
- writing to fuse device failed: No such file or directory
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Advice on moving volumes/bricks to new servers
- From: Ronny Adsetts <ronny.adsetts@xxxxxxxxxxxxxxxxxxx>
- Announcing Gluster release 5.12
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Geo-replication
- From: Aravinda VK <aravinda@xxxxxxxxx>
- Re: Geo-replication
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: WORM: If autocommit-period 0 file will be WORMed with 0 Byte during initial write
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: Brick Goes Offline After server reboot/Or Gluster Container is restarted, on which a gluster node is running
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: Geo-replication
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: Advice on moving volumes/bricks to new servers
- From: Ronny Adsetts <ronny.adsetts@xxxxxxxxxxxxxxxxxxx>
- Re: Advice on moving volumes/bricks to new servers
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Dispersed volumes won't heal on ARM
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Dispersed volumes won't heal on ARM
- From: Fox <foxxz.net@xxxxxxxxx>
- Advice on moving volumes/bricks to new servers
- From: Ronny Adsetts <ronny.adsetts@xxxxxxxxxxxxxxxxxxx>
- Re: Brick Goes Offline After server reboot/Or Gluster Container is restarted, on which a gluster node is running
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Brick Goes Offline After server reboot/Or Gluster Container is restarted, on which a gluster node is running
- From: Rifat Ucal <rucal@xxxxxxxx>
- WORM: If autocommit-period 0 file will be WORMed with 0 Byte during initial write
- From: David Spisla <spisla80@xxxxxxxxx>
- Minutes of Gluster Community Meeting 25th Feb 2020
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Release 6.8: Expected tagging on 27th February
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Release 6.8: Expected tagging on 27th February
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Geo-replication
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: Geo-replication
- From: Aravinda VK <aravinda@xxxxxxxxx>
- Geo-replication
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Invitation: Invitation: GlusterFS community meeting @ Tue Feb 25, 2020 2:30pm - 3:30pm (IST) (gluster-users@xxxxxxxxxxx)
- From: hgowtham@xxxxxxxxxx
- Re: Docker swarm on top of Replica 3?
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Docker swarm on top of Replica 3?
- From: Shareef Jalloq <shareef@xxxxxxxxxxxx>
- Re: Advice for running out of space on a replicated 4-brick gluster
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: [Gluster-devel] Announcing Gluster release 7.3
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Advice for running out of space on a replicated 4-brick gluster
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- lock recovery / failover timeout
- From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
- Ansible RAID1 options
- From: Shareef Jalloq <shareef@xxxxxxxxxxxx>
- Re: Gluster Performance Issues
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Gluster Performance Issues
- From: Felix Kölzow <felix.koelzow@xxxxxx>
- Re: recommendation: gluster version upgrade and/or OS dist-upgrade
- From: Michael Böhm <dudleyperkins@xxxxxxxxx>
- Re: recommendation: gluster version upgrade and/or OS dist-upgrade
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: [Gluster-devel] Announcing Gluster release 7.3
- From: Dmitry Melekhov <dm@xxxxxxxxxx>
- [Gluster-devel] Announcing Gluster release 7.3
- From: Rinku Kothiya <rkothiya@xxxxxxxxxx>
- Re: Low performance of Gluster
- From: Felix Kölzow <felix.koelzow@xxxxxx>
- Low performance of Gluster
- From: Cloud Udupi <udupi.cloud@xxxxxxxxx>
- Re: It appears that readdir is not cached for FUSE mounts
- From: Matthias Schniedermeyer <matthias-gluster-users@xxxxxxxxxxxxx>
- Re: recommendation: gluster version upgrade and/or OS dist-upgrade
- From: Michael Böhm <dudleyperkins@xxxxxxxxx>
- Re: Advice for running out of space on a replicated 4-brick gluster
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Advice for running out of space on a replicated 4-brick gluster
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Feature request: max total size for trashcan
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: recommendation: gluster version upgrade and/or OS dist-upgrade
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- recommendation: gluster version upgrade and/or OS dist-upgrade
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Advice for running out of space on a replicated 4-brick gluster
- From: Amar Tumballi <amar@xxxxxxxxx>
- Re: Feature request: max total size for trashcan
- From: Amar Tumballi <amar@xxxxxxxxx>
- Re: Advice for running out of space on a replicated 4-brick gluster
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Feature request: max total size for trashcan
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Advice for running out of space on a replicated 4-brick gluster
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Feature request: max total size for trashcan
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Gluster setup for virtualization cluster
- From: Gionatan Danti <g.danti@xxxxxxxxxx>
- Re: Gluster setup for virtualization cluster
- From: Darrell Budic <budic@xxxxxxxxxxxxxxxx>
- remove-brick seems to delete file content
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: Docker swarm on top of Replica 3?
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Intent to retire Heketi package in Fedora
- From: Niels de Vos <ndevos@xxxxxxxxxx>
- Docker swarm on top of Replica 3?
- From: Shareef Jalloq <shareef@xxxxxxxxxxxx>
- Re: Gluster setup for virtualization cluster
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Gluster setup for virtualization cluster
- From: Markus Kern <gluster@xxxxxxxxxxx>
- Re: GlusterFS problems & alternatives
- From: Stefan <gluster@xxxxxxxxxxxxxxxxx>
- Re: Strange Logs
- From: Christian Reiss <email@xxxxxxxxxxxxxxxxxx>
- Re: GlusterFS problems & alternatives
- From: Amar Tumballi <amar@xxxxxxxxx>
- Re: crashing a lot
- From: Joe Julian <joe@xxxxxxxxxxxxxxxx>
- Re: crashing a lot
- From: Mohit Agrawal <moagrawa@xxxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Amar Tumballi <amar@xxxxxxxxx>
- Re: crashing a lot
- From: Amar Tumballi <amar@xxxxxxxxx>
- crashing a lot
- From: Joe Julian <joe@xxxxxxxxxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Strange Logs
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: [Gluster-devel] Community Meeting: Make it more reachable
- From: sankarshan <sankarshan@xxxxxxxxx>
- Strange Logs
- From: Christian Reiss <email@xxxxxxxxxxxxxxxxxx>
- Re: GlusterFS problems & alternatives
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Re: Brick Goes Offline After server reboot.
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- Brick Goes Offline After server reboot.
- From: Cloud Udupi <udupi.cloud@xxxxxxxxx>
- Re: Geo-replication /var/lib space question
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Re: gluster NFS hang observed mounting or umounting at scale
- From: Amar Tumballi <amar@xxxxxxxxx>
- Re: gluster NFS hang observed mounting or umounting at scale
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: multi petabyte gluster dispersed for archival?
- From: Douglas Duckworth <dod2014@xxxxxxxxxxxxxxx>
- Re: multi petabyte gluster dispersed for archival?
- From: Serkan Çoban <cobanserkan@xxxxxxxxx>
- Re: multi petabyte gluster dispersed for archival?
- From: Serkan Çoban <cobanserkan@xxxxxxxxx>
- multi petabyte gluster dispersed for archival?
- From: Douglas Duckworth <dod2014@xxxxxxxxxxxxxxx>
- Re: Permission denied at some directories/files after a split brain
- From: Alberto Bengoa <bengoa@xxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Thanks for your feedback on cifs
- From: Rinku Kothiya <rkothiya@xxxxxxxxxx>
- Re: Geo-replication /var/lib space question
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: GeoReplication takes too log in hybrid crawl and no sync happens in changelog mode for 2x3 volume.
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: GeoReplication takes too log in hybrid crawl and no sync happens in changelog mode for 2x3 volume.
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: GlusterFS problems & alternatives
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: GlusterFS problems & alternatives
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: GlusterFS problems & alternatives
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: GlusterFS problems & alternatives
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: GlusterFS problems & alternatives
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- GlusterFS problems & alternatives
- From: Stefan <gluster@xxxxxxxxxxxxxxxxx>
- Re: interpreting heal info and reported entries
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Permission denied at some directories/files after a split brain
- From: Renaud Fortier <Renaud.Fortier@xxxxxxxxxxxxxx>
- Re: Gluster Samba Options
- From: Renaud Fortier <Renaud.Fortier@xxxxxxxxxxxxxx>
- Re: interpreting heal info and reported entries
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Permission denied at some directories/files after a split brain
- From: Alberto Bengoa <bengoa@xxxxxxxxx>
- Re: Gluster Samba Options
- From: Stefan Kania <stefan@xxxxxxxxxxxxxxx>
- Release 5.12: Expected tagging on 13th February
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Gluster Samba Options
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Re: Healing entries get healed but there are constantly new entries appearing
- From: Karthik Subrahmanya <ksubrahm@xxxxxxxxxx>
- question on rebalance errors gluster 7.2 (adding to distributed/replicated)
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: NFS clients show missing files while gluster volume rebalanced
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Geo-replication /var/lib space question
- From: Alexander Iliev <ailiev+gluster@xxxxxxxxx>
- Re: It appears that readdir is not cached for FUSE mounts
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Healing entries get healed but there are constantly new entries appearing
- From: Ulrich Pötter <ulrich.poetter@xxxxxxxxxxxxx>
- Re: It appears that readdir is not cached for FUSE mounts
- From: Matthias Schniedermeyer <matthias-gluster-users@xxxxxxxxxxxxx>
- Re: Permission denied at some directories/files after a split brain
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: It appears that readdir is not cached for FUSE mounts
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Permission denied at some directories/files after a split brain
- From: Alberto Bengoa <bengoa@xxxxxxxxx>
- Re: Gluster Samba Options
- From: Stefan Kania <stefan@xxxxxxxxxxxxxxx>
- It appears that readdir is not cached for FUSE mounts
- From: Matthias Schniedermeyer <matthias-gluster-users@xxxxxxxxxxxxx>
- Re: Gluster Samba Options
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Invitation: GlusterFS community meeting @ Tue Feb 11 2020 @ Tue Feb 11, 2020 11:30am - 12:30pm (IST) (gluster-users@xxxxxxxxxxx)
- Re: Gluster Samba Options
- From: Stefan Kania <stefan@xxxxxxxxxxxxxxx>
- Re: Gluster Samba Options
- From: Stefan Kania <stefan@xxxxxxxxxxxxxxx>
- Re: Gluster Samba Options
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Gluster Samba Options
- From: Stefan Kania <stefan@xxxxxxxxxxxxxxx>
- Re: [ovirt-users] Re: ACL issue v6.6, v6.7, v7.1, v7.2
- From: Paolo Margara <paolo.margara@xxxxxxxxx>
- Re: [ovirt-users] Re: ACL issue v6.6, v6.7, v7.1, v7.2
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: [Gluster-devel] Community Meeting: Make it more reachable
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: [ovirt-users] ACL issue v6.6, v6.7, v7.1, v7.2
- From: Christian Reiss <email@xxxxxxxxxxxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: [ovirt-users] ACL issue v6.6, v6.7, v7.1, v7.2
- From: Paolo Margara <paolo.margara@xxxxxxxxx>
- Re: ACL issue v6.6, v6.7, v7.1, v7.2
- From: Christian Reiss <email@xxxxxxxxxxxxxxxxxx>
- ACL issue v6.6, v6.7, v7.1, v7.2
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Healing entries get healed but there are constantly new entries appearing
- From: Karthik Subrahmanya <ksubrahm@xxxxxxxxxx>
- Healing entries get healed but there are constantly new entries appearing
- From: Ulrich Pötter <ulrich.poetter@xxxxxxxxxxxxx>
- Re: [Gluster-devel] Community Meeting: Make it more reachable
- From: Yati Padia <ypadia@xxxxxxxxxx>
- SUSE Packages: debugsource and debuginfo RPMs for SLES15SP1 are missing
- From: David Spisla <spisla80@xxxxxxxxx>
- Cant read files >64mb
- From: Christian Reiss <email@xxxxxxxxxxxxxxxxxx>
- Re: Gluster Heal Issue
- From: Karthik Subrahmanya <ksubrahm@xxxxxxxxxx>
- time difference on glusterfs cluster
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- Re: Upgrade gluster v7.0 to 7.2 posix-acl.c:262:posix_acl_log_permit_denied
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Gluster client 4.1.5 with Gluster server 6.7
- From: Laurent Dumont <laurentfdumont@xxxxxxxxx>
- Gluster Heal Issue
- From: Christian Reiss <email@xxxxxxxxxxxxxxxxxx>
- Upgrade gluster v7.0 to 7.2 posix-acl.c:262:posix_acl_log_permit_denied
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: [ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: NFS clients show missing files while gluster volume rebalanced
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: [Errno 107] Transport endpoint is not connected
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: interpreting heal info and reported entries
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: NFS clients show missing files while gluster volume rebalanced
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: [Errno 107] Transport endpoint is not connected
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: interpreting heal info and reported entries
- From: Stefan <gluster@xxxxxxxxxxxxxxxxx>
- Re: Gluster client 4.1.5 with Gluster server 6.7
- From: Mahdi Adnan <mahdi@xxxxxxxxx>
- Troubles with sharding
- From: Stefan <gluster@xxxxxxxxxxxxxxxxx>
- Re: interpreting heal info and reported entries
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: interpreting heal info and reported entries
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Free space reported not consistent with bricks
- From: Aravinda VK <mail@xxxxxxxxxxxxx>
- NFS clients show missing files while gluster volume rebalanced
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- interpreting heal info and reported entries
- From: "Cox, Jason" <Jason.L.Cox@xxxxxxxxxxxx>
- Re: [Gluster-devel] Community Meeting: Make it more reachable
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: [Errno 107] Transport endpoint is not connected
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Community Meeting: Make it more reachable
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: [Errno 107] Transport endpoint is not connected
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Gluster client 4.1.5 with Gluster server 6.7
- From: Laurent Dumont <laurentfdumont@xxxxxxxxx>
- [Errno 107] Transport endpoint is not connected
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: Gluster client 4.1.5 with Gluster server 6.7
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Gluster client 4.1.5 with Gluster server 6.7
- From: Laurent Dumont <laurentfdumont@xxxxxxxxx>
- Re: Replicated volume load balancing
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Replicated volume load balancing
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Free space reported not consistent with bricks
- From: Stefan <gluster@xxxxxxxxxxxxxxxxx>
- Replicated volume load balancing
- From: Stefan <gluster@xxxxxxxxxxxxxxxxx>
- Invitation: GlusterFS community meeting @ Tue Jan 28, 2020 11:30am - 12:30pm (IST) (gluster-users@xxxxxxxxxxx)
- Re: gluster NFS hang observed mounting or umounting at scale
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: gluster NFS hang observed mounting or umounting at scale
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: gluster NFS hang observed mounting or umounting at scale
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- gluster NFS hang observed mounting or umounting at scale
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: No possible to mount a gluster volume via /etc/fstab?
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: Understanding gluster performance
- From: Gionatan Danti <g.danti@xxxxxxxxxx>
- Re: No possible to mount a gluster volume via /etc/fstab?
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: No possible to mount a gluster volume via /etc/fstab?
- From: Sherry Reese <s.reese4u@xxxxxxxxx>
- Re: Orphaned shard files
- From: Stefan <gluster@xxxxxxxxxxxxxxxxx>
- Re: No possible to mount a gluster volume via /etc/fstab?
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Orphaned shard files
- From: Stefan <gluster@xxxxxxxxxxxxxxxxx>
- Re: No possible to mount a gluster volume via /etc/fstab?
- From: Sherry Reese <s.reese4u@xxxxxxxxx>
- Re: No possible to mount a gluster volume via /etc/fstab?
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- No possible to mount a gluster volume via /etc/fstab?
- From: Sherry Reese <s.reese4u@xxxxxxxxx>
- moving from replica 2 to replica 3 arbiter 1
- From: Jim Laib <jlaib01@xxxxxxxxx>
- moving from replicate 2 to replicate 3 arbiter 1
- From: Jim Laib <jlaib01@xxxxxxxxx>
- hooks for quorum loss
- From: vlad f halilov <vfh@xxxxxxxxx>
- Re: Understanding gluster performance
- From: Gionatan Danti <g.danti@xxxxxxxxxx>
- Re: Understanding gluster performance
- From: Gionatan Danti <g.danti@xxxxxxxxxx>
- Deleting a broken directory
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: Understanding gluster performance
- From: Yaniv Kaul <ykaul@xxxxxxxxxx>
- Re: Understanding gluster performance
- From: Gionatan Danti <g.danti@xxxxxxxxxx>
- set-up advice
- From: Stijn Vanhandsaeme <stijn@xxxxxxxxxxx>
- Events API Quotas
- From: João Baúto <joao.bauto@xxxxxxxxxxxxxxxxxxxxxxx>
- Understanding gluster performance
- From: Gionatan Danti <g.danti@xxxxxxxxxx>
- FUSE: Client receives no error when triggering autocommit with WRITE FOP
- From: David Spisla <spisla80@xxxxxxxxx>
- Debian Repository changed urls
- From: Michael Böhm <dudleyperkins@xxxxxxxxx>
- [Gluster-devel] Announcing Gluster release 7.2
- From: Rinku Kothiya <rkothiya@xxxxxxxxxx>
- Re: Fwd: Compiling Gluster RPMs for v5.x on Suse SLES15
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: glusterfs performance issue with fio fdatasync
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Repo NFS-Ganesha for SLES 15 SP1
- From: Kaleb Keithley <kkeithle@xxxxxxxxxx>
- Repo NFS-Ganesha for SLES 15 SP1
- From: Christian Meyer <chrmeyer@xxxxxxxxxxx>
- glusterfs performance issue with fio fdatasync
- From: venky evr <venky.evr@xxxxxxxxx>
- Re: To RAID or not to RAID...
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: healing of big files
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: To RAID or not to RAID...
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: healing of big files
- From: vlad f halilov <vfh@xxxxxxxxx>
- Re: To RAID or not to RAID...
- From: Markus Kern <gluster@xxxxxxxxxxx>
- Re: To RAID or not to RAID...
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: healing of big files
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: Expanding distributed dispersed volumes
- From: Strahil <hunter86_bg@xxxxxxxxx>
- To RAID or not to RAID...
- From: Markus Kern <gluster@xxxxxxxxxxx>
- Compiling Gluster RPMs for v5.x on Suse SLES15
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: healing of big files
- From: Michael Böhm <dudleyperkins@xxxxxxxxx>
- Re: Expanding distributed dispersed volumes
- From: Ashish Pandey <aspandey@xxxxxxxxxx>
- healing of big files
- From: vlad f halilov <vfh@xxxxxxxxx>
- Re: Gluster Periodic Brick Process Deaths
- From: Nico van Royen <nico@xxxxxxxxxxxx>
- Expanding distributed dispersed volumes
- From: Markus Kern <gluster@xxxxxxxxxxx>
- Invitation: GlusterFS community meeting @ Tue Jan 14, 2020 11:30am - 12:30pm (IST) (gluster-users@xxxxxxxxxxx)
- From: srakonde@xxxxxxxxxx
- Re: Gluster Periodic Brick Process Deaths
- From: Xavi Hernandez <jahernan@xxxxxxxxxx>
- Re: Gluster Periodic Brick Process Deaths
- From: Ben Tasker <btasker@xxxxxxxxxxxxxx>
- Re: healing does not heal
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: healing does not heal
- From: lejeczek <peljasz@xxxxxxxxxxx>
- Re: healing does not heal
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: healing does not heal
- From: lejeczek <peljasz@xxxxxxxxxxx>
- Re: remove arbiter to add new brick - possible?
- From: Xavi Hernandez <jahernan@xxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: Replacing brick in replicated volume without reducing redundancy?
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: healing does not heal
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: healing does not heal
- From: lejeczek <peljasz@xxxxxxxxxxx>
- Re: healing does not heal
- From: lejeczek <peljasz@xxxxxxxxxxx>
- Re: [External] Re: Problems with gluster distributed mode and numpy memory mapped files
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: [External] Re: Problems with gluster distributed mode and numpy memory mapped files
- From: "Jewell, Paul" <Paul.Jewell@xxxxxxxxxxxxxxxxxxxxxx>
- Re: healing does not heal
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: healing does not heal
- From: lejeczek <peljasz@xxxxxxxxxxx>
- Re: Replacing brick in replicated volume without reducing redundancy?
- From: Stefan <gluster@xxxxxxxxxxxxxxxxx>
- Re: Replacing brick in replicated volume without reducing redundancy?
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Replacing brick in replicated volume without reducing redundancy?
- From: Stefan <gluster@xxxxxxxxxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: healing does not heal
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: No gluster NFS server on localhost
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: Performance tuning suggestions for nvme on aws (Strahil)
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Announcing Gluster release 6.7
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- healing does not heal
- From: lejeczek <peljasz@xxxxxxxxxxx>
- Re: No gluster NFS server on localhost
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: No gluster NFS server on localhost
- From: Kaleb Keithley <kkeithle@xxxxxxxxxx>
- Re: No gluster NFS server on localhost
- From: Yaniv Kaul <ykaul@xxxxxxxxxx>
- Re: No gluster NFS server on localhost
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: No gluster NFS server on localhost
- From: Xie Changlong <zgrep@xxxxxxx>
- Re: No gluster NFS server on localhost
- From: Jim Kinney <jim.kinney@xxxxxxxxx>
- No gluster NFS server on localhost
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: Performance tuning suggestions for nvme on aws (Strahil)
- From: Mohit Agrawal <moagrawa@xxxxxxxxxx>
- Re: Performance tuning suggestions for nvme on aws (Strahil)
- From: Michael Richardson <hello@xxxxxxxxxxxxxxxxxxxxx>
- Re: Performance tuning suggestions for nvme on aws (Strahil)
- From: Mohit Agrawal <moagrawa@xxxxxxxxxx>
- Re: Performance tuning suggestions for nvme on aws
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Performance tuning suggestions for nvme on aws
- From: Michael Richardson <hello@xxxxxxxxxxxxxxxxxxxxx>
- Is it possible to experience data loss while rebalancing a volume?
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: How can I remove a wrong information in "Other names" while checking gluster peer status?
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: How can I remove a wrong information in "Other names" while checking gluster peer status?
- From: Atin Mukherjee <atin.mukherjee83@xxxxxxxxx>
- Re: How can I remove a wrong information in "Other names" while checking gluster peer status?
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: How can I remove a wrong information in "Other names" while checking gluster peer status?
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- How can I remove a wrong information in "Other names" while checking gluster peer status?
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: periodical warnings in brick-log after upgrading to gluster 7.1
- From: Michael Böhm <dudleyperkins@xxxxxxxxx>
- Re: periodical warnings in brick-log after upgrading to gluster 7.1
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: [Gluster-devel] Announcing Gluster release 7.1
- From: Michael Böhm <dudleyperkins@xxxxxxxxx>
- Re: [Gluster-devel] Announcing Gluster release 7.1
- From: Rinku Kothiya <rkothiya@xxxxxxxxxx>
- Re: Gluster 7.0 (CentOS7) issue with hooks
- From: Kaleb Keithley <kkeithle@xxxxxxxxxx>
- Gluster 7.0 (CentOS7) issue with hooks
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- periodical warnings in brick-log after upgrading to gluster 7.1
- From: Michael Böhm <dudleyperkins@xxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: remove arbiter to add new brick - possible?
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: remove arbiter to add new brick - possible?
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: [Gluster-devel] Announcing Gluster release 7.1
- From: Michael Böhm <dudleyperkins@xxxxxxxxx>
- Re: Announcing Gluster release 5.11
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Announcing Gluster release 5.11
- From: Shwetha Acharya <sacharya@xxxxxxxxxx>
- Re: Announcing Gluster release 5.11
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: [Gluster-devel] Announcing Gluster release 7.1
- From: Rinku Kothiya <rkothiya@xxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Strahil <hunter86_bg@xxxxxxxxx>
- remote operation failed [Permission denied] every 10 minutes after upgrading from 5.10 to 7.0
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- [Gluster-devel] Announcing Gluster release 7.1
- From: Rinku Kothiya <rkothiya@xxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Release 6.7: Expected tagging on 26th December
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Invitation: GlusterFS community meeting @ Tue Dec 24, 2019 11:30am - 12:30pm (IST) (gluster-users@xxxxxxxxxxx)
- Re: gluster volume heal info takes a long time
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- gluster volume heal info takes a long time
- From: Sander Hoentjen <sander@xxxxxxxxxxx>
- Re: Entries in heal pending
- From: Szilágyi Balázs <szilagyi.balazs@xxxxxxxxxx>
- Re: remove arbiter to add new brick - possible?
- From: Xavi Hernandez <jahernan@xxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Re: Entries in heal pending
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Entries in heal pending
- From: Szilágyi Balázs <szilagyi.balazs@xxxxxxxxxx>
- remove arbiter to add new brick - possible?
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: Gluster v7 in CentOS7
- From: Kaleb Keithley <kkeithle@xxxxxxxxxx>
- Gluster v7 in CentOS7
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Announcing Gluster release 5.11
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: GFS performance under heavy traffic
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- GFS performance under heavy traffic
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Bug: storage.reserve ignored by self-heal so that bricks are 100% full
- From: David Spisla <spisla80@xxxxxxxxx>
- Bugreport: Pending self-heal when bricks are full
- From: David Spisla <spisla80@xxxxxxxxx>
- Replicated volume does not work if comprised of more than 16 bricks
- From: Vitaly Pyslar <vpyslar@xxxxxx>
- Re: Fwd: VM freeze issue on simple gluster setup.
- From: WK <wkmail@xxxxxxxxx>
- Re: Problems with gluster distributed mode and numpy memory mapped files
- From: "Jewell, Paul" <Paul.Jewell@xxxxxxxxxxxxxxxxxxxxxx>
- How to update quota usage?
- From: Eduardo Mayoral <emayoral@xxxxxxxx>
- Re: Fwd: VM freeze issue on simple gluster setup.
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Problem with heal operation on replica 2: "Launching heal operation to perform full self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all brick processes are running."
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Fwd: VM freeze issue on simple gluster setup.
- From: WK <wkmail@xxxxxxxxx>
- Problem with heal operation on replica 2: "Launching heal operation to perform full self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all brick processes are running."
- From: woz woz <thewoz10@xxxxxxxxx>
- Re: Gluster Periodic Brick Process Deaths
- From: Ben Tasker <btasker@xxxxxxxxxxxxxx>
- Re: Gluster Periodic Brick Process Deaths
- From: Xavi Hernandez <jahernan@xxxxxxxxxx>
- Gluster Periodic Brick Process Deaths
- From: Ben Tasker <btasker@xxxxxxxxxxxxxx>
- Re: Trying to fix files that don't want to heal
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Invitation: GlusterFS community meeting @ Tue Dec 10, 2019 11:30am - 12:30pm (IST) (gluster-users@xxxxxxxxxxx)
- Canceled event: GlusterFS community meeting @ Tue Dec 10, 2019 11:30am - 12:30pm (IST) (gluster-users@xxxxxxxxxxx)
- Re: glusterfind can't find modul utils.py
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: glusterfind can't find modul utils.py
- From: Shwetha Acharya <sacharya@xxxxxxxxxx>
- Release 5.11: Expected tagging on 9th December
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- VM freeze issue on simple gluster setup.
- From: WK <wkmail@xxxxxxxxx>
- Re: In-place volume type conversion
- From: Vijay Bellur <vbellur@xxxxxxxxxx>
- In-place volume type conversion
- From: Dmitry Antipov <dmantipov@xxxxxxxxx>
- glusterfind can't find modul utils.py
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Trying to fix files that don't want to heal
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: mix of replicated and distributed bricks
- From: Strahil <hunter86_bg@xxxxxxxxx>
- mix of replicated and distributed bricks
- From: Jim Kinney <jim.kinney@xxxxxxxxx>
- Re: glusterfs7 client memory leak found
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: [ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: [ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
- From: Jiffin Thottan <jthottan@xxxxxxxxxx>
- Re: Trying to fix files that don't want to heal
- From: Ashish Pandey <aspandey@xxxxxxxxxx>
- Re: Healing completely loss file on replica 3 volume
- From: Karthik Subrahmanya <ksubrahm@xxxxxxxxxx>
- Re: [ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
- From: Krutika Dhananjay <kdhananj@xxxxxxxxxx>
- Re: Unable to setup geo replication
- From: "Tan, Jian Chern" <jian.chern.tan@xxxxxxxxx>
- Re: Unable to setup geo replication
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Re: [Gluster-devel] "rpc_clnt_ping_timer_expired" errors
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Healing completely loss file on replica 3 volume
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Geo-Replication Issue while upgrading
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: Trying to fix files that don't want to heal
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Healing completely loss file on replica 3 volume
- From: Dmitry Antipov <dmantipov@xxxxxxxxx>
- Re: Trying to fix files that don't want to heal
- From: Ashish Pandey <aspandey@xxxxxxxxxx>
- Gluster Release 8.0: Call for proposals
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: Geo-Replication Issue while upgrading
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: Geo-Replication Issue while upgrading
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Trying to fix files that don't want to heal
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: Stale File Handle Errors During Heavy Writes
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: Unable to setup geo replication
- From: "Tan, Jian Chern" <jian.chern.tan@xxxxxxxxx>
- Re: Stale File Handle Errors During Heavy Writes
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Fwd: Re: [ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: Stale File Handle Errors During Heavy Writes
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: Stale File Handle Errors During Heavy Writes
- From: Timothy Orme <torme@xxxxxxxxxxxx>
- Re: Stale File Handle Errors During Heavy Writes
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: Use GlusterFS as storage for images of virtual machines - available issues
- From: Gregor Burck <gregor@xxxxxxxxxxxxx>
- Re: Use GlusterFS as storage for images of virtual machines - available issues
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Use GlusterFS as storage for images of virtual machines - available issues
- From: Sankarshan Mukhopadhyay <sankarshan.mukhopadhyay@xxxxxxxxx>
- Re: Use GlusterFS as storage for images of virtual machines - available issues
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Use GlusterFS as storage for images of virtual machines - available issues
- From: Dmitry Melekhov <dm@xxxxxxxxxx>
- Re: Use GlusterFS as storage for images of virtual machines - available issues
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Use GlusterFS as storage for images of virtual machines - available issues
- From: Gregor Burck <gregor@xxxxxxxxxxxxx>
- Re: Use GlusterFS as storage for images of virtual machines - available issues
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Use GlusterFS as storage for images of virtual machines - available issues
- From: Gregor Burck <gregor@xxxxxxxxxxxxx>
- Re: Unable to setup geo replication
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Re: Unable to setup geo replication
- From: "Tan, Jian Chern" <jian.chern.tan@xxxxxxxxx>
- Re: [Gluster-devel] Modifying gluster's logging mechanism
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Stale File Handle Errors During Heavy Writes
- From: Timothy Orme <torme@xxxxxxxxxxxx>
- Re: heal info reporting not connected after replace brick
- From: Alan <alan@xxxxxxxxxxx>
- Re: heal info reporting not connected after replace brick
- From: Strahil <hunter86_bg@xxxxxxxxx>
- heal info reporting not connected after replace brick
- From: Alan <alan@xxxxxxxxxxx>
- Re: Unable to setup geo replication
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Re: Unable to setup geo replication
- From: "Tan, Jian Chern" <jian.chern.tan@xxxxxxxxx>
- Minutes of Gluster Community Meeting (APAC) 26th Nov 2019
- From: Shwetha Acharya <sacharya@xxxxxxxxxx>
- Re: Unable to setup geo replication
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Unable to setup geo replication
- From: "Tan, Jian Chern" <jian.chern.tan@xxxxxxxxx>
- Re: [Gluster-Maintainers] Proposal to change gNFS status
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Updated invitation: Gluster community meeting APAC @ Tue Nov 26, 2019 11:30am - 12:30pm (IST) (gluster-users@xxxxxxxxxxx)
- From: sacharya@xxxxxxxxxx
- Re: Client Handling of Elastic Clusters
- From: Vlad Kopylov <vladkopy@xxxxxxxxx>
- Re: how to downgrade GlusterFS from version 7 to 3.13?
- From: Vlad Kopylov <vladkopy@xxxxxxxxx>
- Re: [Gluster-Maintainers] [Gluster-devel] Modifying gluster's logging mechanism
- From: Barak Sason Rofman <bsasonro@xxxxxxxxxx>
- Re: [Gluster-Maintainers] Proposal to change gNFS status
- From: Xie Changlong <zgrep@xxxxxxx>
- Re: gNFS vs NFS Ganesha performance
- From: Xie Changlong <zgrep@xxxxxxx>
- Re: Modifying gluster's logging mechanism
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: [Gluster-Maintainers] Proposal to change gNFS status
- From: Niels de Vos <ndevos@xxxxxxxxxx>
- Re: [Gluster-Maintainers] [Gluster-devel] Modifying gluster's logging mechanism
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: [Gluster-devel] Modifying gluster's logging mechanism
- From: Yaniv Kaul <ykaul@xxxxxxxxxx>
- Re: [Gluster-devel] Modifying gluster's logging mechanism
- From: Xie Changlong <zgrep@xxxxxxx>
- gNFS vs NFS Ganesha performance (was: Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status)
- From: Yaniv Kaul <ykaul@xxxxxxxxxx>
- Re: [Gluster-devel] Modifying gluster's logging mechanism
- From: Barak Sason Rofman <bsasonro@xxxxxxxxxx>
- Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status
- From: Xie Changlong <zgrep@xxxxxxx>
- Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status
- From: Yaniv Kaul <ykaul@xxxxxxxxxx>
- Re: [Gluster-devel] Modifying gluster's logging mechanism
- From: Xie Changlong <zgrep@xxxxxxx>
- Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status
- From: Aravinda Vishwanathapura Krishna Murthy <avishwan@xxxxxxxxxx>
- Re: [Gluster-devel] Modifying gluster's logging mechanism
- From: Atin Mukherjee <atin.mukherjee83@xxxxxxxxx>
- Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change gNFS status
- From: Xie Changlong <zgrep@xxxxxxx>
- Re: [Gluster-Maintainers] Proposal to change gNFS status
- From: Kaleb Keithley <kkeithle@xxxxxxxxxx>
- Re: [Gluster-Maintainers] Proposal to change gNFS status
- From: Kaleb Keithley <kkeithle@xxxxxxxxxx>
- Re: Proposal to change gNFS status
- From: Ivan Rossi <rouge2507@xxxxxxxxx>
- Modifying gluster's logging mechanism
- From: Barak Sason Rofman <bsasonro@xxxxxxxxxx>
- Fwd: Proposal to change gNFS status
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: [Gluster-devel] Proposal to change gNFS status
- From: Yaniv Kaul <ykaul@xxxxxxxxxx>
- Proposal to change gNFS status
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: Thin-arbiter questions
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Corrupted Data From Rsync
- From: Timothy Orme <torme@xxxxxxxxxxxx>
- Re: Client disconnections, memory use
- From: Jamie Lawrence <jlawrence@xxxxxxxxxxxxxxx>
- Re: Thin-arbiter questions
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Mysterious volume unmounts
- From: Jamie Lawrence <jlawrence@xxxxxxxxxxxxxxx>
- Re: Geo_replication to Faulty
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Re: Geo_replication to Faulty
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Re: Geo_replication to Faulty
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Re: Thin-arbiter questions
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: Thin-arbiter questions
- From: David Cunningham <dcunningham@xxxxxxxxxxxxx>
- Issues occurred to start glusterfsd with no free space brick
- From: "Kay K." <kkay.jp@xxxxxxxxx>
- Re: Geo_replication to Faulty
- From: Aravinda Vishwanathapura Krishna Murthy <avishwan@xxxxxxxxxx>
- Re: Gluster v6.6 replica 2 arbiter 1 - gfid on arbiter is different
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Gluster v6.6 replica 2 arbiter 1 - gfid on arbiter is different
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: "Transport endpoint is not connected" error + long list of files to be healed
- From: Ashish Pandey <aspandey@xxxxxxxxxx>
- "Transport endpoint is not connected" error + long list of files to be healed
- From: Mauro Tridici <mauro.tridici@xxxxxxx>
- Announcing Gluster release 7
- From: Rinku Kothiya <rkothiya@xxxxxxxxxx>
- Re: Client disconnections, memory use
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Client disconnections, memory use
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Client disconnections, memory use
- From: Jamie Lawrence <jlawrence@xxxxxxxxxxxxxxx>
- Re: socket.so: undefined symbol: xlator_api - bd.so: cannot open shared object file & crypt.so: cannot open shared object file: No such file or directory
- From: Paolo Margara <paolo.margara@xxxxxxxxx>
- Upgrade testing to Gluster 7
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Gluster Community Meeting Minutes APAC - 12 Nov 2019
- From: Sheetal Pamecha <spamecha@xxxxxxxxxx>
- Invitation: Gluster community meeting APAC @ Tue Nov 12, 2019 11:30am - 12:20pm (IST) (gluster-users@xxxxxxxxxxx)
- From: spamecha@xxxxxxxxxx
- Long post about gluster issue after 6.5 to 6.6 upgrade and recovery steps
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: hook script question related to ctdb, shared storage, and bind mounts
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Strange gluster behaviour after snapshot restore
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: Sudden, dramatic performance drops with Glusterfs
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: Transport Endpoint Not Connected When Writing a Lot of Files
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Re: Sudden, dramatic performance drops with Glusterfs
- From: Michael Rightmire <Michael.Rightmire@xxxxxxx>
- Re: Sudden, dramatic performance drops with Glusterfs
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: Symbolic Links under .glusterfs deleted
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Symbolic Links under .glusterfs deleted
- From: Shreyansh Shah <shreyansh.shah@xxxxxxxxxxxxxx>
- Hot tiering and data writes
- From: Green Lantern <greenlntrn@xxxxxxxxx>
- Re: how to downgrade GlusterFS from version 7 to 3.13?
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: Performance is falling rapidly when updating from v5.5 to v7.0
- From: David Spisla <spisla80@xxxxxxxxx>
- Add existing data to new glusterfs install
- From: Michael Rightmire <Michael.Rightmire@xxxxxxx>
- Re: Sudden, dramatic performance drops with Glusterfs
- From: DUCARROZ Birgit <birgit.ducarroz@xxxxxxxx>
- Sudden, dramatic performance drops with Glusterfs
- From: Michael Rightmire <Michael.Rightmire@xxxxxxx>
- Re: Performance is falling rapidly when updating from v5.5 to v7.0
- From: RAFI KC <rkavunga@xxxxxxxxxx>
- Re: Performance is falling rapidly when updating from v5.5 to v7.0
- From: David Spisla <spisla80@xxxxxxxxx>
- how to downgrade GlusterFS from version 7 to 3.13?
- From: Riccardo Murri <riccardo.murri@xxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Riccardo Murri <riccardo.murri@xxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: RAFI KC <rkavunga@xxxxxxxxxx>
- Re: Performance is falling rapidly when updating from v5.5 to v7.0
- From: RAFI KC <rkavunga@xxxxxxxxxx>
- Re: Performance is falling rapidly when updating from v5.5 to v7.0
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: hook script question related to ctdb, shared storage, and bind mounts
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: hook script question related to ctdb, shared storage, and bind mounts
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: Performance is falling rapidly when updating from v5.5 to v7.0
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Announcing Gluster release 6.6
- From: Niels de Vos <ndevos@xxxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Riccardo Murri <riccardo.murri@xxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Riccardo Murri <riccardo.murri@xxxxxxxxx>
- Re: Performance is falling rapidly when updating from v5.5 to v7.0
- From: RAFI KC <rkavunga@xxxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: RAFI KC <rkavunga@xxxxxxxxxx>
- Re: Announcing Gluster release 6.6
- From: Niels de Vos <ndevos@xxxxxxxxxx>
- backup-volfile-server on kubernetes
- From: pankaj kumar <pankaj@datacabinet.systems>
- Re: hook script question related to ctdb, shared storage, and bind mounts
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: Announcing Gluster release 6.6
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Performance is falling rapidly when updating from v5.5 to v7.0
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: hook script question related to ctdb, shared storage, and bind mounts
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Riccardo Murri <riccardo.murri@xxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Riccardo Murri <riccardo.murri@xxxxxxxxx>
- Re: [Samba] Gluster Dispersed Volume via SMB/CIFS
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Re: hook script question related to ctdb, shared storage, and bind mounts
- From: Strahil <hunter86_bg@xxxxxxxxx>
- hook script question related to ctdb, shared storage, and bind mounts
- From: Erik Jacobson <erik.jacobson@xxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Riccardo Murri <riccardo.murri@xxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Riccardo Murri <riccardo.murri@xxxxxxxxx>
- Possible to Export Dispersed Volume via SMB/CIFS
- From: Felix Kölzow <felix.koelzow@xxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Amar Tumballi <amarts@xxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Riccardo Murri <riccardo.murri@xxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: Performance drop when upgrading from 3.8 to 6.5
- From: Riccardo Murri <riccardo.murri@xxxxxxxxx>
- Re: Announcing Gluster release 6.6
- From: Niels de Vos <ndevos@xxxxxxxxxx>