Gluster Users - Date Index
- Re: Version uplift query
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: Version uplift query
- From: Poornima Gurusiddaiah <pgurusid@xxxxxxxxxx>
- Re: Fwd: Added bricks with wrong name and now need to remove them without destroying volume.
- From: Jim Kinney <jim.kinney@xxxxxxxxx>
- Re: Fwd: Added bricks with wrong name and now need to remove them without destroying volume.
- From: Tami Greene <tmgreene364@xxxxxxxxx>
- Re: Fwd: Added bricks with wrong name and now need to remove them without destroying volume.
- From: Jim Kinney <jim.kinney@xxxxxxxxx>
- Fwd: Added bricks with wrong name and now need to remove them without destroying volume.
- From: Tami Greene <tmgreene364@xxxxxxxxx>
- Added bricks with wrong name and now need to remove them without destroying volume.
- From: Tami Greene <tmgreene364@xxxxxxxxx>
- Re: Version uplift query
- From: Ingo Fischer <ingo@xxxxxxxxxxxxx>
- Re: Version uplift query
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Version uplift query
- From: ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx>
- Re: Gluster and bonding
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- Re: Gluster and bonding
- From: Vincent Royer <vincent@xxxxxxxxxxxxx>
- Re: Gluster and bonding
- From: Alex K <rightkicktech@xxxxxxxxx>
- Version uplift query
- From: ABHISHEK PALIWAL <abhishpaliwal@xxxxxxxxx>
- Re: [ovirt-users] Tracking down high writes in GlusterFS volume
- From: Krutika Dhananjay <kdhananj@xxxxxxxxxx>
- Re: Gluster and bonding
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Code of Conduct Update
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Geo-Replication in "FAULTY" state after files are added to master volume: gsyncd worker crashed in syncdutils with "OSError: [Errno 22] Invalid argument"
- From: Boubacar Cisse <cboubacar@xxxxxxxxx>
- GlusterFS - 6.0RC - Test days (27th, 28th Feb)
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Gluster and bonding
- From: Alvin Starr <alvin@xxxxxxxxxx>
- Re: Gluster and bonding
- From: Boris Zhmurov <bb@xxxxxxxxxxxxxx>
- Re: [Gluster-Maintainers] glusterfs-6.0rc0 released
- From: Shyam Ranganathan <srangana@xxxxxxxxxx>
- Re: Gluster and bonding
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- Re: Gluster and bonding
- From: Martin Toth <snowmailer@xxxxxxxxx>
- Re: Gluster and bonding
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- Re: Gluster and bonding
- From: Dmitry Melekhov <dm@xxxxxxxxxx>
- Re: Gluster and bonding
- From: Jim Kinney <jim.kinney@xxxxxxxxx>
- Re: Gluster and bonding
- From: Martin Toth <snowmailer@xxxxxxxxx>
- Re: Gluster and bonding
- From: Alex K <rightkicktech@xxxxxxxxx>
- Re: Gluster and bonding
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- Re: Gluster and bonding
- From: Dmitry Melekhov <dm@xxxxxxxxxx>
- Gluster and bonding
- From: Alex K <rightkicktech@xxxxxxxxx>
- Re: glusterfsd Ubuntu 18.04 high iowait issues
- From: Kartik Subbarao <subbarao@xxxxxxxxxxxx>
- Gluster geo replication failing to self-heal with the below errors
- From: ajay s <ajays20078@xxxxxxxxx>
- Re: glusterfsd Ubuntu 18.04 high iowait issues
- From: Kartik Subbarao <subbarao@xxxxxxxxxxxx>
- Re: glusterfsd Ubuntu 18.04 high iowait issues
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- glusterfsd Ubuntu 18.04 high iowait issues
- From: Kartik Subbarao <subbarao@xxxxxxxxxxxx>
- Re: GlusterFS Scale
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Re: GlusterFS Scale
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: gluster 5.3: file or directory not read-/writeable, although it exists - cache?
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: gluster 5.3: file or directory not read-/writeable, although it exists - cache?
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: gluster 5.3: file or directory not read-/writeable, although it exists - cache?
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: gluster 5.3: file or directory not read-/writeable, although it exists - cache?
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- gluster 5.3: file or directory not read-/writeable, although it exists - cache?
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Tracking down high writes in GlusterFS volume
- From: Jayme <jaymef@xxxxxxxxx>
- Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- GlusterFS Scale
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- High network traffic with performance.readdir-ahead on
- From: Alberto Bengoa <bengoa@xxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: (PLEASE UNDERSTAND our concern as TOP PRIORITY) : Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Gluster Container Storage: Release Update
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Files on Brick not showing up in ls command
- From: Patrick Nixon <pnixon@xxxxxxxxx>
- Re: Files on Brick not showing up in ls command
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Disabling read-ahead and io-cache for native fuse mounts
- From: Darrell Budic <budic@xxxxxxxxxxxxxxxx>
- Re: Disabling read-ahead and io-cache for native fuse mounts
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Disabling read-ahead and io-cache for native fuse mounts
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Disabling read-ahead and io-cache for native fuse mounts
- From: Manoj Pillai <mpillai@xxxxxxxxxx>
- Re: Disabling read-ahead and io-cache for native fuse mounts
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Disabling read-ahead and io-cache for native fuse mounts
- From: Darrell Budic <budic@xxxxxxxxxxxxxxxx>
- Re: Disabling read-ahead and io-cache for native fuse mounts
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Disabling read-ahead and io-cache for native fuse mounts
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: João Baúto <joao.bauto@xxxxxxxxxxxxxxxxxxxxxxx>
- Re: Client failover question
- From: Poornima Gurusiddaiah <pgurusid@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Client failover question
- From: Jim Laib <jlaib01@xxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Glusterfs server
- From: John Quinoz <demic198@xxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Getting timedout error while rebalancing
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: Web Ui for gluster
- From: Nico van Royen <nico@xxxxxxxxxxxx>
- Re: glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write
- From: Soumya Koduri <skoduri@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write
- From: Maurits Lamers <mauritslamers@xxxxxxxxx>
- Inter switching master slave in Gluster Geo Replication
- From: deepu srinivasan <sdeepugd@xxxxxxxxx>
- Web Ui for gluster
- From: deepu srinivasan <sdeepugd@xxxxxxxxx>
- Re: glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write
- From: Maurits Lamers <mauritslamers@xxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- KubeCon Shanghai CFP open through 2019-02-22
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Re: glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write
- From: Maurits Lamers <mauritslamers@xxxxxxxxx>
- Re: glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write
- From: Soumya Koduri <skoduri@xxxxxxxxxx>
- glusterfs 4.1.7 + nfs-ganesha 2.7.1 freeze during write
- From: Maurits Lamers <mauritslamers@xxxxxxxxx>
- Mounting Gluster volume from "old" Ubuntu 14
- From: Nicolas SCHREVEL <nicolas.schrevel@xxxxxxx>
- Re: Getting timedout error while rebalancing
- From: deepu srinivasan <sdeepugd@xxxxxxxxx>
- Re: Getting timedout error while rebalancing
- From: deepu srinivasan <sdeepugd@xxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: usage of harddisks: each hdd a brick? raid?
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Getting timedout error while rebalancing
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Help analise statedumps
- From: Pedro Costa <pedro@pmc.digital>
- Re: Corrupted File readable via FUSE?
- From: FNU Raghavendra Manjunath <rabhat@xxxxxxxxxx>
- Re: Getting timedout error while rebalancing
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Getting timedout error while rebalancing
- From: deepu srinivasan <sdeepugd@xxxxxxxxx>
- RSYNC files renaming issue and timeout errors
- From: Mauro Tridici <mauro.tridici@xxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: 0-epoll: Failed to dispatch handler
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: gluster remove-brick
- From: mohammad kashif <kashif.alig@xxxxxxxxx>
- Re: gluster remove-brick
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Help analise statedumps
- From: Pedro Costa <pedro@pmc.digital>
- Re: gluster remove-brick
- From: mohammad kashif <kashif.alig@xxxxxxxxx>
- Memory management, OOM kills and glusterfs
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- BOF Session - FOSDEM Today
- From: Armin Weißer <armin.weisser@xxxxxxxxxxxx>
- Re: Corrupted File readable via FUSE?
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Help analise statedumps
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: gluster remove-brick
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- gluster remove-brick
- From: mohammad kashif <kashif.alig@xxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Help analise statedumps
- From: Pedro Costa <pedro@pmc.digital>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Corrupted File readable via FUSE?
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Gluster Monthly Newsletter, January 2019
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Re: Files losing permissions in GlusterFS 3.12
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- glusterfs 4.1.6 improving folder listing
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: chrome / chromium crash on gluster
- From: "Dr. Michael J. Chudobiak" <mjc@xxxxxxxxxxxxxxx>
- Re: Files losing permissions in GlusterFS 3.12
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Default Port Range for Bricks
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: chrome / chromium crash on gluster
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: Is it required for a node to meet quorum over all the nodes in storage pool?
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: Default Port Range for Bricks
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fd9671deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fd9731ea218] ) 2-dict: dict is NULL [Invalid argument]
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- chrome / chromium crash on gluster
- From: "Dr. Michael J. Chudobiak" <mjc@xxxxxxxxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: Files losing permissions in GlusterFS 3.12
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: VolumeOpt Set fails of a freshly created volume
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: query about glusterfs 3.12-3 write-behind.c coredump
- From: "Li, Deqian (NSB - CN/Hangzhou)" <deqian.li@xxxxxxxxxxxxxxx>
- Re: query about glusterfs 3.12-3 write-behind.c coredump
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: query about glusterfs 3.12-3 write-behind.c coredump
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: query about glusterfs 3.12-3 write-behind.c coredump
- From: "Li, Deqian (NSB - CN/Hangzhou)" <deqian.li@xxxxxxxxxxxxxxx>
- Re: query about glusterfs 3.12-3 write-behind.c coredump
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- query about glusterfs 3.12-3 write-behind.c coredump
- From: "Li, Deqian (NSB - CN/Hangzhou)" <deqian.li@xxxxxxxxxxxxxxx>
- Default Port Range for Bricks
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: query about glusterd epoll thread get stuck
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Improvements to Gluster upstream documentation
- From: Sunil Kumar Heggodu Gopala Acharya <sheggodu@xxxxxxxxxx>
- query about glusterd epoll thread get stuck
- From: "Zhou, Cynthia (NSB - CN/Hangzhou)" <cynthia.zhou@xxxxxxxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Re: Files losing permissions in GlusterFS 3.12
- From: Frank Ruehlemann <f.ruehlemann@xxxxxxxxxxxxxx>
- Re: Max length for filename
- From: Serkan Çoban <cobanserkan@xxxxxxxxx>
- Max length for filename
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Files losing permissions in GlusterFS 3.12
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Files losing permissions in GlusterFS 3.12
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: Brick stays offline after update from 4.1.6-1.el7 to 4.1.7-1.el7
- From: <max.degraaf@xxxxxxx>
- Re: Gluster performance issues - need advise
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Brick stays offline after update from 4.1.6-1.el7 to 4.1.7-1.el7
- From: <max.degraaf@xxxxxxx>
- Brick stays offline after update from 4.1.6-1.el7 to 4.1.7-1.el7
- From: <max.degraaf@xxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Is it required for a node to meet quorum over all the nodes in storage pool?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: gluster 5, Centos 7.6 and nfs
- From: Jiffin Thottan <jthottan@xxxxxxxxxx>
- gluster 5, Centos 7.6 and nfs
- From: Thing <thing.thing@xxxxxxxxx>
- Re: Can't write to volume using vim/nano
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Madhu Rajanna <mrajanna@xxxxxxxxxx>
- Re: Access to Servers hangs after stop one server...
- From: Gilberto Nunes <gilberto.nunes32@xxxxxxxxx>
- Re: Gluster performance issues - need advise
- From: Darrell Budic <budic@xxxxxxxxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Access to Servers hangs after stop one server...
- From: Scott Worthington <scott.c.worthington@xxxxxxxxx>
- Re: Access to Servers hangs after stop one server...
- From: Gilberto Nunes <gilberto.nunes32@xxxxxxxxx>
- Re: Access to Servers hangs after stop one server...
- From: Gilberto Nunes <gilberto.nunes32@xxxxxxxxx>
- Re: Access to Servers hangs after stop one server...
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Access to Servers hangs after stop one server...
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: Access to Servers hangs after stop one server...
- From: Gilberto Nunes <gilberto.nunes32@xxxxxxxxx>
- Re: Access to Servers hangs after stop one server...
- From: Scott Worthington <scott.c.worthington@xxxxxxxxx>
- Re: Access to Servers hangs after stop one server...
- From: Gilberto Nunes <gilberto.nunes32@xxxxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Mohit Agrawal <moagrawa@xxxxxxxxxx>
- Re: Can't write to volume using vim/nano
- From: Jim Kinney <jim.kinney@xxxxxxxxx>
- Re: Access to Servers hangs after stop one server...
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: Gluster performance issues - need advise
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Madhu Rajanna <mrajanna@xxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Madhu Rajanna <mrajanna@xxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Madhu Rajanna <mrajanna@xxxxxxxxxx>
- gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: server busy
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Can't write to volume using vim/nano
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Can't write to volume using vim/nano
- From: Jim Kinney <jim.kinney@xxxxxxxxx>
- Re: Can't write to volume using vim/nano
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Re: Can't write to volume using vim/nano
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Re: Can't write to volume using vim/nano
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Re: Can't write to volume using vim/nano
- From: Jim Kinney <jim.kinney@xxxxxxxxx>
- Can't write to volume using vim/nano
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Access to Servers hangs after stop one server...
- From: Gilberto Nunes <gilberto.nunes32@xxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: Gluster performance issues - need advise
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: Gluster performance issues - need advise
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Gluster performance issues - need advise
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: writev: Transport endpoint is not connected
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Re: Performance issue, need guidance
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: writev: Transport endpoint is not connected
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: writev: Transport endpoint is not connected
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Performance issue, need guidance
- From: Strahil <hunter86_bg@xxxxxxxxx>
- Re: writev: Transport endpoint is not connected
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Performance issue, need guidance
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- writev: Transport endpoint is not connected
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Announcing Gluster release 5.3 and 4.1.7
- From: Shyam Ranganathan <srangana@xxxxxxxxxx>
- Re: File renaming not geo-replicated
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: File renaming not geo-replicated
- From: Arnaud Launay <asl@xxxxxxxxxx>
- Re: usage of harddisks: each hdd a brick? raid?
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Self/Healing process after node maintenance
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Self/Healing process after node maintenance
- From: Martin Toth <snowmailer@xxxxxxxxx>
- Re: Increasing Bitrot speed glusterfs 4.1.6
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: [External] Re: Samba+Gluster: Performance measurements for small files
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: Increasing Bitrot speed glusterfs 4.1.6
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: Samba+Gluster: Performance measurements for small files
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Samba+Gluster: Performance measurements for small files
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: usage of harddisks: each hdd a brick? raid?
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: [Bugs] Bricks are going offline unable to recover with heal/start force commands
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: [External] Re: Self Heal Confusion
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: Glusterfs backup and restore
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Re: 'dirfingerprint' to get glusterfs directory stats
- From: Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx>
- Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume
- From: Shaik Salam <shaik.salam@xxxxxxx>
- trouble moving files
- From: "Jose V. Carrion" <jocarbur@xxxxxxxxx>
- Re: [Bugs] Unable to create new volume due to pending operations
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Bricks are going offline unable to recover with heal/start force commands
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Unable to create new volume due to pending operations
- From: Shaik Salam <shaik.salam@xxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: To good to be truth speed improvements?
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: To good to be truth speed improvements?
- From: Andreas Davour <ante@xxxxxxxxxxxx>
- Re: To good to be truth speed improvements?
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: To good to be truth speed improvements?
- From: Andreas Davour <ante@xxxxxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: invisible files in some directory
- From: Mauro Tridici <mauro.tridici@xxxxxxx>
- Re: invisible files in some directory
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- invisible files in some directory
- From: Mauro Tridici <mauro.tridici@xxxxxxx>
- Re: To good to be truth speed improvements?
- From: Artem Russakovskii <archon810@xxxxxxxxx>
- Re: Unable to create new files or folders using samba and vfs_glusterfs
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: VolumeOpt Set fails of a freshly created volume
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- VolumeOpt Set fails of a freshly created volume
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: glusterfs 4.1.6 error in starting glusterd service
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- glusterfs 4.1.6 error in starting glusterd service
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Community Meeting Host Needed, 16 Jan at 15:00 UTC
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Re: [External] To good to be truth speed improvements?
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: [External] To good to be truth speed improvements?
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: [External] To good to be truth speed improvements?
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: [External] To good to be truth speed improvements?
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- To good to be truth speed improvements?
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- HELP: Commit failed on localhost. Please check the log file for more details
- From: Mauro Gatti <mauro.list@xxxxxxxxx>
- Re: [Stale file handle] in shard volume
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: Increasing Bitrot speed glusterfs 4.1.6
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: [Stale file handle] in shard volume
- From: Krutika Dhananjay <kdhananj@xxxxxxxxxx>
- Re: [Stale file handle] in shard volume
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Increasing Bitrot speed glusterfs 4.1.6
- From: Amudhan P <amudhan83@xxxxxxxxx>
- GCS 0.5 release
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: [External] Re: A broken file that can not be deleted
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Re: Input/output error on FUSE log
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: [External] Re: A broken file that can not be deleted
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: A broken file that can not be deleted
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: usage of harddisks: each hdd a brick? raid?
- From: Mike Lykov <combr@xxxxx>
- Re: A broken file that can not be deleted
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: usage of harddisks: each hdd a brick? raid?
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: usage of harddisks: each hdd a brick? raid?
- From: Serkan Çoban <cobanserkan@xxxxxxxxx>
- Re: usage of harddisks: each hdd a brick? raid?
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Input/output error on FUSE log
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- Re: usage of harddisks: each hdd a brick? raid?
- A broken file that can not be deleted
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- usage of harddisks: each hdd a brick? raid?
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: replace-brick operation issue...
- From: Anand Malagi <amalagi@xxxxxxxxxxxxx>
- Gluster Monthly Newsletter, December 2018
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Glusterfs backup and restore
- From: Kannan V <kannanv06@xxxxxxxxx>
- 'dirfingerprint' to get glusterfs directory stats
- From: Manhong Dai <daimh@xxxxxxxxx>
- Re: Input/output error on FUSE log
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- Re: [External] Re: Input/output error on FUSE log
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: [External] Re: Input/output error on FUSE log
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- Re: [External] Re: Input/output error on FUSE log
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: [External] Re: Input/output error on FUSE log
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: [External] Re: Input/output error on FUSE log
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Re: [External] Re: Input/output error on FUSE log
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- Re: update to 4.1.6-1 and fix-layout failing
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Glusterfs 4.1.6
- From: Ashish Pandey <aspandey@xxxxxxxxxx>
- Re: Glusterfs 4.1.6
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Glusterfs 4.1.6
- From: Ashish Pandey <aspandey@xxxxxxxxxx>
- Re: Glusterfs 4.1.6
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: [External] Re: Input/output error on FUSE log
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: Input/output error on FUSE log
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Input/output error on FUSE log
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Input/output error on FUSE log
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- Re: [External] Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: update to 4.1.6-1 and fix-layout failing
- From: mohammad kashif <kashif.alig@xxxxxxxxx>
- Re: update to 4.1.6-1 and fix-layout failing
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- update to 4.1.6-1 and fix-layout failing
- From: mohammad kashif <kashif.alig@xxxxxxxxx>
- Re: [Stale file handle] in shard volume
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Glusterfs 4.1.6
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: Glusterfs 4.1.6
- From: Ashish Pandey <aspandey@xxxxxxxxxx>
- Glusterfs 4.1.6
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: Error in Installing Glusterfs-4.1.6 from tar
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Error in Installing Glusterfs-4.1.6 from tar
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Multiple versions on the same machine, errors on glusterd startup
- From: Raphaël Yancey <raphael@xxxxxxxxxxx>
- Re: [Stale file handle] in shard volume
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: [Stale file handle] in shard volume
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: [Stale file handle] in shard volume
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: On making ctime generator enabled by default in stack
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: [External] Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- “Faulty” error while setting up glusterfs georeplication
- From: "Abhilash Mannathanil (amannath)" <amannath@xxxxxxxxx>
- [DHT] serialized readdir(p) across subvols and effect on performance
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Replacing arbiter with thin-arbiter
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Replacing arbiter with thin-arbiter
- From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
- Re: [External] Re: Self Heal Confusion
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: [External] Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: replace-brick operation issue...
- From: Anand Malagi <amalagi@xxxxxxxxxxxxx>
- Re: [External] Re: Self Heal Confusion
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: [Stale file handle] in shard volume
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: WG: Gluster 4.1.6 slow
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- WG: Gluster 4.1.6 slow
- From: "Prof. Dr. Michael Schefczyk" <michael@xxxxxxxxxxxxx>
- Re: Self Heal Confusion
- From: Ashish Pandey <aspandey@xxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Re: Unable to create new files or folders using samba and vfs_glusterfs
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- Re: Error in Installing Glusterfs-4.1.6 from tar
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: Error in Installing Glusterfs-4.1.6 from tar
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Error in Installing Glusterfs-4.1.6 from tar
- From: Amudhan P <amudhan83@xxxxxxxxx>
- Re: Self Heal Confusion
- From: Ashish Pandey <aspandey@xxxxxxxxxx>
- Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- java application crushes while reading a zip file
- From: Dmitry Isakbayev <isakdim@xxxxxxxxx>
- Re: replace-brick operation issue...
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Update on GCS 0.5 release
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- ML server migration this weekend
- From: Michael Scherer <mscherer@xxxxxxxxxx>
- replace-brick operation issue...
- From: Anand Malagi <amalagi@xxxxxxxxxxxxx>
- Re: Self Heal Confusion
- From: John Strunk <jstrunk@xxxxxxxxxx>
- Self Heal Confusion
- From: Brett Holcomb <biholcomb@xxxxxxxxxx>
- Re: Unable to create new files or folders using samba and vfs_glusterfs
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: Unable to create new files or folders using samba and vfs_glusterfs
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- Re: Heketi error: Server busy. Retry operation later.
- From: John Mulligan <phlogistonjohn@xxxxxxxxxxxxx>
- Re: [Stale file handle] in shard volume
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- GlusterFS's HA: After one node is down, opened file can still be written without reopened
- From: "Liu, Dan" <liud.fnst@xxxxxxxxxxxxxx>
- [Stale file handle] in shard volume
- From: Olaf Buitelaar <olaf.buitelaar@xxxxxxxxx>
- Re: Finding my bottle neck
- From: Carl Sirotic <csirotic@xxxxxxxxx>
- Re: Finding my bottle neck
- From: Renaud Fortier <Renaud.Fortier@xxxxxxxxxxxxxx>
- Re: gluster 4.1.6 brick problems: 2 processes for one brick, performance problems
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: Finding my bottle neck
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Finding my bottle neck
- From: csirotic <csirotic@xxxxxxxxx>
- Re: Add Arbiter Brick to Existing Distributed Replicated Volume
- From: Lazuardi Nasution <mrxlazuardin@xxxxxxxxx>
- Re: distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)
- From: Stephen Remde <stephen.remde@xxxxxxxxxxx>
- Re: distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Samba+Gluster: Performance measurements for small files
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Failed to get fd context for a non-anonymous fd
- From: <max.degraaf@xxxxxxx>
- Re: distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)
- From: Stephen Remde <stephen.remde@xxxxxxxxxxx>
- Re: Invisible files
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: [rhgs-devel] Gluster meetup: India
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Re: File renaming not geo-replicated
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: File renaming not geo-replicated
- From: Arnaud Launay <asl@xxxxxxxxxx>
- Re: File renaming not geo-replicated
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: File renaming not geo-replicated
- From: Arnaud Launay <asl@xxxxxxxxxx>
- Gluster meetup: India
- From: Sunny Kumar <sunkumar@xxxxxxxxxx>
- Re: Unable to create new files or folders using samba and vfs_glusterfs
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- about bug 1659378
- From: "Li, Deqian (NSB - CN/Hangzhou)" <deqian.li@xxxxxxxxxxxxxxx>
- Re: Unable to create new files or folders using samba and vfs_glusterfs
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: Invisible files
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Re: Invisible files
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Invisible files
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- S3-compatbile object storage on top of GlusterFS volume
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Failed to get fd context for a non-anonymous fd
- From: <max.degraaf@xxxxxxx>
- Re: Failed to get fd context for a non-anonymous fd
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Failed to get fd context for a non-anonymous fd
- From: <max.degraaf@xxxxxxx>
- Re: Unable to create new files or folders using samba and vfs_glusterfs
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Re: Announcing Gluster release 5.2
- From: "Kaleb S. KEITHLEY" <kkeithle@xxxxxxxxxx>
- Re: Announcing Gluster release 5.2
- From: Shyam Ranganathan <srangana@xxxxxxxxxx>
- Re: Announcing Gluster release 5.2
- From: Lindolfo Meira <meira@xxxxxxxxxxxxxx>
- Announcing Gluster release 5.2
- From: Shyam Ranganathan <srangana@xxxxxxxxxx>
- Re: VMs paused - unknown storage error - Stale file handle - distribute 2 - replica 3 volume with sharding
- From: Marco Lorenzo Crociani <marcoc@xxxxxxxxxxxxxxxxxxxxxxxx>
- Unable to create new files or folders using samba and vfs_glusterfs
- From: Matt Waymack <mwaymack@xxxxxxxxx>
- Re: Geo-replication error=12 (rsync) [solverd]
- From: Stefan Kania <stefan@xxxxxxxxxxxxxxx>
- GCS 0.4 release
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- 'No data available' when using disk encryption on volume
- From: Theodotos Andreou <theo@xxxxxxxxxxxxxxxx>
- Re: Geo-replication error=12 (rsync)
- From: Stefan Kania <stefan@xxxxxxxxxxxxxxx>
- Geo-replication error=12 (rsync)
- From: Stefan Kania <stefan@xxxxxxxxxxxxxxx>
- gluster 4.1.6 brick problems: 2 processes for one brick, performance problems
- From: Hu Bert <revirii@xxxxxxxxxxxxxx>
- Re: distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)
- From: Stephen Remde <stephen.remde@xxxxxxxxxxx>
- Re: Gluster 4.1.6 slow
- From: "Prof. Dr. Michael Schefczyk" <michael@xxxxxxxxxxxxx>
- Re: distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)
- From: Stephen Remde <stephen.remde@xxxxxxxxxxx>
- Glusterd2 project updates (github.com/gluster/glusterd2)
- From: Aravinda <avishwan@xxxxxxxxxx>
- Re: Gluster 4.1.6 slow
- From: "Prof. Dr. Michael Schefczyk" <michael@xxxxxxxxxxxxx>
- Update from GlusterFS project (November -2018)
- From: Amar Tumballi <atumball@xxxxxxxxxx>
- File renaming not geo-replicated
- From: Arnaud Launay <asl@xxxxxxxxxx>
- Re: glusterd keeps resyncing shards over and over again
- From: Chris Drescher <info@xxxxxxxxxxxxxx>
- Re: glusterd keeps resyncing shards over and over again
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Gluster 4.1.6 slow
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: glusterd keeps resyncing shards over and over again
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: glusterd keeps resyncing shards over and over again
- From: Chris Drescher <info@xxxxxxxxxxxxxx>
- glusterd keeps resyncing shards over and over again
- From: Chris Drescher <info@xxxxxxxxxxxxxx>
- Gluster 4.1.6 slow
- From: "Prof. Dr. Michael Schefczyk" <michael@xxxxxxxxxxxxx>
- Re: Missing glusterfs.so (Gluster + Samba-VFS under Ubuntu)
- From: Neil Richardson <neilr@xxxxxxxx>
- Re: Add Arbiter Brick to Existing Distributed Replicated Volume
- From: Dave Sherohman <dave@xxxxxxxxxxxxx>
- Heketi error: Server busy. Retry operation later.
- From: Guillermo Alvarado <guillermoalvarado89@xxxxxxxxx>
- Re: Community Meeting, Tomorrow - Dec 5 at 15:00 UTC
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Re: Community Meeting, Tomorrow - Dec 5 at 15:00 UTC
- From: Vijay Bellur <vbellur@xxxxxxxxxx>
- Gluster Monthly Newsletter, November 2018
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Re: Community Meeting, Tomorrow - Dec 5 at 15:00 UTC
- From: Amar Tumballi <atumball@xxxxxxxxxx>
- Re: Community Meeting, Tomorrow - Dec 5 at 15:00 UTC
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Re: [External] Seeding geo-replication slaves
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- glusterfs.avoid.overwrite [Invalid argument]
- From: Rusty Bower <rusty@xxxxxxxxxxxxxx>
- Community Meeting, Tomorrow - Dec 5 at 15:00 UTC
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Add Arbiter Brick to Existing Distributed Replicated Volume
- From: Lazuardi Nasution <mrxlazuardin@xxxxxxxxx>
- Seeding geo-replication slaves
- From: Conrad Lawes <pillage1791@xxxxxxxxx>
- Corruption dangers growing the bricks of a dist-rep volume w/ sharding, on v3.8.8?
- From: Gambit15 <dougti+gluster@xxxxxxxxx>
- "gfid differs on subvolume"
- From: Gambit15 <dougti+gluster@xxxxxxxxx>
- What are the difference network communications that happen in gluster?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Why commit and force options in replace-brick options
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Announcing Gluster release 4.1.6 and 5.1
- From: Shyam Ranganathan <srangana@xxxxxxxxxx>
- Re: Missing glusterfs.so (Gluster + Samba-VFS under Ubuntu)
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- nginx process become Uninterruptible state on glusterfs client
- From: kiwizhang618 <kiwizhang618@xxxxxxxxx>
- What's the correct way to know other bricks of a replica sets?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: Corresponding op-version for each release?
- From: Shyam Ranganathan <srangana@xxxxxxxxxx>
- Re: [External] Re: Geo Replication / Error: bash: gluster: command not found
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: Geo Replication / Error: bash: gluster: command not found
- From: m0rbidini <m0rbidini@xxxxxxxxx>
- Re: Shared Storage is unmount after stopping glusterd
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Corresponding op-version for each release?
- From: Nico van Royen <nico@xxxxxxxxxxxx>
- Re: gluster task id is not cleared causing tier start failure
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: How to check running transactions in gluster?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: Restricting NFS-Ganesha to use NFSv4.0 only
- From: Nico van Royen <nico@xxxxxxxxxxxx>
- Re: Restricting NFS-Ganesha to use NFSv4.0 only
- From: "Kaleb S. KEITHLEY" <kkeithle@xxxxxxxxxx>
- Re: Restricting NFS-Ganesha to use NFSv4.0 only
- From: Nico van Royen <nico@xxxxxxxxxxxx>
- Missing glusterfs.so (Gluster + Samba-VFS under Ubuntu)
- From: Neil Richardson <neilr@xxxxxxxx>
- Corresponding op-version for each release?
- From: Gambit15 <dougti+gluster@xxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: How to check running transactions in gluster?
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- gluster task id is not cleared causing tier start failure
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: [ovirt-users] VMs paused - unknown storage error - Stale file handle - distribute 2 - replica 3 volume with sharding
- From: Sahina Bose <sabose@xxxxxxxxxx>
- Re: Restricting NFS-Ganesha to use NFSv4.0 only
- From: Jiffin Thottan <jthottan@xxxxxxxxxx>
- Re: How to check running transactions in gluster?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: op-version compatibility with older clients
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: How to check running transactions in gluster?
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: How to check running transactions in gluster?
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: Can glusterd be restarted running on all nodes at once while clients are mounted?
- From: Serkan Çoban <cobanserkan@xxxxxxxxx>
- Re: Can glusterd be restarted running on all nodes at once while clients are mounted?
- From: Andreas Davour <ante@xxxxxxxxxxxx>
- How to check running transactions in gluster?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: Can glusterd be restarted running on all nodes at once while clients are mounted?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: Can glusterd be restarted running on all nodes at once while clients are mounted?
- From: Andreas Davour <ante@xxxxxxxxxxxx>
- Re: Can glusterd be restarted running on all nodes at once while clients are mounted?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: Can glusterd be restarted running on all nodes at once while clients are mounted?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Can glusterd be restarted running on all nodes at once while clients are mounted?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: Gluster distributed replicated setup does not serve read from all bricks belonging to the same replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Gluster distributed replicated setup does not serve read from all bricks belonging to the same replica
- From: Anh Vo <vtqanh@xxxxxxxxx>
- Re: Restricting NFS-Ganesha to use NFSv4.0 only
- From: "Kaleb S. KEITHLEY" <kkeithle@xxxxxxxxxx>
- Restricting NFS-Ganesha to use NFSv4.0 only
- From: Nico van Royen <nico@xxxxxxxxxxxx>
- Re: Gluster distributed replicated setup does not serve read from all bricks belonging to the same replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Geo Replication Stale File Handle with Reached Maximum Retries
- From: Lazuardi Nasution <mrxlazuardin@xxxxxxxxx>
- Re: Gluster snapshot & geo-replication
- From: Marcus Pedersén <marcus.pedersen@xxxxxx>
- Gluster trying to heal /
- From: Pablo Schandin <pablo.schandin@xxxxxxxxxxx>
- Re: Gluster distributed replicated setup does not serve read from all bricks belonging to the same replica
- From: Anh Vo <vtqanh@xxxxxxxxx>
- VMs paused - unknown storage error - Stale file handle - distribute 2 - replica 3 volume with sharding
- From: Marco Lorenzo Crociani <marcoc@xxxxxxxxxxxxxxxxxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Gluster distributed replicated setup does not serve read from all bricks belonging to the same replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: samba client gets mount error(5): Input/output error
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Gluster distributed replicated setup does not serve read from all bricks belonging to the same replica
- From: Anh Vo <vtqanh@xxxxxxxxx>
- op-version compatibility with older clients
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: gluster connection interrupted during transfer
- From: Richard Neuboeck <hawk@xxxxxxxxxxxxxxxx>
- remote operation failed [Transport endpoint is not connected]
- From: hsafe <hsafe@xxxxxxxxxx>
- Re: Gluster snapshot & geo-replication
- From: FNU Raghavendra Manjunath <rabhat@xxxxxxxxxx>
- Gluster Community Meeting, Nov 21 15:00 UTC
- From: Amye Scavarda <amye@xxxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Gudrun Mareike Amedick <g.amedick@xxxxxxxxxxxxxx>
- Re: Deleted file sometimes remains in .glusterfs/unlink
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Deleted file sometimes remains in .glusterfs/unlink
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Hari Gowtham <hgowtham@xxxxxxxxxx>
- Gluster 3.12.14: wrong quota in Distributed Dispersed Volume
- From: Frank Ruehlemann <f.ruehlemann@xxxxxxxxxxxxxx>
- Deleted file sometimes remains in .glusterfs/unlink
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Gluster snapshot & geo-replication
- From: Marcus Pedersén <marcus.pedersen@xxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Is it recommended for Glustereventsd be running on all nodes?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- GCS milestone 0.3
- From: John Strunk <jstrunk@xxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Geo Replication / Error: bash: gluster: command not found
- From: Christos Tsalidis <chtsalid@xxxxxxxxx>
- Re: Is it recommended for Glustereventsd be running on all nodes?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Is it recommended for Glustereventsd be running on all nodes?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: Time Machine network backup on a Gluster volume
- From: Andrew Spott <andrew.spott@xxxxxxxxx>
- Re: Time Machine network backup on a Gluster volume
- From: Andrew Spott <andrew.spott@xxxxxxxxx>
- glusterfs performance report
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Directory selfheal failed: Unable to form layout for directory on 4.1.5 fuse client
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Directory selfheal failed: Unable to form layout for directory on 4.1.5 fuse client
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Elasticsearch on gluster
- From: Vlad Kopylov <vladkopy@xxxxxxxxx>
- Re: latency limits of glusterfs for replicated mod
- From: Vlad Kopylov <vladkopy@xxxxxxxxx>
- Re: Time Machine network backup on a Gluster volume
- From: "Kaleb S. KEITHLEY" <kkeithle@xxxxxxxxxx>
- Re: rdma.management: could not create QP [Permission denied]
- From: Mike Lykov <combr@xxxxx>
- Re: Time Machine network backup on a Gluster volume
- From: "Kaleb S. KEITHLEY" <kkeithle@xxxxxxxxxx>
- Time Machine network backup on a Gluster volume
- From: Andrew Spott <andrew.spott@xxxxxxxxx>
- Re: Elasticsearch on gluster
- From: Andreas Davour <ante@xxxxxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Updates on cockpit-gluster!
- From: Parth Dhanjal <dparth@xxxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- latency limits of glusterfs for replicated mod
- From: Oğuz Yarımtepe <oguzyarimtepe@xxxxxxxxx>
- Elasticsearch on gluster
- From: Kotresh Hiremath Ravishankar <khiremat@xxxxxxxxxx>
- Re: [External] Re: duplicate performance.cache-size with different values
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: duplicate performance.cache-size with different values
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: lejeczek <peljasz@xxxxxxxxxxx>
- Re: Ceph or Gluster for implementing big NAS
- From: Vlad Kopylov <vladkopy@xxxxxxxxx>
- duplicate performance.cache-size with different values
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: Ceph or Gluster for implementing big NAS
- From: Alex Crow <acrow@xxxxxxxxxxxxxxxx>
- Ceph or Gluster for implementing big NAS
- From: Premysl Kouril <premysl.kouril@xxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: On making ctime generator enabled by default in stack
- From: Amar Tumballi <atumball@xxxxxxxxxx>
- Re: On making ctime generator enabled by default in stack
- From: Vijay Bellur <vbellur@xxxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: Anoop C S <anoopcs@xxxxxxxxxxxxx>
- Re: On making ctime generator enabled by default in stack
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: On making ctime generator enabled by default in stack
- From: Vijay Bellur <vbellur@xxxxxxxxxx>
- Re: rdma.management: could not create QP [Permission denied]
- From: Mike Lykov <combr@xxxxx>
- rdma.management: could not create QP [Permission denied]
- From: Thomas Simmons <twsnnva@xxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: "Kaleb S. KEITHLEY" <kkeithle@xxxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: does your samba work with 4.1.x (centos 7.5)
- From: "Kaleb S. KEITHLEY" <kkeithle@xxxxxxxxxx>
- does your samba work with 4.1.x (centos 7.5)
- From: lejeczek <peljasz@xxxxxxxxxxx>
- mixing 3.12.x and 4.1.x - allowed?
- From: lejeczek <peljasz@xxxxxxxxxxx>
- Re: nfs mounts as bricks
- From: Niels de Vos <ndevos@xxxxxxxxxx>
- Re: nfs mounts as bricks
- From: Jiffin Thottan <jthottan@xxxxxxxxxx>
- nfs mounts as bricks
- From: Oğuz Yarımtepe <oguzyarimtepe@xxxxxxxxx>
- Re: Shared Storage is unmount after stopping glusterd
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Failed to mount automatically Gluster Volume on Ubuntu 18.04.1 and GFS v5.0
- From: MOISY Jérôme <jerome.moisy@xxxxxxxxxxxx>
- Re: Shared Storage is unmount after stopping glusterd
- From: Rafi Kavungal Chundattu Parambil <rkavunga@xxxxxxxxxx>
- Re: [External] Re: anyone using gluster-block?
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Re: anyone using gluster-block?
- From: Vijay Bellur <vbellur@xxxxxxxxxx>
- Re: Can't enable shared_storage with Glusterv5.0
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: Failed to mount automatically Gluster Volume on Ubuntu 18.04.1 and GFS v5.0
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: resetted node peers OK but say no volume
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Gluster-users Digest, Vol 126, Issue 40
- From: Igor Cicimov <igorc@xxxxxxxxxxxxxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Update from 3.10.12 to 4.1.5 Seeing many Directory selfheal failed entries in volume log
- From: Diego Remolina <dijuremo@xxxxxxxxx>
- Re: Failed to mount automatically Gluster Volume on Ubuntu 18.04.1 and GFS v5.0
- From: MOISY Jérôme <jerome.moisy@xxxxxxxxxxxx>
- glusterd SIGSEGV crash when create volume with transport=rdma
- From: Mike Lykov <combr@xxxxx>
- Re: glusterd SIGSEGV crash when create volume with transport=rdma
- From: Mike Lykov <combr@xxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Failed to mount automatically Gluster Volume on Ubuntu 18.04.1 and GFS v5.0
- From: MOISY Jérôme <jerome.moisy@xxxxxxxxxxxx>
- Gluster Monthly Newsletter, October 2018
- From: Amye Scavarda <amye@xxxxxxxxxx>
- resetted node peers OK but say no volume
- From: "fsoyer" <fsoyer@xxxxxxxxx>
- Re: Should I be using gluster 3 or gluster 4?
- From: Vlad Kopylov <vladkopy@xxxxxxxxx>
- Re: Can't enable shared_storage with Glusterv5.0
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Can't enable shared_storage with Glusterv5.0
- From: Sanju Rakonde <srakonde@xxxxxxxxxx>
- anyone using gluster-block?
- From: Davide Obbi <davide.obbi@xxxxxxxxxxx>
- Can't enable shared_storage with Glusterv5.0
- From: David Spisla <spisla80@xxxxxxxxx>
- is Samba blind to quotas.
- From: lejeczek <peljasz@xxxxxxxxxxx>
- Shared Storage is unmount after stopping glusterd
- From: David Spisla <spisla80@xxxxxxxxx>
- Re: Should I be using gluster 3 or gluster 4?
- From: Nithya Balachandran <nbalacha@xxxxxxxxxx>
- Re: Should I be using gluster 3 or gluster 4?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: Should I be using gluster 3 or gluster 4?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: posix_handle_hard [file exists]
- From: Krutika Dhananjay <kdhananj@xxxxxxxxxx>
- Re: posix_handle_hard [file exists]
- From: Krutika Dhananjay <kdhananj@xxxxxxxxxx>
- Re: On making ctime generator enabled by default in stack
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: On making ctime generator enabled by default in stack
- From: Vijay Bellur <vbellur@xxxxxxxxxx>
- On making ctime generator enabled by default in stack
- From: Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: posix_handle_hard [file exists]
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- Re: posix_handle_hard [file exists]
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- Re: [Gluster-devel] Consolidating Feature Requests in github
- From: Shyam Ranganathan <srangana@xxxxxxxxxx>
- Consolidating Feature Requests in github
- From: Vijay Bellur <vbellur@xxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: Ravishankar N <ravishankar@xxxxxxxxxx>
- GCS release 0.2
- From: John Strunk <jstrunk@xxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Q'apla brick does not come online with gluster 5.0, even with fresh install
- From: Atin Mukherjee <amukherj@xxxxxxxxxx>
- Re: quota: error returned while attempting to connect to host:(null), port:0
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Q'apla brick does not come online with gluster 5.0, even with fresh install
- From: Computerisms Corporation <bob@xxxxxxxxxxxxxxx>
- Re: brick does not come online with gluster 5.0, even with fresh install
- From: Computerisms Corporation <bob@xxxxxxxxxxxxxxx>
- Re: brick does not come online with gluster 5.0, even with fresh install
- From: Computerisms Corporation <bob@xxxxxxxxxxxxxxx>
- Re: posix_handle_hard [file exists]
- From: Krutika Dhananjay <kdhananj@xxxxxxxxxx>
- Re: Should I be using gluster 3 or gluster 4?
- From: "Kaleb S. KEITHLEY" <kkeithle@xxxxxxxxxx>
- Re: posix_handle_hard [file exists]
- From: Jorick Astrego <jorick@xxxxxxxxxxx>
- Re: How to use system.affinity/distributed.migrate-data on distributed/replicated volume?
- Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: quota: error returned while attempting to connect to host:(null), port:0
- From: mabi <mabi@xxxxxxxxxxxxx>
- Re: Should I be using gluster 3 or gluster 4?
- From: Jeevan Patnaik <g1patnaik@xxxxxxxxx>
- Re: client glusterfs connection problem
- From: Oğuz Yarımtepe <oguzyarimtepe@xxxxxxxxx>
- brick does not come online with gluster 5.0, even with fresh install
- From: Computerisms Corporation <bob@xxxxxxxxxxxxxxx>