That might be the reason. Perhaps the volfiles were not regenerated after upgrading to the version with the fix.
There is a workaround detailed in [2] for the time being (you will need to copy the shell script into the correct directory for your Gluster release).
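Until the volfiles are regenerated, the manual fix that the linked workaround automates boils down to rewriting the stale shared-brick-count values and restarting glusterd. Here's a rough sketch of that idea, demonstrated on a throwaway copy of a volfile rather than the live /var/lib/glusterd tree; the sed approach is an assumption on my part, not the actual script from [2]:

```shell
# Hypothetical sketch, NOT the script from [2]: fix shared-brick-count by
# hand. Demonstrated on a temp copy so it is safe to run anywhere.
tmp=$(mktemp -d)
cat > "$tmp/dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol" <<'EOF'
volume dev_apkmirror_data-posix
    type storage/posix
    option shared-brick-count 3
end-volume
EOF

# Bricks on separate filesystems should have shared-brick-count 1.
sed -i 's/option shared-brick-count [0-9]*/option shared-brick-count 1/' "$tmp"/*.vol

grep "shared-brick-count" "$tmp"/*.vol

# On the affected node, the same sed would target
# /var/lib/glusterd/vols/dev_apkmirror_data/*.vol, followed by:
#   systemctl restart glusterd
```

(GNU sed assumed for -i; adjust on other platforms.)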
On 17 April 2018 at 09:58, Artem Russakovskii <archon810@xxxxxxxxx> wrote:
To clarify, I was on 3.13.2 previously, recently updated to 4.0.1, and the bug seems to persist in 4.0.1.

On Mon, Apr 16, 2018 at 9:27 PM, Artem Russakovskii <archon810@xxxxxxxxx> wrote:

pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count
dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol
3:    option shared-brick-count 3

dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol
3:    option shared-brick-count 3

dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol
3:    option shared-brick-count 3

On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran <nbalacha@xxxxxxxxxx> wrote:

Hi Artem,

Was the volume size correct before the bricks were expanded? This sounds like [1], but that should have been fixed in 4.0.0. Can you let us know the values of shared-brick-count in the files in /var/lib/glusterd/vols/dev_apkmirror_data/?

On 17 April 2018 at 05:17, Artem Russakovskii <archon810@xxxxxxxxx> wrote:

Hi Nithya,

I'm on Gluster 4.0.1.

I don't think the bricks were smaller before. If they were, they were maybe 20GB (Linode's minimum is 20GB); I then extended them to 25GB, resized with resize2fs as instructed, and have rebooted many times since.
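For context on what those values mean: glusterd uses shared-brick-count to divide a brick's reported capacity when several bricks sit on the same backing filesystem. A rough way to sanity-check whether bricks really share a filesystem is to compare device IDs; this is only an illustration of the idea, not glusterd's actual implementation, and the demo deliberately uses throwaway directories on one filesystem (the situation where a count of 3 would be correct):

```shell
# Illustration only (not glusterd's code): brick paths that report the same
# device ID live on one backing filesystem and share its capacity.
base=$(mktemp -d)
mkdir "$base/brick1" "$base/brick2" "$base/brick3"

# All three demo "bricks" are on one filesystem, so there is 1 distinct
# device ID -- the case where shared-brick-count 3 would be correct.
stat -c '%d' "$base"/brick* | sort -u | wc -l

# On the test server, the equivalent check would be:
#   stat -c '%d' /mnt/pylon_block1 /mnt/pylon_block2 /mnt/pylon_block3
# Three different IDs mean each brick should get shared-brick-count 1.
```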
Yet, gluster refuses to see the full disk size. Here's the status detail output:

gluster volume status dev_apkmirror_data detail

Status of volume: dev_apkmirror_data
------------------------------------------------------------------------------
Brick                : Brick pylon:/mnt/pylon_block1/dev_apkmirror_data
TCP Port             : 49152
RDMA Port            : 0
Online               : Y
Pid                  : 1263
File System          : ext4
Device               : /dev/sdd
Mount Options        : rw,relatime,data=...
Inode Size           : 256
Disk Space Free      : 23.0GB
Total Disk Space     : 24.5GB
Inode Count          : 1638400
Free Inodes          : 1625429
------------------------------------------------------------------------------
Brick                : Brick pylon:/mnt/pylon_block2/dev_apkmirror_data
TCP Port             : 49153
RDMA Port            : 0
Online               : Y
Pid                  : 1288
File System          : ext4
Device               : /dev/sdc
Mount Options        : rw,relatime,data=...
Inode Size           : 256
Disk Space Free      : 24.0GB
Total Disk Space     : 25.5GB
Inode Count          : 1703936
Free Inodes          : 1690965
------------------------------------------------------------------------------
Brick                : Brick pylon:/mnt/pylon_block3/dev_apkmirror_data
TCP Port             : 49154
RDMA Port            : 0
Online               : Y
Pid                  : 1313
File System          : ext4
Device               : /dev/sde
Mount Options        : rw,relatime,data=...
Inode Size           : 256
Disk Space Free      : 23.0GB
Total Disk Space     : 24.5GB
Inode Count          : 1638400
Free Inodes          : 1625433

What's interesting here is that the gluster volume size is exactly 1/3 of the total (8357M * 3 = 25071M).
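That 1/3 ratio is exactly what a stale shared-brick-count of 3 would produce: each brick's capacity gets divided by the count before being reported. A quick check with the numbers quoted above:

```shell
# Brick size as seen by df (25071M) divided by the bogus
# shared-brick-count of 3 gives the size gluster reports for the volume.
echo $(( 25071 / 3 ))   # prints 8357 -- the 8357M seen on the fuse mounts
```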
Yet, each block device is separate, and the total storage available is 25071M on each brick.

The fstab is as follows:

/dev/disk/by-id/scsi-0Linode_Volume_pylon_block1 /mnt/pylon_block1 ext4 defaults 0 2
/dev/disk/by-id/scsi-0Linode_Volume_pylon_block2 /mnt/pylon_block2 ext4 defaults 0 2
/dev/disk/by-id/scsi-0Linode_Volume_pylon_block3 /mnt/pylon_block3 ext4 defaults 0 2
localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data1 glusterfs defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data2 glusterfs defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data3 glusterfs defaults,_netdev,fopen-keep-cache,direct-io-mode=enable 0 0
localhost:/dev_apkmirror_data /mnt/dev_apkmirror_data_ganesha nfs4 defaults,_netdev,bg,intr,soft,timeo=5,retrans=5,actimeo=10,retry=5 0 0

The last entry is for an NFS-Ganesha test, in case it matters (which, by the way, fails miserably with all kinds of stability issues about broken pipes).

Note: this is a test server, so all 3 bricks are attached and mounted on the same server.

On Sun, Apr 15, 2018 at 10:56 PM, Nithya Balachandran <nbalacha@xxxxxxxxxx> wrote:

What version of Gluster are you running? Were the bricks smaller earlier?

Regards,
Nithya

On 15 April 2018 at 00:09, Artem Russakovskii <archon810@xxxxxxxxx> wrote:

Hi,

I have a 3-brick replicate volume, but for some reason I can't get it to expand to the size of the bricks. The bricks are 25GB, but even after multiple gluster restarts and remounts, the volume is only about 8GB.

I believed I could always extend the bricks (we're using Linode block storage, which allows extending block devices after they're created), and gluster would see the newly available space and extend to use it.

Multiple Google searches, and I'm still nowhere.
Any ideas?

df | ack "block|data"
Filesystem                    1M-blocks  Used Available Use% Mounted on
/dev/sdd                         25071M 1491M    22284M   7% /mnt/pylon_block1
/dev/sdc                         26079M 1491M    23241M   7% /mnt/pylon_block2
/dev/sde                         25071M 1491M    22315M   7% /mnt/pylon_block3
localhost:/dev_apkmirror_data     8357M  581M     7428M   8% /mnt/dev_apkmirror_data1
localhost:/dev_apkmirror_data     8357M  581M     7428M   8% /mnt/dev_apkmirror_data2
localhost:/dev_apkmirror_data     8357M  581M     7428M   8% /mnt/dev_apkmirror_data3

gluster volume info

Volume Name: dev_apkmirror_data
Type: Replicate
Volume ID: cd5621ee-7fab-401b-b720-08863717ed56
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: pylon:/mnt/pylon_block1/dev_apkmirror_data
Brick2: pylon:/mnt/pylon_block2/dev_apkmirror_data
Brick3: pylon:/mnt/pylon_block3/dev_apkmirror_data
Options Reconfigured:
disperse.eager-lock: off
cluster.lookup-unhashed: auto
cluster.read-hash-mode: 0
performance.strict-o-direct: on
cluster.shd-max-threads: 12
performance.nl-cache-timeout: 600
performance.nl-cache: on
cluster.quorum-count: 1
cluster.quorum-type: fixed
network.ping-timeout: 5
network.remote-dio: enable
performance.rda-cache-limit: 256MB
performance.parallel-readdir: on
network.inode-lru-limit: 500000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.io-thread-count: 32
server.event-threads: 4
client.event-threads: 4
performance.read-ahead: off
cluster.lookup-optimize: on
performance.client-io-threads: on
performance.cache-size: 1GB
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.readdir-optimize: on

Thank you.

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users