Nithya,

Responding to an earlier question: before the upgrade we were at 3.10.3 on these servers, but some of the clients were on 3.7.6. From the output below, does this mean that "shared-brick-count" needs to be set to 1 for all bricks? All of the bricks are on separate XFS partitions built on hardware RAID 6 volumes; LVM is not used. The previous setting for cluster.min-free-inodes was 5%, and I changed it to 6% per your instructions below. The df output is still the same, but I have not yet run

find /var/lib/glusterd/vols -type f | xargs sed -i -e 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g'

Should I go ahead and do this?

Output of stat -f for all the bricks:

[root@jacen ~]# stat -f /bricks/data_A*
  File: "/bricks/data_A1"
    ID: 80100000000  Namelen: 255  Type: xfs
    Block size: 4096  Fundamental block size: 4096
    Blocks: Total: 15626471424  Free: 4530515093   Available: 4530515093
    Inodes: Total: 1250159424   Free: 1250028064
  File: "/bricks/data_A2"
    ID: 81100000000  Namelen: 255  Type: xfs
    Block size: 4096  Fundamental block size: 4096
    Blocks: Total: 15626471424  Free: 3653183901   Available: 3653183901
    Inodes: Total: 1250159424   Free: 1250029262
  File: "/bricks/data_A3"
    ID: 82100000000  Namelen: 255  Type: xfs
    Block size: 4096  Fundamental block size: 4096
    Blocks: Total: 15626471424  Free: 15134840607  Available: 15134840607
    Inodes: Total: 1250159424   Free: 1250128031
  File: "/bricks/data_A4"
    ID: 83100000000  Namelen: 255  Type: xfs
    Block size: 4096  Fundamental block size: 4096
    Blocks: Total: 15626471424  Free: 15626461604  Available: 15626461604
    Inodes: Total: 1250159424   Free: 1250153857

[root@jaina dataeng]# stat -f /bricks/data_B*
  File: "/bricks/data_B1"
    ID: 80100000000  Namelen: 255  Type: xfs
    Block size: 4096  Fundamental block size: 4096
    Blocks: Total: 15626471424  Free: 5689640723   Available: 5689640723
    Inodes: Total: 1250159424   Free: 1250047934
  File: "/bricks/data_B2"
    ID: 81100000000  Namelen: 255  Type: xfs
    Block size: 4096  Fundamental block size: 4096
    Blocks: Total: 15626471424  Free: 6623312785   Available: 6623312785
    Inodes: Total: 1250159424   Free: 1250048131
  File: "/bricks/data_B3"
    ID: 82100000000  Namelen: 255  Type: xfs
    Block size: 4096  Fundamental block size: 4096
    Blocks: Total: 15626471424  Free: 15106888485  Available: 15106888485
    Inodes: Total: 1250159424   Free: 1250122139
  File: "/bricks/data_B4"
    ID: 83100000000  Namelen: 255  Type: xfs
    Block size: 4096  Fundamental block size: 4096
    Blocks: Total: 15626471424  Free: 15626461604  Available: 15626461604
    Inodes: Total: 1250159424   Free: 1250153857

Thanks,
Eva    (865) 574-6894
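(For reference, a quick way to see which value each brick volfile currently carries, before and after running that sed one-liner; this is only a sketch and assumes the volfiles for this volume live under /var/lib/glusterd/vols/dataeng, the default glusterd working directory.)

# Print the shared-brick-count line from every volfile of the dataeng volume.
# The path is an assumption; adjust it if glusterd uses a different working directory.
grep "option shared-brick-count" /var/lib/glusterd/vols/dataeng/*.vol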
From: Nithya Balachandran <nbalacha@xxxxxxxxxx>

Thank you, Eva. From the files you sent:

dataeng.jacen.bricks-data_A1-dataeng.vol:  option shared-brick-count 2
dataeng.jacen.bricks-data_A2-dataeng.vol:  option shared-brick-count 2
dataeng.jacen.bricks-data_A3-dataeng.vol:  option shared-brick-count 1
dataeng.jacen.bricks-data_A4-dataeng.vol:  option shared-brick-count 1
dataeng.jaina.bricks-data_B1-dataeng.vol:  option shared-brick-count 0
dataeng.jaina.bricks-data_B2-dataeng.vol:  option shared-brick-count 0
dataeng.jaina.bricks-data_B3-dataeng.vol:  option shared-brick-count 0
dataeng.jaina.bricks-data_B4-dataeng.vol:  option shared-brick-count 0

Are all of these bricks on separate filesystem partitions? If yes, can you please try running the following on one of the gluster nodes and see if the df output is correct after that?

gluster v set dataeng cluster.min-free-inodes 6%

If it doesn't work, please send us the stat -f output for each brick.

Regards,
Nithya

On 31 January 2018 at 20:41, Freer, Eva B. <freereb@xxxxxxxx> wrote:
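(For reference, a compact way to list just the filesystem ID next to each brick path when checking whether bricks sit on the same filesystem; a sketch only, assuming GNU coreutils stat and the brick paths used in this thread.)

# With stat -f, %i prints the filesystem ID in hex and %n the path; bricks on
# separate partitions on a given node should report distinct IDs.
stat -f --format='%i  %n' /bricks/data_A*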
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users