Re: Removing Brick in Distributed GlusterFS

On Tue, Mar 12, 2019 at 8:48 PM Taste-Of-IT <kontakt@xxxxxxxxxxxxxx> wrote:
Hi,

I found a bug report about this for version 3.10; I am running 3.13.2, for your information. As far as I can see, the default 1% rule is still active and it is not honoring the configured value of 0 (= disable storage.reserve).

Let me verify this bug on release 6 and I will update you. (My recommendation, though, would be not to disable it, as that could lead to other problems.)
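
For what it's worth, a quick way to see which value the cluster is actually applying is to query the option directly (a minimal sketch; "vol4" is only inferred from the dht log line quoted further down in this thread):

    gluster volume get vol4 storage.reserve

If that does not report 0, the setting never took effect; if it does report 0 but the rebalance still refuses to place files, that would match the bug you describe.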
 
So what can I do? Finish the remove-brick? Upgrade to a newer version and rerun the rebalance?

thx
Taste

On 12.03.2019 12:45:51, Taste-Of-IT wrote:
Hi Susant,

Thanks for your fast reply and for pointing me to that log. I was able to find the problem: "dht-rebalance.c:1052:__dht_check_free_space] 0-vol4-dht: Could not find any subvol with space accomodating the file"

But the volume detail output and df -h show x TB of free disk space and free inodes as well.
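
A quick way to double-check that per brick (DHT evaluates each subvolume on its own, so aggregated numbers can hide one full brick; the brick path below is a placeholder) is to run on each of the three nodes:

    df -h /path/to/brick    # free space on the brick filesystem
    df -i /path/to/brick    # free inodes on the brick filesystem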

Options Reconfigured:
performance.client-io-threads: on
storage.reserve: 0
performance.parallel-readdir: off
performance.readdir-ahead: off
auth.allow: 192.168.0.*
nfs.disable: off
transport.address-family: inet

OK, since there is enough disk space on the other bricks and I didn't actually complete the remove-brick, can I rerun remove-brick to rebalance the remaining files and folders?
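
If rerunning is the way to go, the sequence would look roughly like this (volume and brick names are placeholders; check status before deciding anything about commit):

    gluster volume remove-brick vol4 node3:/data/brick3 stop    # only if a previous run is still registered
    gluster volume remove-brick vol4 node3:/data/brick3 start
    gluster volume remove-brick vol4 node3:/data/brick3 status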

Thanks
Taste


On 12.03.2019 10:49:13, Susant Palai wrote:
Would it be possible for you to pass along the rebalance log file from the node from which you want to remove the brick? (location: /var/log/glusterfs/<volume-name>-rebalance.log)

+ the following information (a way to collect these is sketched below):
 1 - gluster volume info
 2 - gluster volume status
 3 - df -h output on all 3 nodes
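
Something along these lines would do, run once per node (the output file name is arbitrary):

    gluster volume info    > /tmp/gluster-diag-$(hostname).txt
    gluster volume status >> /tmp/gluster-diag-$(hostname).txt
    df -h                 >> /tmp/gluster-diag-$(hostname).txt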


Susant

On Tue, Mar 12, 2019 at 3:08 PM Taste-Of-IT <kontakt@xxxxxxxxxxxxxx> wrote:
Hi,
I have a 3-node distributed Gluster setup with one volume spanning all 3 nodes/bricks. I want to remove one brick, so I ran gluster volume remove-brick <vol> <brickname> start. The job completes but shows 11960 failures and transfers only 5 TB out of 15 TB of data; there are still files and folders from this volume left on the brick to be removed. I have not yet run the final command with "commit". The other two nodes each have over 6 TB of free space, so theoretically they can hold the remaining data from brick 3.
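
For reference, the sequence described above corresponds roughly to the following (vol4 and node3:/data/brick3 are placeholder names; data only actually leaves the volume layout at the commit step):

    gluster volume remove-brick vol4 node3:/data/brick3 start
    gluster volume remove-brick vol4 node3:/data/brick3 status
    gluster volume remove-brick vol4 node3:/data/brick3 commit   # not run yet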

Need help.
thx
Taste
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
