Re-use a brick from an old gluster volume to create a new one.

Hi Gluster Gurus,

Due to some hasty decisions and inadequate planning/testing, I find myself with a single-brick distributed gluster volume. I had initially intended to extend it to a replicated setup with an arbiter, based on a post I found saying that was possible, but I clearly messed up when creating the volume, as I have since come to understand that a distributed brick cannot be converted to a replicated one.

The system is using Gluster 5.4 from the Debian repos.

So it seems I will have to delete the existing volume and create a new one, and I am now thinking it would be more future-proof to go with a 2x2 distributed-replicate setup anyway. Regardless, I am trying to find a path from the old gluster volume to the new one with a minimum of downtime. In the worst-case scenario, I can wipe the existing volume, make a new one, and restore from backup. But I am hoping I can re-use the existing brick in a new gluster configuration and avoid that much downtime.
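For reference, the target layout I have in mind would be created with something roughly like the following (the volume name, hostnames, and brick paths here are placeholders, not my actual ones):

    # create a 2x2 distributed-replicate volume; bricks are paired
    # in the order given, so the first two form one replica pair
    # and the last two form the other
    gluster volume create newvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick2 server4:/data/brick2
    gluster volume start newvol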

So I synced the whole setup into a test environment, and thanks to a helpful post on this list I found this article:

https://joejulian.name/post/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/

So I tried wiping the volume and recreating it, after removing the extended attributes and the .glusterfs directory from the brick, and it initially seems to work in my test environment, sort of. When I run the gluster volume create command with the existing brick as the first one and leave it for a couple of days, the replicated brick ends up with only about 80% of the data. I tested this a few times and the result is pretty consistent.
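Concretely, the cleanup I am doing on the old brick before recreating the volume is what the article describes, roughly this (the brick path is a placeholder):

    # remove the volume-id and gfid extended attributes left by the old volume
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    # remove the internal .glusterfs metadata directory
    rm -rf /data/brick1/.glusterfs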

If I try with a straight 2-brick replicated setup, that never really changes even after triggering multiple heals, and when I list files on the gluster mount, attributes such as owner/group/perms/data are replaced with question marks on a significant number of files, and those files are not ls'able except as part of a directory listing.
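The heals I am triggering are just the standard ones, something like (volume name is a placeholder):

    # force a full self-heal, then check what is still pending
    gluster volume heal newvol full
    gluster volume heal newvol info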

If I try with the 2x2 setup, the replicated brick also has only about 80% of the data initially, and after a few days of rebalancing, df shows the two new distributed bricks to be almost exactly the same size, but the replica of the original/reused brick still ends up 5-7% smaller than the original, and the same symptoms persist: files not being accessible and showing question marks for permissions/owner/data/etc.
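The rebalance is likewise the stock one, roughly:

    # spread existing data across the new distribute subvolumes
    gluster volume rebalance newvol start
    gluster volume rebalance newvol status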

And this takes days, so it is definitely not faster than restoring from backup.

I have been looking for other solutions, but if they exist, I have not found them so far. Could someone provide some guidance, point me at a solution, or tell me whether restoring from backup is really the best way forward?


--
Bob Miller
Cell: 867-334-7117
Office: 867-633-3760
Office: 867-322-0362
www.computerisms.ca
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


