Re: Replication logic


 



Just take the slow brick offline during the initial sync and then bring it back online.
The heal will run in the background while the volume stays operational.
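
Roughly, something like this (the volume name 'gvol' is just a placeholder, and killing the brick's glusterfsd process is one common way to take a single brick offline):

    # find the PID of the slow brick's glusterfsd process
    gluster volume status gvol

    # on the slow node: stop the brick process (PID from the output above)
    kill <brick-pid>

    # ... load the initial data through the two fast bricks ...

    # bring the stopped brick back online
    gluster volume start gvol force

    # trigger and monitor the background heal
    gluster volume heal gvol full
    gluster volume heal gvol info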

Any change on Gluster will be slowed down by the latency and low bandwidth to the third brick, but it will be OK.

Have you thought of a 'replica 3' volume with a thin arbiter in the slow location? You would still have the data on two data bricks, but keep the split-brain protection.
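
For the thin-arbiter option, the creation command would look roughly like this (hostnames fast1/fast2/slow1 and the brick paths are placeholders; the 'replica 2 thin-arbiter 1' syntax is from the Gluster thin-arbiter docs):

    # two full data bricks on the fast peers, thin arbiter on the slow one
    gluster volume create gvol replica 2 thin-arbiter 1 \
        fast1:/bricks/gvol fast2:/bricks/gvol \
        slow1:/bricks/gvol-ta
    gluster volume start gvol

The thin arbiter only stores a tiny replica-id file, so the slow link carries almost no data traffic.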

Best Regards,
Strahil Nikolov






On Monday, 28 December 2020 at 23:14:37 GMT+2, Zenon Panoussis <oracle@xxxxxxxxxxxxxxx> wrote: 






>  And you always have the option to reduce the quorum statically to "1" 


This is a very interesting tidbit of information. I was
wondering if there was some way to preload data on a brick,
and I think you might have just given me one.

I have a volume of three peers, one brick each. Two peers
have a fast connection, the third one has a very slow
connection. In normal operation this doesn't matter,
because there will only be fairly small changes to the
filesystem over time. However, when loading the initial
data onto the volume before it becomes operational, the
one slow connection becomes a bottleneck for the two fast
ones. So I'm thinking now whether I could

1. join the three peers and build the empty volume,
2. take the slow peer off-line,
3. load the data on the crippled volume, so that it is
  written to the two fast peers that are still online,
4. take the two fast peers offline and put the slow peer
  online,
5. reduce quorum to 1,
6. load the exact same data locally to the slow peer, and
7. put the two fast peers back online and increase quorum
  to 2.
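
If I understand the knobs correctly, steps 5 and 7 would map onto the client-quorum volume options, something like this (the volume name 'gvol' is a placeholder):

    # step 5: pin the quorum so writes succeed with a single brick
    gluster volume set gvol cluster.quorum-type fixed
    gluster volume set gvol cluster.quorum-count 1

    # step 7: raise the quorum again once all peers are back
    gluster volume set gvol cluster.quorum-count 2
    # or simply return to the default majority behaviour:
    gluster volume set gvol cluster.quorum-type auto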

This would lead to all three bricks having the exact same
data without the delay of the slow transfer, but it will
only work if the exact same metadata are created for the
same files during the two separate loads. That is, if a
given file foo always produces the exact same metadata,
then after loading foo to different bricks on different
occasions, the metadata on all bricks will be identical
and no healing will be needed.
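
One way I could verify that would be to compare the Gluster xattrs of the same file on each brick after the two loads (the brick path here is a placeholder):

    # run as root on each peer, against the brick path (not the mount)
    getfattr -d -m . -e hex /bricks/gvol/path/to/foo
    # trusted.gfid must be identical on all three bricks;
    # non-zero trusted.afr.* counters mean a heal is still pending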

Is that so, or am I imagining impossible acrobatics?

Z


________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users




