Gluster on EC2 - how to replace failed EBS volume?

Hi all,

Apologies if this has been asked and answered before, but I couldn't find the answer anywhere.

Here's my situation: I'm trying to build a highly available 1TB data volume on EC2. I'm running Gluster 3.1.3 with a replicated volume consisting of two bricks. Each brick lives in a separate Availability Zone and is a RAID0 array of eight 125GB EBS volumes, so the total usable space presented to the Gluster client is 1TB. My question: what is the best practice for replacing a failing/failed EBS volume? It seems I have two choices:
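For reference, here's roughly how each brick was put together (a minimal sketch; the device names, /dev/md0, and the brick path /export/brick1 are illustrative placeholders, not my exact setup):

#!/usr/bin/env python
# Sketch: stripe eight EBS volumes into one RAID0 array and mount the
# result where Gluster will export it as a brick. All names are placeholders.
import subprocess

devices = ["/dev/xvd%s" % c for c in "fghijklm"]  # the 8 attached EBS volumes

# Build the striped array from the eight 125GB members (1TB usable).
subprocess.check_call(["mdadm", "--create", "/dev/md0", "--level=0",
                       "--raid-devices=%d" % len(devices)] + devices)

# Put a filesystem on the array and mount it at the brick path.
subprocess.check_call(["mkfs.xfs", "/dev/md0"])
subprocess.check_call(["mount", "/dev/md0", "/export/brick1"])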

1. Remove the brick from the Gluster volume, stop the array, detach the 8 volumes, create new volumes from the last good snapshots, attach the new volumes, restart the array, re-add the brick to the volume, and perform self-heal (see the sketch after the two options).

or

2. Remove the brick from the Gluster volume, stop the array, detach the 8 volumes, create brand-new empty volumes, attach the new volumes, restart the array, re-add the brick to the volume, and perform self-heal. This one seems like it would take forever and kill performance, since self-heal would have to copy the full 1TB back over from the surviving brick.
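Here's a rough sketch of what I imagine option 1 looking like when scripted, assuming the boto3 EC2 client; the instance ID, snapshot IDs, device names, and paths are all placeholders, and the find/stat walk at the end is the usual way to trigger self-heal from a 3.1.x client mount:

#!/usr/bin/env python
# Sketch of option 1. Run on the brick's instance after removing the brick
# from the Gluster volume and stopping the array. All IDs are hypothetical.
import subprocess
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: the brick's instance

# Placeholder map of device -> last good snapshot, one entry per array
# member (the snapshots must have been taken consistently across all 8).
SNAPSHOTS = {
    "/dev/sdf": "snap-aaaa1111",
    "/dev/sdg": "snap-bbbb2222",
    # ... six more members ...
}

# Create replacement volumes from the snapshots, in the brick's AZ.
new_volumes = {}
for device, snap_id in SNAPSHOTS.items():
    vol = ec2.create_volume(SnapshotId=snap_id, AvailabilityZone="us-east-1a")
    new_volumes[device] = vol["VolumeId"]

# Wait for the volumes, then attach each one where the old member was.
ec2.get_waiter("volume_available").wait(VolumeIds=list(new_volumes.values()))
for device, vol_id in new_volumes.items():
    ec2.attach_volume(VolumeId=vol_id, InstanceId=INSTANCE_ID, Device=device)
ec2.get_waiter("volume_in_use").wait(VolumeIds=list(new_volumes.values()))

# Reassemble the RAID0 array (the kernel may expose /dev/sdf as /dev/xvdf)
# and remount the brick, then re-add it to the Gluster volume.
subprocess.check_call(["mdadm", "--assemble", "/dev/md0"] +
                      sorted(new_volumes.keys()))
subprocess.check_call(["mount", "/dev/md0", "/export/brick1"])

# Trigger self-heal by walking the tree from a client mount (the standard
# method on 3.1.x; the 'gluster volume heal' command came later).
subprocess.check_call(
    "find /mnt/gluster -noleaf -print0 | xargs -0 stat > /dev/null",
    shell=True)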

Or maybe there's a third option that's even better?

Thanks so much,
Don

