Some years ago I had a similar task. What I did:

- We had disk arrays with 24 slots, with up to 4 optional JBODs (24 slots each) stacked on top, and dual 4 Gb fibre-optic (LWL) controllers (costs ;-)
- creating RAID 6 arrays with no more than 7 disks each
- as far as I remember, one hot spare per 4 arrays
- connecting as many of these RAID bricks together with striped GlusterFS as needed
- as for replication, I was planning an offsite duplicate of this architecture, and because losing data was REALLY not an option, also writing everything off to LTFS tapes at a second offsite location. The original LTFS library edition was far too expensive for us, but I found an alternative solution that does the same thing at a much more reasonable price. LTFS is still a big thing in digital archiving; drop me a note if you would like more details on that.
- This way I could fsck all the (not too big) RAIDs in parallel (sped things up)
- proper robustness against disk failure
- space that could grow indefinitely in size (add more and bigger disks) and keep up with access speed (add more servers) at a pretty foreseeable price
- LTFS in the vault provided the finishing touch: data stays accessible even if two out of three sites are down, at a reasonable price (for instance, no heat problem at the tape location)

Nowadays I would go for the same approach, except with ZFS raidz3 bricks instead of the (small) hardware RAID bricks (at least do a thorough test of that first). For simplicity and robustness I wouldn't want to end up with several hundred GlusterFS bricks, each on an individual disk, but would rather leave disk-failure prevention either to hardware RAID or to ZFS and use Gluster to connect these bricks into the filesystem size I need (and to mirror the whole thing to a second site if needed). See the PS below for a rough sketch of the commands involved.

hth
Bernhard
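PS: In case it helps, a minimal sketch of the commands behind each building block (not my exact setup). Server names, device names and pool/volume names (srv1..srv4, /dev/sd*, brick01, bigvol, /dev/st0, /mnt/ltfs) are made up for illustration; the gluster syntax is the 3.x-era CLI and the LTFS commands come from the single-drive tools, so check everything against your own versions before use.

# One brick the old way: RAID 6 over 7 disks plus a hot spare (mdadm, 8 devices total)
mdadm --create /dev/md0 --level=6 --raid-devices=7 --spare-devices=1 /dev/sd[b-i]

# The same brick the way I would build it today: a raidz3 pool with a spare (ZFS on Linux)
zpool create brick01 raidz3 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
zpool add brick01 spare /dev/sdi
zfs create brick01/data          # this filesystem (mounted at /brick01/data) becomes the gluster brick

# Connect one such brick per server into one large striped gluster volume
# (the servers have to be peers first: gluster peer probe srvN)
gluster volume create bigvol stripe 4 \
    srv1:/brick01/data srv2:/brick01/data srv3:/brick01/data srv4:/brick01/data
gluster volume start bigvol
# a plain distributed volume works the same way, just without "stripe 4";
# gluster geo-replication can then mirror the volume asynchronously to the second site

# Third copy in the vault: format and mount an LTFS tape, then copy onto it like a disk
mkltfs -d /dev/st0
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/st0 /mnt/ltfs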
On Dec 25, 2013, at 8:47 PM, Fredrik Häll <hall.fredrik@xxxxxxxxx> wrote: