GlusterFS Spare Bricks?


 



Are there plans to add provisioning of spare bricks in a replicated (or distributed-replicated) configuration? E.g., when a brick in a mirror set dies, the system would rebuild it automatically on a spare, much as a RAID controller rebuilds onto a hot spare.
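Today the closest manual equivalent is a brick replacement followed by a self-heal, which is what such a feature would automate. The volume name, hosts and brick paths below are made up for illustration, and the commands are echoed rather than executed, as a dry run:

```shell
# Manual spare-brick swap, the step a hot-spare feature would automate.
# Dry run: GLUSTER echoes the commands; drop the 'echo' to run for real.
GLUSTER="echo gluster"
VOL=myvol                      # hypothetical volume name
DEAD=server2:/bricks/b1        # the failed brick
SPARE=server4:/bricks/spare1   # a pre-provisioned spare brick

# Swap the dead brick for the spare, then trigger a full self-heal
# so the replica set rebuilds onto it:
$GLUSTER volume replace-brick $VOL $DEAD $SPARE commit force
$GLUSTER volume heal $VOL full
```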

 

Not only would this improve practical reliability, especially on large clusters; it would also make it possible to build better-performing clusters from less expensive components. For example, instead of slow RAID5 bricks on expensive RAID controllers, one could use cheap HBAs and stripe a few disks per brick in RAID0 – which is faster for writes than RAID 5/6 by an order of magnitude (and, by the way, should also improve the Gluster rebuild times many are complaining about). A failure of one such striped brick is not catastrophic in a mirrored Gluster volume – but it is better to have spare bricks standing by, strewn across the cluster heads.
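As a sketch, one such striped brick could be assembled with mdadm; the device names and mount point are hypothetical, and since the commands are destructive they are echoed as a dry run:

```shell
# Build a cheap striped brick from two HBA-attached disks (dry run:
# each tool name is prefixed with 'echo'; remove it to run for real).
MDADM="echo mdadm"
MKFS="echo mkfs.xfs"
MOUNT="echo mount"

# Stripe two whole disks into one RAID0 device:
$MDADM --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
# XFS with 512-byte inodes is the commonly recommended brick filesystem:
$MKFS -i size=512 /dev/md0
$MOUNT /dev/md0 /bricks/b1
```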

 

A more advanced setup at the hardware level involves creating “hybrid disks”, where HDD vdisks are cached by enterprise-class SSDs. It works beautifully and makes HDDs amazingly fast for random transactions. The technology has become widely available on many $500 COTS controllers. However, it is not widely known that the results with HDDs in RAID0 under an SSD cache are 10 to 20 (!) times better than with RAID 5 or 6.
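For those without such a controller, roughly the same hybrid layout can be approximated in software with lvmcache, putting an SSD cache in front of the RAID0 HDD device; all device, volume-group and LV names below are hypothetical, and the commands are echoed as a dry run:

```shell
# SSD-cached RAID0 brick via lvmcache (dry run: commands are echoed).
LVM="echo"
# Assumed devices: /dev/md0 = the RAID0 HDD array, /dev/sdf = an SSD.
$LVM pvcreate /dev/md0 /dev/sdf
$LVM vgcreate vg_brick /dev/md0 /dev/sdf
# Origin LV placed on the HDD stripe:
$LVM lvcreate -n brick -l 100%PVS vg_brick /dev/md0
# Cache pool on the SSD, then attach it to the origin in writeback mode:
$LVM lvcreate --type cache-pool -L 100G -n cpool vg_brick /dev/sdf
$LVM lvconvert --type cache --cachemode writeback --cachepool vg_brick/cpool vg_brick/brick
```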

 

RAID0 is effectively unusable in commercial storage today, the main reason being the absence of hot spares. If, on the other hand, the spares are handled by Gluster in the form of pre-fabricated (cached hardware-RAID0) bricks, both very good performance and reasonably sufficient redundancy should be easily achievable.

