Re: Is it safe to run RAID0 on a replicate cluster?

On 2017-02-28 04:01 PM, Lindsay Mathieson wrote:

On 1 March 2017 at 09:20, Ernie Dunbar <maillist@xxxxxxxxxxxxx> wrote:
Every node in the Gluster array has its RAID array configured as RAID5, so I'd like to improve the performance of each node by changing that to RAID0 instead.

Hi Ernie, sorry; I saw your question earlier and meant to reply, but "stuff" kept happening ... :)

Presuming you're running replica 3, I don't see any issues with converting from RAID5 to RAID0. There should be quite a local performance boost, and I would think it's actually safer: RAID5 rebuild times are horrendous and a performance killer to boot. With RAID0 you'll lose the whole brick if you lose a disk, but depending on your network, healing from the other nodes would probably be quicker than a RAID5 rebuild anyway.
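For what it's worth, if a RAID0 brick does die, the recovery path on the Gluster side would look roughly like this (just a sketch; the volume name `gv0`, hostnames, and brick paths are placeholders, not your actual layout):

```shell
# After replacing the failed disk and recreating the RAID0 array,
# make a fresh filesystem and mount it at a new brick path, then
# tell Gluster to swap it in and rebuild it from the replicas:
gluster volume replace-brick gv0 \
    node2:/bricks/brick1 node2:/bricks/brick1-new commit force

# Watch the self-heal daemon catch the new brick up:
gluster volume heal gv0 info
```

The heal traffic is what makes your network setup matter here - on gigabit, refilling a multi-TB brick takes a while.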

NB: what is your RAID controller? And your network setup?

Alternatively, I believe the general recommendation is to run all your disks in JBOD mode and create a brick per disk, so that an individual disk failure won't affect the other bricks on the node. However, that would require the same number of disks per node.

For myself, I actually run 4 disks per node, set up as RAID10 with ZFS: one ZFS pool and one brick per node. I use it for VM hosting, though, which is quite a different use case - a few very large files.
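For reference, a 4-disk striped-mirror ("RAID10") ZFS pool like that can be built as follows (pool name and device names are placeholders):

```shell
# Two mirrored pairs striped together - ZFS's equivalent of RAID10:
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# One dataset to serve as the Gluster brick:
zfs create tank/brick
```

This gives you local redundancy per node on top of Gluster's replication, at the cost of half the raw capacity.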


We're running Gluster on three Dell 2950s with the PERC 6/i controller. There's only one brick so far, and I think I'm going to have to keep it that way, although some of the data on that brick isn't mail; VM hosting is something we'll be doing with this cluster very soon.

Considering that this is our mail store, I don't think that setting up JBOD with a brick per disk is really reasonable: we'd end up creating new e-mail accounts on effectively random Gluster shares, never mind the tiny detail of what happens when any set of mailboxes outgrows its brick. That makes this otherwise more efficient scheme highly impractical for us.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
