Re: Use single large brick or several smaller ones?

On Sat, Mar 22, 2014 at 3:18 PM, Carlos Capriotti
<capriotti.carlos@xxxxxxxxx> wrote:
> James, can we agree to disagree on that?

Of course! There are lots of different possible configurations, each
with different advantages and disadvantages.

>
> First, one of the main ideas of having a replicated/distributed filesystem
> is to keep your data safe and to avoid a single point of failure or, at the
> very least, to minimize the points of failure.
>
> By adding multiple bricks per server, you are increasing your risk.

I'm not sure I agree in every case.

>
> Regarding performance, glusterd will run on a single core. The more work you
> push to it (multiple volumes), the quicker it will saturate and cap your
> performance. (Well, in all fairness, it will also happen with several
> connections, so let's call this even.)

Actually, glusterd is just the management daemon; you get a separate
glusterfsd process for each brick, so more bricks can increase the
parallelism.
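
To make that concrete, here's a rough sketch (hostnames, paths, and the
volume name are placeholders, not a recommendation). A two-bricks-per-server
replica 2 volume could be created with:

    gluster volume create testvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server1:/export/brick2 server2:/export/brick2
    gluster volume start testvol

After starting it, each server should be running one glusterfsd process per
brick, which you can verify with something like:

    ps -C glusterfsd -o pid,args

so the per-brick glusterfsd processes, not glusterd, carry the IO load.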

>
> Again, on performance, splitting volumes on a RAID6 will only make you
> literally lose disk space IF you divide your volumes on the controller.
> Remember: 6 disks in RAID6 means having the available space of 4 disks,
> since the other two are "spares" (this is a simplification). On the other
> hand, if you use all 12 disks, you will have the available space of
> 10 disks, instead of 4x2=8.

There is a trade-off between performance and maximum available space.
It's up to the individual to pick that spot :)
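
Just to spell out the arithmetic from your example (back-of-the-envelope,
ignoring hot spares and filesystem overhead):

    1 x 12-disk RAID6:           12 - 2           = 10 disks of usable space
    2 x  6-disk RAID6:  (6 - 2) + (6 - 2) = 4 + 4 =  8 disks of usable space

The single big set buys you roughly two disks' worth of extra capacity; the
two smaller sets buy you two bricks (two glusterfsd processes and two
independently rebuilding arrays). Which matters more depends on your
workload.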

>
> If the alternative is to create a single volume on the RAID controller, and
> then create two logical volumes in the OS, you STILL have one single RAID
> controller, so you might still lose out, since the OS would have some
> overhead managing two volumes.
>
> Now, a REALLY important factor is, indeed, having "symmetrical" settings,
> like same processors, same disk configuration for bricks, same amount of
> RAM, same NIC configuration, just to rule out all of those potential
> problems that would make one node wait for the other.
>
> Remember: a gluster write, as would be expected, is only as fast as the
> slowest node, which affects performance greatly.
>
> Hope this helps.
>
>
>
>
> On Fri, Mar 21, 2014 at 7:20 PM, James <purpleidea@xxxxxxxxx> wrote:
>>
>> On Fri, Mar 21, 2014 at 2:02 PM, Justin Dossey <jbd@xxxxxxxxxxxxx> wrote:
>> > The more bricks you allocate, the higher your operational complexity.
>> > One brick per server is perfectly fine.
>>
>> I don't agree that you necessarily have a "higher operational
>> complexity" by adding more bricks per host. Especially if you're using
>> Puppet-Gluster [1] to manage it all ;) I do think you'll have higher
>> complexity if your cluster isn't homogeneous or your bricks per host
>> aren't symmetrical across the cluster, or if you're using chaining.
>> Otherwise, I think it's recommended to use more than one brick.
>>
>> There are a number of reasons why you might want more than one brick per
>> host.
>>
>> * Splitting of large RAID sets (it might be better to have two 12-drive
>> RAID6 sets instead of one giant RAID6 set)
>>
>> * More parallel IO workload (you might want to see how much of a
>> performance gain you get from more bricks with your workload. Keep
>> adding until you plateau. Puppet-Gluster is a useful tool for
>> deploying a cluster (VMs or iron) to test a certain config, and then
>> using it again to re-deploy and test a new brick count).
>>
>> * More than one brick per server is required if you want to do volume
>> chaining. (An advanced, unsupported feature, but it has cool implications.)
>>
>> And so on...
>>
>> The famous "semiosis" has given at least one talk, explaining how he
>> chose his brick count, and detailing his method. I believe he uses 6
>> or 8 bricks per host. If there's a reference, maybe he can chime in
>> and add some context.
>>
>> HTH,
>> James
>>
>> [1] https://github.com/purpleidea/puppet-gluster
>
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users