Re: Cluster vs Distributed? & MySQL Cluster?

It only works for the MyISAM table type.

InnoDB, for instance, will not work.
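For what it's worth, the usual recipe for that MyISAM-only setup is to
disable everything that assumes a single server owns the files. A minimal
my.cnf sketch, assuming a shared GFS datadir at /gfs/mysql (the path and
the exact option set are my assumptions, not something tested in this
thread):

  [mysqld]
  datadir          = /gfs/mysql   # shared GFS mount (assumed path)
  external-locking                # use filesystem locks so two servers can share MyISAM files
  delay_key_write  = OFF          # never buffer index writes in one server's memory
  query_cache_size = 0            # a per-server query cache would return stale rows

Even then, this only arbitrates access to MyISAM's table files; a storage
engine with its own buffer pool, like InnoDB, has no such mechanism, which
is why it will not work.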

--
-- Tom Mornini

On Oct 27, 2006, at 6:50 AM, Johannes russek wrote:

Sorry to jump into the middle of this, but did I understand correctly that
active/active MySQL does actually work?
Regards, Johannes

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx]On Behalf Of David Brieck Jr.
Sent: Thursday, October 26, 2006 4:11 PM
To: linux clustering
Subject: Re:  Cluster vs Distributed? & MySQL Cluster?


On 10/25/06, Michael Will <mwill@xxxxxxxxxxxxxxxxxxxx> wrote:
Are the actual data files in this setup shared between the active mysql daemons?

Last time I looked into this, it seemed that with the shared-nothing model
each mysql daemon would have to keep its own copy of the data, with updates
propagated from active to passive daemons (master-slave model) or between
active daemons (the ndb in-RAM database model).
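
(For concreteness, the master-slave propagation mentioned above is ordinary
MySQL binlog replication; a minimal sketch, where the server IDs, hostname,
and credentials are illustrative placeholders, not from this thread:

  # Master my.cnf: enable the binary log.
  [mysqld]
  server-id = 1
  log-bin   = mysql-bin

  # Slave my.cnf: just needs a distinct server ID.
  [mysqld]
  server-id = 2

  # On the slave, point it at the master and start replicating:
  mysql> CHANGE MASTER TO MASTER_HOST='master.example.com',
      ->     MASTER_USER='repl', MASTER_PASSWORD='secret';
  mysql> START SLAVE;

The ndb model instead keeps the data partitioned across the active data
nodes in RAM, with no shared files at all.)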

Are the mysql daemons running on the GFS I/O nodes that have access to
shared storage via SAN or iSCSI and coordinate locking through the GFS
infrastructure? Or are they running on client nodes that use GFS to
remotely access storage provided by other GFS I/O nodes, which in turn
have access to shared storage via SAN or iSCSI?

Michael


We're using GNBD for the nodes to connect to the storage. We don't
have the fastest storage setup right now, but I'm hopeful that if
everything works well we'll be purchasing a faster storage setup.
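
For anyone trying to reproduce this, the GNBD side looks roughly like the
sketch below. The device path, export name, and server hostname are
assumptions; check gnbd_export(8) and gnbd_import(8) on your release for
the exact flags.

  # On the GNBD server node: start the server daemon, then export
  # the clustered logical volume.
  gnbd_serv
  gnbd_export -d /dev/mapper/vg_san-lv_mysql -e mysql_data

  # On each GFS client node: import all exports from that server.
  gnbd_import -i storage1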

As far as MySQL on GFS goes (excluding anything active-active), with DLM
handling the locking, here are some benchmark comparisons:
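
(Output in this format comes from the sql-bench suite bundled with the
MySQL source tree; a sketch of the invocation, reconstructed from the
"Arguments:" line echoed in the results and assuming it is run from the
sql-bench/ directory against a server reachable over TCP/IP:

  cd sql-bench
  perl run-all-tests --small-test --tcpip --fast --fast-insert --lock-tables

)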

Benchmark on GFS

Benchmark DBD suite: 2.15
Date of test:        2006-10-26  9:49:43
Running tests on:    Linux 2.6.9-42.0.2.ELhugemem i686
Arguments:           --small-test --tcpip --fast --fast-insert --lock-tables
Comments:
Limits from:
Server version:      MySQL 4.1.20/
Optimization:        None
Hardware:

alter-table: Total time: 94 wallclock secs ( 0.02 usr  0.01 sys +  0.00 cusr  0.00 csys =  0.03 CPU)
big-tables:  Total time:  4 wallclock secs ( 0.13 usr  0.14 sys +  0.00 cusr  0.00 csys =  0.27 CPU)
connect:     Total time:  5 wallclock secs ( 0.38 usr  0.53 sys +  0.00 cusr  0.00 csys =  0.91 CPU)
create:      Total time:  8 wallclock secs ( 0.02 usr  0.01 sys +  0.00 cusr  0.00 csys =  0.03 CPU)
insert:      Total time: 17 wallclock secs ( 2.19 usr  1.99 sys +  0.00 cusr  0.00 csys =  4.18 CPU)
select:      Total time: 13 wallclock secs ( 2.36 usr  1.03 sys +  0.00 cusr  0.00 csys =  3.39 CPU)

Benchmark on Local

alter-table: Total time: 70 wallclock secs ( 0.02 usr  0.00 sys +  0.00 cusr  0.00 csys =  0.02 CPU)
big-tables:  Total time:  2 wallclock secs ( 0.11 usr  0.14 sys +  0.00 cusr  0.00 csys =  0.25 CPU)
connect:     Total time:  4 wallclock secs ( 0.37 usr  0.55 sys +  0.00 cusr  0.00 csys =  0.92 CPU)
create:      Total time:  1 wallclock secs ( 0.01 usr  0.00 sys +  0.00 cusr  0.00 csys =  0.01 CPU)
insert:      Total time: 13 wallclock secs ( 2.27 usr  1.95 sys +  0.00 cusr  0.00 csys =  4.22 CPU)
select:      Total time: 12 wallclock secs ( 2.21 usr  0.97 sys +  0.00 cusr  0.00 csys =  3.18 CPU)

It's pretty darn close and I'm willing to take a small performance hit.

Some relevant context: the local storage is RAID 5, while the GFS storage
is RAID 10 shared via CLVM, multipath, and GNBD. So the local test would
probably have been faster still if it had been on RAID 1 or RAID 10
instead of RAID 5.




--
-- Tom Mornini, CTO
-- Engine Yard, Ruby on Rails Hosting
-- Reliability, Ease of Use, Scalability
-- (866) 518-YARD (9273)


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
