And what is the benefit of having Ceph run on top of that? If you have all
the disks available to all the nodes, why not run ZFS?
ZFS would give you better performance since what you are building would
actually be a local filesystem.
There is no high availability here. Yes... you can try to do old-school
magic with SAN file systems, complicated clustering, and synchronous
replication, but a RAIN approach appeals to me. That is what I see in Ceph.
Don't get me wrong... I love ZFS... but I am trying to figure out a scalable
HA solution that looks like RAIN. (Am I missing a feature of ZFS?)
For risk spreading you should not interconnect all the nodes.
I do understand this. However, our operational setup will not allow
multiple racks at the beginning. So... given the constraints of 1 rack
(with dual power and dual WAN links), I do not see that a pair of
cross-connected SAS switches is any less reliable than a pair of
cross-connected Ethernet switches...
As storage scales and we outgrow the single rack at a location, we can
overflow into a second rack etc.
The more complexity you add to the whole setup, the more likely it is to go
down completely at some point in time.
I'm just trying to understand why you would want to run a distributed
filesystem on top of a bunch of direct attached disks.
I guess I don't consider a SAN a bunch of direct attached disks. The SAS
infrastructure is a SAN with SAS interconnects (versus Fibre Channel, iSCSI,
or InfiniBand)... The disks are accessed via JBOD if desired... or you can put
RAID on top of a group of them. The multiple shelves of drives are a way to
attempt to reduce the dependence on a single piece of hardware (i.e. it
becomes RAIN).
Again, if all the disks are attached locally you'd be better off using ZFS.
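For example, assuming the disks really are local, a double-parity raidz2
pool is a one-liner (the pool and device names here are just placeholders):

    # create a raidz2 pool across six local disks
    zpool create tank raidz2 sda sdb sdc sdd sde sdf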
This is not highly available, and AFAICT, the compute load would not scale
with the storage.
My goal is to be able to scale without having to draw the enormous
power of lots of 1U devices, or buy lots of disks and shelves, each time
I want to add a little capacity.
You can do that: scale by adding a 1U node with 2, 3, or 4 disks at a
time. Depending on your crushmap, you might need to add 3 machines at once.
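To illustrate why: with 3 replicas and a CRUSH rule that places each
replica on a different host, placement needs three distinct machines with
free capacity. A decompiled rule of that kind looks roughly like this (the
rule and bucket names are just placeholders):

    rule replicated_rain {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        # one replica per host, so 3 replicas require 3 distinct hosts
        step chooseleaf firstn 0 type host
        step emit
    }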
Adding three machines at once is what I was trying to avoid (I believe that
I need 3 replicas to make things reasonably redundant). At first glance,
it does not seem like a very dense solution to try to add a bunch of 1U
servers with a few disks. The associated cost of a bunch of 1U Servers over
JBOD, plus (and more importantly) the rack space and power draw, can cause
OPEX problems. I can purchase multiple enclosures, but not fully populate
them with disks/cpus. This gives me a redundant array of nodes (RAIN).
Then, as needed, I can add drives or compute cards to the existing
enclosures for little incremental cost.
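As an aside, the replica count is a per-pool setting; assuming a pool named
"data", something like the following should do it (min_size being the
fewest live replicas the pool will keep serving I/O with):

    # require 3 copies of each object, keep serving I/O with 2
    ceph osd pool set data size 3
    ceph osd pool set data min_size 2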
In your case of three 1U servers above, I can add 12 disks to the existing
4 enclosures
(in groups of three) instead of three 1U servers with 4 disks each. I can
then either run more OSDs on existing compute nodes or I can add one more
compute node and it can handle the new drives with one or more OSDs. If I
run out of space in enclosures, I can add one more shelf (just one) and
start adding drives. I can then "include" the new drives into existing OSDs
such that each existing OSD has a little more storage to worry about. (The
specifics of growing an existing OSD by adding a disk are still a little
fuzzy to me.)
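From what I have read so far, Ceph usually runs one OSD per disk, so a new
drive would mean bringing up a new OSD rather than growing an existing one.
Roughly like this, though the exact syntax varies by release (the id,
weight, and host below are placeholders):

    # allocate a new OSD id (the command prints it, say 12)
    ceph osd create
    # initialize its data directory and authentication key
    ceph-osd -i 12 --mkfs --mkkey
    ceph auth add osd.12 osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-12/keyring
    # place it under its host in the CRUSH map and start it
    ceph osd crush add osd.12 1.0 host=enclosure4
    service ceph start osd.12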
Anybody looked at Atom processors?
Yes, I have.
I'm running Atom D525 (SuperMicro X7SPA-HF) nodes with 4GB of RAM, four 2TB
disks, and an 80GB SSD (an old X25-M) for journaling.
That works, but what I notice is that under heavy recovery the Atoms can't
cope.
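For reference, the journal placement on those nodes is plain ceph.conf
configuration; a minimal sketch, assuming one journal partition per OSD on
the SSD (the hostname and device paths are placeholders):

    [osd]
        # journal size in MB; only used for file-based journals,
        # a raw partition is used in its entirety
        osd journal size = 1024

    [osd.0]
        host = atom-node1
        # journal partition on the X25-M SSD
        osd journal = /dev/sdg1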
I'm thinking about building a couple of nodes with an AMD Brazos
mainboard, something like the Asus E35M1-I.
That is not a server board, but it would just be a reference to see what it
does.
One of the problems with the Atoms is the 4GB memory limitation, with the
AMD Brazos you can use 8GB.
I'm trying to figure out a way to have a really large number of small nodes
for a low price, to build a massive cluster where the impact of losing one
node is very small.
Given that "massive" is a relative term, I am as well... but I'm also trying
to reduce the footprint (power and space) of that "massive" cluster. I also
want to start small (1/2 rack) and scale as needed.