RE: Fwd: cluster post

Wow. Sorry all - I didn't mean to offend Shawn. I have been on this list for
years and have only mailed 2 people from it (isplist@logicore - hi Mike!) to
ask for ideas on what I'm working on. Shawn, you owe me an apology - telling
the world what I sent you in a private email is pretty bad. You simply
sounded like someone who would be a good tester for what I'm working on, and
I was hoping you'd be interested. Also, Mike - I'm closer now to a beta if
you want to do any testing with us. Thanks!

Chris

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Shawn Hood
Sent: Friday, November 16, 2007 1:34 PM
To: linux-cluster@xxxxxxxxxx
Subject:  Fwd: cluster post

You guys may want to unsubscribe this fellow.  He spammed me after my first
post to the mailing list.

Shawn


---------- Forwarded message ----------
From: Christopher Hawkins <chawkins@xxxxxxxxxxx>
Date: Nov 16, 2007 5:03 PM
Subject: RE: cluster post
To: shawnlhood@xxxxxxxxx


Hello Shawn,

I saw your post on RedHat's mailing list. You are exactly the kind of
professional we're looking for!

I'm a network engineer who specializes in Linux clusters, and for years I
have been doing projects just like the one you described in your post. About
two years ago I decided to develop my own compilation of cluster tools
because the existing technology left a lot to be desired... sparse
documentation, trying to get code from 5 different projects to work
together, no unified control interface, etc. I'm sure you know the headaches
I'm talking about. :-)

We're a couple months from having a beta version ready for testing, and we
are looking for feedback from potential resellers and customers. Can we have
a discussion about whether this cluster software would work for you? It's a
shared root, load balanced, highly available cluster that can automatically
scale itself up and down in response to demand by PXE booting additional
diskless nodes. New nodes take about 60 seconds from power-on to joining the
load balancing pool, with no local configuration on the node at all. There
is simply no faster or easier way to scale a cluster.
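
[Editor's note: a minimal sketch of the kind of PXE setup described above, assuming dnsmasq as the combined DHCP/TFTP server on the master node and an NFS-exported shared root. Every address, path, and filename here is an illustrative assumption, not taken from the product.]

```
# /etc/dnsmasq.conf on the master node -- hand out addresses and a bootloader:
dhcp-range=192.168.0.100,192.168.0.200,12h   # pool for diskless child nodes
dhcp-boot=pxelinux.0                         # bootloader name sent via DHCP
enable-tftp                                  # serve boot files over TFTP
tftp-root=/srv/tftpboot

# /srv/tftpboot/pxelinux.cfg/default -- boot a kernel with an NFS shared root,
# so a node needs no local disk or local configuration at all:
DEFAULT cluster
LABEL cluster
  KERNEL vmlinuz
  APPEND initrd=initrd.img root=/dev/nfs nfsroot=192.168.0.1:/srv/nfsroot ip=dhcp ro
```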

No SAN is required; it does everything over Gigabit Ethernet. It has
web-based graphical monitoring and configuration, sends email notifications
for significant events, can run most standard Linux services (anything IP
based - we are focusing on Tomcat clusters at first, but it will work for
email, web, etc. just fine), is compatible with virtualization, and is
distribution / kernel agnostic; no kernel patches here. While we are
standardizing on Red Hat / CentOS and IBM hardware, there is nothing
preventing this from running on any normal platform. The cluster is designed
to compete with big iron solutions at a fraction of the price, and to be a
huge scalability and ease of use improvement over something like RH Cluster
Suite. We would sell the software pre-installed on a single or dual (for
high availability) master node and then you can scale it up with as many
child nodes as you want (gig-e being the limiting factor in large
deployments, but we are InfiniBand compatible as well).

Can't wait to hear back from you and see what you think about it - we are
hoping to find some resellers who want to partner up early and exchange
product feedback for generous reseller discounts and the opportunity to
influence the design of the system. If there are features you want or need,
or if we can make it easier for you to sell this to your customers in any
way, we want to hear about it so we can design it in for you.

Thank you for your time, Shawn. I look forward to your response!

Chris Hawkins
President, Bulletproof Linux


-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Shawn Hood
Sent: Tuesday, November 13, 2007 9:59 PM
To: linux-cluster@xxxxxxxxxx
Subject:  howdy

Hey folks,

Just thought I'd introduce myself.  I found this list while perusing some
information on RHCS/GFS.  I figured that it would behoove me to join up,
especially as many of my upcoming projects are HA cluster related.
I recently relocated to the DC metro area from Chapel Hill.  I was working
in blade development at IBM in RTP.  I now work for a small consulting firm
and service a number of clients with Linux infrastructures.

That said, my primary client is a medical coding application service
provider that is doing some really fascinating stuff involving natural
language processing.  These NLP applications are very computationally
expensive--CPU, memory, STORAGE.  My current project is implementing GFS across
multiple Dell PowerEdges running RHEL4.4/4.5/5, connecting to 3 Apple
xraids, and soon a larger first-tier storage device.  Upcoming projects
include implementing clustering to provide redundancy for critical
applications (e-mail, Jabber, JBoss, etc.), across two datacenters (our
corporate HQ and Rackspace).

Anyhow, you guys will probably be seeing some traffic from me.  I guess I'll
go ahead and ask a question:  Are there any must-reads (apart from the sparse RH
documentation) related to RHCS/GFS?  Are there any books that are imperative
reads on the concepts of highly-available infrastructure?

Shawn Hood

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster




--
Shawn Hood
(910) 670-1819 Mobile


