Re: Does gluster suit my need?

Hi,

If you just want hot backup, use DRBD. As far as I'm concerned, read access is
as fast as pure local storage, and write access is as fast as your network .. and
you get to use your native fs (ext3, for example). I use it here en masse and
it's 100% .. and you can sit NFS on top.

Thank you for the information. I might look into DRBD a little later.
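
(From a quick look at the docs, a minimal two-node DRBD resource seems to be
defined roughly like the sketch below. The hostnames, backing disks and
addresses are placeholders, not from either of our setups:

    resource r0 {
      protocol C;                    # synchronous replication: a write completes
                                     # only once both nodes have it
      on node-a {
        device    /dev/drbd0;        # replicated block device the fs sits on
        disk      /dev/sdb1;         # local backing partition (placeholder)
        address   192.168.1.10:7788;
        meta-disk internal;
      }
      on node-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.11:7788;
        meta-disk internal;
      }
    }

Once the resource is up, you would put ext3 on /dev/drbd0 on the primary node
and, as Gareth says, export it over NFS if other machines need access.)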

I'm a little out of my depth with the HPC stuff .. but I had thought all the
cluster-type tools HPC included were aimed at migrating processes across
processors, rather than joining remote processors into one large virtual
processor ... which is what it sounds like you are after.

I would be both impressed and rather surprised if you could run an actual VM
across multiple cluster nodes .. as opposed to threads of a single
application ... we can switch running VMs between cluster nodes with about a
100 ms outage, but that's about as far as we've been able to go.

Yes, you're correct; I had a misunderstanding until you pointed it out. After
reviewing the info on hand, I have a much clearer picture. As I understand it,
EnSpeed does what you described here: it can migrate the whole VM process to
another node's processor, and therefore needs VT support in each node's CPU.
Unfortunately, I don't have many CPUs with VT support :(
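
(Side note: on Linux you can usually tell whether a CPU advertises hardware
virtualisation by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in
/proc/cpuinfo, for example:

    # prints the flags lines only for CPUs that report VT support
    grep -E 'vmx|svm' /proc/cpuinfo

No output generally means no VT, and even when the flag is present,
virtualisation may still need to be enabled in the BIOS.)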

BTW, thank you for your info; I've learned a lot from you.
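
(On the live switch-over Gareth mentions: with Xen at least, moving a running
guest between hosts is normally a single command along these lines. The domain
name and destination host here are just placeholders, and relocation has to be
allowed in xend's configuration on the receiving host first:

    # live-migrate the running guest "vm01" to the host "node2",
    # keeping it running apart from a brief pause at the end
    xm migrate vm01 node2 --live

I mention it only as a sketch of what that kind of migration looks like, not
as something I've tested on this setup.)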

On Sun, Apr 27, 2008 at 6:00 AM, Gareth Bult <gareth@xxxxxxxxxxxxx> wrote:

> Hi,
>
> If you just want hot backup, use DRBD. As far as I'm concerned, read access is
> as fast as pure local storage, and write access is as fast as your network .. and
> you get to use your native fs (ext3, for example). I use it here en masse and
> it's 100% .. and you can sit NFS on top.
>
> I'm a little out of my depth with the HPC stuff .. but I had thought all the
> cluster-type tools HPC included were aimed at migrating processes across
> processors, rather than joining remote processors into one large virtual
> processor ... which is what it sounds like you are after.
>
> I would be both impressed and rather surprised if you could run an actual
> VM across multiple cluster nodes .. as opposed to threads of a single
> application ... we can switch running VMs between cluster nodes with about a
> 100 ms outage, but that's about as far as we've been able to go.
>
> The really funny thing (well, it's not really funny .. figure of speech..)
> is, GlusterFS is actually the most "reliable" of the platforms we've tried
> when it comes to live migration of VMs between cluster nodes .. it's just
> unfortunate that Gluster's other issues make it unusable for this purpose in
> a production environment.
>
> Gareth.
>
>
> ----- Original Message -----
> From: "Alsan Wong" <alsan@xxxxxxxxxxxxxxxx>
> To: "Gareth Bult" <gareth@xxxxxxxxxxxxx>
> Cc: gluster-devel@xxxxxxxxxx
> Sent: Saturday, April 26, 2008 10:28:52 PM GMT +00:00 GMT Britain, Ireland,
> Portugal
> Subject: Re: Does gluster suit my need?
>
> Hi,
>
> Thank you for your clarification.
>
> Yes, I don't actually need GlusterFS. All I need from GlusterFS is AFR, and
> that seems to come at a high price. Using NFS plus a batch backup script may
> work more efficiently.
>
> The interesting part is GlusterHPC. The computing cluster host should have
> sufficient power to run multiple VMs, so the question is whether GlusterHPC
> can help share the processing power across the VMs. I can't find much
> information on GlusterHPC, or on which scenarios it is best suited to.
>
> On Sat, Apr 26, 2008 at 8:47 PM, Gareth Bult <gareth@xxxxxxxxxxxxx> wrote:
>
>> Hi,
>>
>> I've not done anything with GlusterHPC, but from a storage point of view,
>> if your storage is already centralised, I'm not sure there's a lot to be
>> gained by using GlusterFS ... ??
>>
>> Technically, if you have local storage on each node then GlusterFS/Unify
>> is a useful solution, but the performance overhead compared to local
>> storage can be notable.
>>
>> Gareth.
>>
>> ----- Original Message -----
>> From: "Alsan Wong" <alsan@xxxxxxxxxxxxxxxx>
>> To: "Gareth Bult" <gareth@xxxxxxxxxxxxx>
>> Cc: gluster-devel@xxxxxxxxxx
>> Sent: Saturday, April 26, 2008 4:30:31 AM GMT +00:00 GMT Britain, Ireland,
>> Portugal
>> Subject: Re: Does gluster suit my need?
>>
>> Hi,
>>
>> Sorry, I can't fully get your point. Do you mean:
>>
>>    1. I can gain processing power (to help run the VMs) from other
>>    nodes of the computing cluster by using GlusterHPC.
>>    2. In this case, using NFS instead of GlusterFS would be better.
>>
>> No, I didn't try NFS, because I need AFR.
>>
>> On Sat, Apr 26, 2008 at 10:30 AM, Gareth Bult <gareth@xxxxxxxxxxxxx>
>> wrote:
>>
>>> Sure.
>>>
>>> If you're just after processing power, have you tried NFS?
>>>
>>> --
>>> Managing Director, Encryptec Limited
>>> Tel: 0845 5082719, Mob: 0785 3305393
>>> Email: gareth@xxxxxxxxxxxxx
>>>
>>> Statements made are at all times subject to Encryptec's Terms and Conditions of Business, which are available upon request.
>>>
>>>
>>> ----- Original Message -----
>>> From: "Alsan Wong" <alsan@xxxxxxxxxxxxxxxx>
>>> To: "Gareth Bult" <gareth@xxxxxxxxxxxxx>
>>> Cc: gluster-devel@xxxxxxxxxx
>>> Sent: Friday, April 25, 2008 11:51:11 PM GMT +00:00 GMT Britain, Ireland,
>>> Portugal
>>> Subject: Re: Does gluster suit my need?
>>>
>>> Hi,
>>>
>>> Not exactly. I'd like to run the VMs on the computing cluster host (since it
>>> has dual quad-core CPUs with VT support), and gain processing power from the
>>> other nodes of the computing cluster by using GlusterHPC. The VM images would
>>> remain on the computing cluster host, and the data would go to GlusterFS.
>>>
>>> Is that feasible?
>>>
>>> On Sat, Apr 26, 2008 at 6:26 AM, Gareth Bult <gareth@xxxxxxxxxxxxx>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Are you talking about sitting the VM storage on the GlusterFS in the
>>>> form of disk images?
>>>>
>>>> Gareth.
>>>>
>>>> ----- Original Message -----
>>>> From: "Alsan Wong" <alsan.wong@xxxxxxxxx>
>>>> To: gluster-devel@xxxxxxxxxx
>>>> Sent: Friday, April 25, 2008 11:01:38 PM GMT +00:00 GMT Britain,
>>>> Ireland, Portugal
>>>> Subject: Does gluster suit my need?
>>>>
>>>> Hi,
>>>>
>>>> I have several machines, some of them old (actually, not that old, just
>>>> 2 to 3 years). I would like to build a big cluster like this:
>>>>
>>>> I'll divide those machines into three groups: computing, storage, and
>>>> backup storage. As their names suggest, the computing group would be a
>>>> cluster of physical machines responsible for running applications.
>>>>
>>>> Ideally, the computing cluster would run several virtual machines across
>>>> the cluster. Thus, the server of the computing cluster would run VMware or
>>>> Xen, hosting several virtual machines (note: the number of virtual machines
>>>> may exceed the number of physical machines). The other machines in the
>>>> cluster would be diskless (all storage goes to the storage and backup
>>>> storage clusters), acting as suppliers of processing power and as terminals
>>>> to the virtual machines.
>>>>
>>>> As far as I know, a similar architecture can be constructed easily using
>>>> EnSpeed (another OSS project on clustering, http://www.enspeed.com), but one
>>>> of its requirements is hard for me to fulfill - it requires that all CPUs of
>>>> the cluster nodes support VT, and that is not possible in my situation.
>>>>
>>>> I've noticed that another project, Kerrighed (http://kerrighed.org), also
>>>> fits the needs of my computing cluster; I might try it later.
>>>>
>>>> So, can I build such a cluster using GlusterHPC + GlusterFS, or do I need
>>>> to combine it with other solution(s)?
>>>>
>>>> --
>>>> Best Regards.
>>>> Alsan Wong
>>>> _______________________________________________
>>>> Gluster-devel mailing list
>>>> Gluster-devel@xxxxxxxxxx
>>>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards.
>>> Alsan Wong
>>>
>>
>>
>>
>> --
>> Best Regards.
>> Alsan Wong
>>
>
>
>
> --
> Best Regards.
> Alsan Wong
>



-- 
Best Regards.
Alsan Wong

