Re: Lab Newbie Here: Where do I start?

Hello,

Firstly, as a self-proclaimed newbie, you start by reading. A LOT.

Then, when you think you have a good grip on how Ceph works, come here and
we shall strive to dissuade you from that notion. ^o^

On Mon, 2 May 2016 15:29:37 -0400 Michael Ferguson wrote:

> G'Day All,
> 
>  
> 
> I have two old Promise VTrak E310s JBODs (still with support), each with
> 4 600GB Seagate SAS HDDs and 8 2TB SATA HDDs, and two old HP DL360s.
> 
A pure HDD setup (no SSD journals) will have poor performance.
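
Once you have something running you can measure that for yourself, e.g.
with the built-in benchmark (the pool name is whatever you created):

    rados bench -p rbd 30 write --no-cleanup
    rados bench -p rbd 30 seq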

> While I am seeing so many ceph-deploy this and ceph-deploy that, I have
> not found any help that starts with the hardware.
> 
> There seem to be lots of assumptions about the hardware.
>
There are also a HUGE number of guides, examples and suggestions if you
google for "Ceph Hardware"...
 
> Can anyone provide some directional oversight on getting the hardware
> going so as to accept ceph in a HA setting?
> 
>                 For example, how should all these drives be provisioned
> on each VTrak and served up to the HP DL360s, RAID or no RAID?
> 
Once you've done all the reading mentioned above, you'd know that Ceph
(not unlike ZFS) prefers raw, individual disks, so no RAID.

But do keep the HW cache of the VTraks enabled, if that's possible; it
will somewhat offset the lack of SSD journals.
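
As a rough sketch, turning those raw disks into OSDs with ceph-deploy
looks like this (hostname and device names are made up, adjust to yours):

    # one OSD per raw disk, journal co-located on the same HDD
    ceph-deploy osd create store1:sdb store1:sdc
    # if an SSD ever shows up, put the journal there instead:
    # ceph-deploy osd create store1:sdb:/dev/ssd1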

Alas, as you only have 2 of each, your data will NOT be safe in this
scenario with a replication of 2 (dual disk failures DO happen).

So your safe choices are either to use RAID (RAID1, for IOPS) or to get
a 3rd JBOD and storage server.

Or to forget about Ceph, unless this is purely for learning and total
data loss is acceptable to you.
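
If you do get that 3rd node, a minimal ceph.conf excerpt (a sketch on my
part, not a complete config) for sane defaults would be:

    [global]
    # keep 3 copies, stop accepting I/O once fewer than 2 remain
    osd pool default size = 3
    osd pool default min size = 2

With only 2 OSD hosts, the default CRUSH rule (one replica per host)
couldn't place that 3rd copy anyway.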

And we haven't touched on MON servers yet, of which you will also want
at least 3 (they can be shared with other roles if need be) for a stable
service.
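
Bootstrapping them is the usual ceph-deploy dance, along these lines
(hostnames again made up):

    ceph-deploy new mon1 mon2 mon3
    ceph-deploy mon create-initial

The first command writes 'mon initial members' and 'mon host' into
ceph.conf for you.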

>                 I plan to use CentOS
> 
>                 Once ceph is installed and has control of the storage
> from the VTraks I plan to install VirtualBox or Oracle VM or VMware or
> anything else that I can use at zero or minimum cost.
> 
And at the end you finally get to what you actually want to achieve.
I find that to be very common: people ask specific questions without
mentioning the end goal, which would likely get them better advice.

That said, none of those are a good fit for Ceph, as they don't support
it (especially not RBD).

In no particular order:

OpenStack 
OpenNebula
ganeti 
Qemu/KVM w/o any cluster manager (or Pacemaker as CRM) 

do support RBD.
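
For example, plain Qemu can talk to RBD directly, along these lines
(pool and image names are made up):

    # create an image in a pool and boot a VM straight from it
    qemu-img create -f rbd rbd:vms/test-vm 10G
    qemu-system-x86_64 -m 1024 \
        -drive file=rbd:vms/test-vm,format=raw,if=virtio

No filesystem or iSCSI layer in between.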

Also on which HW do you plan to run those VMs?
Your 2 DL360s will probably be maxed out by running Ceph.

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/


