Re: running Qemu / Hypervisor AND Ceph on the same nodes

There's probably a middle ground where you get the best of both worlds.
Maybe 2-4 OSDs per compute node alongside dedicated Ceph nodes. That way
you get a bit of extra storage and can still use lower-end CPUs, but don't
have to worry so much about resource contention.
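One way to keep a handful of colocated OSDs from starving the guests is to cap
their resources at the service level. A minimal sketch, assuming
systemd-managed OSD units (`ceph-osd@.service`); the specific limits are
illustrative placeholders, not tuned recommendations:

```ini
# /etc/systemd/system/ceph-osd@.service.d/limits.conf
# Hypothetical drop-in capping each colocated OSD daemon so that
# qemu guests keep CPU and memory headroom on the same node.
[Service]
CPUQuota=100%
MemoryLimit=4G
```

After a `systemctl daemon-reload` and an OSD restart, each OSD instance is
confined to roughly one core and 4G of memory, which bounds the contention
you'd otherwise have to budget for per compute node.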

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Martin Millnert
> Sent: 29 March 2015 19:58
> To: Mark Nelson
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  running Qemu / Hypervisor AND Ceph on the same
> nodes
> 
> On Thu, Mar 26, 2015 at 12:36:53PM -0500, Mark Nelson wrote:
> > Having said that, small nodes are
> > absolutely more expensive per OSD as far as raw hardware and
> > power/cooling goes.
> 
> The smaller the volume manufacturers have on the units, the worse the
> margin typically is (from the buyer's side).  Also, CPUs typically run up
> a premium the higher you go.  I've found a lot of local maxima,
> optimization-wise, over the past years, for instance in both 12 OSD/U and
> 18 OSD/U dedicated storage node setups.
>   There may be local maxima along colocated low-scale storage/compute
> nodes, but the one major problem with colocating storage with compute is
> that you can't scale compute independently from storage efficiently using
> that building block alone.  There may be temporal optimizations in doing
> so, however (e.g. before you have reached sufficient scale).
> 
> There's no single optimal answer when you're dealing with 20+ variables to
> consider... :)
> 
> BR,
> Martin




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



