Re: Utilizing DAS on Xen or XCP hosts for OpenStack Cinder

Kyle,


Thanks for your prompt reply. I have been doing some further reading and planning after receiving your valuable input.

 

>> 1.       Is it possible to install Ceph and the Ceph monitors on the XCP

>> (Xen) Dom0, or would we need to install them on the DomU containing the

>> OpenStack components?

> I'm not a Xen guru, but in the case of KVM I would run the OSDs on the hypervisor to avoid virtualization overhead.

 

As you suggested, our plan is to install Ceph at the hypervisor level, i.e. in Dom0.

 

>> 2.       Is Ceph server-aware or rack-aware, so that replicas are not stored

>> on the same server?

> Yes, placement is defined with your CRUSH map and placement rules.
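
For anyone else reading along: having since dug through the CRUSH docs, my understanding is that the failure domain is set in the placement rule. A minimal sketch of a replicated rule as it would appear in a decompiled CRUSH map (the rule name is my own; adjust the bucket names to your hierarchy):

    rule replicated_per_host {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host
            step emit
    }

The "chooseleaf firstn 0 type host" step picks each replica from a different host, so no two copies share a server; swapping "host" for "rack" (assuming rack buckets are declared in the CRUSH hierarchy) would make placement rack-aware instead.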

>> 3.       Are 4TB OSDs too large? We are attempting to restrict the number of

>> OSDs per server to minimise system overhead.

> Nope!

>> Any other feedback regarding our plan would also be welcomed.

> I would probably run each disk as its own OSD, which means you need a bit more memory per host. Networking could certainly be a bottleneck with 8 to 16 spindle nodes. YMMV.

 

I had contemplated having one OSD per spindle, but my worry was both CPU and RAM overhead as well as network bottlenecks (i.e. there is no budget for 10GbE).
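
(For the archives: the sizing guidance I have seen quoted is roughly 1GB of RAM per OSD daemon per TB of storage, to leave headroom for recovery, plus about one core or 1GHz per daemon. Rough numbers for one of our nodes, if, as I read our spec, the 4TB RAID0 OSD is really 4 x 1TB spindles and we ran them as four individual OSDs instead:

    4 OSDs x 1TB x ~1GB/TB  = ~4GB RAM reserved for Ceph
    4 daemons x ~1GHz       = ~4 cores worth of CPU

By that rule the RAM budget is about the same either way, roughly 1GB per TB of storage; the real cost of one-OSD-per-spindle is the extra daemons' CPU and per-process memory baseline.)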

5.            Will 2 x bonded 1GbE links be sufficient for block storage for 7-10 hypervisors, with OSDs on each made up of 4 x RAID0 7200RPM SAS drives? And from user experience, what sort of data throughput should I expect to see?
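
My own back-of-envelope on that, assuming two replicas and roughly 115MB/s of usable payload per 1GbE link (corrections welcome):

    2 x 1GbE bonded      = ~230MB/s nominal per node
    replication x2       = every MB a client writes crosses the wire again to its replica
    rough write ceiling  = ~115MB/s aggregate per node, likely less in practice

Two caveats I am aware of: LACP-style bonds hash per flow, so any single client stream only ever sees one link, and 4 x 7200RPM SAS drives in RAID0 can stream well past 230MB/s sequentially, so the bond rather than the disks looks like the limit for large sequential writes. Small random VM I/O will be seek-bound long before either.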

Thanks

Paul

 

-----Original Message-----
From: Kyle Bader [mailto:kyle.bader@xxxxxxxxx]
Sent: Wednesday, 12 March 2014 7:56 AM
To: Paul Mitchener
Cc: ceph-users
Subject: Re: Utilizing DAS on Xen or XCP hosts for OpenStack Cinder

 

> 1.       Is it possible to install Ceph and the Ceph monitors on the XCP

> (Xen) Dom0, or would we need to install them on the DomU containing the

> OpenStack components?

 

I'm not a Xen guru, but in the case of KVM I would run the OSDs on the hypervisor to avoid virtualization overhead.

 

> 2.       Is Ceph server-aware or rack-aware, so that replicas are not stored

> on the same server?

Yes, placement is defined with your CRUSH map and placement rules.

 

> 3.       Are 4TB OSDs too large? We are attempting to restrict the number of

> OSDs per server to minimise system overhead.

Nope!

 

> Any other feedback regarding our plan would also be welcomed.

 

I would probably run each disk as its own OSD, which means you need a bit more memory per host. Networking could certainly be a bottleneck with 8 to 16 spindle nodes. YMMV.

 

--

 

Kyle


