some Ceph questions for new install - newbie warning

We are looking to implement a small Ceph + OpenStack + KVM setup for a college that teaches IT careers. We want to empower teachers and students to self-provision resources and to develop the skills to extend and/or build multi-tenant portals.

Currently:

45 VMs (90% Linux, 10% Windows) using 70 vCPUs from 4 physical servers (real cores, no more than 2 GHz) and 96 GB RAM total.
Three Xen hypervisors, each with two bonded 1 Gb Ethernet NICs for public traffic and another bond for the connection to the NFS storage.
One Debian NAS with 24x 2 TB SATA HDDs and 128 GB RAM, serving NFS v3 to Xen. No usable IOPS data.
The NAS serves its content over bonded 1 Gbps links. No 10 Gb Ethernet at all.

The goal:
1- Use OpenStack with KVM on 3 physical nodes to run Linux and Windows VMs.
2- Use 3 physical hosts (or 5, we're still doing the $$$ math) for Ceph. Still torn between the expensive HP SAS drives and the cheaper HP SATA ones.
3- Use Cinder/Ceph to provide block storage for the Windows VMs (see the sketch after this list).
4- Use a Shared File System (SFS) to mount several read-only file systems on several hosts/labs (is that a good idea?).
5- Use Swift to connect to some storage we have on Amazon.
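
For goal 3, here is a rough sketch of what I understand the Cinder RBD backend does per volume under the hood, written against the python-rbd bindings. The pool name 'volumes' and the image name are placeholders I made up, and it assumes /etc/ceph/ceph.conf plus a client keyring are already in place on the host; please correct me if this picture is wrong:

import rados
import rbd

# Connect to the cluster (assumes ceph.conf and a keyring on this host).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # 'volumes' is only a placeholder pool name for this sketch.
    ioctx = cluster.open_ioctx('volumes')
    try:
        # Create a 20 GiB RBD image, roughly what happens for each Cinder volume.
        rbd.RBD().create(ioctx, 'win-vm-disk', 20 * 1024**3)
        print(rbd.RBD().list(ioctx))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()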

Questions:
1- I know I need RBD/block storage presented to Nova/Cinder for my Windows VMs. But what about the other VMs, which are Linux, or if I'm using containers? Block there too? NFS?
2- On the 3 physical compute nodes we are getting, can I run ceph-mon instances as VMs, or do I really need another 3 physical machines dedicated to ceph-mon?
3- Since we got a pair of 10 Gb switches, I wonder: do the ceph-mon instances generate heavy traffic? Do they sync data between themselves? Or is the heavy lifting on the 10 Gb network done by the storage nodes when CRUSH maps are updated and/or OSDs change/fail/etc.? (See the sketch below.)
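
To make question 3 concrete: my (possibly wrong) understanding is that clients only pull small metadata from ceph-mon (monmap, OSD map, quorum state), for example like this with the python-rados bindings, while the bulk replication/recovery traffic flows between the OSD nodes. Again, the ceph.conf path is an assumption:

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Ask the monitors for their quorum status (small JSON, not bulk data).
    cmd = json.dumps({'prefix': 'quorum_status', 'format': 'json'})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    if ret == 0:
        quorum = json.loads(outbuf)
        print('mons in quorum:', quorum['quorum_names'])
finally:
    cluster.shutdown()

Is that the right mental model, i.e. the mons are light on traffic and the 10 Gb links mostly matter for the OSD/replication network?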

Thanks for your comments.

Erick.

