design guidance

I've built 'my-first-ceph-cluster' with two of the 4-node, 12-drive Supermicro servers and dual 10Gb interfaces (one cluster, one public).

I now have 9x 36-drive Supermicro StorageServers made available to me, each with dual 10Gb and a single Mellanox IB/40Gb NIC. No 1Gb interfaces except IPMI. 2x 6-core, 6-thread 1.7GHz Xeon processors (12 cores total) for 36 drives. Currently 32GB of RAM. 36x 1TB 7.2k drives.
 
Early usage will be CephFS, exported via NFS and mounted on ESXi 5.5 and 6.0 hosts (migrating from a VMware environment), later transitioning to qemu/kvm/libvirt using native RBD mapping. I tested iSCSI using LIO on the first cluster and saw much worse performance, so NFS seems like the better way to go, but I'm open to other suggestions.
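
For reference, the sort of nfs-ganesha export I had in mind for the CephFS-over-NFS piece looks roughly like this -- just a sketch, the export ID and pseudo path are placeholders, and ESXi 5.5 only speaks NFSv3:

    EXPORT {
        Export_Id = 1;
        Path = /;                  # root of the CephFS tree to export
        Pseudo = /cephfs;          # path the ESXi hosts would mount
        Access_Type = RW;
        Squash = No_Root_Squash;
        Protocols = 3;             # ESXi 5.5 datastores are NFSv3 only
        FSAL {
            Name = CEPH;           # serve the export via libcephfs
        }
    }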

Considerations:
Best-practice documents indicate 0.5 CPU cores per OSD, but I have 36 drives and only 12 cores. Would it be better to create 18x 2-drive RAID0 volumes on the hardware RAID card to present fewer, larger devices to Ceph, or to run multiple drives per OSD?
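
The rough arithmetic behind that question, taking the 0.5-core-per-OSD guideline at face value:

    36 OSDs x 0.5 core = 18 cores needed   (> 12 available)
    18 OSDs x 0.5 core =  9 cores needed   (fits in 12, with a 2-drive RAID0 per OSD)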

There is a single 256GB SSD, which I feel would be a bottleneck if I used it as a journal for all 36 drives, so I believe BlueStore with the journal (WAL/DB) colocated on each drive would be the best option.
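
If I go that route, my understanding is each OSD would simply be created with data, DB and WAL all on the one spinner, something like the following (sketch only, the device name is a placeholder):

    # one OSD per drive, BlueStore data/DB/WAL all colocated on that drive
    ceph-volume lvm create --bluestore --data /dev/sdb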

Is 1.7GHz too slow for what I'm doing?

I like the idea of keeping the public and cluster networks separate. Any suggestions on which interfaces to use for what? I could theoretically push 36Gb/s (figuring 125MB/s for each of the 36 drives), but in reality will I ever see that? Perhaps bond the two 10Gb links for the public network and use the 40Gb as the cluster network? Or split the 40Gb into 4x 10Gb and use 3x 10Gb bonded for each?
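
For the split itself, I'm assuming it's just the usual ceph.conf settings, along these lines (subnets are placeholders):

    [global]
        public network  = 10.0.1.0/24    # e.g. bonded 2x10Gb, client/NFS traffic
        cluster network = 10.0.2.0/24    # e.g. 40Gb link, replication/recovery traffic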


If there is a more appropriate venue for my request, please point me in that direction. 

Thanks,
Dan

