quick questions about a 5-node homelab setup

(Cross-posting this from Reddit /r/ceph, since there's likely a more technical audience here.)

I've scrounged up 5 old Atom Supermicro nodes and would like to run them 24/7/365 as a limited-production RBD cluster with BlueStore (ideally the latest 13.2.4 Mimic) and triple-copy redundancy. The underlying OS is a minimal 64-bit Debian 9 install.

Specs: 

3x Atom 330, 2 GB RAM, 1x SSD, 1x 2 TB HDD, quad 1G NICs (2x Realtek, 2x Intel; one node has only the 2x Realtek): 1 NIC for the private storage network, one front-facing

2x Atom D510, 4 GB RAM, 1x SSD, 1x 4 TB HDD, quad 1G NICs (4x Intel): 1 NIC for the private storage network, one front-facing, one for management (IPMI)

Jumbo frames enabled.
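(For reference, this is roughly how jumbo frames are set persistently on Debian 9 with ifupdown; the interface name and addresses below are placeholders for the cluster-network NIC, and every host and switch port on that network must use the same MTU:)

```ini
# /etc/network/interfaces fragment (Debian 9, ifupdown)
# "eth1" and 10.0.1.11/24 are made-up placeholders.
auto eth1
iface eth1 inet static
    address 10.0.1.11/24
    mtu 9000
```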

Questions: can I use the following role distribution for the nodes?

OSD on every node (BlueStore), journal on SSD (do I need a directory or a dedicated partition? How large, assuming 2 TB and 4 TB BlueStore HDDs?)
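(Partially answering my own question: BlueStore has no FileStore-style journal; it has a RocksDB metadata store, block.db, and a write-ahead log, block.wal, and when those go on an SSD they need a dedicated partition or LV, not a directory. A commonly cited rule of thumb, not an official minimum, is sizing block.db at about 4% of the data device; a quick back-of-the-envelope check:)

```python
# Rough BlueStore block.db sizing using the ~4% rule of thumb.
# The ratio is an assumption; check the Ceph docs for your release.

def db_partition_gib(data_tib: float, ratio: float = 0.04) -> float:
    """Suggested block.db size in GiB for a data device of data_tib TiB."""
    return data_tib * 1024 * ratio

for size_tib in (2, 4):
    print(f"{size_tib} TB HDD -> ~{db_partition_gib(size_tib):.0f} GiB block.db")
```

(With ceph-volume, such a partition would then be passed via --block.db when creating the OSD.)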

Can I run the ceph-mon instances on the two D510s, or would that already overload them? There's no sense in trying two monitors on the D510s and one on a 330, right?
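(Thinking about it more: monitors form a majority quorum, so two mons can't survive losing either one; three is the usual minimum, which would mean putting the third on a 330 despite its 2 GB RAM. A minimal ceph.conf sketch with three mons, using made-up hostnames, addresses, and subnets:)

```ini
[global]
# Hostnames and networks below are placeholders.
public network      = 10.0.0.0/24
cluster network     = 10.0.1.0/24
mon initial members = atom-d510-1, atom-d510-2, atom-330-1
mon host            = 10.0.0.11, 10.0.0.12, 10.0.0.13
```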

I've just realized that I'll also need ceph-mgr daemons on the hosts running ceph-mon. I can't find documented system resource requirements for these.

Assuming BlueStore is too heavy for my underpowered nodes, would I need to fall back to FileStore? If so, with XFS as the file system? And the journal on the SSD as a directory, then?
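(If FileStore did turn out to be necessary: the journal is normally a raw partition rather than a directory, since a file-based journal adds filesystem overhead. A hedged ceph.conf sketch along the lines of the stock defaults, in case I go that route:)

```ini
[osd]
# FileStore fallback: XFS data filesystem, 5 GiB journal
# partition per OSD (the long-standing default size).
osd mkfs type    = xfs
osd journal size = 5120   # in MB
```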

Thanks!

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


