Questions about using existing HW for PoC cluster

Hi all,

I'm fairly new to Ceph (I've been running 10.2.11 on a 3-node hyperconverged Proxmox 4.x cluster, and it works great!), and now I'm thinking of using it for a bigger data storage project at work: a PoC at first, but built as correctly as possible for performance and availability. I have the following server equipment available for the PoC; if it all goes well, I figure new hardware for an actual production installation would be in order :)

For the OSD servers, I have:

(5) Intel R2312GL4GS 2U servers (c. 2013) with the following specs --
  - (2) Intel Xeon E5-2660 CPUs (8 cores / 16 threads each)
  - 64GB memory
  - (1) dual-port 10GBASE-T NIC (Intel X540-AT2)
  - (1) dual-port InfiniBand HCA (Mellanox MT27500 ConnectX-3) (probably won't use, and would remove)
  - (4) Intel 1000BASE-T NICs (on mobo)
  - (1) Intel 240GB SATA SSD (OS)
  - (8) Hitachi 2TB SATA drives

I am not bound to using the existing disks in these servers, but I also want to keep the price down, as this is only a PoC. I was thinking of either putting an Intel Optane 900P PCIe SSD (480GB) in for the journal, or else some sort of SATA SSD in one of the available front bays (it's a 12-bay hot-swap machine, plus two internal SSD mounts). I could also get some higher-capacity (and newer!) SATA drives, so as to keep the number of OSDs down for a given capacity (shooting for 25-50TB to start). However, I'd love it if I didn't have to ask for any money ;)
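
Since I'm planning BlueStore (see the questions below), I assume the "journal" here really means the block.db (and WAL) device. Just so I'm asking about the right thing, this is roughly the per-OSD setup I had in mind -- the device names are placeholders (/dev/sdb being one of the 2TB Hitachis, /dev/nvme0n1p1 a slice of the Optane or SATA SSD), so please correct me if this is off base:

    # one HDD-backed BlueStore OSD, with its DB (and WAL) on a chunk of the shared SSD
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

I'd repeat that per HDD, pointing --block.db at a separate partition or LV on the SSD each time.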

For the monitor machines, I have three Supermicro (c. 2011) 1U servers available, with:
  - (2) Intel Xeon X5680 CPUs
  - 48GB memory
  - (2) 1000BASE-T NICs (on mobo)
  - (1) WD 2TB SATA drive

I am also considering rack placement: the five servers I'd use for OSDs currently all live in one rack, and the mon servers in another. I could move them if necessary.
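
In case it matters for the rack question below: my understanding is that wherever the hosts end up, I'd describe the layout to CRUSH with something like the following (rack and host names are just placeholders for whatever I actually build):

    # declare a rack bucket, hang it under the default root, and move an OSD host into it
    ceph osd crush add-bucket rack1 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move osd-host1 rack=rack1

...repeated for each rack and host.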

So, a few questions to start ;)

- Is the above an acceptable collection of equipment for a PoC of modern Ceph? (I'm thinking of installing Mimic with BlueStore.)
- Is putting the journal on a partition of the same SATA drives a real I/O killer? (This is how my Proxmox boxes are set up.)
- If yes to the above, is a SATA SSD acceptable as a journal/DB device, or should I definitely consider a PCIe SSD? (I'd have to limit it to one per server, which I know isn't optimal, but price prevents otherwise...)
- Should I spread the servers out over racks, which would probably force me to use 3 of the 5 available OSD servers and put bigger disks in them to get the desired capacity (I only have three racks to work with), or is it OK for a PoC to keep all the OSD servers in one rack? (I've sketched just after this list what I think the CRUSH rule change would look like if I did spread them out.)
- Are the platforms I'm proposing for the monitor servers acceptable as-is, or do they need more memory, SSD drives, or 10GbE NICs?
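
For the rack question above: my understanding is that if I did spread the OSD hosts over three racks, I'd also want a replicated CRUSH rule with rack as the failure domain, and point my pools at it -- roughly the following, with placeholder rule/pool names:

    # replicated rule that places copies in different racks rather than just different hosts
    ceph osd crush rule create-replicated rack_rule default rack
    ceph osd pool set mypool crush_rule rack_rule

...whereas keeping everything in one rack would leave me with the default host failure domain. Please tell me if I've got that wrong.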

OK, enough q's for now - thanks for helping a new Ceph'r out :)

Best,
Will





