I'd say those CPUs should be more than fine for your use case and requirements, then.
You have more than one thread per OSD, which seems to be the ongoing recommendation.
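(For reference: 16 HT threads across 12 OSDs works out to roughly 1.3 threads per OSD, comfortably above the ~1 thread per HDD OSD rule of thumb that usually gets quoted.)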
On Tue, 13 Nov 2018 at 10:12 PM, Michal Zacek <zacekm@xxxxxxxxxx> wrote:
Hi,
The server supports up to 128GB RAM, so upgrading the RAM will not be a problem. The storage will be used for data from microscopes. Users will download data from the storage to a local PC, make some changes, and then upload the data back to the storage. We want to use the cluster for direct computing in the future, but for now we only need to separate the microscope data from normal office data. We are expecting up to 10TB of upload/download per day.
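For scale, 10TB over a day averages out to about 10^13 bytes / 86,400 s ≈ 116 MB/s sustained, so the 10Gbit/s network (roughly 1.2GB/s) should leave plenty of headroom even for bursty traffic.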
Michal
On 13. 11. 18 at 14:50, Ashley Merrick wrote:
Not sure about the CPU, but I would definitely suggest more than 64GB of RAM.
With the next release of Mimic the default memory target will be set to 4GB per OSD (if I am correct). That only covers the BlueStore layer, so I'd easily expect you to get close to 64GB once you add OS caches etc., and the last thing you want on a Ceph OSD box is an OOM kill.
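Rough arithmetic: 12 OSDs x 4GB is already 48GB before the OS and anything else gets a look in. If you want to pin the per-OSD budget down explicitly rather than rely on the default, the relevant knob (assuming I have the option name right, it is the BlueStore osd_memory_target setting, in bytes) would look something like this in ceph.conf:

    [osd]
    # cap each OSD daemon at roughly 4GiB of memory
    osd_memory_target = 4294967296

Tune the value down if you stay at 64GB of RAM, or up if you go towards the 128GB the board supports.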
Are you looking at near-cold storage for these photos, or storage for designers working out of programs that need low latency and quick performance?
On Tue, Nov 13, 2018 at 9:43 PM Michal Zacek <zacekm@xxxxxxxxxx> wrote:
Hello,
what do you think about this Supermicro server: http://www.supermicro.com/products/system/1U/5019/SSG-5019D8-TR12P.cfm ? We are considering eight or ten servers, each with twelve 10TB SATA drives, one M.2 SSD and 64GB RAM. Public and cluster network will be 10Gbit/s. The question is whether one Intel Xeon D-2146NT with eight cores (16 with HT) will be enough for 12 SATA disks. The cluster will be used for storing pictures. File sizes range from 1MB to 2TB ;-).
Thanks,
Michal
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com