> On 23 December 2016 at 14:34, Eugen Leitl <eugen@xxxxxxxxx> wrote:
> 
> 
> Hi Wido,
> 
> thanks for your comments.
> 
> On Fri, Dec 23, 2016 at 02:00:44PM +0100, Wido den Hollander wrote:
> 
> > > My original layout was using 2x single Xeon nodes with 24 GB RAM each
> > > under Proxmox VE for the test application and two metadata servers,
> > > each as a VM guest. Each VM would be about 8 GB, 16 GB max.
> > 
> > The VM size doesn't really matter for BlueStore, nor did it for XFS/FileStore.
> 
> The designated metadata servers are LXC VMs, so the question is whether
> I should give them 8 or 16 GB RAM. I also wonder whether the underlying
> data store (hardware RAID 1, 7200 rpm SATA) would profit from using
> SSDs there. Is good I/O important for metadata servers?
> 

Do you want to run CephFS? You only need metadata servers when using
CephFS, and the MDS does not store any data locally, so there is no need
for RAID.

> > > rest as raw partitions for rocksdb and object data. I could boot
> > > the nodes from a USB memory stick, of course. Would that work,
> > > or would there still be too much I/O on the slow USB device?
> > 
> > As long as you disable logging you could run on USB sticks.
> 
> Thanks, I'll try that.
> 
> > > Before I was limited due to 8 GB RAM to max 8 TB/node,
> > > so e.g. 2x 4 TB disks. Is this still the case for BlueStore?
> > 
> > Well, the 1 GB per 1 TB rule was mainly a PG concern. Storing more data
> > doesn't per se use more memory; Placement Groups are the main CPU and
> > memory consumers.
> > 
> > Since BlueStore hasn't been used that much, the only feedback comes from
> > the devs and what they have tested.
> > 
> > What I'd say: more memory is always better. When you have memory to
> > spare, put it in there.
> 
> Would love to, but unbuffered ECC DDR2 is effectively unobtainium these
> days. It would be better to spend money on used, power-efficient servers
> like the R710, which already come with memory and plenty of disk slots.
> 

Yes, that would be better indeed :)

Wido
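
To put the "disable logging" advice into config form: a minimal ceph.conf
sketch, assuming the usual debug subsystems and stock logging options; the
exact levels are illustrative, not a tested recommendation for this setup.

    [global]
    # Silence per-op debug output so the OSDs barely touch the boot device
    debug ms = 0/0
    debug osd = 0/0
    debug bluestore = 0/0
    debug bluefs = 0/0
    debug rocksdb = 0/0
    # Send what little remains to syslog instead of the USB stick;
    # "log file = /dev/null" is the blunter variant.
    log to syslog = true
    log file = /dev/null

The idea is simply to keep Ceph's chattier logging away from the slow boot
device, which is what makes a USB stick workable at all.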
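
And to make the PG remark a bit more concrete, a back-of-the-envelope
sketch in Python of the usual rule of thumb (roughly 100 PGs per OSD,
divided by the replica count, rounded up to a power of two); the example
numbers are assumptions taken from the layout described above, not a
sizing recommendation.

    # Rough PG count per the common rule of thumb:
    # total_pgs ~= (num_osds * target_per_osd) / replica_size,
    # rounded up to the next power of two.
    def suggested_pg_count(num_osds, replica_size=3, target_per_osd=100):
        raw = num_osds * target_per_osd / float(replica_size)
        power = 1
        while power < raw:
            power *= 2
        return power

    # Example: two nodes with 2x 4 TB disks each and 2x replication.
    print(suggested_pg_count(num_osds=4, replica_size=2))  # -> 256

Memory then scales with the PG count carried by each OSD rather than with
raw TB, which is the point about PGs being the main memory consumers.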