I use the same configuration you have, and I plan on using bluestore. My SSDs are only 240GB, and they have worked with filestore all this time, so I suspect bluestore should be fine too.
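As a back-of-envelope check of the sizing guideline from your mail (quoted below): taken literally, "10GB of SSD disk per 10TB of SATA disk" would be trivial to satisfy, but since your question 3 says the 200GB SSDs fall short of it, I assume the intended ratio is roughly 10GB of DB space per TB of data. A minimal Python sketch under that assumption:

# Back-of-envelope WAL+DB sizing check. ASSUMPTION: the guideline means
# ~10 GB of SSD DB space per 1 TB of data disk (the literal "10GB per
# 10TB" reading would make the old 200GB SSDs more than big enough,
# which contradicts question 3 in the quoted mail).
GB_DB_PER_TB_DATA = 10

def min_ssd_gb(osds_per_ssd, disk_tb):
    """Minimum size implied by the guideline for one shared WAL+DB SSD."""
    return osds_per_ssd * disk_tb * GB_DB_PER_TB_DATA

# Old nodes: 10x6TB disks, 2 SSDs -> each SSD backs 5 OSDs.
print(min_ssd_gb(5, 6))    # 300 GB -> my 240GB (and your 200GB) SSDs undershoot

# Proposed nodes: 10x12TB disks, 2 SSDs.
print(min_ssd_gb(5, 12))   # 600 GB needed per SSD at 12TB disks

So under that reading, with 12TB disks you would want roughly two ~600GB SSDs per node, scaled down accordingly for 8TB or 10TB disks.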
On Wed, Oct 3, 2018 at 4:25 AM Massimo Sgaravatto <massimo.sgaravatto@xxxxxxxxx> wrote:
Hi

I have a ceph cluster, running luminous, composed of 5 OSD nodes, which is using filestore. Each OSD node has 2 E5-2620 v4 processors, 64 GB of RAM, 10x6TB SATA disks + 2x200GB SSD disks (plus 2 other disks in RAID for the OS), and 10 Gbps networking. So each SSD disk is used as the journal for 5 OSDs. With this configuration everything is running smoothly ...

We are now buying some new storage nodes, and I am trying to buy something which is bluestore compliant. So the idea is to consider a configuration like:

- 10 SATA disks (8TB / 10TB / 12TB each, TBD)
- 2 processors (~10 cores each)
- 64 GB of RAM
- 2 SSDs to be used for WAL+DB
- 10 Gbps networking

For what concerns the size of the SSD disks, I read on this mailing list that it is suggested to have at least 10GB of SSD disk per 10TB of SATA disk.

So, the questions:

1) Does this hardware configuration seem reasonable?

2) Are there problems with living (forever, or until filestore deprecation) with some OSDs using filestore (the old ones) and some OSDs using bluestore (the new ones)?

3) Would you suggest updating the old OSDs to bluestore as well, even if the available SSDs are too small (they don't satisfy the "10GB of SSD disk per 10TB of SATA disk" rule)?

Thanks, Massimo
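On question 3: if you do convert the old OSDs, the Luminous docs describe a per-OSD mark-out / destroy / re-create cycle. A rough sketch of that sequence (OSD_ID and DEVICE are placeholders; it deliberately omits --block.db, which simply leaves the DB on the data disk when the SSDs are too small to carve out properly sized DB partitions):

#!/usr/bin/env python
# Rough per-OSD filestore -> bluestore conversion, following the
# out/wait/destroy/re-create sequence in the Luminous BlueStore
# migration docs. OSD_ID and DEVICE are placeholders; convert one OSD
# (or one failure domain) at a time and watch cluster health in between.
import subprocess
import time

OSD_ID = "12"         # placeholder: id of the OSD being converted
DEVICE = "/dev/sdc"   # placeholder: its data disk

def run(*cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# Drain the OSD and wait until all its PGs are safely replicated elsewhere.
run("ceph", "osd", "out", OSD_ID)
while subprocess.call(["ceph", "osd", "safe-to-destroy", OSD_ID]) != 0:
    time.sleep(60)

# Stop the daemon, unmount the filestore mount, wipe the disk,
# and destroy the OSD while keeping its id reusable.
run("systemctl", "stop", "ceph-osd@{}".format(OSD_ID))
run("umount", "/var/lib/ceph/osd/ceph-{}".format(OSD_ID))
run("ceph-volume", "lvm", "zap", DEVICE)
run("ceph", "osd", "destroy", OSD_ID, "--yes-i-really-mean-it")

# Re-create it as bluestore under the same id; add --block.db <ssd-partition>
# here if/when you have SSDs large enough for the guideline.
run("ceph-volume", "lvm", "create", "--bluestore",
    "--data", DEVICE, "--osd-id", OSD_ID)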