On Fri, 13 May 2016 12:38:05 +0530 gjprabu wrote:

Hello,

> Hi All,
> 
> We need some clarification on Ceph OSD, MON and MDS. It would be very
> helpful for our understanding to know the details below.
> 
You will want to spend more time reading the documentation and hardware
guides, as well as finding similar threads in the ML archives.

> Recommended size per OSD (both SCSI and SSD).
> 
With SCSI I suppose you mean HDDs? And there is no good answer, it
depends on your needs and use case. For example, if your main goal is
space and not performance, fewer but larger HDDs will be a better fit.

> Which is recommended: one (per machine = per OSD) or (per machine =
> many OSDs)?
> 
The first part makes no sense; I suppose you mean one or a few OSDs per
server? Again, it all depends on your goals and budget. Find and read
the hardware guides, as there are other considerations like RAM and
CPU. Many OSDs per server can be complicated and challenging unless you
know very well what you're doing. The usual compromise between cost and
density tends to be 2U servers with 12-14 drives.

> Do we need to run a separate machine for monitoring?
> 
If your OSD nodes are powerful enough (CPU/RAM/fast SSD for leveldb),
not necessarily. You will want at least 3 MONs for production.

> MDS: where do we need to run it, on a separate machine, or is the OSD
> node itself better?
> 
Again, it can be shared if you have enough resources on the OSD nodes.
A safe recommendation would be to have 1-2 dedicated MON and MDS hosts
and run the rest of the MONs on OSD nodes. These dedicated hosts need
to have the lowest IPs in your cluster so that they become the MON
leader.

> We are going to use the CephFS file system for production.
> 
The most important statement/question comes last. You will want to
build a test cluster and verify that your application(s) actually work
well with CephFS, because if you read the ML there are cases where this
may not be true.

Christian

-- 
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
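
To put rough numbers on the RAM/CPU point above, here is a minimal
back-of-envelope sketch in Python. The constants are assumptions based
on commonly cited rules of thumb (about 1 GB of RAM per 1 TB of OSD
capacity and roughly one CPU core per OSD daemon), not figures from the
mail above; check the hardware recommendations for your Ceph release
before relying on them.

# Rough sizing sketch for one OSD host; the constants are assumptions,
# not official figures -- adjust them to your release and workload.

def size_osd_node(num_osds, tb_per_osd,
                  ram_gb_per_tb=1.0,   # assumed ~1 GB RAM per TB of OSD data
                  cores_per_osd=1.0,   # assumed ~1 core per OSD daemon
                  base_ram_gb=4.0):    # headroom for OS, colocated MON/MDS
    """Estimate RAM (GB) and CPU cores for a single OSD host."""
    ram_gb = base_ram_gb + num_osds * tb_per_osd * ram_gb_per_tb
    cores = num_osds * cores_per_osd
    return ram_gb, cores

if __name__ == "__main__":
    # Example: the 2U / 12-drive compromise mentioned above, with 4 TB HDDs.
    ram, cores = size_osd_node(num_osds=12, tb_per_osd=4)
    print(f"~{ram:.0f} GB RAM and ~{cores:.0f} cores for this node")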
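
The "lowest IPs become MON leader" remark follows from how Ceph ranks
monitors: ranks are assigned by sorting the monitors' addresses, and
the in-quorum monitor with the lowest rank is elected leader. The small
sketch below only sorts a few addresses to show which host would be
expected to lead; the host names and IPs are made up for illustration.

# Sketch of Ceph's monitor ranking: lower address -> lower rank -> leader
# (among monitors that are in quorum). Hosts and IPs are hypothetical.
import ipaddress

mons = {
    "mon-dedicated-1": "10.0.0.10",   # dedicated MON/MDS host
    "mon-dedicated-2": "10.0.0.11",   # dedicated MON/MDS host
    "mon-on-osd-1":    "10.0.0.51",   # MON colocated with an OSD node
}

ranked = sorted(mons.items(), key=lambda kv: ipaddress.ip_address(kv[1]))
for rank, (name, ip) in enumerate(ranked):
    print(f"rank {rank}: {name} ({ip})")

print(f"expected leader (if in quorum): {ranked[0][0]}")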