Hi Christian,
Please share your suggestion.
Regards
Prabu GJ
---- On Sat, 21 May 2016 17:33:33 +0530 gjprabu <gjprabu@xxxxxxxxxxxx> wrote ----
Hi Christian,

Typo in my previous mail. Thanks for your reply; it will be very helpful if we get details on the OSD-per-server configuration. Our scenario is below.

As of now 6 TB data usage; in future it will increase to 10 TB.

Per Ceph client read and write:
Read :- 57726 kB/s
Write :- 100144 kB/s

We will use around 10 clients:
Read :- 57726 kB/s x 10 clients
Write :- 100144 kB/s x 10 clients

We have servers for OSD:
CPU : 2 x 8-core CPUs (32 threads total with hyper-threading)
HDD : 16 TB
RAM : 96 GB

Regards
Prabu GJ

---- On Fri, 20 May 2016 18:50:48 +0530 gjprabu <gjprabu@xxxxxxxxxxxx> wrote ----

Hi Christian,

Thanks for your reply; our performance requirement is as below. It will be very helpful if you provide details for the scenario below.

As of now 6 TB data usage; in future the size will be 10 TB.

Per Ceph client read and write:
Read :- 57726 kB/s
Write :- 100144 kB/s

We will use around 10 clients:
Read :- 57726 kB/s x 10 clients
Write :- 100144 kB/s x 10 clients

Regards
Prabu GJ

On Fri, 13 May 2016 12:38:05 +0530 gjprabu wrote:

Hello,

> Hi All,
>
> We need some clarification on Ceph OSD, MON and MDS. It will
> be very helpful to better understand the details below.
>
You will want to spend more time reading the documentation and hardware
guides, as well as finding similar threads in the ML archives.

> Per OSD recommended size (both SCSI and SSD)?
>
With SCSI I suppose you mean HDDs?
There is no good answer; it depends on your needs and use case.
For example, if your main goal is space and not performance, fewer but
larger HDDs will be a better fit.

> Which is recommended: (per machine = one OSD) or (per machine = many
> OSDs)?
>
The first part makes no sense; I suppose you mean one or a few OSDs per
server?
Again, it all depends on your goals and budget.
Find and read the hardware guides; there are other considerations like
RAM and CPU.
Many OSDs per server can be complicated and challenging, unless you know
very well what you're doing.
The usual compromise between cost and density tends to be 2U servers with
12-14 drives.

> Do we need to run a separate machine for monitoring?
>
If your OSDs are powerful enough (CPU/RAM/fast SSD for leveldb), not
necessarily.
You will want at least 3 MONs for production.

> Where does the MDS need to run: on a separate machine, or is the OSD
> itself better?
>
Again, it can be shared if you have enough resources on the OSDs.
A safe recommendation would be to have 1-2 dedicated MON and MDS hosts
and the rest of the MONs on OSDs.
These dedicated hosts need to have the lowest IPs in your cluster to
become MON leader.

> We are going to use the CephFS file system for production.
>
The most important statement/question last.
You will want to build a test cluster and verify that your application(s)
are actually working well with CephFS, because if you read the ML there
are cases when this may not be true.

Christian
--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
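As a quick sanity check on the numbers above, here is a back-of-the-envelope calculation of the aggregate load (a minimal sketch in Python; the 3x replication factor and the 10GbE line rate are illustrative assumptions, not figures from this thread):

# Rough aggregate-bandwidth check for the client numbers quoted above.
# Assumptions (not from the thread): replicated pool size 3, 10GbE network.

read_kbps_per_client = 57726    # kB/s, per-client read from above
write_kbps_per_client = 100144  # kB/s, per-client write from above
clients = 10
replicas = 3                    # assumed replicated pool size

agg_read_mbs = read_kbps_per_client * clients / 1000.0    # MB/s
agg_write_mbs = write_kbps_per_client * clients / 1000.0  # MB/s

# Every client write is written replica-count times across the OSDs.
backend_write_mbs = agg_write_mbs * replicas

print("aggregate read : %8.1f MB/s" % agg_read_mbs)
print("aggregate write: %8.1f MB/s" % agg_write_mbs)
print("backend write  : %8.1f MB/s (x%d replication)" % (backend_write_mbs, replicas))

With roughly 1000 MB/s of aggregate client writes, a single 10GbE link (about 1250 MB/s) would already be close to saturated before the replication traffic on the cluster network is counted, so the network may deserve as much attention as the drives.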
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com