Re: Recommended OSD size

Hi Christian,

Apologies, there was a typo in my previous mail.

Thanks for your reply. It would be very helpful if we could get details on the OSD-per-server configuration; our scenario is below.

As of now we have 6 TB of data usage; in the future it will increase to 10 TB.

Per Ceph client read and write:

Read  : 57726 kB/s
Write : 100144 kB/s

We will use around 10 clients:

Read  : 57726 kB/s  x 10 clients
Write : 100144 kB/s x 10 clients


We have the following servers for OSDs:

CPU  : 8 cores x 2 sockets (32 threads with hyper-threading)
HDD  : 16 TB
RAM  : 96 GB
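
For reference, a quick back-of-the-envelope aggregate of the figures above (a rough Python sketch only; it assumes all 10 clients hit their peak rates at the same time):

# Rough aggregate throughput estimate from the per-client figures above.
read_per_client_kbs = 57726     # kB/s per client (measured)
write_per_client_kbs = 100144   # kB/s per client (measured)
clients = 10

agg_read_mbs = read_per_client_kbs * clients / 1000.0    # ~577 MB/s aggregate read
agg_write_mbs = write_per_client_kbs * clients / 1000.0  # ~1001 MB/s aggregate write

print(f"aggregate read : {agg_read_mbs:.0f} MB/s")
print(f"aggregate write: {agg_write_mbs:.0f} MB/s")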

Regards
Prabu GJ


---- On Fri, 20 May 2016 18:50:48 +0530 gjprabu <gjprabu@xxxxxxxxxxxx>wrote ----

Hi Christian,

        Thanks for your reply; our performance requirements are listed below. It would be very helpful if you could provide details for the scenario below.

As of now we have 6 TB of data usage; in the future it will grow to 10 TB.

Per Ceph client read and write:

Read  : 57726 kB/s
Write : 100144 kB/s

We will use around 10 clients:

Read  : 57726 kB/s  x 10 clients
Write : 100144 kB/s x 10 clients

Regards
Prabu GJ


---- On Fri, 13 May 2016 13:23:56 +0530 Christian Balzer <chibi@xxxxxxx>wrote ----



On Fri, 13 May 2016 12:38:05 +0530 gjprabu wrote:

Hello,

> Hi All,
>
>
>
> We need some clarification on Ceph OSD, MON and MDS. It would be
> very helpful for our understanding to have the details below.
>
You will want to spend more time reading the documentation and hardware
guides, as well as finding similar threads in the ML archives.

>
>
> Recommended size per OSD (both SCSI and SSD).
>
With SCSI I suppose you mean HDDs?

And there is no good answer; it depends on your needs and use case.
For example, if your main goal is space and not performance, fewer but
larger HDDs will be a better fit.
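
To illustrate that trade-off with a toy example (the ~150 random IOPS per 7.2k RPM spindle figure below is a rough rule of thumb I'm assuming, not a measured number):

# Toy comparison: same raw capacity, different spindle counts.
# Assumes ~150 random IOPS per 7.2k RPM HDD (rough rule of thumb).
iops_per_spindle = 150

few_large  = {"disks": 4,  "size_tb": 8}   # 4 x 8 TB
many_small = {"disks": 16, "size_tb": 2}   # 16 x 2 TB

for layout in (few_large, many_small):
    capacity = layout["disks"] * layout["size_tb"]
    iops = layout["disks"] * iops_per_spindle
    print(f'{layout["disks"]:>2} x {layout["size_tb"]} TB -> {capacity} TB raw, ~{iops} IOPS')

Same raw capacity, but the many-small layout has roughly four times the spindles and therefore far more random IOPS.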

>
> Which is recommended: one OSD per machine, or many OSDs per
> machine?
>
The first part makes no sense; I suppose you mean one or a few OSDs per
server?

And again, it all depends on your goals and budget.
Find and read the hardware guides; there are other considerations like
RAM and CPU.

Many OSDs per server can be complicated and challenging, unless you know
very well what you're doing.

The usual compromise between cost and density tends to be 2U servers with
12-14 drives.
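
As a very rough sanity check against the commonly cited rules of thumb (about 1 CPU core per OSD daemon and roughly 1 GB of RAM per TB of OSD storage; treat these as approximations from the hardware guides, not hard requirements), something like this can be used:

# Back-of-the-envelope OSD host sizing, using rough rules of thumb:
# ~1 core per OSD daemon, ~1 GB RAM per TB of OSD storage.
def osd_host_estimate(osds_per_host, tb_per_osd):
    cores_needed = osds_per_host                  # ~1 core per OSD
    ram_gb_needed = osds_per_host * tb_per_osd    # ~1 GB RAM per TB
    return cores_needed, ram_gb_needed

# e.g. a 2U chassis with 12 x 4 TB OSDs
cores, ram = osd_host_estimate(12, 4)
print(f"12 x 4 TB OSDs -> ~{cores} cores, ~{ram} GB RAM (plus OS overhead)")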

>
>
> Do we need to run a separate machine for monitoring?
>
If your OSDs are powerful enough (CPU/RAM/fast SSD for leveldb), not
necessarily.
You will want at least 3 MONs for production.

>
> Where should we run the MDS: on a separate machine, or is the OSD node
> itself better?
>
Again, it can be shared if you have enough resources on the OSDs.

A safe recommendation would be to have 1-2 dedicated MON and MDS hosts and
the rest of the MONs on OSD nodes.
These dedicated hosts need to have the lowest IPs in your cluster so that
one of them becomes the MON leader.
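
As a toy illustration of the "lowest IP" point only (hypothetical addresses, and just the ordering implied by the rule above, not the actual monitor election code):

# Going by the rule above, the monitor with the numerically lowest
# IP address would end up as the MON leader.
import ipaddress

mon_ips = ["10.0.0.21", "10.0.0.5", "10.0.0.12"]   # hypothetical MON addresses
leader = min(mon_ips, key=ipaddress.ip_address)
print(f"presumptive MON leader: {leader}")          # -> 10.0.0.5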

>
>
> We are going to use the CephFS file system in production.
>
The most important statement/question last.

You will want to build a test cluster and verify that your application(s)
actually work well with CephFS, because if you read the ML there are
cases where this may not be true.

Christian
--
Christian Balzer Network/Systems Engineer
chibi@xxxxxxx     Global OnLine Japan/Rakuten Communications

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
