Re: SSD disk distribution

Hi Christian,

I see... unfortunately we forgot to take the CPU usage of the OSDs into account in our calculations.

Thanks for your input and the link to the other thread.

Best,
Martin

On Sat, May 30, 2015 at 10:59 AM, Christian Balzer <chibi@xxxxxxx> wrote:

Hello,

see the current "Blocked requests/ops?" thread in this ML, especially the
later parts.
And a number of similar threads.

In short, the CPU requirements for SSD-based pools are significantly higher
than for HDD or HDD/SSD-journal pools.

So having dedicated SSD nodes with fewer OSDs, faster CPUs and potentially
a faster network makes a lot of sense.
It also helps a bit to keep you and your CRUSH rules sane.
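
If you go that route, the CRUSH side is only a handful of commands, roughly
along these lines (ssd-root, node13/node14, ssd-rule, ssd-pool and the PG
count are made-up placeholders, adjust them to your actual layout):

  # put the dedicated SSD hosts under their own CRUSH root
  ceph osd crush add-bucket ssd-root root
  ceph osd crush move node13 root=ssd-root
  ceph osd crush move node14 root=ssd-root
  # rule that only picks OSDs under that root, one replica per host
  ceph osd crush rule create-simple ssd-rule ssd-root host
  # pool for the fast tier, using that rule
  ceph osd pool create ssd-pool 512 512 replicated ssd-rule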

In your example you'd have 12 HDD-based OSDs with SSD journals, at 1.5-2GHz
of CPU per OSD (things will get CPU-bound with small-write IOPS).
An SSD-based OSD (I'm assuming something like a DC S3700) will eat all the
CPU you can throw at it; 6-8GHz would be a pretty conservative number.
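
To put rough numbers on that: 12 HDD OSDs at ~2GHz each is ~24GHz, and 2
SSD-based cache OSDs at 6-8GHz each is another 12-16GHz, so a mixed node in
your layout would want somewhere around 36-40GHz of aggregate CPU (say,
dual 10-core 2GHz parts, purely as an illustration) before you leave any
headroom for the OS and the network stack.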

Search the archives for the latest tests/benchmarks by others; don't take
my (slightly dated) word for it.

Lastly, you may find, like others, that cache tiers currently aren't all that
great performance-wise.
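
If you do want to test one anyway, a basic writeback cache tier is set up
with just a few commands, roughly like this (rbd-pool, cache-pool and the
target_max_bytes value are made-up examples; see the cache tiering docs for
the flush/evict sizing knobs):

  # attach cache-pool in front of rbd-pool in writeback mode
  ceph osd tier add rbd-pool cache-pool
  ceph osd tier cache-mode cache-pool writeback
  ceph osd tier set-overlay rbd-pool cache-pool
  # hit set tracking and a size limit so flushing/eviction can kick in
  ceph osd pool set cache-pool hit_set_type bloom
  ceph osd pool set cache-pool target_max_bytes 1099511627776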

Christian.

On Sat, 30 May 2015 10:36:39 +0200 Martin Palma wrote:

> Hello,
>
> We are planning to deploy our first Ceph cluster with 14 storage nodes
> and 3 monitor nodes. The storage nodes have 12 SATA disks and 4 SSDs.
> We plan to use 2 of the SSDs as journal disks and 2 for cache tiering.
>
> Now the question was raised in our team whether it would be better to put
> all the SSDs in, let's say, 2 storage nodes and treat them as fast nodes, or
> to distribute the SSDs for the cache tiering over all 14 nodes (2 per node).
>
> In my opinion, if I understood the concept of Ceph right (I'm still in
> the learning process ;-), distributing the SSDs across all storage nodes
> would be better, since this would also distribute the network traffic
> (client access) across all 14 nodes and not limit it to only 2 nodes.
> Right?
>
> Any suggestion on that?
>
> Best,
> Martin


--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
