Re: Small Ceph cluster

Hello,

On Mon, 1 Aug 2016 14:34:43 +0200 Tom T wrote:

> Hi Christian,
> 
> Thanks for your reply.
> 
> Case:
> CSE-825TQC-600LPB
> 
> I made a typo with the CPU; it's an E3-1240 v5 3.5GHz.
> So an E5-2620 v4 is recommended when I want to add an SSD for caching?
> 
It's more a question of core count (and per-core speed); an E5-1650 v3, for
example, would do the trick (in all situations) with 4 HDD OSDs with an SSD
journal plus 1-2 SSD OSDs.

> With a caching tier, is the data on the caching tier a copy from data on
> the normal tier ?
You want to re-read the respective documentation and the various threads
on this ML about cache tiering, including my 
"Cache tier operation clarifications" one.

In a reasonably busy cluster a cache pool will be very different from the
base pool, and some hot data may never reach the base pool, ever.

Meaning that your cache pool needs to be just as reliable as everything
else.
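
For reference, the basic plumbing looks roughly like this; a minimal sketch
only, where "rbd" and "cache-ssd" are hypothetical pool names and the modes
and targets need tuning to your actual workload and SSD sizes:

  ceph osd tier add rbd cache-ssd                   # attach cache-ssd as a tier of rbd
  ceph osd tier cache-mode cache-ssd writeback      # write-back caching
  ceph osd tier set-overlay rbd cache-ssd           # route client I/O via the cache tier
  ceph osd pool set cache-ssd hit_set_type bloom    # required for promotion/flush decisions
  ceph osd pool set cache-ssd target_max_bytes 100000000000  # ~100GB; keep well below the SSDs' capacity
  ceph osd pool set cache-ssd cache_target_dirty_ratio 0.4
  ceph osd pool set cache-ssd cache_target_full_ratio 0.8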
 
> Is a caching tier with one SSD recommended, or should I always have two SSDs
> in replicated mode?
> 
See above.
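
In short: with write-back caching the cache pool can hold the only current
copy of hot objects, so give it the same redundancy as any other pool. A
sketch, again using the hypothetical pool name from above (and assuming a
CRUSH rule that places it on your SSDs):

  ceph osd pool set cache-ssd size 3      # or 2, if you accept the reduced redundancy
  ceph osd pool set cache-ssd min_size 2

A single, un-replicated SSD means losing whatever dirty (not yet flushed)
data it holds when it dies.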

Christian
> 
> Kind regards,
> Tom
> 
> 
> 
> On Mon, Aug 1, 2016 at 2:00 PM, Christian Balzer <chibi@xxxxxxx> wrote:
> 
> >
> > Hello,
> >
> > On Mon, 1 Aug 2016 11:09:00 +0200 Tom T wrote:
> >
> > > Hi Ceph users
> > >
> > > We are planning to set up a small Ceph cluster, starting with 3 nodes for
> > > VMs.
> > > I have some questions about CPU and caching.
> > >
> > > We would like to start with the following config:
> > >
> > >
> > > Supermicro X11SSI-LN4F
> > In which case?
> >
> > > Intel E3-1246 v3 3.5GHz
> > A bit dated, but fast enough.
> >
> > > 32GB RAM
> > While enough for 4 OSDs, don't skimp on RAM if you can afford it; reads will
> > thank you for it.
> >
> > > S3500 80GB M.2 for OS
> > If you're short on money, maybe use a 535 (or 2!) for that purpose.
> >
> > > AOC-S3008L-L8e (LSI SAS3008)
> > > 4x 2TB ST2000NM0034 SAS12Gb
> >
> > I fail to see the need/point for 7.2k RPM HDDs with a mere 128MB of cache
> > hanging off a 12Gb/s bus, but maybe that's just me.
> >
> > > 1x Intel 200GB S3710 for journal (via onboard SATA)
> > Good enough.
> >
> > > 4x 1Gb for networking
> > >
> > Unless all your clients are also limited to GbE and you have no budget to
> > change that, don't.
> >
> > For VMs, latency will be one of your biggest nemeses; use
> > faster (lower-latency) networking.
> >
> > > Questions:
> > > Is the CPU enough?
> > See above.
> >
> > > I would like to run the monitor daemon on the same host; would this be a
> > > problem?
> > >
> > Fine within normal usage needs; get more RAM in that case anyway.
> >
> > > Optionally I would like to add an extra SSD for caching.
> > Not really recommended with that server and not particularly helpful with
> > that network.
> > A single SSD of any caliber will/can eat one of your CPU cores by itself
> > and then ask for seconds.
> >
> > > Does write-back caching also optimize the reads?
> > Yes, subject to "correct" configuration of course.
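
The read side mostly comes down to the hit-set and promotion settings; a
rough sketch, same hypothetical pool name as above, values to be tuned:

  ceph osd pool set cache-ssd hit_set_count 4
  ceph osd pool set cache-ssd hit_set_period 1200              # seconds covered per hit set
  ceph osd pool set cache-ssd min_read_recency_for_promote 1   # hit sets an object must appear in before reads promote it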
> >
> > > Do I need two SSDs per node?
> > >
> > From a performance point of view, not so much.
> > Your network can't even saturate one 200GB DC S3710.
> >
> > From a redundancy point of view you might be better off with more nodes.
> >
> > Christian
> >
> > >
> > > Kind regards,
> > > Tom
> >
> >
> > --
> > Christian Balzer        Network/Systems Engineer
> > chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> > http://www.gol.com/
> >


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


