Re: Ceph Blog Articles

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of William Josefsson
> Sent: 14 November 2016 14:46
> To: Nick Fisk <nick@xxxxxxxxxx>
> Cc: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  Ceph Blog Articles
> 
> Hi Nick, I found the graph very useful for explaining the concept. Thanks for sharing.
> 
> I'm currently planning to set up a new cluster and want to get low latency, using:
> 
> 2U servers,
> 6x Intel P3700 400GB NVMe for journals, and
> 18x 1.8TB Hitachi 10k SAS spinning disks. My OSD:journal ratio would be 3:1.

I've found that with NVMe you can probably stretch the OSD-to-journal ratio further; that said, lower ratios will still help performance.
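
For what it's worth, a 3:1 layout like that is normally just a matter of pointing each filestore OSD at the NVMe device when it is prepared; a rough sketch with ceph-disk (the device names are placeholders for your own enumeration):

  # Three spinners sharing one P3700; given a whole device as the journal
  # argument, ceph-disk creates a new journal partition on it for each OSD.
  ceph-disk prepare /dev/sdb /dev/nvme0n1
  ceph-disk prepare /dev/sdc /dev/nvme0n1
  ceph-disk prepare /dev/sdd /dev/nvme0n1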

> All over a 10Gbit copper network. For CPU I'm thinking of an E5-2667 v4 (single socket, 8 cores, 3.2GHz), replica size=3, and 5 hosts.
> 
> Do you think it is possible to get a low direct sync write latency of 3-5ms on average with a configuration like this, or would the
> spinning drives actually slow the whole design down? Thanks, Will

I would imagine that should be achievable, although it of course depends on IO size and queue depth; 4kB at QD=1 should be easily doable.
If you can give me an example IO pattern, I'm happy to run a fio test on my cluster.
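
For example, something along these lines would exercise the 4kB QD=1 direct sync write case (the test file path, size and runtime are just placeholders to adjust for your setup):

  # 4kB sequential writes, queue depth 1, O_DIRECT + O_SYNC,
  # run for 60 seconds against a file on an RBD-backed filesystem
  fio --name=sync-write-latency --filename=/mnt/rbd/test.file --size=1G \
      --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k \
      --iodepth=1 --runtime=60 --time_based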

Nick

> 
> 
> 
> 
> On Sat, Nov 12, 2016 at 3:33 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> > Hi All,
> >
> > I've recently put together some articles around some of the performance testing I have been doing.
> >
> > The first explores the high-level theory behind latency in a Ceph infrastructure and what we have managed to achieve.
> >
> > http://www.sys-pro.co.uk/ceph-write-latency/
> >
> > The second explores some of the results we got from trying to work out how much CPU a Ceph IO uses.
> >
> > http://www.sys-pro.co.uk/how-many-mhz-does-a-ceph-io-need/
> >
> > I hope they are of interest to someone.
> >
> > I'm currently working on a couple more articles explaining the choices behind
> > the hardware that got us 700us write latency and what we finally built.
> >
> > Nick
> >

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


