Re: Ceph Blog Articles

Oh right, yes - you would still see an increase in latency once the SSDs, CPU and network start coming under increased load. But I
guess you could scale out with more nodes/SSDs to combat this. This figure is more about finding the minimum latency possible;
maintaining it under load probably just requires scaling out with more resources.
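
Just to put rough numbers on that (illustrative only, assuming the usual Little's Law relationship between queue depth,
latency and IOPS; the ~700us figure is the one quoted further down this thread, the queue depths are made up):

def iops_ceiling(queue_depth, latency_s):
    # Upper bound on IOPS for a single client stream at a given per-op latency.
    return queue_depth / latency_s

latency = 700e-6  # ~700 microsecond write latency, as discussed below
for qd in (1, 4, 16):
    print(f"QD={qd:>2}: ~{iops_ceiling(qd, latency):,.0f} IOPS per client stream")

Aggregate cluster IOPS can keep growing with more nodes/SSDs, but a single low-queue-depth stream stays capped by that
per-op latency, which is why the latency floor matters.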

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Maged Mokhtar
> Sent: 14 November 2016 11:18
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Ceph Blog Articles
> 
> Hi Nick,
> 
> Actually I was referring to an all-SSD cluster. I expect the latency to increase from when you have a low load / queue depth to
> when you have a cluster under heavy load at/near its maximum IOPS throughput, when the CPU cores are near peak utilization.
> 
> Cheers /Maged
> 
> --------------------------------------------------
> From: "Nick Fisk" <nick@xxxxxxxxxx>
> Sent: Monday, November 14, 2016 11:41 AM
> To: "'Maged Mokhtar'" <mmokhtar@xxxxxxxxxxx>; <ceph-users@xxxxxxxxxxxxxx>
> Subject: RE:  Ceph Blog Articles
> 
> > Hi Maged,
> >
> > I would imagine as soon as you start saturating the disks, the latency
> > impact would make the savings from the fast CPUs pointless.
> > Really you would only try and optimise the latency if you are using an
> > SSD-based cluster.
> >
> > This was only done with spinning disks in our case, with a low queue
> > depth for investigation purposes. The low latency isn't something we
> > are currently making use of with this cluster, but it has enabled us to
> > plan the correct hardware for any future SSD-based clusters.
> >
> > Nick
> >
> >
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> >> Of Maged Mokhtar
> >> Sent: 12 November 2016 16:08
> >> To: ceph-users@xxxxxxxxxxxxxx
> >> Subject: Re:  Ceph Blog Articles
> >>
> >>
> >> Hi Nick,
> >>
> >> Maybe not directly related to your use case, but it would be nice to
> >> know, at least theoretically, how this latency will increase under
> >> heavier loads, specifically near max. cluster IOPS throughput where
> >> all cores will be at/near peak utilization.
> >>
> >> Would you be able to share any Ceph config parameters you changed to
> >> achieve low latency, which I/O scheduler you used, and whether you
> >> used jemalloc?
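
(For reference, a quick way to check the active I/O scheduler for a device and whether a running ceph-osd has jemalloc
mapped; the device name and process matching below are assumptions for illustration, adjust to the actual host:)

import glob

def active_scheduler(dev="sda"):  # "sda" is a placeholder device name
    # The active scheduler is shown in [brackets] in the sysfs queue file.
    with open(f"/sys/block/{dev}/queue/scheduler") as f:
        entries = f.read().split()
    return next(e.strip("[]") for e in entries if e.startswith("["))

def osd_uses_jemalloc():
    # Scan /proc for a ceph-osd process and check its memory maps for jemalloc.
    for comm_path in glob.glob("/proc/[0-9]*/comm"):
        try:
            with open(comm_path) as comm:
                if comm.read().strip() != "ceph-osd":
                    continue
            pid = comm_path.split("/")[2]
            with open(f"/proc/{pid}/maps") as maps:
                return "jemalloc" in maps.read()
        except OSError:
            continue  # process exited or we lack permission
    return None  # no ceph-osd found on this host

print("I/O scheduler:", active_scheduler())
print("ceph-osd using jemalloc:", osd_uses_jemalloc())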
> >>
> >> The MHz-per-IO article is very interesting too; the single chart
> >> packs a lot of info.
> >>
> >> /Maged
> >>
> >> > Hi,
> >> >
> >> > Yes, I specifically wanted to make sure the disk part of the
> >> > infrastructure didn't affect the results. The main aims were to
> >> > reduce the end-to-end latency in the journals and Ceph code by
> >> > utilising fast CPUs and NVMe journals. SQL transaction logs are a
> >> > good example of where this low-latency, low-queue-depth behaviour is required.
> >> >
> >> > There are also certain cases with direct I/O where, even though you
> >> > have high queue depths, you can still get contention at the PG
> >> > depending on the IO/PG distribution. Getting latency as low as
> >> > possible helps here as well, as the PG is effectively single-threaded at some point.
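
To put a rough number on that PG point (illustrative only; the ~700us figure is the one discussed in this thread and the
PG counts are made up): if all the direct I/O lands on one PG, the effectively single-threaded PG serialises it, so the
ceiling collapses to roughly 1/latency no matter how deep the client queue is.

latency = 700e-6  # seconds per serialised write at the PG
for pgs_hit in (1, 8, 64):
    # Spreading the same I/O over more PGs lets more writes be in flight in parallel.
    print(f"I/O spread over {pgs_hit:>2} PG(s): ~{pgs_hit / latency:,.0f} IOPS ceiling")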
> >> >
> >> > Nick
> >> >
> >> >> -----Original Message-----
> >> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On
> >> >> Behalf Of Maged Mokhtar
> >> >> Sent: 11 November 2016 21:48
> >> >> To: ceph-users@xxxxxxxxxxxxxx
> >> >> Subject: Re:  Ceph Blog Articles
> >> >>
> >> >>
> >> >>
> >> >> Nice article on write latency. If I understand correctly, this
> >> >> latency is measured while there is no overflow of the journal
> >> >> caused by long sustained writes; otherwise you will start hitting
> >> >> the HDD latency. Also, is the queue depth you use 1?
> >> >>
> >> >> Will be interested to see your article on hardware.
> >> >>
> >> >> /Maged
> >> >>
> >> >>
> >> >>
> >> >> > Hi All,
> >> >> >
> >> >> > I've recently put together some articles around some of the
> >> >> > performance testing I have been doing.
> >> >> >
> >> >> > The first explores the high-level theory behind latency in a
> >> >> > Ceph infrastructure and what we have managed to achieve.
> >> >> >
> >> >> > http://www.sys-pro.co.uk/ceph-write-latency/
> >> >> >
> >> >> > The second explores some of the results we got from trying to
> >> >> > work out how much CPU a Ceph IO uses.
> >> >> >
> >> >> > http://www.sys-pro.co.uk/how-many-mhz-does-a-ceph-io-need/
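
(A rough back-of-the-envelope version of the MHz-per-IO idea, just to show the arithmetic; the article's actual
methodology may differ, and all the inputs below are made-up placeholders:)

def mhz_per_io(cores, clock_mhz, utilisation, iops):
    # CPU megahertz consumed per IO-per-second at a given utilisation and IOPS rate.
    return (cores * clock_mhz * utilisation) / iops

# e.g. a hypothetical 12-core 3000 MHz node at 60% CPU serving 20,000 IOPS
print(f"~{mhz_per_io(12, 3000, 0.6, 20_000):.2f} MHz per IO")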
> >> >> >
> >> >> > I hope they are of interest to someone.
> >> >> >
> >> >> > I'm currently working on a couple more explaining the choices
> >> >> > behind the hardware that got us 700us write latency and what we
> >> >> > finally built.
> >> >> >
> >> >> > Nick
> >> >> >

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


