Re: ceph osd commit latency increase over time, until restart

On Wed, 30 Jan 2019, Alexandre DERUMIER wrote:
> Hi,
> 
> here some new results,
> different osd/ different cluster
> 
> before the osd restart, latency was between 2-5ms
> after the osd restart, it is around 1-1.5ms
> 
> http://odisoweb1.odiso.net/cephperf2/bad.txt  (2-5ms)
> http://odisoweb1.odiso.net/cephperf2/ok.txt (1-1.5ms)
> http://odisoweb1.odiso.net/cephperf2/diff.txt

I don't see any smoking gun here... :/

The main difference between a warm OSD and a cold one is that on startup 
the bluestore cache is empty.  You might try setting the bluestore cache 
size to something much smaller and see if that has an effect on the CPU 
utilization?
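
A minimal sketch of how to try that (assuming osd.0 and the mimic-era option name bluestore_cache_size; a runtime change may not take full effect until the OSD is restarted):

```shell
# Sketch: shrink the bluestore cache on one OSD to test the theory.
# Assumes osd.0 and mimic-era option names; these may differ on other releases.

# Set a 128 MiB cache at runtime via the admin socket:
ceph daemon osd.0 config set bluestore_cache_size 134217728

# Or persist it in ceph.conf under [osd] and restart the OSD:
#   [osd]
#   bluestore_cache_size = 134217728
```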

Note that this doesn't necessarily mean that's what you want.  Maybe the 
reason why the CPU utilization is higher is because the cache is warm and 
the OSD is serving more requests per second...
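
For watching the commit latency itself, one option is dumping the OSD's internal perf counters, e.g. (a sketch assuming osd.0 and that jq is installed; exact counter names may vary by release):

```shell
# Sketch: pull the bluestore latency counters from a running OSD (assumes osd.0).
ceph daemon osd.0 perf dump | jq '.bluestore | {commit_lat, kv_flush_lat, kv_commit_lat}'
```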

sage



> 
> From what I see in the diff, the biggest difference is in tcmalloc, but maybe I'm wrong.
> 
> (I'm using tcmalloc 2.5-2.2)
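> 
> If the OSDs are built against tcmalloc, one thing that may be worth trying is asking the heap profiler for stats, and forcing a release of free memory back to the OS (a sketch assuming osd.0):

```shell
# Sketch: inspect and trim the tcmalloc heap on a running OSD (assumes osd.0
# and a tcmalloc-linked build; these subcommands do nothing useful otherwise).
ceph tell osd.0 heap stats      # print tcmalloc heap statistics
ceph tell osd.0 heap release    # return unused freelist memory to the OS
```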
> 
> 
> ----- Original message -----
> From: "Sage Weil" <sage@xxxxxxxxxxxx>
> To: "aderumier" <aderumier@xxxxxxxxx>
> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>
> Sent: Friday, 25 January 2019 10:49:02
> Subject: Re: ceph osd commit latency increase over time, until restart
> 
> Can you capture a perf top or perf record to see where the CPU time is 
> going on one of the OSDs with a high latency? 
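> 
> Something along these lines should work (a sketch assuming the OSD id is 0; the pgrep pattern is a guess at how the process was started, and debug symbols should be installed for readable output):

```shell
# Sketch: capture CPU profiles from a high-latency OSD (assumes osd.0).
OSD_PID=$(pgrep -f 'ceph-osd .*--id 0' | head -n1)

# Live view of where CPU time is going:
perf top -p "$OSD_PID"

# Or record 30 seconds with call graphs and generate a report to share:
perf record -p "$OSD_PID" -g -- sleep 30
perf report --stdio > osd0-perf.txt
```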
> 
> Thanks! 
> sage 
> 
> 
> On Fri, 25 Jan 2019, Alexandre DERUMIER wrote: 
> 
> > 
> > Hi, 
> > 
> > I am seeing some strange behaviour from my OSDs, on multiple clusters. 
> > 
> > All clusters are running mimic 13.2.1, bluestore, with ssd or nvme drives; 
> > the workload is rbd only, with qemu-kvm vms running librbd + snapshot / rbd export-diff / snapshot delete each day for backup. 
> > 
> > When the osds are freshly started, the commit latency is between 0.5-1ms. 
> > 
> > But over time, this latency increases slowly (maybe around 1ms per day), until reaching crazy 
> > values like 20-200ms. 
> > 
> > Some example graphs: 
> > 
> > http://odisoweb1.odiso.net/osdlatency1.png 
> > http://odisoweb1.odiso.net/osdlatency2.png 
> > 
> > All osds show this behaviour, on all clusters. 
> > 
> > The latency of the physical disks is ok. (The clusters are far from fully loaded.) 
> > 
> > And if I restart the osd, the latency comes back to 0.5-1ms. 
> > 
> > That reminds me of the old tcmalloc bug, but maybe could it be a bluestore memory bug? 
> > 
> > Any hints for counters/logs to check? 
> > 
> > 
> > Regards, 
> > 
> > Alexandre 
> > 
> > 
> 
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
