Re: ceph osd commit latency increase over time, until restart

>>hmm, so fragmentation grows over time and drops on OSD restarts, doesn't it?
yes
>>The same for other OSDs? 
yes



>>I'm wondering if you have OSD mempool monitoring (dump_mempools command output on the admin socket) reports? Do you have any historic data?

Not currently (I only have perf dump); I'll add them to my monitoring stats.
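
For reference, here is roughly what I plan to script (a minimal sketch in Python; the osd id, the 8-hour interval, and the exact JSON layout of dump_mempools are assumptions to adjust against the real output on your release):

#!/usr/bin/env python
# Minimal sketch: periodically sample OSD mempool usage via the admin socket.
# Assumptions: osd.0, 'ceph' in PATH, and a "mempool" -> "by_pool" JSON layout;
# check the actual output of 'ceph daemon osd.N dump_mempools' on your release.
import json
import subprocess
import time

def dump_mempools(osd_id):
    # 'ceph daemon osd.N dump_mempools' prints JSON on stdout.
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.%d" % osd_id, "dump_mempools"])
    return json.loads(out)

while True:
    pools = dump_mempools(0)
    for name, stats in pools.get("mempool", {}).get("by_pool", {}).items():
        # Feed these into your time-series DB instead of printing.
        print(int(time.time()), name, stats.get("items"), stats.get("bytes"))
    time.sleep(8 * 3600)  # one sample every 8 hours, per the request above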


>>If not, may I have the current output, and say a couple more samples at 8-12 hour intervals?

I'll do it next week.

Thanks again for helping.


----- Original Message -----
From: "Igor Fedotov" <ifedotov@xxxxxxx>
To: "aderumier" <aderumier@xxxxxxxxx>
Cc: "Stefan Priebe, Profihost AG" <s.priebe@xxxxxxxxxxxx>, "Mark Nelson" <mnelson@xxxxxxxxxx>, "Sage Weil" <sage@xxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx>
Sent: Tuesday, February 5, 2019 18:56:51
Subject: Re: ceph osd commit latency increase over time, until restart

On 2/4/2019 6:40 PM, Alexandre DERUMIER wrote: 
>>> but I don't see l_bluestore_fragmentation counter. 
>>> (but I have bluestore_fragmentation_micros) 
> ok, this is the same 
> 
> b.add_u64(l_bluestore_fragmentation, "bluestore_fragmentation_micros", 
> "How fragmented bluestore free space is (free extents / max possible number of free extents) * 1000"); 
> 
> 
> Here is a graph of the last month, with bluestore_fragmentation_micros and latency: 
> 
> http://odisoweb1.odiso.net/latency_vs_fragmentation_micros.png 

hmm, so fragmentation grows over time and drops on OSD restarts, doesn't it? 
The same for other OSDs? 

This points to an issue with the allocator: fragmentation may generally grow, 
but it shouldn't reset on restart. It looks like some free intervals aren't 
being properly merged at run time. 
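
(To illustrate the kind of run-time merging meant here, a toy sketch in Python; this is not StupidAllocator's actual code or data structures:)

# Toy free-list: adjacent free intervals should be coalesced on release.
# If coalescing is skipped (the suspected bug), the free-extent count only
# grows until a restart rebuilds the free list from scratch.
def release(free, offset, length, merge=True):
    free.append((offset, length))
    if not merge:
        return free
    free.sort()
    merged = [free[0]]
    for off, ln in free[1:]:
        last_off, last_ln = merged[-1]
        if last_off + last_ln == off:   # adjacent: coalesce into one extent
            merged[-1] = (last_off, last_ln + ln)
        else:
            merged.append((off, ln))
    return merged

free = []
for off in (0, 4096, 8192):             # three adjacent 4 KiB releases
    free = release(free, off, 4096, merge=True)
print(free)                             # [(0, 12288)]: one merged extent
# With merge=False the same releases leave three extents: measured
# fragmentation triples even though the amount of free space is identical.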

On the other hand, I'm not completely sure the latency degradation is caused 
by that: the fragmentation growth is relatively small, and I don't see how it 
could impact performance that much. 
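
(For a sense of scale, here is the counter description quoted above read literally, as a toy Python calculation; the 1 GiB of free space and the 64 KiB allocation unit are assumptions for illustration only:)

# Toy reading of "(free extents / max possible number of free extents) * 1000".
# Assume 1 GiB of free space and a 64 KiB allocation unit, so the worst case
# (fully fragmented) free space is one extent per allocation unit.
free_bytes = 1 << 30                      # 1 GiB free
alloc_unit = 64 << 10                     # 64 KiB (assumed)
max_extents = free_bytes // alloc_unit    # 16384 extents = score 1000

for free_extents in (1, 100, 1000, 16384):
    score = free_extents * 1000 // max_extents
    print(free_extents, "free extents -> fragmentation_micros", score)
# 1 -> 0 (one contiguous extent), 16384 -> 1000 (fully fragmented); small
# score changes therefore correspond to modest fragmentation growth.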

I'm wondering if you have OSD mempool monitoring (dump_mempools command 
output on the admin socket) reports? Do you have any historic data? 

If not, may I have the current output, and say a couple more samples at 
8-12 hour intervals? 


Regarding backporting the bitmap allocator to Mimic: we had no such plans 
before, but I'll discuss it at the BlueStore meeting shortly. 


Thanks, 

Igor 

> ----- Original Message ----- 
> From: "Alexandre Derumier" <aderumier@xxxxxxxxx> 
> To: "Igor Fedotov" <ifedotov@xxxxxxx> 
> Cc: "Stefan Priebe, Profihost AG" <s.priebe@xxxxxxxxxxxx>, "Mark Nelson" <mnelson@xxxxxxxxxx>, "Sage Weil" <sage@xxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx> 
> Sent: Monday, February 4, 2019 16:04:38 
> Subject: Re: ceph osd commit latency increase over time, until restart 
> 
> Thanks Igor, 
> 
>>> Could you please collect BlueStore performance counters right after OSD 
>>> startup and again once you see high latency? 
>>> 
>>> Specifically, the 'l_bluestore_fragmentation' parameter is of interest. 
> I'm already monitoring with "ceph daemon osd.x perf dump" (I have 2 months of history with all counters), 
> 
> but I don't see the l_bluestore_fragmentation counter. 
> 
> (but I have bluestore_fragmentation_micros) 
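
(If it helps, pulling that counter out for graphing can look like this minimal sketch; osd.0 and the "bluestore" section name are assumptions, so check the layout of your own perf dump output:)

# Sketch: read bluestore_fragmentation_micros from one OSD's perf dump.
# Assumptions: osd.0, 'ceph' in PATH, counter under the "bluestore" section.
import json
import subprocess

out = subprocess.check_output(["ceph", "daemon", "osd.0", "perf", "dump"])
perf = json.loads(out)
frag = perf.get("bluestore", {}).get("bluestore_fragmentation_micros")
print("bluestore_fragmentation_micros:", frag)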
> 
> 
>>> Also, if you're able to rebuild the code, I can probably make a simple 
>>> patch to track latency and some other internal allocator parameters, to 
>>> confirm it's degraded and learn more details. 
> Sorry, it's a critical production cluster; I can't test on it :( 
> But I have a test cluster; maybe I can put some load on it and try to reproduce. 
> 
> 
> 
>>> A more vigorous fix would be to backport the bitmap allocator from 
>>> Nautilus and try the difference... 
> Any plan to backport it to Mimic? (But I can wait for Nautilus.) 
> The perf results of the new bitmap allocator look very promising from what I've seen in the PR. 
> 
> 
> 
> ----- Original Message ----- 
> From: "Igor Fedotov" <ifedotov@xxxxxxx> 
> To: "Alexandre Derumier" <aderumier@xxxxxxxxx>, "Stefan Priebe, Profihost AG" <s.priebe@xxxxxxxxxxxx>, "Mark Nelson" <mnelson@xxxxxxxxxx> 
> Cc: "Sage Weil" <sage@xxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx> 
> Sent: Monday, February 4, 2019 15:51:30 
> Subject: Re: ceph osd commit latency increase over time, until restart 
> 
> Hi Alexandre, 
> 
> looks like a bug in StupidAllocator. 
> 
> Could you please collect BlueStore performance counters right after OSD 
> startup and again once you see high latency? 
> 
> Specifically, the 'l_bluestore_fragmentation' parameter is of interest. 
> 
> Also, if you're able to rebuild the code, I can probably make a simple 
> patch to track latency and some other internal allocator parameters, to 
> confirm it's degraded and learn more details. 
> 
> 
> A more vigorous fix would be to backport the bitmap allocator from 
> Nautilus and try the difference... 
> 
> 
> Thanks, 
> 
> Igor 
> 
> 
> On 2/4/2019 5:17 PM, Alexandre DERUMIER wrote: 
>> Hi again, 
>> 
>> I spoke too soon: the problem has occurred again, so it's not related to the tcmalloc cache size. 
>> 
>> 
>> I have noticed something using a simple "perf top". 
>> 
>> Each time I have this problem (I have seen exactly the same behaviour 4 times), 
>> when latency is bad, perf top shows me: 
>> 
>> StupidAllocator::_aligned_len 
>> and 
>> btree::btree_iterator<btree::btree_node<btree::btree_map_params<unsigned long, unsigned long, std::less<unsigned long>, mempool::pool_allocator<(mempool::pool_index_t)1, std::pair<unsigned long const, unsigned long> >, 256> >, std::pair<unsigned long const, unsigned long>&, std::pair<unsigned long const, unsigned long>*>::increment_slow() 
>> 
>> (around 10-20% of the time for both) 
>> 
>> 
>> When latency is good, I don't see them at all. 
>> 
>> 
>> I have used Mark's wallclock profiler; here are the results: 
>> 
>> http://odisoweb1.odiso.net/gdbpmp-ok.txt 
>> 
>> http://odisoweb1.odiso.net/gdbpmp-bad.txt 
>> 
>> 
>> Here is an extract of the thread with btree::btree_iterator && StupidAllocator::_aligned_len: 
>> 
>> 
>> + 100.00% clone 
>> + 100.00% start_thread 
>> + 100.00% ShardedThreadPool::WorkThreadSharded::entry() 
>> + 100.00% ShardedThreadPool::shardedthreadpool_worker(unsigned int) 
>> + 100.00% OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*) 
>> + 70.00% PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&) 
>> | + 70.00% OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&) 
>> | + 70.00% PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&) 
>> | + 68.00% PGBackend::handle_message(boost::intrusive_ptr<OpRequest>) 
>> | | + 68.00% ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>) 
>> | | + 68.00% ReplicatedBackend::do_repop(boost::intrusive_ptr<OpRequest>) 
>> | | + 67.00% non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>) 
>> | | | + 67.00% BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*) 
>> | | | + 66.00% BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*) 
>> | | | | + 66.00% BlueStore::_write(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int) 
>> | | | | + 66.00% BlueStore::_do_write(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>&, boost::intrusive_ptr<BlueStore::Onode>, unsigned long, unsigned long, ceph::buffer::list&, unsigned int) 
>> | | | | + 65.00% BlueStore::_do_alloc_write(BlueStore::TransContext*, boost::intrusive_ptr<BlueStore::Collection>, boost::intrusive_ptr<BlueStore::Onode>, BlueStore::WriteContext*) 
>> | | | | | + 64.00% StupidAllocator::allocate(unsigned long, unsigned long, unsigned long, long, std::vector<bluestore_pextent_t, mempool::pool_allocator<(mempool::pool_index_t)4, bluestore_pextent_t> >*) 
>> | | | | | | + 64.00% StupidAllocator::allocate_int(unsigned long, unsigned long, long, unsigned long*, unsigned int*) 
>> | | | | | | + 34.00% btree::btree_iterator<btree::btree_node<btree::btree_map_params<unsigned long, unsigned long, std::less<unsigned long>, mempool::pool_allocator<(mempool::pool_index_t)1, std::pair<unsigned long const, unsigned long> >, 256> >, std::pair<unsigned long const, unsigned long>&, std::pair<unsigned long const, unsigned long>*>::increment_slow() 
>> | | | | | | + 26.00% StupidAllocator::_aligned_len(interval_set<unsigned long, btree::btree_map<unsigned long, unsigned long, std::less<unsigned long>, mempool::pool_allocator<(mempool::pool_index_t)1, std::pair<unsigned long const, unsigned long> >, 256> >::iterator, unsigned long) 
>> 
>> 
>> 
>> ----- Original Message ----- 
>> From: "Alexandre Derumier" <aderumier@xxxxxxxxx> 
>> To: "Stefan Priebe, Profihost AG" <s.priebe@xxxxxxxxxxxx> 
>> Cc: "Sage Weil" <sage@xxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx> 
>> Sent: Monday, February 4, 2019 09:38:11 
>> Subject: Re: ceph osd commit latency increase over time, until restart 
>> 
>> Hi, 
>> 
>> some news: 
>> 
>> I have tried different transparent hugepage values (madvise, never): no change. 
>> 
>> I have tried increasing bluestore_cache_size_ssd to 8G: no change. 
>> 
>> I have tried increasing TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES to 256MB: it seems to help; after 24h I'm still around 1.5ms (I need to wait a few more days to be sure). 
>> 
>> 
>> Note that this behaviour seems to happen much faster (< 2 days) on my big 6TB NVMe drives; 
>> my other clusters use 1.6TB SSDs. 
>> 
>> Currently I'm using only 1 OSD per NVMe (I don't get more than 5000 IOPS per OSD), but I'll try 2 OSDs per NVMe this week to see if it helps. 
>> 
>> 
>> BTW, has anybody already tested Ceph without tcmalloc, with glibc >= 2.26 (which also has a thread cache)? 
>> 
>> 
>> Regards, 
>> 
>> Alexandre 
>> 
>> 
>> ----- Original Message ----- 
>> From: "aderumier" <aderumier@xxxxxxxxx> 
>> To: "Stefan Priebe, Profihost AG" <s.priebe@xxxxxxxxxxxx> 
>> Cc: "Sage Weil" <sage@xxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx> 
>> Sent: Wednesday, January 30, 2019 19:58:15 
>> Subject: Re: ceph osd commit latency increase over time, until restart 
>> 
>>>> Thanks. Is there any reason you monitor op_w_latency and op_latency, 
>>>> but not op_r_latency? 
>>>> 
>>>> Also, why do you monitor op_w_process_latency but not op_r_process_latency? 
>> I monitor reads too (I have all the metrics from the OSD admin sockets, and a lot of graphs). 
>> 
>> I just don't see a latency difference on reads (or it is very small compared to the write latency increase). 
>> 
>> 
>> 
>> ----- Original Message ----- 
>> From: "Stefan Priebe, Profihost AG" <s.priebe@xxxxxxxxxxxx> 
>> To: "aderumier" <aderumier@xxxxxxxxx> 
>> Cc: "Sage Weil" <sage@xxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx> 
>> Sent: Wednesday, January 30, 2019 19:50:20 
>> Subject: Re: ceph osd commit latency increase over time, until restart 
>> 
>> Hi, 
>> 
>> On 30.01.19 at 14:59, Alexandre DERUMIER wrote: 
>>> Hi Stefan, 
>>> 
>>>>> Currently I'm in the process of switching back from jemalloc to tcmalloc, 
>>>>> as suggested. This report makes me a little nervous about my change. 
>>> Well, I'm really not sure it's a tcmalloc bug; maybe it's bluestore related (I don't have filestore anymore to compare). 
>>> I need to compare with bigger latencies. 
>>> 
>>> Here is an example: all OSDs at 20-50ms before restart, then 1ms after restart (at 21:15): 
>>> http://odisoweb1.odiso.net/latencybad.png 
>>> 
>>> I observe the latency in my guest VMs too, as disk iowait: 
>>> 
>>> http://odisoweb1.odiso.net/latencybadvm.png 
>>> 
>>>>> Also, I'm currently only monitoring latency for filestore OSDs. Which 
>>>>> exact values from the daemon do you use for bluestore? 
>>> Here are my InfluxDB queries: 
>>> 
>>> They take op_latency.sum / op_latency.avgcount over the last second. 
>>> 
>>> 
>>> SELECT non_negative_derivative(first("op_latency.sum"), 1s)/non_negative_derivative(first("op_latency.avgcount"),1s) FROM "ceph" WHERE "host" =~ /^([[host]])$/ AND "id" =~ /^([[osd]])$/ AND $timeFilter GROUP BY time($interval), "host", "id" fill(previous) 
>>> 
>>> 
>>> SELECT non_negative_derivative(first("op_w_latency.sum"), 1s)/non_negative_derivative(first("op_w_latency.avgcount"),1s) FROM "ceph" WHERE "host" =~ /^([[host]])$/ AND collection='osd' AND "id" =~ /^([[osd]])$/ AND $timeFilter GROUP BY time($interval), "host", "id" fill(previous) 
>>> 
>>> 
>>> SELECT non_negative_derivative(first("op_w_process_latency.sum"), 1s)/non_negative_derivative(first("op_w_process_latency.avgcount"),1s) FROM "ceph" WHERE "host" =~ /^([[host]])$/ AND collection='osd' AND "id" =~ /^([[osd]])$/ AND $timeFilter GROUP BY time($interval), "host", "id" fill(previous) 
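
(The same derivation done by hand, as a quick Python sketch; the two samples below are made-up numbers, not real cluster data:)

# The queries above derive average op latency per interval from the
# cumulative counters: delta(sum) / delta(avgcount). By hand, with two
# made-up perf dump samples taken 10 s apart:
s0 = {"op_w_latency": {"sum": 120.0, "avgcount": 100000}}  # at t = 0 s
s1 = {"op_w_latency": {"sum": 121.5, "avgcount": 101000}}  # at t = 10 s

d_sum = s1["op_w_latency"]["sum"] - s0["op_w_latency"]["sum"]            # 1.5 s
d_cnt = s1["op_w_latency"]["avgcount"] - s0["op_w_latency"]["avgcount"]  # 1000 ops
print("avg write latency: %.2f ms" % (d_sum / d_cnt * 1000.0))           # 1.50 ms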
>> Thanks. Is there any reason you monitor op_w_latency and op_latency, 
>> but not op_r_latency? 
>> 
>> Also, why do you monitor op_w_process_latency but not op_r_process_latency? 
>> 
>> greets, 
>> Stefan 
>> 
>>> 
>>> 
>>> 
>>> ----- Original Message ----- 
>>> From: "Stefan Priebe, Profihost AG" <s.priebe@xxxxxxxxxxxx> 
>>> To: "aderumier" <aderumier@xxxxxxxxx>, "Sage Weil" <sage@xxxxxxxxxxxx> 
>>> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx> 
>>> Sent: Wednesday, January 30, 2019 08:45:33 
>>> Subject: Re: ceph osd commit latency increase over time, until restart 
>>> 
>>> Hi, 
>>> 
>>> On 30.01.19 at 08:33, Alexandre DERUMIER wrote: 
>>>> Hi, 
>>>> 
>>>> Here are some new results, from a different OSD on a different cluster. 
>>>> 
>>>> Before the OSD restart, latency was between 2-5ms; after the restart it is around 1-1.5ms. 
>>>> 
>>>> http://odisoweb1.odiso.net/cephperf2/bad.txt (2-5ms) 
>>>> http://odisoweb1.odiso.net/cephperf2/ok.txt (1-1.5ms) 
>>>> http://odisoweb1.odiso.net/cephperf2/diff.txt 
>>>> 
>>>> From what I see in the diff, the biggest difference is in tcmalloc, but maybe I'm wrong. 
>>>> (I'm using tcmalloc 2.5-2.2.) 
>>> Currently I'm in the process of switching back from jemalloc to tcmalloc, 
>>> as suggested. This report makes me a little nervous about my change. 
>>> 
>>> Also, I'm currently only monitoring latency for filestore OSDs. Which 
>>> exact values from the daemon do you use for bluestore? 
>>> 
>>> I would like to check whether I see the same behaviour. 
>>> 
>>> Greets, 
>>> Stefan 
>>> 
>>>> ----- Original Message ----- 
>>>> From: "Sage Weil" <sage@xxxxxxxxxxxx> 
>>>> To: "aderumier" <aderumier@xxxxxxxxx> 
>>>> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>, "ceph-devel" <ceph-devel@xxxxxxxxxxxxxxx> 
>>>> Sent: Friday, January 25, 2019 10:49:02 
>>>> Subject: Re: ceph osd commit latency increase over time, until restart 
>>>> 
>>>> Can you capture a perf top or perf record to see where the CPU time is 
>>>> going on one of the OSDs with high latency? 
>>>> 
>>>> Thanks! 
>>>> sage 
>>>> 
>>>> 
>>>> On Fri, 25 Jan 2019, Alexandre DERUMIER wrote: 
>>>> 
>>>>> Hi, 
>>>>> 
>>>>> I'm seeing strange behaviour from my OSDs, on multiple clusters. 
>>>>> 
>>>>> All clusters are running Mimic 13.2.1 with bluestore, on SSD or NVMe drives. 
>>>>> The workload is RBD only, with qemu-kvm VMs using librbd, plus a snapshot / rbd export-diff / snapshot delete cycle each day for backups. 
>>>>> 
>>>>> When the OSDs are freshly started, the commit latency is between 0.5-1ms. 
>>>>> 
>>>>> But over time this latency increases slowly (maybe around 1ms per day), 
>>>>> until it reaches crazy values like 20-200ms. 
>>>>> 
>>>>> Some example graphs: 
>>>>> 
>>>>> http://odisoweb1.odiso.net/osdlatency1.png 
>>>>> http://odisoweb1.odiso.net/osdlatency2.png 
>>>>> 
>>>>> All OSDs show this behaviour, on all clusters. 
>>>>> 
>>>>> The latency of the physical disks is OK (the clusters are far from fully loaded). 
>>>>> 
>>>>> And if I restart the OSD, the latency comes back to 0.5-1ms. 
>>>>> 
>>>>> That reminds me of the old tcmalloc bug, but maybe it could be a bluestore memory bug? 
>>>>> 
>>>>> Any hints on counters/logs to check? 
>>>>> 
>>>>> 
>>>>> Regards, 
>>>>> 
>>>>> Alexandre 
>>>>> 
>>>>> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



