Re: OSDs cpu usage

Seems solved with 10.2.5

 

Thank you

 

 

 


 

George Kissandrakis

Senior Infrastructure Engineer
+49 891200 9831
www.mapp.com
LinkedIn | Twitter | Facebook

 

This e-mail is from Mapp Digital, LLC and its international legal entities and may contain information that is confidential or proprietary. If you are not the intended recipient, do not read, copy or distribute the e-mail or any attachments. Instead, please notify the sender and delete the e-mail and any attachments. Thank you.

Please consider the environment before printing.

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of David Riedl
Sent: Monday, 12 December, 2016 17:13
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: OSDs cpu usage

 

10.2.5 exists because of this bug.

Here are the release notes:

http://docs.ceph.com/docs/master/release-notes/#v10-2-5-jewel
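
For anyone checking whether they are on the affected release, a quick way to confirm the version a host and its OSDs are actually running (illustrative commands; osd.54 is just the OSD whose logs appear further down the thread):

    # package versions installed on the host
    dpkg -l | grep ceph
    # version reported by a running OSD via its local admin socket
    # (run on the host where osd.54 lives)
    ceph daemon osd.54 version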

 

Regards

David

 

On 12.12.2016 at 17:09, George Kissandrakis wrote:

I saw that 10.2.5 is out, but if a bug appeared in 10.2.4, would it have
been fixed in 10.2.5, or should I just upgrade and hope for the best?
 
 
 
 
 
George Kissandrakis
Senior Infrastructure Engineer
+49 891200 9831
www.mapp.com
LinkedIn | Twitter | Facebook
 
 
-----Original Message-----
From: ulembke@xxxxxxxxxxxx [mailto:ulembke@xxxxxxxxxxxx] 
Sent: Monday, 12 December, 2016 16:48
To: George Kissandrakis <george.kissandrakis@xxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  OSDs cpu usage
 
Hi,
Update to 10.2.5, which has been available since Saturday.
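
Roughly, an in-place upgrade on a Jewel/xenial host would look like this (a sketch; the exact package set can differ per installation):

    # on each OSD host, with root privileges
    apt-get update
    apt-get install --only-upgrade ceph ceph-base ceph-common ceph-osd
    # restart the OSD daemons on this host so they run the new binaries
    systemctl restart ceph-osd.target

Doing this one host at a time and waiting for the cluster to return to HEALTH_OK between hosts avoids taking both copies of a size-2 pool offline at once.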
 
Udo
 
On 2016-12-12 at 13:40, George Kissandrakis wrote:
Hi
 
 
 
I have a Jewel/Xenial Ceph installation with 61 OSDs (mixed SAS/SATA) across
hosts under two CRUSH roots.
 
The installation (including the monitors) runs Jewel 10.2.3-1xenial.
 
 
 
Two hosts were newly added, and Jewel 10.2.4-1xenial was installed on them.
 
On these two hosts with the newer packages, the ceph-osd processes are using
around 300% CPU each.
 
The second host seems calmer after the weekend, but on Friday it showed the
same behaviour.
 
Attaching top output, the OSD tree, the config, ceph -s, and logs from osd.54,
which look the same for all affected OSDs.
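 
(The attached output corresponds roughly to the following commands; the log path assumes the default Ubuntu layout.)

    top -b -n 1 | grep ceph-osd                  # per-process CPU usage
    ceph osd tree                                # CRUSH tree with both roots
    cat /etc/ceph/ceph.conf                      # cluster configuration
    ceph -s                                      # cluster status
    tail -n 100 /var/log/ceph/ceph-osd.54.log    # osd.54 log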
 
 
 
My next move is to take the deb packages from the other hosts and reinstall
these two hosts with 10.2.3.
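 
(Roughly, assuming the 10.2.3-1xenial .debs are still available on the older hosts, e.g. under /var/cache/apt/archives; the package names and file names below are assumptions:)

    # copy the cached 10.2.3-1xenial packages over, then on the affected host:
    dpkg -i ceph_10.2.3-1xenial_amd64.deb ceph-base_10.2.3-1xenial_amd64.deb \
            ceph-common_10.2.3-1xenial_amd64.deb ceph-osd_10.2.3-1xenial_amd64.deb
    # stop apt from pulling 10.2.4 back in on the next upgrade
    apt-mark hold ceph ceph-base ceph-common ceph-osd
    systemctl restart ceph-osd.target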
 
 
 
Any suggestions?
 
Thank you
 
 
 
 
 
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6242 ceph      20   0 1402008 290444  25196 S 304.3  0.2  13477:31 ceph-osd
 7724 ceph      20   0 1446532 317972  24760 S 304.0  0.2  15575:49 ceph-osd
 8256 ceph      20   0 1405440 305388  24884 S 302.0  0.2  15584:05 ceph-osd
 6998 ceph      20   0 1414428 327496  24804 S 301.3  0.2  15502:35 ceph-osd
 6643 ceph      20   0 1391592 256508  24840 S 301.0  0.2  15465:15 ceph-osd
 5848 ceph      20   0 1423724 265708  24844 S 206.3  0.2   7649:34 ceph-osd
 7456 ceph      20   0 1420584 303252  25408 S 206.0  0.2  13801:49 ceph-osd
 8657 ceph      20   0 1242184 307412  28936 S 200.7  0.2   9604:28 ceph-osd
 8699 ceph      20   0 1349152 394528  29688 S 200.3  0.3   7553:50 ceph-osd
 8786 ceph      20   0 1303448 355868  27148 S 100.7  0.3   3786:07 ceph-osd
 
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 7593 ceph      20   0 1274436 272964  27444 S 703.9  0.2  29144:22 ceph-osd
 5106 ceph      20   0 1532896 344476  25104 S 111.8  0.3  13437:23 ceph-osd
 5222 ceph      20   0 1468644 370156  25340 S  20.6  0.3   2962:27 ceph-osd
 5119 ceph      20   0 1500656 342368  24720 S  18.6  0.3   4971:34 ceph-osd
 5132 ceph      20   0 1472920 326612  24212 S  18.6  0.2   2793:09 ceph-osd
 5153 ceph      20   0 1422372 275816  24420 S   9.8  0.2   2944:55 ceph-osd
 5279 ceph      20   0 1418700 299712  24932 S   3.9  0.2   7050:08 ceph-osd
 
 
 
 
 
 
 
 
 
ID  WEIGHT  TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -8 3.70183 root host-spin
 -5 0.76799     host-spin opstck-emc201-spin
  8 0.22299         osd.8                         up  1.00000          1.00000
 15 0.27199         osd.15                        up  1.00000          1.00000
 24 0.27199         osd.24                        up  1.00000          1.00000
 -6 0.76799     host-spin opstck-emc202-spin
 25 0.22299         osd.25                        up  1.00000          1.00000
 26 0.27199         osd.26                        up  1.00000          1.00000
 27 0.27199         osd.27                        up  1.00000          1.00000
 -7       0     host-spin opstck-emc203-spin
-12 0.46696     host-spin opstck-emc209-spin
 23 0.22298         osd.23                        up  1.00000          1.00000
 36 0.24399         osd.36                        up  1.00000          1.00000
-10 0.48900     host-spin opstck-emc210-spin
 35 0.24399         osd.35                        up  1.00000          1.00000
 34 0.24399         osd.34                        up  1.00000          1.00000
-13 0.76949     host-spin opstck-emc205-spin
 50 0.22459         osd.50                        up  1.00000          1.00000
 51 0.27245         osd.51                        up  1.00000          1.00000
 52 0.27245         osd.52                        up  1.00000          1.00000
-15 0.44040     host-spin opstck-emc211-spin
 53 0.19530         osd.53                        up  1.00000          1.00000
 54 0.24510         osd.54                        up  1.00000          1.00000
 -1 6.80861 root default
 -2 1.24399     host opstck-emc201
  2 0.17200         osd.2                         up  1.00000          1.00000
  3 0.17200         osd.3                         up  1.00000          1.00000
  4 0.18199         osd.4                         up  1.00000          1.00000
  5 0.18199         osd.5                         up  1.00000          1.00000
  6 0.18199         osd.6                         up  1.00000          1.00000
  7 0.18199         osd.7                         up  1.00000          1.00000
  0 0.17200         osd.0                         up  1.00000          1.00000
 -3 1.24399     host opstck-emc202
  1 0.17200         osd.1                         up  1.00000          1.00000
  9 0.17200         osd.9                         up  1.00000          1.00000
 10 0.17200         osd.10                        up  1.00000          1.00000
 11 0.18199         osd.11                        up  1.00000          1.00000
 12 0.18199         osd.12                        up  1.00000          1.00000
 13 0.18199         osd.13                        up  1.00000          1.00000
 14 0.18199         osd.14                        up  1.00000          1.00000
 -4 1.24399     host opstck-emc203
 16 0.17200         osd.16                        up  1.00000          1.00000
 17 0.17200         osd.17                        up  1.00000          1.00000
 18 0.17200         osd.18                        up  1.00000          1.00000
 19 0.18199         osd.19                        up  1.00000          1.00000
 20 0.18199         osd.20                        up  1.00000          1.00000
 21 0.18199         osd.21                        up  1.00000          1.00000
 22 0.18199         osd.22                        up  1.00000          1.00000
-11 0.81573     host opstck-emc209
 37 0.18198         osd.37                        up  1.00000          1.00000
 38 0.18129         osd.38                        up  1.00000          1.00000
 42 0.09039         osd.42                        up  1.00000          1.00000
 41 0.09039         osd.41                        up  1.00000          1.00000
 39 0.18129         osd.39                        up  1.00000          1.00000
 40 0.09039         osd.40                        up  1.00000          1.00000
 -9 1.08600     host opstck-emc210
 28 0.18100         osd.28                        up  1.00000          1.00000
 33 0.18100         osd.33                        up  1.00000          1.00000
 29 0.18100         osd.29                        up  1.00000          1.00000
 30 0.18100         osd.30                        up  1.00000          1.00000
 31 0.18100         osd.31                        up  1.00000          1.00000
 32 0.18100         osd.32                        up  1.00000          1.00000
-14 0.63264     host opstck-emc205
 43 0.09038         osd.43                        up  1.00000          1.00000
 44 0.09038         osd.44                        up  1.00000          1.00000
 45 0.09038         osd.45                        up  1.00000          1.00000
 46 0.09038         osd.46                        up  1.00000          1.00000
 47 0.09038         osd.47                        up  1.00000          1.00000
 48 0.09038         osd.48                        up  1.00000          1.00000
 49 0.09038         osd.49                        up  1.00000          1.00000
-16 0.54227     host opstck-emc211
 55 0.09038         osd.55                        up  1.00000          1.00000
 56 0.09038         osd.56                        up  1.00000          1.00000
 57 0.09038         osd.57                        up  1.00000          1.00000
 58 0.09038         osd.58                        up  1.00000          1.00000
 59 0.09038         osd.59                        up  1.00000          1.00000
 60 0.09038         osd.60                        up  1.00000          1.00000
 
 
 
 
 
[global]
        filestore xattr use omap = true
        fsid = 238cee7d-8f8b-439c-8ffb-49488e8f1380
        mon initial members = opstck-emc201,opstck-emc202,opstck-emc203
        mon host = 192.168.124.149,192.168.124.150,192.168.124.151
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx
        osd journal size = 1024
        osd pool default size = 2
        osd pool default min size = 1
        osd pool default pg num = 333
        osd pool default pgp num = 333
        osd crush chooseleaf type = 1
        public network = 192.168.124.0/22
 
[mon.opstck-emc201]
        host = opstck-emc201
        mon addr = 192.168.124.149:6789
 
[mon.opstck-emc202]
        host = opstck-emc202
        mon addr = 192.168.124.150:6789
 
[mon.opstck-emc203]
        host = opstck-emc203
        mon addr = 192.168.124.151:6789
 
[osd]
        osd crush update on start = false
 
 
 
 
    cluster 238cee7d-8f8b-439c-8ffb-49488e8f1380
     health HEALTH_OK
     monmap e1: 3 mons at {opstck-emc201=192.168.124.149:6789/0,opstck-emc202=192.168.124.150:6789/0,opstck-emc203=192.168.124.151:6789/0}
            election epoch 26, quorum 0,1,2 opstck-emc201,opstck-emc202,opstck-emc203
     osdmap e5277: 61 osds: 61 up, 61 in
            flags sortbitwise
      pgmap v2394621: 2560 pgs, 4 pools, 1660 GB data, 417 kobjects
            3254 GB used, 7404 GB / 10658 GB avail
                2560 active+clean
  client io 14126 B/s rd, 26935 kB/s wr, 2 op/s rd, 1582 op/s wr
 
 
 
2016-12-10 03:55:43.262110 7f138e81f700  1 leveldb: Compacting 4@0 + 4@1 files
2016-12-10 03:55:43.315348 7f138e81f700  1 leveldb: Generated table #119: 37467 keys, 2135218 bytes
2016-12-10 03:55:43.382277 7f138e81f700  1 leveldb: Generated table #120: 39570 keys, 2138113 bytes
2016-12-10 03:55:43.457026 7f138e81f700  1 leveldb: Generated table #121: 39776 keys, 2137116 bytes
2016-12-10 03:55:43.505193 7f138e81f700  1 leveldb: Generated table #122: 37782 keys, 2135704 bytes
2016-12-10 03:55:43.506256 7f138e81f700  1 leveldb: Generated table #123: 403 keys, 25351 bytes
2016-12-10 03:55:43.506275 7f138e81f700  1 leveldb: Compacted 4@0 + 4@1 files => 8571502 bytes
2016-12-10 03:55:43.506652 7f138e81f700  1 leveldb: compacted to: files[ 0 6 1 0 0 0 0 ]
2016-12-10 03:55:43.506859 7f138e81f700  1 leveldb: Delete type=2 #106
2016-12-10 03:55:43.507614 7f138e81f700  1 leveldb: Delete type=2 #107
2016-12-10 03:55:43.508325 7f138e81f700  1 leveldb: Delete type=2 #108
2016-12-10 03:55:43.508879 7f138e81f700  1 leveldb: Delete type=2 #109
2016-12-10 03:55:43.509431 7f138e81f700  1 leveldb: Delete type=2 #112
2016-12-10 03:55:43.509791 7f138e81f700  1 leveldb: Delete type=2 #114
2016-12-10 03:55:43.510285 7f138e81f700  1 leveldb: Delete type=2 #116
2016-12-10 03:55:43.510837 7f138e81f700  1 leveldb: Delete type=2 #118
2016-12-10 04:27:53.277922 7f1392259700  0 -- 192.168.124.159:6829/7818 >> 192.168.124.157:6825/4030 pipe(0x560134c83400 sd=26 :39864 s=2 pgs=1133 cs=43 l=0 c=0x5601348fcd80).fault with nothing to send, going to standby
2016-12-10 04:27:54.053701 7f138ff36700  0 -- 192.168.124.159:6829/7818 >> 192.168.124.157:6825/4030 pipe(0x56013925e800 sd=96 :6829 s=0 pgs=0 cs=0 l=0 c=0x560136281c00).accept connect_seq 43 vs existing 43 state standby
2016-12-10 04:27:54.054159 7f138ff36700  0 -- 192.168.124.159:6829/7818 >> 192.168.124.157:6825/4030 pipe(0x56013925e800 sd=96 :6829 s=0 pgs=0 cs=0 l=0 c=0x560136281c00).accept connect_seq 44 vs existing 43 state standby
2016-12-10 04:42:54.082431 7f138ff36700  0 -- 192.168.124.159:6829/7818 >> 192.168.124.157:6825/4030 pipe(0x56013925e800 sd=96 :6829 s=2 pgs=1138 cs=45 l=0 c=0x5601348fcd80).fault with nothing to send, going to standby
2016-12-10 05:16:24.885751 7f138f930700  0 -- 192.168.124.159:6829/7818 >> 192.168.124.150:6833/174449 pipe(0x560136595400 sd=60 :35242 s=2 pgs=1725 cs=19 l=0 c=0x560134bf7180).fault with nothing to send, going to standby
2016-12-10 05:25:16.629798 7f138e81f700  1 leveldb: Level-0 table #125: started
2016-12-10 05:25:16.657153 7f138e81f700  1 leveldb: Level-0 table #125: 1246290 bytes OK
2016-12-10 05:25:16.658440 7f138e81f700  1 leveldb: Delete type=0 #117
2016-12-10 05:42:51.080805 7f138ff36700  0 -- 192.168.124.159:6829/7818 >> 192.168.124.157:6825/4030 pipe(0x56013925e800 sd=96 :40290 s=2 pgs=1143 cs=47 l=0 c=0x5601348fcd80).fault with nothing to send, going to standby
2016-12-10 06:17:17.849436 7f138f72e700  0 -- 192.168.124.159:6829/7818 >> 192.168.124.153:6837/8786 pipe(0x56012bbe5400 sd=136 :59220 s=2 pgs=91 cs=5 l=0 c=0x56013666c580).fault with nothing to send, going to standby
 
 
 
 
 
 
 
 
 
 
 
George Kissandrakis
 
Senior Infrastructure Engineer
 
 
 
 
 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com







-- 
David Riedl
 
IT System Administrator
State-certified IT technician (Staatl. gepr. Techniker, IT)
Tel. +49 7543 966-126
 
Registered office: Langenargen
Commercial register: Ulm, HRB 734260
VAT ID: DE232931635, WEEE ID: DE74015979
Management board: Thomas Ehrle (chairman), Fritz R. Paul, Tobias Treß
Chairman of the supervisory board: Jürgen Maucher


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
