I'm seeing the same behavior with very similar perf top output. One server with 32 OSDs has a load average approaching 800. No excessive memory usage
and no iowait at all.
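For anyone comparing numbers: a quick way to put a load average like 800 in context is to relate it to the number of online CPUs. A minimal sketch (standard Linux paths, nothing Ceph-specific assumed):

```shell
#!/bin/sh
# Compare the 1-minute load average to the number of online CPUs.
# A ratio far above 1 means runnable threads are queuing heavily,
# which matches the symptom here (load ~800, no iowait, no memory pressure).
load=$(awk '{print $1}' /proc/loadavg)
cpus=$(getconf _NPROCESSORS_ONLN)
echo "load=$load cpus=$cpus"
awk -v l="$load" -v c="$cpus" 'BEGIN { printf "load per cpu: %.2f\n", l / c }'
```

With no iowait in the mix, a ratio that high points at CPU-bound (or spinning) threads rather than slow disks.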
Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020 | Office: 801.871.2799
-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Ruben Kerkhof
Sent: Wednesday, December 7, 2016 3:08 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: 10.2.4 Jewel released
On Wed, Dec 7, 2016 at 8:46 PM, Francois Lafont <francois.lafont.1978@xxxxxxxxx> wrote:
> Hi,
>
> On 12/07/2016 01:21 PM, Abhishek L wrote:
>
>> This point release fixes several important bugs in RBD mirroring, RGW
>> multi-site, CephFS, and RADOS.
>>
>> We recommend that all v10.2.x users upgrade. Also note the following
>> when upgrading from hammer
>
> Well... a small warning: after upgrading from 10.2.3 to 10.2.4, I am seeing very high CPU load on the OSDs and the MDS.
Yes, same here. perf top shows:
8.23% [kernel] [k] sock_recvmsg
8.16% libpthread-2.17.so [.] __libc_recv
7.33% [kernel] [k] fget_light
7.24% [kernel] [k] tcp_recvmsg
6.41% [kernel] [k] sock_has_perm
6.19% [kernel] [k] _raw_spin_lock_bh
4.89% [kernel] [k] system_call
4.74% [kernel] [k] avc_has_perm_flags
3.93% [kernel] [k] SYSC_recvfrom
3.18% [kernel] [k] fput
3.15% [kernel] [k] system_call_after_swapgs
3.12% [kernel] [k] local_bh_enable_ip
3.11% [kernel] [k] release_sock
2.90% libpthread-2.17.so [.] __pthread_enable_asynccancel
2.71% libpthread-2.17.so [.] __pthread_disable_asynccancel
2.57% [kernel] [k] inet_recvmsg
2.43% [kernel] [k] local_bh_enable
2.16% [kernel] [k] local_bh_disable
2.03% [kernel] [k] tcp_cleanup_rbuf
1.44% [kernel] [k] sockfd_lookup_light
1.26% [kernel] [k] _raw_spin_unlock
1.20% [kernel] [k] sysret_check
1.18% [kernel] [k] lock_sock_nested
1.07% [kernel] [k] selinux_socket_recvmsg
0.98% [kernel] [k] _raw_spin_unlock_bh
0.97% ceph-osd [.] Pipe::do_recv
0.87% [kernel] [k] _cond_resched
0.73% [kernel] [k] tcp_release_cb
0.52% [kernel] [k] security_socket_recvmsg
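What stands out in that profile is that nearly everything is in the socket receive path, and the only ceph-osd userspace symbol is Pipe::do_recv, which is consistent with the messenger busy-looping on recv rather than doing real work. A small sketch that adds up a subset of the lines above (the symbol selection and the "receive path" grouping are mine, for illustration):

```shell
#!/bin/sh
# Sum the perf-top percentages for receive-path symbols quoted above.
# Any symbol mentioning recv or sock is counted toward the receive path.
total=$(awk '
  /recv|sock/ { sub(/%/, "", $1); sum += $1 }
  END { printf "%.2f", sum }
' <<'EOF'
8.23% [kernel] [k] sock_recvmsg
8.16% libpthread-2.17.so [.] __libc_recv
7.24% [kernel] [k] tcp_recvmsg
3.93% [kernel] [k] SYSC_recvfrom
2.57% [kernel] [k] inet_recvmsg
1.07% [kernel] [k] selinux_socket_recvmsg
0.97% ceph-osd [.] Pipe::do_recv
0.52% [kernel] [k] security_socket_recvmsg
EOF
)
echo "receive-path samples: ${total}%"
```

Those eight symbols alone account for roughly a third of all samples, and most of the remaining kernel symbols in the profile (lock_sock_nested, release_sock, tcp_cleanup_rbuf, the bh enable/disable pairs) sit on the same syscall path, so effectively the OSDs are spending their CPU re-entering recv.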
Kind regards,
Ruben
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com