Re: Number of threads for osd processes

The largest group of threads belongs to the network messenger: in the
current implementation it creates two threads for every process the
daemon is communicating with. That's two threads for each OSD it
shares PGs with, and two threads for each client that is accessing
any data on that OSD.
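To make the arithmetic concrete, here is a rough back-of-the-envelope
sketch. The factor of two (one reader and one writer thread per TCP
connection) is from the explanation above; the fixed per-daemon count of
internal worker threads is an assumed placeholder, not an official figure:

```python
# Hedged sketch: estimate the thread count of one OSD daemon under the
# two-threads-per-connection model. base_threads is an assumption standing
# in for the OSD's internal worker/heartbeat/filestore threads.
def estimate_osd_threads(peer_osds, clients, base_threads=30):
    connections = peer_osds + clients
    return base_threads + 2 * connections

# e.g. 23 peer OSDs in a 24-OSD cluster plus roughly 480 connected clients
# gives on the order of 1000 threads, in line with the numbers below.
print(estimate_osd_threads(23, 480))
```

So a per-OSD thread count near 1000 mostly reflects the number of client
connections, not anything pathological inside the daemon.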
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Tue, Nov 26, 2013 at 7:22 AM, Jens-Christian Fischer
<jens-christian.fischer@xxxxxxxxx> wrote:
> Hi all
>
> we have a ceph 0.67.4 cluster with 24 OSDs
>
> I have noticed that the two servers that have 9 OSDs each have around 10'000
> threads running, a number that went up significantly two weeks ago.
>
> Looking at the threads:
>
>
> root@h2:/var/log/ceph# ps -efL | grep ceph-osd | awk '{ print $2 }' | uniq -c | sort -n
>       1 17583
>     856 3151
>     874 11946
>    1034 3173
>    1038 3072
>    1040 3175
>    1052 3068
>    1062 3149
>    1068 3060
>    1070 3190
>
> root@h2:/var/log/ceph# ps axwu | grep ceph-osd
> root      3060  1.0  0.2 2224068 312456 ?      Ssl  Nov01 392:17 /usr/bin/ceph-osd --cluster=ceph -i 5 -f
> root      3068  1.2  0.2 2140988 356208 ?      Ssl  Nov01 441:22 /usr/bin/ceph-osd --cluster=ceph -i 9 -f
> root      3072  1.2  0.2 2049608 370236 ?      Ssl  Nov01 443:18 /usr/bin/ceph-osd --cluster=ceph -i 4 -f
> root      3149  1.2  0.3 2122548 402236 ?      Ssl  Nov01 440:16 /usr/bin/ceph-osd --cluster=ceph -i 8 -f
> root      3151  1.2  0.3 1917856 426224 ?      Ssl  Nov01 453:30 /usr/bin/ceph-osd --cluster=ceph -i 7 -f
> root      3173  0.8  0.2 1978252 264732 ?      Ssl  Nov01 325:01 /usr/bin/ceph-osd --cluster=ceph -i 11 -f
> root      3175  1.1  0.3 2186676 422112 ?      Ssl  Nov01 401:27 /usr/bin/ceph-osd --cluster=ceph -i 12 -f
> root      3190  1.1  0.3 2140480 412844 ?      Ssl  Nov01 421:31 /usr/bin/ceph-osd --cluster=ceph -i 6 -f
> root     11946  1.7  0.3 2060968 445368 ?      Ssl  Nov14 302:36 /usr/bin/ceph-osd --cluster=ceph -i 10 -f
> root     17589  0.0  0.0   9456   952 pts/25   S+   16:13   0:00 grep --color=auto ceph-osd
>
> we see each OSD process with around 1000 threads. Is this normal and
> expected?
>
> One theory we have is that this has to do with the number of placement
> groups; I had increased the number of PGs in one of the pools:
>
> root@h2:/var/log/ceph# ceph osd pool get images pg_num
> pg_num: 1000
> root@h2:/var/log/ceph# ceph osd pool get volumes pg_num
> pg_num: 128
>
> That change may well have happened on the day the number of threads started
> to rise.
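[The PG theory is at least plausible as far as peer connections go: each PG
makes its OSD peer with the other OSDs holding replicas of that PG, and each
peer costs two messenger threads. A small Monte Carlo sketch, assuming
uniformly random replica placement (a simplification of CRUSH) and a
hypothetical replica count of 2, shows how quickly peers saturate in a
24-OSD cluster:]

```python
# Hedged sketch: estimate how many distinct peer OSDs one OSD ends up with
# as a function of its PG count, assuming replicas land uniformly at random.
import random

def expected_peers(num_osds, pgs_per_osd, replicas=2, trials=2000):
    others = list(range(num_osds - 1))  # the other OSDs in the cluster
    total = 0
    for _ in range(trials):
        peers = set()
        for _ in range(pgs_per_osd):
            # each PG picks (replicas - 1) co-hosting OSDs for this OSD
            peers.update(random.sample(others, replicas - 1))
        total += len(peers)
    return total / trials

# With 24 OSDs, a few dozen PGs per OSD already makes most of the other
# 23 OSDs peers, so raising pg_num further adds few new OSD-to-OSD
# connections; the bulk of ~1000 threads would have to come from clients.
```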
>
> Feedback appreciated!
>
> thanks
> Jens-Christian
>
> --
> SWITCH
> Jens-Christian Fischer, Peta Solutions
> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
> phone +41 44 268 15 15, direct +41 44 268 15 71
> jens-christian.fischer@xxxxxxxxx
> http://www.switch.ch
>
> http://www.switch.ch/socialmedia
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
