Re: what is Implicated osds

On Tue, Aug 21, 2018 at 2:37 AM, Satish Patel <satish.txt@xxxxxxxxx> wrote:
> Folks,
>
> Today I found that ceph -s is really slow, hanging for a minute or two
> before giving me output. The same goes for "ceph osd tree"; the
> command just hangs for a long time before producing output.
>
> This is the output I am seeing. One OSD is down and I am not sure why,
> and what is its relation to the commands running slowly?
>
> I am also seeing the following; what does it mean? "369 slow requests
> are blocked > 32 sec. Implicated osds 0,2,3,4,5,6,7,8,9,11"

This is just a hint that these are the osds you should look at in
regard to the slow requests.
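
For example, assuming you have access to the admin socket on the OSD
hosts and taking osd.0 as one of the implicated osds, something like
this should show which requests are blocked and for how long:

  # summary of which osds the slow request warnings point at
  ceph health detail
  # on the host running osd.0, dump the currently blocked requests
  ceph daemon osd.0 dump_blocked_ops
  # and the most recent slow ops with their event timelines
  ceph daemon osd.0 dump_historic_ops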

What do the stale pgs have in common, which pool do they belong to, and
what are the configuration details of that pool?

Can you do a pg query on one of the stale pgs?
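
If you're not sure which pgs those are, something along these lines
should narrow it down (the pg id 3.1f below is just a placeholder,
substitute one of your stale pgs):

  # list the pgs currently stuck in the stale state
  ceph pg dump_stuck stale
  # show size/min_size and crush rule for each pool
  ceph osd pool ls detail
  # then query one of the stale pgs reported above
  ceph pg 3.1f query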

>
>
> [root@ostack-infra-01-ceph-mon-container-692bea95 ~]# ceph -s
>   cluster:
>     id:     c369cdc9-35a2-467a-981d-46e3e1af8570
>     health: HEALTH_WARN
>             Reduced data availability: 53 pgs stale
>             369 slow requests are blocked > 32 sec. Implicated osds
> 0,2,3,4,5,6,7,8,9,11
>
>   services:
>     mon: 3 daemons, quorum
> ostack-infra-02-ceph-mon-container-87f0ee0e,ostack-infra-01-ceph-mon-container-692bea95,ostack-infra-03-ceph-mon-container-a92c1c2a
>     mgr: ostack-infra-01-ceph-mon-container-692bea95(active),
> standbys: ostack-infra-03-ceph-mon-container-a92c1c2a,
> ostack-infra-02-ceph-mon-container-87f0ee0e
>     osd: 12 osds: 11 up, 11 in
>
>   data:
>     pools:   5 pools, 656 pgs
>     objects: 1461 objects, 11509 MB
>     usage:   43402 MB used, 5080 GB / 5122 GB avail
>     pgs:     603 active+clean
>              53  stale+active+clean
>
>
>
> [root@ostack-infra-01-ceph-mon-container-692bea95 ~]# ceph osd tree
> ID CLASS WEIGHT  TYPE NAME            STATUS REWEIGHT PRI-AFF
> -1       5.45746 root default
> -3       1.81915     host ceph-osd-01
>  0   ssd 0.45479         osd.0            up  1.00000 1.00000
>  2   ssd 0.45479         osd.2            up  1.00000 1.00000
>  5   ssd 0.45479         osd.5            up  1.00000 1.00000
>  6   ssd 0.45479         osd.6            up  1.00000 1.00000
> -5       1.81915     host ceph-osd-02
>  1   ssd 0.45479         osd.1          down        0 1.00000
>  3   ssd 0.45479         osd.3            up  1.00000 1.00000
>  4   ssd 0.45479         osd.4            up  1.00000 1.00000
>  7   ssd 0.45479         osd.7            up  1.00000 1.00000
> -7       1.81915     host ceph-osd-03
>  8   ssd 0.45479         osd.8            up  1.00000 1.00000
>  9   ssd 0.45479         osd.9            up  1.00000 1.00000
> 10   ssd 0.45479         osd.10           up  1.00000 1.00000
> 11   ssd 0.45479         osd.11           up  1.00000 1.00000



-- 
Cheers,
Brad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


