[PG] Slow request *** seconds old, v4 currently waiting for pg to exist locally

Yeah, three of the nine OSDs went down. I recreated them, but the PGs 
could not be recovered.

I didn't know how to erase just those PGs, so I deleted all the OSD 
pools, including data and metadata. Now all PGs are active and clean...

I'm not sure if there is a more elegant way to deal with this.
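For reference, the delete-and-recreate sequence I used looked roughly like this (pool names are from my cluster, and the PG count of 128 is just an example). Note that deleting a pool destroys all data in it, which is why the monitor requires the name twice plus a safety flag:

```shell
# List the pools to find the ones holding the unrecoverable PGs
ceph osd lspools

# Delete a pool: the name must be repeated, plus the safety flag
ceph osd pool delete data data --yes-i-really-really-mean-it
ceph osd pool delete metadata metadata --yes-i-really-really-mean-it

# Recreate the pool with an explicit PG count (128 here is an example;
# size it for your OSD count and expected data)
ceph osd pool create data 128
```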

===========
Aegeaner


On 2014-09-25 14:11, Irek Fasikhov wrote:
> osd_op(client.4625.1:9005787)
> .....
>
>
> This is due to external factors. For example, the network settings.
>
> 2014-09-25 10:05 GMT+04:00 Udo Lembke <ulembke at polarzone.de>:
>
>     Hi again,
>     sorry - forgot my post... see
>
>     osdmap e421: 9 osds: 9 up, 9 in
>
>     shows that all your 9 osds are up!
>
>     Do you have trouble with your journal/filesystem?
>
>     Udo
>
>     On 25.09.2014 08:01, Udo Lembke wrote:
>     > Hi,
>     > looks that some osds are down?!
>     >
>     > What is the output of "ceph osd tree"
>     >
>     > Udo
>     >
>     > On 25.09.2014 04:29, Aegeaner wrote:
>     >> The cluster healthy state is WARN:
>     >>
>     >>          health HEALTH_WARN 118 pgs degraded; 8 pgs down; 59 pgs
>     >>     incomplete; 28 pgs peering; 292 pgs stale; 87 pgs stuck
>     inactive;
>     >>     292 pgs stuck stale; 205 pgs stuck unclean; 22 requests are
>     blocked
>     >>     > 32 sec; recovery 12474/46357 objects degraded (26.909%)
>     >>          monmap e3: 3 mons at
>     >>   
>      {CVM-0-mon01=172.18.117.146:6789/0,CVM-0-mon02=172.18.117.152:6789/0,CVM-0-mon03=172.18.117.153:6789/0},
>     >>     election epoch 24, quorum 0,1,2
>     CVM-0-mon01,CVM-0-mon02,CVM-0-mon03
>     >>          osdmap e421: 9 osds: 9 up, 9 in
>     >>           pgmap v2261: 292 pgs, 4 pools, 91532 MB data, 23178
>     objects
>     >>                 330 MB used, 3363 GB / 3363 GB avail
>     >>                 12474/46357 objects degraded (26.909%)
>     >>                       20 stale+peering
>     >>                       87 stale+active+clean
>     >>                        8 stale+down+peering
>     >>                       59 stale+incomplete
>     >>                      118 stale+active+degraded
>     >>
>     >>
>     >> What does these errors mean? Can these PGs be recovered?
>     >>
>     >>
>     > _______________________________________________
>     > ceph-users mailing list
>     > ceph-users at lists.ceph.com
>     > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>     >
>
>
>
>
>
> -- 
> With best regards, Irek Fasikhov
> Tel.: +79229045757
