Re: Fwd: down+peering PGs, can I move PGs from one OSD to another

Hi,

You can export and import PGs using ceph_objectstore_tool, but if the OSD won't start you may have trouble exporting a PG.
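
As a rough sketch (untested; assumes the default data/journal paths, and uses PG 0.2f and osd.NN purely as placeholders -- both the source and destination OSD daemons must be stopped while you run it):

    # on the failed OSD's host, with osd.7 stopped
    ceph_objectstore_tool --data-path /var/lib/ceph/osd/ceph-7 \
        --journal-path /var/lib/ceph/osd/ceph-7/journal \
        --pgid 0.2f --op export --file /tmp/pg.0.2f.export

    # on a healthy OSD's host, with that OSD stopped
    ceph_objectstore_tool --data-path /var/lib/ceph/osd/ceph-NN \
        --journal-path /var/lib/ceph/osd/ceph-NN/journal \
        --op import --file /tmp/pg.0.2f.export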

It may be useful to share the errors you get when trying to start the OSDs.
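
On a typical firefly deployment that would be something like (the exact start command depends on your distro and init system):

    /etc/init.d/ceph start osd.7          # or: service ceph start osd.7
    tail -f /var/log/ceph/ceph-osd.7.log  # default OSD log location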

Thanks

On Fri, Aug 3, 2018 at 10:13 PM, Sean Patronis <spatronis@xxxxxxxxxx> wrote:


Hi all.

We have an issue with some down+peering PGs (I think); when I try to mount or access data, the requests are blocked:
114891/7509353 objects degraded (1.530%)
                 887 stale+active+clean
                   1 peering
                  54 active+recovery_wait
               19609 active+clean
                  91 active+remapped+wait_backfill
                  10 active+recovering
                   1 active+clean+scrubbing+deep
                   9 down+peering
                  10 active+remapped+backfilling
recovery io 67324 kB/s, 10 objects/s

When I query one of these down+peering PGs, I can see the following:

         "peering_blocked_by": [
                { "osd": 7,
                  "current_lost_at": 0,
                  "comment": "starting or marking this osd lost may let us proceed"},
                { "osd": 21,
                  "current_lost_at": 0,
                  "comment": "starting or marking this osd lost may let us proceed"}]},
        { "name": "Started",
          "enter_time": "2018-08-01 07:06:16.806339"}],


Both of these OSDs (7 and 21) will not come back up and in with Ceph due to some errors, but I can mount the disks and read data off of them. Can I manually move/copy these PGs off of these down+out OSDs and put them on a good OSD?

This is an older Ceph cluster running firefly.

Thanks.





_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
