Re: Can't get one OSD (out of 14) to start


 



All the backfill operations are complete, and I'm now just left with the 3 incomplete and 1 down+incomplete PGs:

# ceph health detail
HEALTH_ERR 4 pgs are stuck inactive for more than 300 seconds; 1 pgs down; 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck unclean; 266 requests are blocked > 32 sec; 3 osds have slow requests
pg 1.38 is stuck inactive for 80654.111975, current state incomplete, last acting [17,4]
pg 30.7a is stuck inactive for 76259.649932, current state incomplete, last acting [12,9]
pg 30.8d is stuck inactive for 76201.794001, current state incomplete, last acting [0,5]
pg 30.c1 is stuck inactive for 76305.051390, current state down+incomplete, last acting [14,25]
pg 1.38 is stuck unclean for 80654.112037, current state incomplete, last acting [17,4]
pg 30.7a is stuck unclean for 76259.649989, current state incomplete, last acting [12,9]
pg 30.8d is stuck unclean for 76201.794058, current state incomplete, last acting [0,5]
pg 30.c1 is stuck unclean for 76305.051447, current state down+incomplete, last acting [14,25]
pg 30.c1 is down+incomplete, acting [14,25]
pg 30.8d is incomplete, acting [0,5]
pg 30.7a is incomplete, acting [12,9]
pg 1.38 is incomplete, acting [17,4]
50 ops are blocked > 33554.4 sec on osd.14
16 ops are blocked > 16777.2 sec on osd.14
2 ops are blocked > 67108.9 sec on osd.12
98 ops are blocked > 33554.4 sec on osd.12
100 ops are blocked > 33554.4 sec on osd.0
3 osds have slow requests


I tried issuing a 'ceph pg repair' to one of those PGs and got the following:

# ceph pg repair 1.38
instructing pg 1.38 on osd.17 to repair

But it doesn't appear to be doing anything.  Health status still says the exact same thing.  No idea where to go from here.
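I assume the next step would be to query one of the stuck PGs and look at its peering/recovery state, something like the below (straight out of the troubleshooting docs, as far as I can tell), but I'm not sure what I'd be looking for in the output:

# ceph pg 1.38 query
# ceph pg dump_stuck inactive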


-----Original Message-----
From: Mark Johnson <markj@xxxxxxxxx>
To: ag@xxxxxxxxxxxxxxxxxxx
Cc: ceph-users@xxxxxxx
Subject: Re: Can't get one OSD (out of 14) to start
Date: Fri, 16 Apr 2021 22:00:20 +0000


That's the exact same page I used to mark the OSD as lost. However, nothing in there seems to reference the incomplete and down+incomplete PGs that I have, so I really don't know if it helps me. I don't really understand what my problem is here.




-----Original Message-----

From: Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>

To: Mark Johnson <markj@xxxxxxxxx>

Cc: ceph-users@xxxxxxx

Subject: Re: Re: Can't get one OSD (out of 14) to start

Date: Fri, 16 Apr 2021 14:16:28 -0400


Hi Mark,


I wonder if the following will help you:

https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-pg/



There are instructions there on how to mark unfound PGs lost and delete them. You will regain a healthy cluster that way, and then you can adjust replica counts etc. to best practice and restore your objects.
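If I'm remembering the page correctly, the relevant commands look roughly like this (substitute your own PG id; 'revert' rolls each unfound object back to a prior version, while 'delete' forgets them entirely):

# ceph health detail | grep unfound
# ceph pg <pgid> mark_unfound_lost revert
# ceph pg <pgid> mark_unfound_lost delete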


Best regards,

--

Alex Gorbachev

ISS/Storcium




On Fri, Apr 16, 2021 at 10:51 AM Mark Johnson <markj@xxxxxxxxx> wrote:

I ran an fsck on the problem OSD and found and repaired a couple of errors. Remounted and started the OSD, but it crashed again shortly after, as before. So (and possibly from bad advice) I figured I'd mark the OSD lost and let it write out the PGs to other OSDs, which it's now in the process of backfilling. However, I'm seeing 1 down+incomplete and 3 incomplete, and I'm expecting that these won't recover.
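For reference, marking it lost was roughly the following, as per that troubleshooting page (the OSD id below is just a placeholder for my failed one):

# ceph osd lost <osd-id> --yes-i-really-mean-it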


So, I would love to know what my options are here once all the backfilling has finished (or stalled). Losing data or even entire PGs isn't a big problem, as this cluster is really just a replica of our main cluster, so we can restore lost objects manually from there. Is there a way I can clear out/repair/whatever these PGs so I can get a healthy cluster again?
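The only thing I've seen mentioned for writing an incomplete PG off entirely, accepting that whatever was in it is gone (which I can live with), is recreating it as empty, something like the below, but I have no idea whether that's appropriate here or safe on Jewel:

# ceph pg force_create_pg 1.38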


Yes, I know this would have probably been easier with an additional storage server and a pool size of 3.  But that's not going to help me right now.




-----Original Message-----

From: Mark Johnson <markj@xxxxxxxxx>

To: ceph-users@xxxxxxx

Subject: Can't get one OSD (out of 14) to start

Date: Fri, 16 Apr 2021 12:43:33 +0000



Really not sure where to go with this one.  Firstly, a description of my cluster.  Yes, I know there are a lot of "not ideals" here but this is what I inherited.



The cluster is running Jewel and has two storage/mon nodes and an additional mon-only node, with a pool size of 2. Today, we had some power issues in the data centre and we very ungracefully lost both storage servers at the same time. Node 1 came back online before node 2, but I could see there were a few OSDs that were down. When node 2 came back, I started trying to get OSDs up. Each node has 14 OSDs, and I managed to get all OSDs up and in on node 2, but one of the OSDs on node 1 keeps starting and crashing and just won't stay up. I'm not finding the OSD log output to be much use. Current health status looks like this:



# ceph health
HEALTH_ERR 26 pgs are stuck inactive for more than 300 seconds; 26 pgs down; 26 pgs peering; 26 pgs stuck inactive; 26 pgs stuck unclean; 5 requests are blocked > 32 sec

# ceph status
    cluster e2391bbf-15e0-405f-af12-943610cb4909
     health HEALTH_ERR
            26 pgs are stuck inactive for more than 300 seconds
            26 pgs down
            26 pgs peering
            26 pgs stuck inactive
            26 pgs stuck unclean
            5 requests are blocked > 32 sec



Any clues as to what I should be looking for or what sort of action I should be taking to troubleshoot this?  Unfortunately, I'm a complete novice with Ceph.
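In case it's relevant, so far I've only been starting the OSD through systemd. My rough plan, unless someone points me somewhere better, is to go back over the usual status commands and then try running the OSD in the foreground with more debugging, something along these lines (the OSD id is a placeholder for the crashing one, and I'm not sure those are sensible debug levels):

# ceph health detail
# ceph osd tree
# ceph pg dump_stuck inactive
# ceph-osd -f -i <id> --debug-osd 20 --debug-filestore 20 --debug-journal 20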



Here's a snippet from the OSD log that means little to me...



--- begin dump of recent events ---
     0> 2021-04-16 12:25:10.169340 7f2e23921ac0 -1 *** Caught signal (Aborted) **
 in thread 7f2e23921ac0 thread_name:ceph-osd

 ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
 1: (()+0x9f1c2a) [0x7f2e24330c2a]
 2: (()+0xf5d0) [0x7f2e21ee95d0]
 3: (gsignal()+0x37) [0x7f2e2049f207]
 4: (abort()+0x148) [0x7f2e204a08f8]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x267) [0x7f2e2442fd47]
 6: (FileJournal::read_entry(ceph::buffer::list&, unsigned long&, bool*)+0x90c) [0x7f2e2417bc7c]
 7: (JournalingObjectStore::journal_replay(unsigned long)+0x1ee) [0x7f2e240c8dce]
 8: (FileStore::mount()+0x3cd6) [0x7f2e240a0546]
 9: (OSD::init()+0x27d) [0x7f2e23d5828d]
 10: (main()+0x2c18) [0x7f2e23c71088]
 11: (__libc_start_main()+0xf5) [0x7f2e2048b3d5]
 12: (()+0x3c8847) [0x7f2e23d07847]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
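The only frames I can half make sense of are the FileJournal::read_entry / journal_replay ones, which I guess means it's dying while replaying the journal when the filestore mounts. The ceph-osd man page lists journal options like the ones below, but I have no idea whether they're safe to use on a possibly corrupt journal (my understanding is that recreating the journal throws away any writes that hadn't been flushed to the filestore, so I haven't touched them):

# ceph-osd -i <id> --flush-journal
# ceph-osd -i <id> --mkjournal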



Thanks in advance,


Mark



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


