On 13. sep. 2017 07:04, hjcho616 wrote:
Ronny,
I ran a bunch of ceph pg repair <pg#> commands and got the scrub errors down
to 10... well, it was 9, but trying to fix one made it 10. I'm waiting for
that one to finish (I used the noout trick since I only have two copies). 8 of
those scrub errors look like they would need data from osd.0.
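For reference, a rough sketch of the repair sequence described above; the PG
ID is a placeholder, not one from this cluster:

    # keep OSDs from being marked out while repairs run
    ceph osd set noout

    # list inconsistent PGs, then repair them one at a time
    ceph health detail | grep inconsistent
    ceph pg repair 2.1a

    # allow normal out-marking again once done
    ceph osd unset noout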
HEALTH_ERR 22 pgs are stuck inactive for more than 300 seconds; 22 pgs
degraded; 6 pgs down; 3 pgs inconsistent; 6 pgs peering; 6 pgs
recovering; 16 pgs stale; 22 pgs stuck degraded; 6 pgs stuck inactive;
16 pgs stuck stale; 28 pgs stuck unclean; 16 pgs stuck undersized; 16
pgs undersized; 1 requests are blocked > 32 sec; recovery 221990/4503980
objects degraded (4.929%); recovery 147/2251990 unfound (0.007%); 10
scrub errors; mds cluster is degraded; no legacy OSD present but
'sortbitwise' flag is not set
From what I saw in ceph health detail, getting osd.0 running again would solve
the majority of the problems. But that was the disk with the SMART error from
earlier. I did move it to a new drive using ddrescue. When trying to start
osd.0, I get this. Is there any way I can get around this?
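For illustration, a ddrescue copy like the one described above might look
something like this; the device names and map file path are examples only:

    # copy the failing OSD disk to the new one; -f forces writing to a
    # block device, and the map file lets the copy be resumed if interrupted
    ddrescue -f /dev/sdb /dev/sdc /root/osd0-rescue.map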
Running an OSD from a rescued disk is not something you should try. This is
when you should try to export the PGs using ceph-objectstore-tool.
Was this the drive that failed to export PGs because of a missing superblock?
You could also try the export directly on the failed drive, just to see if
that works. You may have to run the tool as the ceph user if that is the user
owning all the files.
You could try running the export of one of the PGs on osd.0 again and post
all commands and output.
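A minimal sketch of such an export, assuming a FileStore OSD at the default
data path and files owned by the ceph user; the PG ID and output file are
placeholders:

    # stop the OSD first so the object store is not in use
    systemctl stop ceph-osd@0

    # export one PG from osd.0 to a file, running as the ceph user
    sudo -u ceph ceph-objectstore-tool \
        --data-path /var/lib/ceph/osd/ceph-0 \
        --journal-path /var/lib/ceph/osd/ceph-0/journal \
        --op export --pgid 2.1a \
        --file /mnt/backup/osd0-pg2.1a.export

The same tool can later import that file into another OSD with --op import.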
Good luck,
Ronny