inconsistent pgs

I was having a problem with my mds getting stuck in the 'rejoin' state on a dumpling install, so I did a dist-upgrade on my installation, thinking it would pull in a later dumpling, and instead ended up with firefly, which is now in Debian Jessie. That resolved the mds problem but introduced a problem of its own...

I have 4 physical boxes, each running 2 OSDs. I needed to retire one, so I set its two OSDs to 'out', and everything went as expected. Then I noticed that 'ceph health' was reporting that my crush map had legacy tunables. The release notes said I needed to run 'ceph osd crush tunables optimal' to fix this, and since I wasn't running any old kernel clients, I made it so. Shortly after that, my OSDs started dying off until only one remained. I eventually figured out that they would stay up until I started the OSDs on the 'out' node. I hadn't made the connection to the tunables until I turned up an old mailing list post, but sure enough, setting the tunables back to legacy got everything stable again. I assume the data movement triggered by 'optimal' left the 'out' node holding the only copy of some data, because there were down PGs until I got all of the OSDs running again.
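For reference, these are roughly the commands involved (OSD ids 6 and 7 are just examples standing in for the two on the retired node):

  # mark the two OSDs on the node being retired as out
  ceph osd out 6
  ceph osd out 7

  # switch to the new crush tunables, as suggested by the release notes
  ceph osd crush tunables optimal

  # and what I ran to get things stable again
  ceph osd crush tunables legacy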

Anyway, after all that dust settled, I now have 5 PGs inconsistent from scrub errors (it was 4 when I started writing this email... I assume more will be found). Is there a way I can find out which RBD image(s) they belong to before I repair them?
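In case it's relevant, I can get the PG ids and their acting OSDs easily enough with:

  ceph health detail | grep inconsistent

but I don't know how to get from a PG id (or from the object names mentioned in the OSD scrub errors) back to the RBD images that use those objects.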

Thanks

James
