Re: Upgrade to Infernalis: failed to pick suitable auth object

Hi again,

After listing all placement groups the problematic OSD (osd.0) is part of, I forced a deep scrub for all of those PGs.
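The steps above can be sketched in shell. The pg-dump lines below are illustrative samples; on a live cluster the list would come from `ceph pg dump pgs_brief` (or `ceph pg ls-by-osd 0` on releases that have that subcommand):

```shell
#!/bin/sh
# Sample 'ceph pg dump pgs_brief'-style lines (pgid, state, up, acting);
# on a real cluster this would be captured from the live command instead.
pg_dump='3.32 active+clean [0,11,33] [0,11,33]
3.70 active+clean [0,36,11] [0,36,11]
3.6c active+clean [14,2,38] [14,2,38]'

# Select PGs whose acting set ($4) contains osd.0, matching "0" only as
# a whole OSD id (preceded by "[" or "," and followed by "," or "]").
pgs=$(printf '%s\n' "$pg_dump" | awk '$4 ~ /[[,]0[],]/ {print $1}')

# Queue a deep-scrub for each matching PG.
for pg in $pgs; do
    echo ceph pg deep-scrub "$pg"   # drop 'echo' to actually issue it
done
```

The whole-id match matters: a naive `grep 0` would also pull in acting sets like `[10,20,30]`.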

A few hours later (and after some other deep scrubbing as well), the result seems to be:

HEALTH_ERR 8 pgs inconsistent; 14 scrub errors
pg 3.6c is active+clean+inconsistent, acting [14,2,38]
pg 3.32 is active+clean+inconsistent, acting [0,11,33]
pg 3.13 is active+clean+inconsistent, acting [8,34,9]
pg 3.30 is active+clean+inconsistent, acting [14,35,26]
pg 3.31 is active+clean+inconsistent, acting [44,35,26]
pg 3.7d is active+clean+inconsistent, acting [46,37,35]
pg 3.70 is active+clean+inconsistent, acting [0,36,11]
pg 3.72 is active+clean+inconsistent, acting [0,33,39]
14 scrub errors

OSDs 0, 8, 14 and 46 all reside on the same server: obviously, the one upgraded to Infernalis.

It makes sense that I acted too quickly on one OSD (fixing the file ownerships while it was maybe still running), perhaps two, but not on all of them.

Although it very likely won't make a difference, I'll try a ceph pg repair for each PG.
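A minimal sketch of that repair loop, assuming the inconsistent-PG list is parsed from `ceph health detail`-style output (sample lines embedded below; a live run would capture the real command output):

```shell
#!/bin/sh
# Sample 'ceph health detail' lines as posted above; on a real cluster:
#   health=$(ceph health detail)
health='pg 3.32 is active+clean+inconsistent, acting [0,11,33]
pg 3.70 is active+clean+inconsistent, acting [0,36,11]
pg 3.72 is active+clean+inconsistent, acting [0,33,39]'

# Build one 'ceph pg repair <pgid>' command per inconsistent PG ($2 is the pgid).
repairs=$(printf '%s\n' "$health" | awk '/inconsistent/ {print "ceph pg repair " $2}')

# Print the commands for review; pipe to 'sh' (or run them directly)
# once you are sure you want the repairs issued.
printf '%s\n' "$repairs"
```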

To be continued again!

Regards,
Kees

On 18-08-18 10:52, Kees Meijs wrote:
To be continued... Over night, some more placement groups seem to be inconsistent. I'll post my findings later on.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
