For your cluster warning message: some objects in that PG are inconsistent
between the primary and its replicas, so you can try 'ceph pg repair $PGID'
(a sketch of the full workflow is appended below the quoted thread).

2016-04-16 9:04 GMT+08:00 Oliver Dzombic <info@xxxxxxxxxxxxxxxxx>:
> Hi,
>
> I meant of course
>
> 0.e6_head
> 0.e6_TEMP
>
> in
>
> /var/lib/ceph/osd/ceph-12/current
>
> sorry...
>
>
> --
> Mit freundlichen Gruessen / Best regards
>
> Oliver Dzombic
> IP-Interactive
>
> mailto:info@xxxxxxxxxxxxxxxxx
>
> Address:
>
> IP Interactive UG ( haftungsbeschraenkt )
> Zum Sonnenberg 1-3
> 63571 Gelnhausen
>
> HRB 93402, district court of Hanau
> Managing director: Oliver Dzombic
>
> Tax no.: 35 236 3622 1
> VAT ID: DE274086107
>
>
> On 16.04.2016 at 03:03, Oliver Dzombic wrote:
>> Hi,
>>
>> pg 0.e6 is active+clean+inconsistent, acting [12,7]
>>
>> /var/log/ceph/ceph-osd.12.log:36:2016-04-16 01:08:40.058585 7f4f6bc70700
>> -1 log_channel(cluster) log [ERR] : 0.e6 deep-scrub stat mismatch, got
>> 4476/4477 objects, 133/133 clones, 4476/4477 dirty, 1/1 omap, 0/0
>> hit_set_archive, 0/0 whiteouts, 18467422208/18471616512 bytes, 0/0
>> hit_set_archive bytes.
>>
>> I tried to follow
>>
>> https://ceph.com/planet/ceph-manually-repair-object/
>>
>> but it did not really work for me.
>>
>> How do I remove this PG completely from osd.12?
>>
>> Can I simply delete
>>
>> 0.6_head
>> 0.6_TEMP
>>
>> in
>>
>> /var/lib/ceph/osd/ceph-12/current
>>
>> and Ceph will take the remaining copy and replicate it again, and all
>> is fine?
>>
>> Or would that be the start of the end? ^^;
>>
>> Thank you!
>>

--
Thank you!
HuangJun
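
A minimal sketch of the repair workflow suggested above, using the PG id
0.e6 from the log in this thread (note that rados list-inconsistent-obj
only exists on newer releases, roughly Jewel and later):

    # Identify the inconsistent PG(s)
    ceph health detail | grep inconsistent

    # List the objects the scrub flagged (Jewel and later)
    rados list-inconsistent-obj 0.e6 --format=json-pretty

    # Ask the primary OSD to repair the PG from its replicas
    ceph pg repair 0.e6

    # Re-check once the repair has finished
    ceph pg deep-scrub 0.e6
    ceph health detail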
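
For reference, the manual procedure in the linked blog post amounts to
roughly the following on the OSD holding the bad copy (a sketch: the object
name is a placeholder, the flush-journal step applies to FileStore OSDs of
that era, and the service commands depend on the init system in use):

    # Stop the affected OSD (osd.12 in this thread)
    systemctl stop ceph-osd@12

    # Flush the FileStore journal before touching files on disk
    ceph-osd -i 12 --flush-journal

    # Move the damaged object out of the PG directory
    # (<damaged-object> stands in for the file the scrub errors point at)
    mv /var/lib/ceph/osd/ceph-12/current/0.e6_head/<damaged-object> /root/backup/

    # Restart the OSD and let repair copy the object back from a good replica
    systemctl start ceph-osd@12
    ceph pg repair 0.e6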
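
As for deleting 0.e6_head and 0.e6_TEMP by hand: removing PG directories
with rm is risky, and with only two replicas (acting [12,7]) the copy on
osd.7 would be the sole remaining source of the data. The safer route is
ceph-objectstore-tool with the OSD stopped, for example (a sketch, assuming
the FileStore paths from this thread; newer releases may also require
--force on the remove op):

    systemctl stop ceph-osd@12

    # Keep an export of the PG so it can be re-imported if anything goes wrong
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --journal-path /var/lib/ceph/osd/ceph-12/journal \
        --pgid 0.e6 --op export --file /root/pg-0.e6.export

    # Remove the local copy; after restart the OSD should backfill the PG
    # from the surviving replica on osd.7
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --journal-path /var/lib/ceph/osd/ceph-12/journal \
        --pgid 0.e6 --op remove

    systemctl start ceph-osd@12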