Re: scrub errors

It would help to know what version you are running, but to begin with,
could you post the output of the following?

$ sudo ceph pg 10.2a query
$ sudo rados list-inconsistent-obj 10.2a --format=json-pretty
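
For the version itself, assuming you can run commands on an admin/mon
node, either of these should show it ("ceph versions" only exists on
Luminous and later):

$ ceph -v
$ sudo ceph versions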

Also, have a read of
http://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-pg/
(adjust the URL for your release).
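
Once you can see exactly which objects/shards are inconsistent, the
usual next step described on that page is a repair, e.g.

$ sudo ceph pg repair 10.2a

but hold off on that until you understand what caused the
inconsistency; on older releases repair can simply copy the primary's
shard over the others, which is not what you want if the primary holds
the bad copy.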

On Tue, Mar 26, 2019 at 8:19 AM solarflow99 <solarflow99@xxxxxxxxx> wrote:
>
> I noticed my cluster has scrub errors, but the deep-scrub command doesn't show any errors.  Is there any way to tell what it will take to fix them?
>
>
>
> # ceph health detail
> HEALTH_ERR 1 pgs inconsistent; 47 scrub errors
> pg 10.2a is active+clean+inconsistent, acting [41,38,8]
> 47 scrub errors
>
> # zgrep 10.2a /var/log/ceph/ceph.log*
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 16:20:18.148299 osd.41 192.168.4.19:6809/30077 54885 : cluster [INF] 10.2a deep-scrub starts
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024040 osd.41 192.168.4.19:6809/30077 54886 : cluster [ERR] 10.2a shard 38 missing 10/24083d2a/ec50777d-cc99-46a8-8610-4492213f412f/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024049 osd.41 192.168.4.19:6809/30077 54887 : cluster [ERR] 10.2a shard 38 missing 10/ff183d2a/fce859b9-61a9-46cb-82f1-4b4af31c10db/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024074 osd.41 192.168.4.19:6809/30077 54888 : cluster [ERR] 10.2a shard 38 missing 10/34283d2a/4b7c96cb-c494-4637-8669-e42049bd0e1c/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024076 osd.41 192.168.4.19:6809/30077 54889 : cluster [ERR] 10.2a shard 38 missing 10/df283d2a/bbe61149-99f8-4b83-a42b-b208d18094a8/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024077 osd.41 192.168.4.19:6809/30077 54890 : cluster [ERR] 10.2a shard 38 missing 10/35383d2a/60e8ed9b-bd04-5a43-8917-6f29eba28a66:0014/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024078 osd.41 192.168.4.19:6809/30077 54891 : cluster [ERR] 10.2a shard 38 missing 10/d5383d2a/2bdeb186-561b-4151-b87e-fe7c2e217d41/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024080 osd.41 192.168.4.19:6809/30077 54892 : cluster [ERR] 10.2a shard 38 missing 10/a7383d2a/b6b9d21d-2f4f-4550-8928-52552349db7d/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024081 osd.41 192.168.4.19:6809/30077 54893 : cluster [ERR] 10.2a shard 38 missing 10/9c383d2a/5b552687-c709-4e87-b773-1cce5b262754/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024082 osd.41 192.168.4.19:6809/30077 54894 : cluster [ERR] 10.2a shard 38 missing 10/5d383d2a/cb1a2ea8-0872-4de9-8b93-5ea8d9d8e613/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024083 osd.41 192.168.4.19:6809/30077 54895 : cluster [ERR] 10.2a shard 38 missing 10/8f483d2a/74c7a2b9-f00a-4c89-afbd-c1b8439234ac/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024085 osd.41 192.168.4.19:6809/30077 54896 : cluster [ERR] 10.2a shard 38 missing 10/b1583d2a/b3f00768-82a2-4637-91d1-164f3a51312a/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024086 osd.41 192.168.4.19:6809/30077 54897 : cluster [ERR] 10.2a shard 38 missing 10/35583d2a/e347aff4-7b71-476e-863a-310e767e4160/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024088 osd.41 192.168.4.19:6809/30077 54898 : cluster [ERR] 10.2a shard 38 missing 10/69583d2a/0805d07a-49d1-44cb-87c7-3bd73a0ce692/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024122 osd.41 192.168.4.19:6809/30077 54899 : cluster [ERR] 10.2a shard 38 missing 10/1a583d2a/d65bcf6a-9457-46c3-8fbc-432ebbaad89a/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024123 osd.41 192.168.4.19:6809/30077 54900 : cluster [ERR] 10.2a shard 38 missing 10/6d583d2a/5592f7d6-a131-4eb2-a3dd-b2d96691dd7e/head
> /var/log/ceph/ceph.log-20190323.gz:2019-03-22 18:29:02.024124 osd.41 192.168.4.19:6809/30077 54901 : cluster [ERR] 10.2a shard 38 missing 10/f0683d2a/81897399-4cb0-59b3-b9ae-bf043a272137:0003/head
>
>
>
> # ceph pg deep-scrub 10.2a
> instructing pg 10.2a on osd.41 to deep-scrub
>
>
> # ceph -w | grep 10.2a
>
>



-- 
Cheers,
Brad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


