Re: Ceph pg repair clone_missing?

Awesome! Sorry it took so long.

On Thu, Oct 10, 2019 at 12:44 AM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
>
>
> Brad, many thanks!!! My cluster is finally HEALTH_OK after 1.5 years or so!
> :)
>
>
> -----Original Message-----
> Subject: Re: Ceph pg repair clone_missing?
>
> On Fri, Oct 4, 2019 at 6:09 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>
> wrote:
> >
> >  >
> >  >Try something like the following on each OSD that holds a copy of
> >  >rbd_data.1f114174b0dc51.0000000000000974 and see what output you get.
> >  >Note that you can drop the bluestore flag if they are not bluestore
> >  >osds, and you will need the osd stopped at the time (set noout; see
> >  >the example below the command). Also note, snapids are displayed in
> >  >hexadecimal in the output (but then '4' is '4', so not a big issue
> >  >here).
> >  >
> >  >$ ceph-objectstore-tool --type bluestore --data-path \
> >  >      /var/lib/ceph/osd/ceph-XX/ --pgid 17.36 --op list \
> >  >      rbd_data.1f114174b0dc51.0000000000000974
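> >  >
> >  >(If the OSDs are managed by systemd, which is an assumption here,
> >  >taking one down for this usually looks something like:
> >  >
> >  >$ ceph osd set noout
> >  >$ systemctl stop ceph-osd@XX
> >  >
> >  >and once you are done, start the osd again and 'ceph osd unset noout'.)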
> >
> > I got these results
> >
> > osd.7
> > Error getting attr on : 17.36_head,#-19:6c000000:::scrub_17.36:head#, (61) No data available
> > ["17.36",{"oid":"rbd_data.1f114174b0dc51.0000000000000974","key":"","snapid":63,"hash":1357874486,"max":0,"pool":17,"namespace":"","max":0}]
> > ["17.36",{"oid":"rbd_data.1f114174b0dc51.0000000000000974","key":"","snapid":-2,"hash":1357874486,"max":0,"pool":17,"namespace":"","max":0}]
>
> Ah, so of course the problem is the snapshot is missing. You may need
> to try something like the following on each of those osds.
>
> $ ceph-objectstore-tool --type bluestore --data-path \
>       /var/lib/ceph/osd/ceph-XX/ --pgid 17.36 \
>       '{"oid":"rbd_data.1f114174b0dc51.0000000000000974","key":"","snapid":-2,"hash":1357874486,"max":0,"pool":17,"namespace":"","max":0}' \
>       remove-clone-metadata 4
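>
> Once that has been done on each of those osds and they are back up,
> kicking off something like
>
> $ ceph pg deep-scrub 17.36
> $ ceph pg repair 17.36
>
> should get the pg re-checked and, with luck, back to clean.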
>
> >
> > osd.12
> > ["17.36",{"oid":"rbd_data.1f114174b0dc51.0000000000000974","key":"","s
> > na
> > pid":63,"hash":1357874486,"max":0,"pool":17,"namespace":"","max":0}]
> > ["17.36",{"oid":"rbd_data.1f114174b0dc51.0000000000000974","key":"","s
> > na
> > pid":-2,"hash":1357874486,"max":0,"pool":17,"namespace":"","max":0}]
> >
> > osd.29
> > ["17.36",{"oid":"rbd_data.1f114174b0dc51.0000000000000974","key":"","s
> > na
> > pid":63,"hash":1357874486,"max":0,"pool":17,"namespace":"","max":0}]
> > ["17.36",{"oid":"rbd_data.1f114174b0dc51.0000000000000974","key":"","s
> > na
> > pid":-2,"hash":1357874486,"max":0,"pool":17,"namespace":"","max":0}]
> >
> >
> >  >
> >  >The likely issue here is the primary believes snapshot 4 is gone but
> >  >there is still data and/or metadata on one of the replicas which is
> >  >confusing the issue. If that is the case you can use the
> >  >ceph-objectstore-tool to delete the relevant snapshot(s).
>
>
>
> --
> Cheers,
> Brad
>
>
>


-- 
Cheers,
Brad



