Re: Ceph scrub logs: _scan_snaps no head for $object?

Sage wrote (Tue, 2 Jan 2018 17:57:32 +0000 (UTC)):
Hi Stefan, Mehmet,


Hi Sage,
Sorry for the *extremely late* response!

Are these clusters that were upgraded from prior versions, or fresh
luminous installs?

My cluster was initially installed with Jewel (10.2.1), has seen some minor updates, and was finally upgraded from Jewel (10.2.10) to Luminous (12.2.1).

Currently installed is:

- ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)

I had a look in my logfiles and still see log entries like:

... .. .
2018-02-23 11:23:34.247878 7feaa2a2d700 -1 osd.59 pg_epoch: 36269 pg[0.346( v 36269'30160204 (36269'30158634,36269'30160204] local-lis/les=36253/36254 n=12956 ec=141/141 lis/c 36253/36253 les/c/f 36254/36264/0 36253/36253/36190) [4,59,23] r=1 lpr=36253 luod=0'0 crt=36269'30160204 lcod 36269'30160203 active] _scan_snaps no head for 0:62e347cd:::rbd_data.63efee238e1f29.000000000000038c:48 (have MIN)
... .. .

Do you need further information?
- Mehmet
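
For anyone seeing the same message: the trailing ":48" in the object name above is the snap id (in hex) of the clone the scrubber is complaining about. A quick, read-only way to check what RADOS currently knows about that object is sketched below; the pool name "rbd" is only an assumption for pool id 0, so substitute your own (see "ceph osd lspools"):

# Which PG/OSDs map the object (should match pg 0.346 / osd.59 from the log)?
ceph osd map rbd rbd_data.63efee238e1f29.000000000000038c

# Does a head object exist, and which clones does RADOS list for it?
rados -p rbd listsnaps rbd_data.63efee238e1f29.000000000000038c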


This message indicates that there is a stray clone object with no
associated head or snapdir object.  That normally should never
happen--it's presumably the result of a (hopefully old) bug.  The scrub
process doesn't even clean them up, which maybe says something about how
common it is/was...

sage
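
If you want to look at such a stray clone directly on the OSD, a minimal sketch follows, using the OSD id (59), PG (0.346) and object name from the log line above. It assumes the default data path and systemd units, the OSD has to be stopped first, and filestore OSDs may additionally need --journal-path. This only lists the object; it does not change anything:

systemctl stop ceph-osd@59

# List matching objects in that PG; a clone with no head should show up
# with its snap id but without a corresponding head entry.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-59 \
    --pgid 0.346 --op list rbd_data.63efee238e1f29.000000000000038c

systemctl start ceph-osd@59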


On Sun, 24 Dec 2017, ceph@xxxxxxxxxx wrote:

> Hi Stefan,
>
> On 14 December 2017 09:48:36 CET, Stefan Kooman <stefan@xxxxxx> wrote:
> >Hi,
> >
> >We see the following in the logs after we start a scrub for some osds:
> >
> >ceph-osd.2.log:2017-12-14 06:50:47.180344 7f0f47db2700  0
> >log_channel(cluster) log [DBG] : 1.2d8 scrub starts
> >ceph-osd.2.log:2017-12-14 06:50:47.180915 7f0f47db2700 -1 osd.2
> >pg_epoch: 11897 pg[1.2d8( v 11890'165209 (3221'163647,11890'165209]
> >local-lis/les=11733/11734 n=67 ec=132/132 lis/c 11733/11733 les/c/f
> >11734/11734/0 11733/11733/11733) [2,45,31] r=0 lpr=11733
> >crt=11890'165209 lcod 11890'165208 mlcod 11890'165208
> >active+clean+scrubbing] _scan_snaps no head for
> >1:1b518155:::rbd_data.620652ae8944a.0000000000000126:29 (have MIN)
> >ceph-osd.2.log:2017-12-14 06:50:47.180929 7f0f47db2700 -1 osd.2
> >pg_epoch: 11897 pg[1.2d8( v 11890'165209 (3221'163647,11890'165209]
> >local-lis/les=11733/11734 n=67 ec=132/132 lis/c 11733/11733 les/c/f
> >11734/11734/0 11733/11733/11733) [2,45,31] r=0 lpr=11733
> >crt=11890'165209 lcod 11890'165208 mlcod 11890'165208
> >active+clean+scrubbing] _scan_snaps no head for
> >1:1b518155:::rbd_data.620652ae8944a.0000000000000126:14 (have MIN)
> >ceph-osd.2.log:2017-12-14 06:50:47.180941 7f0f47db2700 -1 osd.2
> >pg_epoch: 11897 pg[1.2d8( v 11890'165209 (3221'163647,11890'165209]
> >local-lis/les=11733/11734 n=67 ec=132/132 lis/c 11733/11733 les/c/f
> >11734/11734/0 11733/11733/11733) [2,45,31] r=0 lpr=11733
> >crt=11890'165209 lcod 11890'165208 mlcod 11890'165208
> >active+clean+scrubbing] _scan_snaps no head for
> >1:1b518155:::rbd_data.620652ae8944a.0000000000000126:a (have MIN)
> >ceph-osd.2.log:2017-12-14 06:50:47.214198 7f0f43daa700  0
> >log_channel(cluster) log [DBG] : 1.2d8 scrub ok
> >
> >So finally it logs "scrub ok", but what does " _scan_snaps no head for
> >..." mean?
>
> I also see these lines in our logfiles and wonder what this means.
>
> >Does this indicate a problem?
>
> I don't think so, because we currently don't have any issues.
>
> >
> >Ceph 12.2.2 with bluestore on lvm
>
> We are using 12.2.2 with filestore on XFS.
>
> - Mehmet
> >
> >Gr. Stefan
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com