Re: Ceph scrub logs: _scan_snaps no head for $object?

Hi Stefan, Mehmet,

Are these clusters that were upgraded from prior versions, or fresh 
luminous installs?

This message indicates that there is a stray clone object with no 
associated head or snapdir object.  That normally should never 
happen--it's presumably the result of a (hopefully old) bug.  The scrub 
process doesn't even clean them up, which maybe says something about how 
common it is/was...
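For anyone who wants to check their own cluster: the affected objects can be inspected directly with rados. This is just an illustrative sketch using the object name from Stefan's log excerpt below; the pool name ("rbd" here) is an assumption, so look up the pool id from the log line first and substitute your own.

```shell
# The "1:" prefix in the log line is the pool id; map it to a pool name.
ceph osd pool ls detail

# listsnaps prints the head object and all of its clones for an object.
# A stray clone of the kind described above would appear here without
# a corresponding "head" entry.
rados -p rbd listsnaps rbd_data.620652ae8944a.0000000000000126
```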

sage


On Sun, 24 Dec 2017, ceph@xxxxxxxxxx wrote:

> Hi Stefan,
> 
> Am 14. Dezember 2017 09:48:36 MEZ schrieb Stefan Kooman <stefan@xxxxxx>:
> >Hi,
> >
> >We see the following in the logs after we start a scrub for some osds:
> >
> >ceph-osd.2.log:2017-12-14 06:50:47.180344 7f0f47db2700  0
> >log_channel(cluster) log [DBG] : 1.2d8 scrub starts
> >ceph-osd.2.log:2017-12-14 06:50:47.180915 7f0f47db2700 -1 osd.2
> >pg_epoch: 11897 pg[1.2d8( v 11890'165209 (3221'163647,11890'165209]
> >local-lis/les=11733/11734 n=67 ec=132/132 lis/c 11733/11733 les/c/f
> >11734/11734/0 11733/11733/11733) [2,45,31] r=0 lpr=11733
> >crt=11890'165209 lcod 11890'165208 mlcod 11890'165208
> >active+clean+scrubbing] _scan_snaps no head for
> >1:1b518155:::rbd_data.620652ae8944a.0000000000000126:29 (have MIN)
> >ceph-osd.2.log:2017-12-14 06:50:47.180929 7f0f47db2700 -1 osd.2
> >pg_epoch: 11897 pg[1.2d8( v 11890'165209 (3221'163647,11890'165209]
> >local-lis/les=11733/11734 n=67 ec=132/132 lis/c 11733/11733 les/c/f
> >11734/11734/0 11733/11733/11733) [2,45,31] r=0 lpr=11733
> >crt=11890'165209 lcod 11890'165208 mlcod 11890'165208
> >active+clean+scrubbing] _scan_snaps no head for
> >1:1b518155:::rbd_data.620652ae8944a.0000000000000126:14 (have MIN)
> >ceph-osd.2.log:2017-12-14 06:50:47.180941 7f0f47db2700 -1 osd.2
> >pg_epoch: 11897 pg[1.2d8( v 11890'165209 (3221'163647,11890'165209]
> >local-lis/les=11733/11734 n=67 ec=132/132 lis/c 11733/11733 les/c/f
> >11734/11734/0 11733/11733/11733) [2,45,31] r=0 lpr=11733
> >crt=11890'165209 lcod 11890'165208 mlcod 11890'165208
> >active+clean+scrubbing] _scan_snaps no head for
> >1:1b518155:::rbd_data.620652ae8944a.0000000000000126:a (have MIN)
> >ceph-osd.2.log:2017-12-14 06:50:47.214198 7f0f43daa700  0
> >log_channel(cluster) log [DBG] : 1.2d8 scrub ok
> >
> >So finally it logs "scrub ok", but what does " _scan_snaps no head for
> >..." mean?
> 
> I also see these lines in our log files and wonder what they mean.
> 
> >Does this indicate a problem?
> 
> I don't think so, because we actually don't have any issues.
>  
> >
> >Ceph 12.2.2 with bluestore on lvm
> 
> We are using 12.2.2 with filestore on XFS.
> 
> - Mehmet
> >
> >Gr. Stefan
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 


