CephFS Snaptrim stuck?


 



Dear Ceph Users,

We are experiencing strange behaviour on Ceph v15.2.9: a set of PGs seems to be
stuck in the active+clean+snaptrim state (for almost a day now).

Usually snaptrim is quite fast (done in a few minutes), but now the OSD logs
show slowly increasing trimq numbers, with entries like the following appearing
constantly (every few seconds):

2021-05-16T13:58:28.795+0000 7fc668f2e700 -1 osd.9 pg_epoch: 91137 pg[2.39(
v 91137'1584600 (91059'1581379,91137'1584600] local-lis/les=91119/91120
n=115714 ec=165/115 lis/c=91119/91119 les/c/f=91120/91120/0 sis=91119)
[9,10,3] r=0 lpr=91119 luod=91137'1584598 crt=91137'1584600 lcod
91137'1584597 mlcod 91137'1584597 active+clean+snaptrim trimq=82
ps=[75fe~1,7600~1,868a~1,8caa~1,a30c~1,a422~1,a65e~1,c0cf~1,c569~1]]
removing snap head

We tried restarting the OSDs, but it made no difference. Otherwise the cluster
reports itself as healthy.
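For reference, this is roughly how we have been checking which PGs are affected
and how the snaptrim queue evolves (a sketch using standard Ceph CLI commands;
the grep pattern is just an illustration, adjust to your pool IDs):

```shell
# List all PGs currently in the snaptrim (or snaptrim_wait) state.
ceph pg ls snaptrim
ceph pg ls snaptrim_wait

# Watch the trimq for a specific stuck PG (here 2.39, from the log above).
ceph pg 2.39 query | grep -i snap

# Current snaptrim-related OSD settings, which throttle how fast
# trimming proceeds (osd_snap_trim_sleep adds a delay between trims).
ceph config get osd osd_snap_trim_sleep
ceph config get osd osd_pg_max_concurrent_snap_trims
```

These commands only inspect state and settings; we have not yet tried changing
the throttles, since we are unsure whether throttling or something else is the
cause here.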

If anyone has ideas about what might be causing this and how to get these PGs
out of the snaptrim state, it would be very much appreciated.

Kind regards,

András Sali
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



