On Wednesday, 4 March 2020 at 18:14:31 CET, Chad William Seys wrote:
> > Maybe.... I've marked the object as "lost" and removed the failed
> > OSD.
> >
> > The cluster now is healthy, but I'd like to understand whether it's
> > likely to bother me again in the future.
>
> Yeah, I don't know.
>
> Within the last month there have been 4 separate instances of people
> mentioning "unfound" objects in their clusters.
>
> I'm deferring any OSD drive upgrades for as long as possible. I ran
> into the problem while "draining" an OSD.
>
> "Draining" means removing the OSD from the crush map, waiting for all
> PGs to be stored elsewhere, then replacing the drive with a larger
> one. Under those circumstances there should be no unfound PGs.
>
> BTW, are you using cache tiering? The bug report mentions this, but
> some people who hit it did not have it enabled.
>
> Chad.

No, I don't have cache tiering enabled. I also found it strange that
the PG was marked unfound: the cluster was perfectly healthy before the
kernel panic, and a single OSD failure shouldn't cause much hassle.

*Simone Lazzaris*
*Qcom S.p.A. a Socio Unico*
Via Roggia Vignola, 9 | 24047 Treviglio (BG)
T +39 0363 1970352 | M +39 3938111237
simone.lazzaris@xxxxxxx[1] | www.qcom.it[2]
*LinkedIn*[3] | *Facebook*[4]

--------
[1] mailto:simone.lazzaris@xxxxxxx
[2] https://www.qcom.it
[3] https://www.linkedin.com/company/qcom-spa
[4] http://www.facebook.com/qcomspa
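
For readers following the thread: the "marked the object as 'lost'"
step in the quoted text corresponds to a single Ceph CLI call. A
minimal sketch, with $PGID standing in for the affected placement
group (the actual pg id is not given in this thread):

    # Revert each unfound object to the last known-good copy; using
    # "delete" instead forgets the objects entirely. Either clears the
    # unfound state blocking recovery.
    ceph pg $PGID mark_unfound_lost revert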
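Similarly, the "draining" procedure Chad describes maps roughly onto
the standard CLI as below. This is a sketch of one common variant
(marking the OSD out rather than editing the crush map directly),
assuming a Luminous-or-later cluster with $ID as a placeholder osd id;
it is not necessarily the exact sequence Chad ran:

    # Stop placing data on the OSD; its PGs begin migrating elsewhere.
    ceph osd out $ID
    # Wait until recovery finishes and removal would cause no data loss.
    while ! ceph osd safe-to-destroy osd.$ID; do sleep 60; done
    # Remove the OSD from the crush map, auth database, and osd map,
    # then physically swap in the larger drive.
    ceph osd purge $ID --yes-i-really-mean-it

If every PG reaches active+clean before the purge, no object should
ever become unfound, which is what makes the reports in this thread
surprising.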