On Mon, Jun 3, 2019 at 9:02 AM Hervé Ballans <herve.ballans@xxxxxxxxxxxxx> wrote:
Hi all,
For information, I updated my Luminous cluster to the latest version 12.2.12 two weeks ago and, since then, I no longer encounter any inconsistent PG problems :)
You probably were affected by https://tracker.ceph.com/issues/22464
tl;dr: new kernel + low on RAM = read errors, for some reason. Fix was to retry reads ;)
Regards,
rv
Le 03/05/2019 à 11:54, Hervé Ballans a écrit :
Le 24/04/2019 à 10:06, Janne Johansson a écrit :
Den ons 24 apr. 2019 kl 08:46 skrev Zhenshi Zhou <deaderzzs@xxxxxxxxx>:
Hi,
I've been running a cluster for a period of time. I find the cluster usually runs into an unhealthy state recently.
With 'ceph health detail', one or two PGs are inconsistent. What's more, the PGs in a wrong state each day are not placed on the same disk, so I don't think it's a disk problem.
The cluster is using version 12.2.5. Any idea about this strange issue?
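For anyone triaging this, a minimal diagnostic sketch using standard Ceph commands (the PG id `2.1a` below is hypothetical; substitute the PG id reported by `ceph health detail`, and note that `ceph pg repair` should only be run once you trust a good replica exists):

```shell
# Show cluster health problems, including which PGs are inconsistent
ceph health detail

# List which objects/shards in the PG failed scrub (available since Luminous)
rados list-inconsistent-obj 2.1a --format=json-pretty

# Ask the primary OSD to repair the PG from its replicas
ceph pg repair 2.1a
```

If the inconsistencies keep reappearing on different disks after repair, that points away from failing media and toward a software-level read problem, as discussed above.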
There were lots of fixes in the releases around that version; see the later release notes for the 12.2.x series.
Hi,
I encounter exactly the same problem on my Ceph Luminous cluster, even though I am on version 12.2.10! (And this was already the case with previous Luminous releases.)
And unfortunately, I don't see any mention of this issue in the changelog of 12.2.12 :(
Has anyone ever looked into this issue?
Regards,
rv
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com