Re: fixable inconsistencies but more appears

Well, it seems it was memory.

I have 3 OSDs per host with 8 GB of RAM and block.db on SSD.

Setting bluestore_cache_size_ssd=1G seems to have fixed the problem. No new inconsistencies.
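For the archives, a minimal sketch of how that can look in ceph.conf, assuming the value is given in bytes (1 GiB = 1073741824) and the OSDs are restarted afterwards to pick it up:

    [osd]
    # cap the per-OSD BlueStore cache for SSD-classified OSDs
    bluestore_cache_size_ssd = 1073741824

If these OSDs use the SSD cache setting, the roughly 3 GB per-OSD default of this release would let three OSDs try to cache about 9 GB on an 8 GB host; capping them at 1 GB each leaves headroom for the OSD processes themselves.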



On 21/08/18 16:09, Paul Emmerich wrote:
Are you running tight on memory?

Paul

2018-08-21 20:37 GMT+02:00 Alfredo Daniel Rezinovsky
<alfredo.rezinovsky@xxxxxxxxxxxxxxxxxxxxxxxx>:
My cluster suddenly shows many inconsistent PGs.

with log entries like these:

2018-08-21 15:29:39.065613 osd.2 osd.2 10.64.1.1:6801/1310438 146 : cluster
[ERR] 2.61 shard 5: soid 2:864a5b37:::1000070510e.00000004:head candidate
had a read error
2018-08-21 15:31:38.542447 osd.2 osd.2 10.64.1.1:6801/1310438 147 : cluster
[ERR] 2.61 shard 5: soid 2:86783f28:::10000241f7f.00000000:head candidate
had a read error

All errors eventually fix with "ceph pg repair", but new inconsistencies
keep appearing.
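For reference, the sequence I use is roughly this (the PG id 2.61 is the one
from the log above):

    ceph health detail                                       # list the inconsistent PGs
    rados list-inconsistent-obj 2.61 --format=json-pretty    # show which shard reported the read error
    ceph pg repair 2.61                                      # rewrite the bad copy from a healthy replica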

SMART and kernel logs show no HDD problems.
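The checks were roughly of this kind (/dev/sdX is a placeholder for each OSD's
data disk):

    smartctl -a /dev/sdX | grep -i -E 'reallocated|pending|uncorrect'   # disk health counters
    dmesg -T | grep -i -E 'ata|blk_update_request|i/o error'            # kernel-reported I/O errors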

I have BlueStore OSDs on HDD with the journal on an SSD partition.

--
Alfredo Daniel Rezinovsky
Director de Tecnologías de Información y Comunicaciones
Facultad de Ingeniería - Universidad Nacional de Cuyo




--
Alfredo Daniel Rezinovsky
Director de Tecnologías de Información y Comunicaciones
Facultad de Ingeniería - Universidad Nacional de Cuyo

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



