upgrading to 12.2.6 damages cephfs (crc errors)


Hi,

Following the reports on ceph-users about damaged cephfs after
upgrading to 12.2.6, I spun up a one-node cluster to try the upgrade
myself. I started with two OSDs on 12.2.5 and wrote some data.
Then I restarted the OSDs one by one while continuing to write to the
cephfs mountpoint.
Finally I restarted the (single) MDS, and the cephfs is indeed
damaged, with CRC errors:

2018-07-13 12:38:55.261379 osd.1 osd.1 137.138.62.86:6805/35320 2 :
cluster [ERR] 2.15 full-object read crc 0xed77af7c != expected
0x1a1d319d on 2:aa448500:::500.00000000:head
2018-07-13 12:38:55.285994 osd.0 osd.0 137.138.62.86:6801/34755 2 :
cluster [ERR] 2.13 full-object read crc 0xa73a97ef != expected
0x3e6fdb4a on 2:c91d4a1d:::mds0_inotable:head
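For anyone wanting to retry this, the steps above can be sketched
roughly as follows. This is only a sketch of my procedure, not the
exact commands I ran -- the mountpoint, OSD ids, and systemd unit
names are assumptions about a single-node test setup:

```shell
# Assumed: one-node Luminous cluster, cephfs mounted at /mnt/cephfs,
# daemons managed by systemd. Paths and ids are placeholders.

# 1) On 12.2.5: keep writing data to the mounted cephfs in the background.
dd if=/dev/urandom of=/mnt/cephfs/testfile bs=4M count=256 &

# 2) Upgrade the packages to 12.2.6, then restart the OSDs one by one
#    while the writes continue; wait for the cluster to settle between
#    restarts (check with `ceph -s`).
systemctl restart ceph-osd@0
ceph -s
systemctl restart ceph-osd@1
ceph -s

# 3) Restart the (single) MDS and check cluster health for the
#    full-object read CRC errors.
systemctl restart ceph-mds@$(hostname -s)
ceph health detail
```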

I think it goes without saying that nobody should upgrade a cephfs to
12.2.6 until this is understood.

-- Dan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


