If it's bluestore, this is pretty likely to be a bluestore bug. If you are
interested in experimenting with bluestore, you probably want to watch
developments on the master branch; it's undergoing a bunch of changes right
now.
-Sam

On Thu, Sep 1, 2016 at 1:54 PM, Виталий Филиппов <vitalif@xxxxxxxxxx> wrote:
> Hi! I'm playing with a test setup of Ceph Jewel with bluestore and cephfs
> over an erasure-coded pool, with a replicated pool as a cache tier. After
> writing a number of small files to cephfs, I begin seeing the following
> error messages while data migrates from the cache to the EC pool:
>
> 2016-09-01 10:19:27.364710 7f37c1a09700 -1 osd.0 pg_epoch: 329 pg[6.2cs0( v
> 329'388 (0'0,329'388] local-les=315 n=326 ec=279 les/c/f 315/315/0
> 314/314/314) [0,1,2] r=0 lpr=314 crt=329'387 lcod 329'387 mlcod 329'387
> active+clean] process_copy_chunk data digest 0x648fd38c != source 0x40203b61
> 2016-09-01 10:19:27.364742 7f37c1a09700 -1 log_channel(cluster) log [ERR] :
> 6.2cs0 copy from 8:372dc315:::200.0000002b:head to
> 6:372dc315:::200.0000002b:head data digest 0x648fd38c != source 0x40203b61
>
> These messages then repeat indefinitely, at some interval, for the same
> set of objects. I'm not sure: does this mean some objects are corrupted on
> the OSDs? (How do I check?) Is this a bug at all?
>
> P.S.: I've also reported this as an issue:
> http://tracker.ceph.com/issues/17194 (not sure if it was right to do :))
>
> --
> With best regards,
> Vitaliy Filippov
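
On the "how do I check?" question: one way to tell whether the objects are
actually damaged on disk is to deep-scrub the affected pg and then inspect
what the scrub flagged. A minimal sketch using standard Jewel-era CLI
commands, assuming the pg from the log above (6.2cs0 is shard 0 of pg 6.2c)
and a cache-tier pool named "cache" -- that pool name is an assumption,
substitute your own:

  # Ask the OSDs to recompute and cross-check on-disk checksums for the pg:
  ceph pg deep-scrub 6.2c

  # After the scrub completes, see whether anything was flagged
  # (list-inconsistent-obj is available starting with Jewel):
  ceph health detail
  rados list-inconsistent-obj 6.2c --format=json-pretty

  # Reading the object back through librados is another quick sanity check;
  # a failed read here also points at on-disk corruption
  # ("cache" is the assumed cache-tier pool name):
  rados -p cache get 200.0000002b /tmp/200.0000002b

If deep scrub comes back clean on both the cache and the EC pool, that
points more toward a bug in the copy/digest path than toward corrupted
objects on the OSDs.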