Re: Copying without crc check when peering may lack reliability

hi,

My config is the same as yours:

-> # ceph daemon osd.0 config show | grep bluestore | grep crc
    "bluestore_csum_type": "crc32c",

I think the problem is that there is no CRC check on the recovery IO
path.
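
Until recovery verifies checksums itself, one workaround sketch is to
deep-scrub just the PGs you are about to migrate, rather than the whole
cluster (the PG id 1.0 below is a placeholder, not from this thread):

```shell
# Deep scrub reads every object in the PG and verifies its stored
# checksum. Replace 1.0 with the PG you are about to migrate.
ceph pg deep-scrub 1.0

# Once the scrub completes, list any objects whose replicas or
# checksums disagree, so they can be dealt with before migration.
rados list-inconsistent-obj 1.0 --format=json-pretty

# Repair the PG if inconsistencies were found.
ceph pg repair 1.0
```

This is still the deep-scrub cost the thread mentions, but limited to
the PGs being moved instead of the entire pool.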

Thanks,
Poi
Xiaoguang Wang <xiaoguang.wang@xxxxxxxxxxxx> wrote on Mon, Aug 27, 2018 at 10:21 AM:
>
> hi,
>
> Could you please show your ceph bluestore config "bluestore_csum_type"?
>
> In my cluster,
>
> [lege@root build]$  ceph daemon osd.0  config show | grep bluestore |
> grep crc
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
>      "bluestore_csum_type": "crc32c",
>
> Regards,
> Xiaoguang Wang
>
> On 08/23/2018 11:38 PM, poi wrote:
> > Hello!
> >
> > Recently, we migrated data from one crush root to another, but
> > afterwards we found that some objects were corrupt and that their
> > copies on other OSDs were corrupt as well.
> >
> > Eventually, we found that for a given PG, the migration uses a
> > single OSD's data to generate the three new copies, and does not
> > check the CRC before migrating; it assumes the source data is
> > always correct, but nobody can actually guarantee that. We tried
> > both FileStore and BlueStore, and the results were the same.
> > Copying from one PG without a CRC check may lack reliability.
> >
> > Is there any way to ensure the correctness of the data during
> > migration? We could run a deep scrub before migrating, but the cost
> > is too high. I think adding a CRC check on objects before copying
> > them during peering might work.
> >
> > Regards
> >
> > Poi
>
>
