Re: BlueStore checksums all data written to disk! So, can we use two copies in production?

Short answer: no and no.

Long:

1. Having size = 2 is safe *if you also keep min_size at 2*. But
that's not highly available, so you usually don't want this. min_size =
1 (or reducing min_size on an EC pool) is basically a guarantee that
you will lose at least some data/writes in the long run.
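
For reference, a minimal sketch of setting that on a replicated pool
(the pool name "mypool" is just a placeholder):

    # keep two copies and only serve I/O while both copies are available
    ceph osd pool set mypool size 2
    ceph osd pool set mypool min_size 2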

2. Yes, it's no longer as important as it used to be. With BlueStore
we typically increase the deep-scrub interval to a month instead of
the default week.
But a properly tuned scrubbing configuration has a negligible overhead,
and it can give you some confidence about the integrity of data that
is rarely accessed.
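
As a rough sketch of what "increase the interval" means in config terms
(the value is in seconds; ~30 days shown here, tune it to taste):

    [osd]
    # deep scrub roughly once a month instead of the default week
    osd deep scrub interval = 2592000

On Mimic and later you can also set it at runtime with
"ceph config set osd osd_deep_scrub_interval 2592000".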


Paul

On Sun, 23 Sep 2018 at 08:49, jython.li <zijian1012@xxxxxxx> wrote:
>
> "when using BlueStore, Ceph can ensure data integrity by conducting a cyclical redundancy check (CRC) on write operations; then, store the CRC value in the block database. On read operations, Ceph can retrieve the CRC value from the block database and compare it with the generated CRC of the retrieved data to ensure data integrity instantly"
> from https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/architecture_guide/#ensuring_data_integrity
>
> As mentioned above, BlueStore calculates, stores, and verifies checksums for all data and metadata it stores. My questions are:
> 1. With BlueStore, is it safe enough to use two copies in a production environment?
>
> With FileStore there is no CRC, so using two copies can lead to a split-brain situation: when Ceph finds that the data between the two copies is inconsistent, it does not know which copy is correct.
> But in BlueStore, all written data has a CRC, so even if the two copies are inconsistent, Ceph will know which copy is correct (using the CRC value saved in the block database).
>
>
> 2. Similarly, can we turn off deep-scrub at this time?



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



