Re: Raid 1 vs Raid 5 suggestion

Hi Luca and Wol,

There is currently a problem with the combination of RAID6 and dm-integrity: if many read failures happen at the same time, md can mark the member device as faulty.

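        /* The check below is in drivers/md/raid5.c
         * (raid5_end_read_request, if I am reading the code right):
         * once a member accumulates more read errors than
         * conf->max_nr_stripes, md fails the device outright
         * instead of retrying. */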
        } else if (atomic_read(&rdev->read_errors)
                   > conf->max_nr_stripes)
                printk(KERN_WARNING
                       "md/raid:%s: Too many read errors, failing device %s.\n",
                       mdname(conf->mddev), bdn);

So far, though, we have only been able to reproduce this on virtual machines, and only after corrupting about 500 MB of data on the disk. In the real world I think it would be hard to trigger.

@Luca, this combination can fix silent data corruption automatically. If the data on some sectors is corrupted, the dm-integrity checksum turns the corruption into a read error, and RAID6 rebuilds the data from parity. I pasted a link a few days ago; you can check it.

Regards

Xiao


On 07/24/2019 04:34 AM, Luca Lazzarin wrote:
I do not know it. Could you please link me to some info?

Why do you suggest it?
I mean, what benefits would it give me?

Thanks :-)

On 23/07/19 22:30, Wol's lists wrote:
On 23/07/2019 21:16, Luca Lazzarin wrote:
Thank you all for your suggestions.

I'll probably choose RAID6.

Any chance of putting it on top of dm-integrity? If you do, can you post about it (seeing as it's new)? It sounds like something all RAID levels should have.

Cheers,
Wol





