Re: btrfs RAID 5?

On Tue, Jan 5, 2021 at 12:31 PM Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
> On Tue, Jan 5, 2021 at 6:24 AM Richard Shaw <hobbes1069@xxxxxxxxx> wrote:
>> Ok, so not so bad. The main reason I'm considering raid5 is that I have one 4TB drive right now; if I add 2 more with raid1, I'm only going to get 2TB. I know it's kind of a duh (you're mirroring, and right now I have no redundancy), but this is for home use, $$$/TB is important, and I can't fit any more drives in this case :)

> Adding 2 drives at a time to Btrfs raid1 is trivial; you just add each:
> btrfs dev add /dev/3 /mnt
> btrfs dev add /dev/4 /mnt
>
> Adding implies both mkfs and resize. You don't need to balance it.

Ok, but my current drive is ext4 formatted. I could convert it in place, but I just did that with my home drive on my desktop, and it does take a while to create and remove the ext2_saved image. Wouldn't it be better to create a 2-drive raid1 array, copy the files over, update fstab, and then add the original drive to the raid1 array?
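
Something like this is what I have in mind, just as a sketch (device names and mount points are placeholders; I'm assuming the two new drives are /dev/sdb and /dev/sdc and the old ext4 partition is /dev/sda1):

mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc   # 2-device raid1 fs on the new drives
mount /dev/sdb /mnt/new
rsync -aHAX /mnt/old/ /mnt/new/                  # copy everything, then point fstab at the new fs UUID
btrfs device add -f /dev/sda1 /mnt/new           # -f overwrites the old ext4 signature

After that the old drive is just a third member of the raid1, which is where the balance question comes in.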

As far as balancing goes, I wasn't sure it was helpful, but I was thinking of just spreading the data around :)
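
From what I can tell, spreading the existing data across all three drives would just be an unfiltered balance, e.g.:

btrfs balance start --full-balance /mnt

(--full-balance is the explicit spelling newer btrfs-progs asks for before it will rewrite every block group; per the above, it's optional and only affects utilization.)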

  
> For btrfs raid5 that is also true; it'll just make new block groups
> that have more stripes. But depending on the sizes of all the drives,
> it'll probably be more efficient utilization of space to rebalance.
> Note that if you add two drives that are bigger than the others, then
> once the others fill up, you'll get block groups made of two chunks on
> the two drives with remaining space. That's effectively raid1
> utilization, because raid5 on two devices is 1 data strip and 1 parity
> strip.

In my case I'm going to use identical drives. 
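
If I ever want to double-check how the chunks actually got laid out, it sounds like these would show the overall and per-device allocation, respectively (mount point assumed):

btrfs filesystem usage /mnt
btrfs device usage /mnt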


> Unique to Btrfs, you can start raid1 today, add drives, and move to
> raid5 later. It's just a balance with a conversion filter.

That's pretty cool. 2TB of additional space will be plenty for now.
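
If I'm reading the balance filter docs right, that later conversion would be something like (mount point assumed, and leaving metadata alone as raid1):

btrfs balance start -dconvert=raid5 /mnt   # rewrites data block groups as raid5; unfiltered metadata stays raid1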

 
>> Obviously if I go raid5 I won't have this option unless I can temporarily house my data on a separate drive.
>>
>> Looking at the link, it looks like I'm OK?
>>
>> # smartctl -l scterc /dev/sda1
>> smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.9.16-200.fc33.x86_64] (local build)
>> Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
>>
>> SCT Error Recovery Control:
>>            Read:    100 (10.0 seconds)
>>           Write:    100 (10.0 seconds)

> Yeah, if that's the default, it's fine. The kernel's command timer is
> 30s, so the drive will give up on a read/write error before the kernel
> thinks it's MIA.

Good deal. So raid1 for now; hopefully raid5/6 support will be better if I need to convert later.
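
For the archives: on drives where ERC isn't enabled by default, it looks like it can be set (values are in deciseconds, and the setting doesn't persist across power cycles) with:

smartctl -l scterc,70,70 /dev/sdX   # 7.0s read/write error recovery; sdX is a placeholder

and the kernel's 30s command timer Chris mentioned is visible at /sys/block/sdX/device/timeout.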

Thanks,
Richard