Re: btrfs RAID 5?

On Tue, Jan 5, 2021 at 11:49 AM Richard Shaw <hobbes1069@xxxxxxxxx> wrote:
>
> On Tue, Jan 5, 2021 at 12:31 PM Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
>>
>> On Tue, Jan 5, 2021 at 6:24 AM Richard Shaw <hobbes1069@xxxxxxxxx> wrote:
>> > Ok, so not so bad. The main reason I'm considering raid5 is that I have one 4TB drive right now, if I add 2 more with raid one, I'm only going to get 2TB. I know it's kind of a duh, you're mirroring and right now I have no redundancy, but this is for home use and $$$/TB is important and I can't fit any more drives in this case :)
>>
>> Adding 2 drives at a time to Btrfs raid1 is trivial, you just add each:
>> btrfs dev add /dev/3 /mnt
>> btrfs dev add /dev/4 /mnt
>>
>> Implies both mkfs and resize. You don't need to balance it.
>
>
> Ok, but my current drive is ext4 formatted. I could convert it in place but I just did that with my home drive on my desktop and it does take a while to create and remove the ext2_saved image. Wouldn't it be better to create a 2 drive raid1 array, copy the files over, update fstab, and then add the original drive to the raid1 array?

Yes, exactly that. I just meant that when you add the original drive,
you don't run mkfs.btrfs on it first. You just add it; mkfs is implied
by 'btrfs device add'.

Convert is stable, gets attention when bugs are found, and seems
reliable. But there could still be edge cases, mainly because there
aren't many aged ext4 file systems being converted.
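The copy-then-add sequence discussed above can be sketched as follows. This is only an illustration: the device names (/dev/sdb for the old ext4 drive, /dev/sdc and /dev/sdd for the new drives) and mount points are placeholders, not anything from the thread.

```shell
# Create a two-device btrfs raid1 filesystem on the new drives
# (device names are placeholders -- adjust for your system)
mkfs.btrfs -m raid1 -d raid1 /dev/sdc /dev/sdd
mount /dev/sdc /mnt/new

# Copy the data over from the old ext4 drive, preserving
# hard links, ACLs, and xattrs
rsync -aHAX /mnt/old/ /mnt/new/

# Update /etc/fstab to mount the new filesystem, then add the old
# drive to it -- no mkfs.btrfs on it first, since 'btrfs device add'
# formats and resizes implicitly
btrfs device add /dev/sdb /mnt/new
```

This sidesteps btrfs-convert entirely, at the cost of needing the two new drives online before the old one is wiped.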

>
> As far as balancing, I wasn't sure it was helpful but was thinking of just spreading the data around :)

For the odd-drive case, where you're adding one drive to an existing
Btrfs raid1, it's best to do a balance. It won't spread the data around
equally; it favors the drive(s) with the most free space. Once all the
drives have the same amount of free space, the allocator takes turns
among them to try to maintain equal free space.
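A minimal sketch of that balance step, assuming the filesystem is mounted at /mnt (the mount point is a placeholder):

```shell
# Relocate existing chunks so they spread across all devices;
# new allocations already favor the device with the most free space
btrfs balance start /mnt

# Inspect per-device allocation and unallocated space afterwards
btrfs filesystem usage /mnt
```

A full balance rewrites every chunk and can take a long time on large filesystems; it can be limited with balance filters if only partial rebalancing is wanted.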

-- 
Chris Murphy
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx


