On 9/14/20 3:31 AM, Daniel Pocock wrote:
Given the plans to make btrfs the default, I'll share some of my own recent experiences; hopefully this can make it easier for the next person.

One issue I've come across is that a btrfs filesystem can only be used on hosts with the same page size as the host that created the filesystem. E.g. x86-64 kernels have a 4k default page size, but powerpc64le kernels have been compiled with the optional 64k page size. This impacts various distributions. If somebody creates some filesystems on a 4k host and then migrates them to the powerpc64le host, they won't mount. If they try to go the other way, the filesystems won't mount either.

There are other non-btrfs issues related to the 64k page size; for example, the nouveau driver won't work either.

To make things easier for btrfs, could it be worthwhile changing the default page size from 64k back to 4k on default kernels for most ordinary users?
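[A minimal sketch for readers who want to check which side of this they are on: the page size is a property of the running kernel, and a standard-conforming way to query it from a program is sysconf(_SC_PAGESIZE). Nothing below is btrfs-specific; it only reports what the kernel was built with.]

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* 4096 on typical x86-64 kernels; 65536 on ppc64le kernels built
     * with the optional 64k page size. */
    long page_size = sysconf(_SC_PAGESIZE);
    if (page_size < 0) {
        perror("sysconf");
        return 1;
    }
    printf("kernel page size: %ld bytes\n", page_size);
    printf("a btrfs filesystem created with a different sector size "
           "will not mount on this kernel\n");
    return 0;
}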
Yeah, subpage blocksize support isn't something that we've prioritized. When btrfs was originally written, the only option for that was to use buffer heads, which removed a lot of the flexibility we needed to support things like multi-device filesystems. At the time we tied the fs blocksize to the page size, because it was unlikely that a user would mkfs a filesystem on one arch and move it over to another arch.
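[Illustrative sketch only, not the actual kernel source: this is the shape of the mount-time consequence of tying the fs blocksize to the page size, i.e. a superblock sector size that differs from the running kernel's page size gets rejected. The function and variable names here are hypothetical.]

#include <stdio.h>
#include <unistd.h>

/* Conceptual version of the check: a filesystem whose sector size was
 * fixed at mkfs time can only be mounted where it equals PAGE_SIZE. */
static int check_sectorsize(unsigned long sb_sectorsize)
{
    unsigned long host_page_size = (unsigned long)sysconf(_SC_PAGESIZE);

    if (sb_sectorsize != host_page_size) {
        fprintf(stderr,
                "sectorsize %lu not supported here (host page size %lu)\n",
                sb_sectorsize, host_page_size);
        return -1;   /* the mount attempt fails */
    }
    return 0;
}

int main(void)
{
    /* A filesystem created on a 4k-page x86-64 host... */
    if (check_sectorsize(4096) != 0)
        printf("...is rejected by a 64k-page kernel, and vice versa.\n");
    else
        printf("sector size matches page size; mount can proceed.\n");
    return 0;
}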
There is ongoing work from SUSE to bring this support in; a patchset was posted last week to add read-only support for sub-page blocksizes. Write support will be harder but is coming along. However, these are obviously not going to be ready for the F33 timeline, nor probably F34.

Thanks,
Josef