This has been in the back of my mind for a while, but maybe it's worth
addressing now that disks are getting awfully large.

class XFS(FS):
    """ XFS filesystem """
    _type = "xfs"
    _mkfs = "mkfs.xfs"
    _modules = ["xfs"]
    _labelfs = "xfs_admin"
    _defaultFormatOptions = ["-f"]
    _defaultLabelOptions = ["-L"]
    _maxLabelChars = 16
    _maxSize = 16 * 1024 * 1024
    ...

XFS can actually go much bigger than this, as can some other filesystems.
However, there is a VM limit on some architectures: basically, the page
cache can't address more than (2^(bits per long) * page size) bytes.

So for x86 that's 2^32 * 4096 = 16T, hence the limit above.  But for
64-bit arches, the max fs size is well beyond 16T.

This stuff is available in C-land:

    #include <unistd.h>
    long sz = sysconf(_SC_PAGESIZE);

(most systems allow the synonym _SC_PAGE_SIZE for _SC_PAGESIZE), or:

    #include <unistd.h>
    int sz = getpagesize();

and sizeof(long), I guess; I'm not sure how to get there from Python.

Anyway, if Anaconda wants to be more correct, it'd be something like:

    _maxSize = min((2^bits_per_long * page_size), fs_max_size)

Does that sound doable?

-Eric
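
A minimal sketch of how that calculation could be done from Python,
assuming _maxSize is in MB (as the existing 16 * 1024 * 1024 value
suggests); the names vm_addressable_limit_mb and fs_max_size_mb are
placeholders for illustration, not existing Anaconda code:

    import resource
    import struct

    def vm_addressable_limit_mb():
        # Python equivalents of getpagesize() / sysconf(_SC_PAGESIZE)
        page_size = resource.getpagesize()
        # sizeof(long) * 8 for the running interpreter's architecture
        bits_per_long = struct.calcsize("l") * 8
        # The page cache can address 2^bits_per_long pages of page_size
        # bytes each; convert to MB to match _maxSize's units.
        return (2 ** bits_per_long) * page_size // (1024 * 1024)

    # e.g. for XFS, with fs_max_size_mb being the filesystem's own limit:
    #     _maxSize = min(vm_addressable_limit_mb(), fs_max_size_mb)

On 32-bit x86 this gives 2^32 * 4096 bytes = 16 * 1024 * 1024 MB,
matching the hard-coded value above.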