On Fri, Nov 25, 2016 at 12:09:10PM -0500, Dave Hall wrote:
> With a concatenated LV most AGs would be mapped to a single PV, but
> XFS would still disperse disk activity across all AGs and thus
> across all PVs.

Like all things, this is only partially true. For inode64 (the
default) the allocation load is spread based on directory structure.
If all your work hits a single directory, then it won't get spread
across multiple devices. The log will land on a single device, so it
will always be limited by the throughput of that device. And
read/overwrite workloads will only hit single devices, too. So unless
you have a largely concurrent, widely distributed set of access
patterns, XFS won't distribute the IO load.

Now inode32, OTOH, distributes the data to different AGs at
allocation time, meaning that data in a single directory is spread
across multiple devices. However, all the metadata will be on the
first device, which guarantees a device-loading imbalance will occur.

> With a striped LV each AG would be striped across
> multiple PVs, which would change the distribution of disk activity
> across the PVs but still lead to all PVs being fairly active.

Striped devices can be thought of as the same as a single spindle -
the characteristics from the filesystem perspective are the same,
just with some added alignment constraints to optimise placement...

> With DM-Thin, things would change. XFS would perceive that its AGs
> were fully allocated, but in reality new chunks of storage would be
> allocated as needed. If DM-Thin uses a linear allocation algorithm
> on a concatenated LV it would seem that certain kinds of disk
> activity would tend to be concentrated in a single PV at a time. On
> the other hand, DM-Thin on a striped LV would tend to spread things
> around more evenly regardless of allocation patterns.

Yup, exactly the same as for a filesystem.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
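
For concreteness, a minimal sketch of the setups discussed above. It
assumes a volume group named "vg" already built from several PVs; the
LV and pool names and all sizes here are placeholders, not anything
from the thread:

  # Concatenated (linear) LV vs. a striped LV over 4 PVs, 64k stripe unit
  lvcreate -L 100G -n lv_concat vg
  lvcreate -i 4 -I 64k -L 100G -n lv_stripe vg

  # mkfs.xfs normally picks up the stripe geometry from LVM; it can
  # also be stated explicitly (stripe unit 64k, stripe width 4 units)
  mkfs.xfs -d su=64k,sw=4 /dev/vg/lv_stripe

  # inode64 is the default allocator; inode32 has to be asked for
  mount -o inode32 /dev/vg/lv_concat /mnt/test

  # Inspect AG count and stripe alignment, and see which AG a given
  # file's extents landed in
  xfs_info /mnt/test
  xfs_bmap -v /mnt/test/somedir/somefile

  # Thin pool plus an over-provisioned thin LV, for the DM-Thin case
  lvcreate --type thin-pool -L 100G -n tpool vg
  lvcreate --type thin -V 500G --thinpool tpool -n thinlv vg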