Chris Adams wrote:
> Once upon a time, hw <hw@xxxxxxxx> said:
> > xfs is supposed to detect the layout of an md-RAID device when creating
> > the file system, but it doesn't seem to do that:
> > # cat /proc/mdstat
> > Personalities : [raid1]
> > md10 : active raid1 sde[1] sdd[0]
> >       499976512 blocks super 1.2 [2/2] [UU]
> >       bitmap: 0/4 pages [0KB], 65536KB chunk
> RAID 1 has no "layout" (for RAID, that usually refers to striping in
> RAID levels 0/5/6), so there's nothing for a filesystem to detect or
> optimize for.
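One way to see what mkfs.xfs actually picks here, assuming the md10 array from the mdstat output above has not been formatted yet: -N is mkfs.xfs's dry-run flag, which prints the parameters it would use without writing anything.

# mkfs.xfs -N /dev/md10 | grep -E 'sunit|swidth'

If the data section shows sunit=0 swidth=0, no stripe geometry was detected, which is the expected result for RAID1.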
Are you saying there is no difference between a RAID1 and a non-RAID
device as far as xfs is concerned?
What if you use hardware RAID?
The FAQ [1] tells you to specify su and sw with hardware RAID and says
that with md-RAID everything is detected automatically. It doesn't have
an example with RAID1, only one with RAID10, but why would that make a
difference? Aren't there stripes in a RAID1? If you read from both disks
of a RAID1 simultaneously, you have to wait out the latency of both
disks before you get the data at full speed, and it might be better to
use stripes with them as well and read multiple parts of the data at the
same time.
[1]: http://xfs.org/index.php/XFS_FAQ#Q:_How_to_calculate_the_correct_sunit.2Cswidth_values_for_optimal_performance
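For the hardware RAID case, the FAQ's rule boils down to su = the controller's chunk size and sw = the number of data-bearing disks. A sketch with made-up numbers (a RAID5 of five disks with a 64 KiB chunk; the device name is hypothetical):

# mkfs.xfs -d su=64k,sw=4 /dev/sdX1

sw is 4 rather than 5 because one disk's worth of capacity holds parity.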
> The chunk size above is for the md-RAID write-intent bitmap; that's not
> exposed information (for any RAID system that I'm aware of, software or
> hardware) or something that filesystems can optimize for.
Oh, ok. How do you know what stripe size mdadm picked? It seemed like a
good idea to go with the defaults as far as possible.
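For the striped levels mdadm reports the chunk size directly; a quick check, assuming the md10 array from above:

# mdadm --detail /dev/md10 | grep -i chunk

On RAID0/5/6/10 that prints a "Chunk Size" line (mdadm's current default is 512K); a RAID1 like md10 has no data chunk size at all, so the only "chunk" you will see for it is the write-intent bitmap one from /proc/mdstat.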