Hi All,
New to the list, so first of all a hello to you all from St Andrews, Scotland.
We have three new RAID hosts, each with an Adaptec ASR71605
controller. Given that upstream (Red Hat) is going to be using XFS as its
default file system, we are going to use XFS on the three RAID hosts.
After much reading around, this is what I came up with.
All hosts have 16 x 4 TB WD RE (WD4000FYYZ) drives and will run RAID 6.
The underlying RAID details are:

  RAID level               : 6 (Reed-Solomon)
  Status of logical device : Optimal
  Size                     : 53401590 MB
  Stripe-unit size         : 512 KB
  Read-cache setting       : Enabled
  Read-cache status        : On
  Write-cache setting      : Disabled
  Write-cache status       : Off
  Partitioned              : No
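
(Aside: I don't know whether the controller driver exports that geometry
to the kernel, but if it does it should be visible in sysfs, e.g.

  cat /sys/block/sda/queue/minimum_io_size   # would be the 512 KB stripe unit
  cat /sys/block/sda/queue/optimal_io_size   # would be the full stripe width

which could be one way to cross-check the numbers below.)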
I built the filesystem with:

  mkfs.xfs -f -d su=512k,sw=14 /dev/sda
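
For what it's worth, my reasoning for those values: the controller stripe
unit is 512 KB, and with 16 drives in RAID 6 two drives' worth of capacity
goes to parity, leaving 14 data spindles, hence su=512k and sw=14 (a full
stripe of 14 x 512 KB = 7 MB). If anyone wants to check the resulting
geometry without writing anything, a dry run such as

  mkfs.xfs -N -f -d su=512k,sw=14 /dev/sda

just prints the parameters mkfs would use.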
It is mounted via /etc/fstab with the options:

  xfs  defaults,inode64,nobarrier
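
For completeness, the full fstab entry looks like this (/data is simply an
example mount point here):

  /dev/sda  /data  xfs  defaults,inode64,nobarrier  0  0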
My question is: are the mkfs.xfs options and the mount options I used sensible?
The RAID is to be used to store data from numerical simulations run on a
high-performance cluster, and it is not mission critical in the sense that
the data can be regenerated if lost; of course, that would take the
user/cluster some time.
Thanks for any advice.
Steve