sw and su for hardware RAID10 (w/ LVM)

We're running RHEL6.x with the XFS that comes with Red Hat's Scalable
File System add-on.  We have two PowerVault MD3260e's, each configured
with a 30-disk RAID10 (15 RAID groups) exposed to our server.  The
segment size is 128K (in Dell's terminology, I'm not sure whether this
means my stripe width is 128K * 15?)
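If Dell's "segment size" is the per-member chunk (as it usually is), the full stripe of one virtual disk would be the chunk times the number of data-bearing stripe elements. A quick sanity check, assuming a 30-disk RAID10 built from 15 RAID1 pairs contributes 15 data elements:

```shell
# Assumption: Dell "segment size" = per-member chunk size, and each of
# the 15 RAID1 groups contributes one data-bearing element to the stripe.
segment_kib=128
data_elements=15
stripe_width_kib=$((segment_kib * data_elements))
echo "${stripe_width_kib}K"   # full hardware stripe of one virtual disk
```

That arithmetic is where the 1920K candidate below comes from.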

I've set up a striped LVM volume across these two "virtual disks"
(with lvcreate -i 2).

By default LVM has used a stripe size of 64K:

# lvs -o path,size,stripes,stripe_size
  Path                           LSize   #Str Stripe
  /dev/agsfac_vg00/lv00          100.00t    2 64.00k

I'm unsure whether these defaults should be adjusted.

I'm trying to figure out the appropriate sw/su values to use per:

  http://xfs.org/index.php/XFS_FAQ#Q:_How_to_calculate_the_correct_sunit.2Cswidth_values_for_optimal_performance

I'm considering either just going with the defaults (I believe XFS
pulls the geometry from LVM) or doing something like sw=2,su=128k.
However, maybe I should be doing sw=2,su=1920k?  And perhaps my LVM
stripe size should be adjusted to match?
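For concreteness, the two candidate geometries would look something like this at mkfs time (the LV path is the one from the lvs output above; this is only a sketch of the options being weighed, not a tested recommendation):

```shell
# Option A: treat each MD3260e virtual disk as a single "disk" and align
# XFS to the LVM stripe: 128k chunk, 2 stripes.
mkfs.xfs -d su=128k,sw=2 /dev/agsfac_vg00/lv00

# Option B: align XFS to the full hardware stripe of one virtual disk
# (128k segment * 15 RAID groups = 1920k), still 2 LVM stripes wide.
mkfs.xfs -d su=1920k,sw=2 /dev/agsfac_vg00/lv00
```

(su must be a multiple of the filesystem block size, and sw is the number of data-bearing stripe elements, per the FAQ entry linked above.)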

Thanks,
Ray

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



