Help with XFS in VMs on VMFS


 



Hello.

I would like to use XFS in VMs with VMFS datastores on top of RAID-6.  Each RAID is an FC 14+2 x 4TB array with a 64K stripe unit.  There are 6 of these arrays.  Each contains one aligned VMFS partition, and each VMFS partition is shared by 4 ESXi hosts.  Each host runs 2-3 compute nodes, and some of these nodes have multiple partitions consuming 20-50 TB.  The data consists of files ranging from 100KB to 500KB, with a few outliers reaching many MB.  The directory hierarchy is such that no single directory contains more than 2,000 or so of these files.  The data is almost exclusively write-once: files are written when added and read many times afterwards, but they arrive in bursts of 1-20GB at a time.  As the partitions fill up, new ones are added, but sometimes the existing partitions must be grown.

Normally I would use raw device mappings and put XFS directly on the volumes.  But there is a hard requirement to support VM snapshots, so all the data must reside within VMDK files on the VMFS datastores.  ESXi has a VMDK size limit of 2TB.  So I am forced to create many 2TB virtual disks and attach them to the VM, then use Linux LVM inside the guest to group them into a single LV, then create XFS on the LV.
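For reference, the LVM grouping I have in mind looks roughly like this.  This is only a sketch: the device names (/dev/sdb through /dev/sde), the volume group name (datavg), and the LV name (datalv) are placeholders for however the 2TB virtual disks appear inside the guest, and whether to stripe at the LVM layer at all depends on how the VMDKs map onto the underlying arrays.

```shell
# Initialize each 2TB VMDK-backed virtual disk as an LVM physical volume.
# Device names are placeholders.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Group them into one volume group.
vgcreate datavg /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Create a single LV spanning the group.
# -i 4 stripes across all 4 PVs; -I 64 sets a 64K LVM stripe size to match
# the array's 64K stripe unit (only meaningful if the VMDKs sit on
# different datastores / arrays).
lvcreate -n datalv -l 100%FREE -i 4 -I 64 datavg

# The filesystem then goes on the LV.
mkfs.xfs /dev/datavg/datalv
```

Growing a partition would then be a matter of adding another virtual disk, running vgextend and lvextend, and finishing with xfs_growfs on the mounted filesystem.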

This setup is not optimal and has risks, but I must work within these constraints.  There are a few things I can do to increase I/O performance, such as distributing the VMDK files used by each LV across the 6 VMFS datastores.  But can XFS itself be tuned as well?  Would setting the stripe unit and stripe width help?  Thanks for your help.
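To make the stripe-geometry question concrete, here is what I would try, assuming the guest's I/O really does land aligned on the arrays (the device path is a placeholder, and whether these values are sensible through the VMDK/VMFS layers is exactly what I am unsure about).  For a 14+2 RAID-6 with a 64K stripe unit, the data stripe is 14 disks wide, giving a full stripe of 14 * 64K = 896K:

```shell
# su = per-disk stripe unit, sw = number of data disks (parity excluded).
# 14+2 RAID-6 with 64K chunks -> su=64k, sw=14 (896K full stripe).
mkfs.xfs -d su=64k,sw=14 /dev/datavg/datalv

# After mounting, verify the geometry XFS recorded
# (xfs_info reports sunit/swidth in filesystem blocks).
xfs_info /mnt/data
```

My concern is that the 2TB VMDKs may not start on a stripe boundary within VMFS, in which case su/sw would mislead the allocator rather than help it.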

Jan.


_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

