Suggested XFS setup/options for 10TB file system w/ 18-20M files.

Hello,

I have a use case where I'm writing ~500KB (average size) files to 10TB XFS file systems. Each system has 36 of these 10TB drives.

The application opens the file, writes the data (a single call), and closes the file. In addition, a few entries are added as extended attributes. The filesystem ends up with 18 to 20 million files when the drive is full. The files are currently spread over 128x128 directories using a hash of the filename.
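For reference, a minimal sketch of a 128x128 hashed layout like the one described above. The actual hash function isn't stated; MD5 of the filename is purely an assumption here, and the filename is a placeholder:

```shell
# Hypothetical sketch: derive two directory levels (0-127 each) from
# the first four hex digits of an MD5 of the filename, yielding paths
# like 63/122/<filename>.
name="example-object.dat"                        # placeholder filename
h=$(printf '%s' "$name" | md5sum | cut -c1-4)    # first 4 hex digits
d1=$(( 0x$(printf '%s' "$h" | cut -c1-2) % 128 ))
d2=$(( 0x$(printf '%s' "$h" | cut -c3-4) % 128 ))
echo "$d1/$d2/$name"
```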

The format command I'm using:

mkfs.xfs -f -i size=1024 ${DRIVE}

Mount options:

rw,noatime,attr2,inode64,allocsize=2048k,logbufs=8,logbsize=256k,noquota
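For completeness, this is how those options would be applied at mount time (device and mount point are placeholders):

```shell
# Mount one of the 10TB drives with the options listed above
# (/dev/sdb1 and /data/d01 are placeholders).
mount -t xfs \
    -o rw,noatime,attr2,inode64,allocsize=2048k,logbufs=8,logbsize=256k,noquota \
    /dev/sdb1 /data/d01
```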

As the drive is filling, the first few percent seem fine: in iostat, avgrq-sz stays close to the average file size. What I'm noticing is that as the drive starts to fill (say around 5-10%), the reads start increasing (r/s in iostat) and avgrq-sz starts to decrease. Pretty soon the r/s can be 1/3 to 1/2 as many as our w/s. At first we thought this was related to using extended attributes, but disabling them didn't make a difference at all.
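In case it's useful, the counters iostat derives r/s and w/s from can also be read directly from /proc/diskstats to log the read/write ratio over time (the device name below is a placeholder):

```shell
# Fields 4 and 8 of /proc/diskstats are reads completed and writes
# completed for a device; sampling this periodically shows how the
# read:write ratio changes as the drive fills.
dev=sda   # placeholder; substitute the drive being filled
awk -v d="$dev" '$3 == d { printf "reads=%d writes=%d\n", $4, $8 }' /proc/diskstats
```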

Considering I know the app isn't making any read requests, I'm guessing this is related to updating metadata, etc. Any guidance on how to resolve/reduce this? For example, would a different directory structure help (more files in fewer directories)?
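One way to test the metadata theory, assuming the standard XFS stats file is available: the "xpc" line in /proc/fs/xfs/stat reports cumulative data bytes read through the file I/O path as its last field (note the file aggregates all mounted XFS filesystems). If that counter stays flat while device-level r/s climbs, the extra reads should be metadata rather than file data:

```shell
# Print XFS data-path read bytes; compare two samples taken a few
# minutes apart against iostat's device-level read counters.
grep '^xpc' /proc/fs/xfs/stat | awk '{ printf "data bytes read: %s\n", $4 }'
```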

Thanks,
R. Jason Adams

--
To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



