Re: creating a new 80 TB XFS

On 2/24/12 6:52 AM, Richard Ems wrote:
> Hi list,
> 
> I am not a storage expert, so sorry in advance for what will probably
> be some *naive* questions or proposals from me. 8)
> 
> *INTRO*
> We are getting new hardware soon, and I wanted to run my plans for
> creating and mounting this XFS by you.
> 
> The storage system is from EUROstor,
> http://eurostor.com/en/products/raid-sas-host/es-6600-sassas-toploader.html
> .
> 
> We are now getting 32 x 3 TB Hitachi SATA HDDs.
> I plan to configure them in a single RAID 6 set with one or two
> hot-standby discs. The usable storage space will then be 28 x 3 TB = 84 TB.
> On this one RAID set I will create only one volume.
> Any thoughts on this?
> 
> This storage will be used as secondary storage for backups. We use
> dirvish (www.dirvish.org, which uses rsync) to run our daily backups.
> dirvish makes heavy use of hard links. It compares all files one by
> one, rsyncs all new or changed files into the current daily directory
> YYYY-MM-DD, and creates hard links there for all unchanged files from
> the previous day's backup YYYY-MM-(DD-1).
> 
> 
> *MKFS*
> We also heavily use ACLs for almost all of our files. Christoph Hellwig
> suggested in a previous mail to use "-i size=512" on XFS creation, so my
> mkfs.xfs would look something like:
> 
> mkfs.xfs -i size=512 -d su=stripe_size,sw=28 -L Backup_2 /dev/sdX1

Be sure the stripe geometry matches the way the raid controller is
set up.

You know the size of your acls, so you can probably do some testing
to find out how well 512-byte inodes keep ACLs in-line.

As others mentioned, if sdX1 means you've partitioned your 80T
device, that's probably unnecessary.
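The stripe-geometry check above can be sketched as follows. The 64 KiB chunk size is an assumption for illustration; read the real chunk size and data-disk count from the RAID controller's configuration. su is the per-disk chunk ("stripe unit") and sw is the number of data-bearing disks, so the full stripe is su * sw:

```shell
# Assumed geometry: 30-disk RAID 6 (28 data + 2 parity), 64 KiB chunk size.
# Read the real values from your RAID controller setup before running mkfs.
CHUNK_KB=64      # per-disk chunk ("stripe unit")
DATA_DISKS=28    # data-bearing disks in the RAID 6 set (total minus 2 parity)

# su = chunk size, sw = data disks; the full stripe width is su * sw.
echo "stripe unit:  ${CHUNK_KB} KiB"
echo "stripe width: $(( CHUNK_KB * DATA_DISKS )) KiB"

# The corresponding (hypothetical) mkfs invocation would then be:
# mkfs.xfs -i size=512 -d su=${CHUNK_KB}k,sw=${DATA_DISKS} -L Backup_2 /dev/sdX
```

With a 64 KiB chunk that works out to a 1792 KiB full stripe; if the controller uses a different chunk size, the numbers change accordingly.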

> *MOUNT*
> On mount I will use the options
> 
> mount -o noatime,nobarrier,nofail,logbufs=8,logbsize=256k,inode64
> /dev/sdX1 /mount_point

Understand what nobarrier means, and convince yourself that turning
barriers off is safe before you do so.  Then convince yourself again.
You'll want to know if your raid controller has a write back
cache, whether it disables disk write back caches, whether any active
caches are battery-backed, etc.
 
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/writebarr.html

You are restating the default for logbufs.  Your logbsize value is larger
than the default; as the mount documentation warns: "The trade off for this
increase in metadata performance is that more operations may be "missing"
after recovery if the system crashes while actively making modifications."

inode64 is a good idea.

Also, why nofail?
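For reference, a hypothetical /etc/fstab entry with those options (device name and mount point are examples, nofail dropped, and barriers left on unless the cache situation clearly allows otherwise) might look like:

```
# Backup filesystem; add nobarrier only after verifying the write caches.
/dev/sdX   /backup   xfs   noatime,logbsize=256k,inode64   0 0
```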

> What about the largeio mount option? In which cases would it be useful?

Probably none in your case.  It changes what stat reports in st_blksize,
so it depends on what (if anything) your userspace does with that value.
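If you want to see what applications would actually observe, GNU stat's %o format prints the st_blksize hint (the scratch file below is just an example); with largeio, XFS reports the stripe width there instead of the usual smaller I/O size:

```shell
# Create a scratch file on the filesystem in question and print the
# st_blksize hint that stat(2) returns for it.
tmpfile=$(mktemp)
stat -c '%n: st_blksize=%o' "$tmpfile"
rm -f "$tmpfile"
```

Tools like cp and tar size their buffers from this value, which is the only way largeio would matter in practice.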

> Do you have any other/better suggestions or comments?

http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E

-Eric

> 
> Many thanks,
> Richard

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

