On Monday 08 March 2010, Khusro Jaleel wrote:
> Thanks to all of you for your help, and especially Tim Shubitz, who faced
> the same problem; his solution worked perfectly for me.
>
> However, now that I have properly created a GPT partition of size 2.7 TB,
> which filesystem is best on it? This filesystem will be used to store
> backups of various other Linux systems, so the files will be mostly small;
> however, some systems do host big movie files, and sometimes SVN dumps
> and DB dumps can get a little big. I am going to be using rsnapshot to do
> the backups, so perhaps I should be careful about the number of inodes I
> create and try to maximise them?
>
> I am thinking of using XFS, but am not sure. I seem to have heard in the
> past that one should avoid ext3 on such huge filesystems, but I can't find
> a reference or proper justification for it. JFS is another option, but
> some mailing list threads say it has lost data for people, so I'm a bit
> confused as to what is best to use in my scenario.

My thoughts on this are roughly:

* 2.7 TB isn't really big; ext3, XFS, JFS, etc. should all be fine.
* We've run XFS a lot, but it's still much less mainstream than ext3.
* Ext4 is still a tech preview in 5.4.
* We have a lot of data on Lustre-style ext3 (in the range 4-8 TB) with
  no issues.

It boils down to: use what you're comfortable with (XFS is typically
faster for us, but ext3 certainly won't break down at this scale). A
rough sketch of the mkfs invocations for either choice is below the
quoted text.

/Peter

> As for XFS, I have read that a UPS is necessary, and this is not a
> problem since these machines are already connected to a UPS (and that
> UPS has a backup as well).
>
> Any help appreciated, thanks,
>
> Khusro
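P.S. A minimal sketch of the mkfs invocations, assuming the GPT partition
shows up as /dev/sdb1 (substitute your actual device and label). XFS
allocates inodes dynamically, so rsnapshot's forests of small files and
hardlinks need no special tuning; ext3 fixes the inode count at mkfs
time, so you'd lower the bytes-per-inode ratio to get more of them:

    # XFS: defaults are sensible for a volume this size; inodes are
    # allocated on demand, so no inode tuning is needed.
    mkfs.xfs -L backups /dev/sdb1

    # ext3: one inode per 4 KB of space (the default ratio is larger,
    # typically one per 16 KB), since inodes can't be added later.
    mkfs.ext3 -L backups -i 4096 /dev/sdb1

After mounting, "df -i" will show how many inodes you actually have to
play with.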