Re: file system and raid performance

> From: Mark Kirkwood [mailto:markir@xxxxxxxxxxxxxxx]
> Mark Wong wrote:
> > On Mon, Aug 4, 2008 at 10:56 PM, Gregory S. Youngblood
> > <greg@xxxxxxxxx> wrote:
> >
> >> I recently ran some tests on Ubuntu Hardy Server (Linux) comparing
> >> JFS, XFS, and ZFS+FUSE. It was all 32-bit and on old hardware, plus
> >> I only used bonnie++, so the numbers are really only useful for my
> >> hardware.
> >>
> >> What parameters were used to create the XFS partition in these
> >> tests? And, what options were used to mount the file system? Was the
> >> kernel 32-bit or 64-bit? Given what I've seen with some of the XFS
> >> options (like lazy-count), I am wondering about the options used in
> >> these tests.
> >>
> >
> > The default (no arguments specified) parameters were used to create
> > the XFS partitions.  Mount options specified are described in the
> > table.  This was a 64-bit OS.
> >
> I think it is a good idea to match the RAID stripe size and give some
> indication of how many disks are in the array. E.g.:
> 
> For a 4 disk system with 256K stripe size I used:
> 
>  $ mkfs.xfs -d su=256k,sw=2 /dev/mdx
> 
> which performed about 2-3 times quicker than the default (I did try
> sw=4 as well, but didn't notice any difference compared to sw=2).

[Greg says] 
I thought that XFS picked those details up automatically when using md and a
software RAID configuration.
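
If it does, the detected geometry should show up when the filesystem is
created. A quick way to check (the device and mount point below are only
examples, not from the original tests):

 $ mkfs.xfs /dev/md0
 $ mount /dev/md0 /mnt/test
 $ xfs_info /mnt/test

If mkfs.xfs really did pick the geometry up from md, the data section of its
output should show non-zero sunit/swidth values; for the 256k-stripe,
two-data-disk layout above that works out to sunit=64 blks and swidth=128
blks with 4k filesystem blocks. If it reports sunit=0, swidth=0, the su/sw
options still need to be passed by hand as Mark did.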
