Re: GFS2 performance on large files


On Wed, 2009-04-22 at 20:37 -0300, Flavio Junior wrote:
> On Wed, Apr 22, 2009 at 8:11 PM, Andy Wallace <andy@xxxxxxxxxxxxxxxxxxxx> wrote:
> 
> > Although it's not as quick as I'd like, I'm getting about 150MB/s on
> > average when reading/writing files in the 100MB - 1GB range. However, if
> > I try to write a 10GB file, this goes down to about 50MB/s. That's just
> > doing dd to the mounted gfs2 on an individual node. If I do a get from
> > an ftp client, I'm seeing half that; cp from an NFS mount is more like
> > 1/5.
> >
> 
> Have you tried the same thing with another filesystem? Ext3, maybe?
> You are using RAID, right? Did you check RAID and LVM/partition alignment?
> 
> If you try ext3, look at the -E stride and -E stripe-width options in the
> mkfs.ext3 manpage.
> This calculator should help: http://busybox.net/~aldot/mkfs_stride.html
> 
> 
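For reference, a minimal sketch of that stride/stripe-width calculation. The chunk size, data-disk count, and device path below are assumptions for illustration, not details of Andy's actual array:

```shell
# Hypothetical array: RAID5 over 5 disks (4 data disks), 64 KiB chunk,
# 4 KiB filesystem block size.
chunk_kb=64
data_disks=4
block_kb=4

# stride = chunk size / filesystem block size
stride=$((chunk_kb / block_kb))
# stripe-width = stride * number of data disks
stripe_width=$((stride * data_disks))

# Device path is hypothetical; substitute the real logical volume.
echo "mkfs.ext3 -E stride=$stride,stripe-width=$stripe_width /dev/vg/lv"
```

With those assumed values this yields stride=16 and stripe-width=64; the same arithmetic applies to whatever chunk size and disk count the array actually uses.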

Yes, I have (by the way, do you know how long ext3 takes to create a 6TB
filesystem???).

I've aligned the RAID and LVM stripes using several different values, and
found slight improvements in performance as a result. My main problem is
that once the file size passes a certain point, performance degrades
alarmingly. For example, over NFS, moving a 100MB file is about 20% slower
than direct access; with a 5GB file it's 80% slower (and the direct access
itself is 50% slower).
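For anyone reproducing these numbers, a sketch of the kind of dd write test being described. The tiny 16 MiB size here is only so the example runs quickly; on the real array you would write a multi-gigabyte file to the GFS2 mount point:

```shell
# Sketch: measure sequential write throughput with dd. conv=fdatasync makes
# dd flush data to storage before reporting, so the figure reflects the
# storage stack rather than just the page cache. The temp file stands in
# for a path on the GFS2 mount.
testfile=$(mktemp)
result=$(dd if=/dev/zero of="$testfile" bs=1M count=16 conv=fdatasync 2>&1 | tail -n1)
rm -f "$testfile"
echo "$result"
```

On GNU dd the last status line reports bytes copied, elapsed time, and throughput, which is the number that drops from ~150MB/s to ~50MB/s as the file grows.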

As I said before, I'll be working with 20G-170G files, so I really have
to find a way around this!

-- 
Andy

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
