Re: GFS block size


 



If your average file size is less than 1k, then using a block size of 1k may be a good option. If your data fits in a single block you get the minor performance boost of a stuffed inode: the data is stored in the inode block itself, so you never have to walk a list of pointers from the inode to a data block. The boost should be small per operation but could add up to larger gains over time with lots of transactions. On the other hand, if your average data payload is smaller than the default 4k block size, you lose the difference to slack space in every file. So, from a filesystem perspective, using a 1k block size to store mostly sub-1k files may be a good idea.
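
As a rough illustration of that slack-space delta, here's some plain shell arithmetic (purely hypothetical numbers, no GFS involved -- real on-disk usage also includes inode and journal metadata):

```shell
# Slack-space arithmetic for an average ~1000-byte file at two block sizes.
avg_file=1000
for bs in 1024 4096; do
  blocks=$(( (avg_file + bs - 1) / bs ))   # blocks needed to hold the file
  slack=$(( blocks * bs - avg_file ))      # bytes wasted in the last block
  echo "block size ${bs}: ${blocks} block(s), ${slack} bytes of slack"
done
```

With 4k blocks a 1000-byte file wastes about 3k per file; with 1k blocks only about 24 bytes.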

You may additionally want to experiment with reducing your resource group size. Blocks are organized into resource groups. If you are using 1k blocks and sub-1k files, you'll end up with a very large number of stuffed inodes per resource group. Some operations in GFS (such as deletes) require locking the resource group metadata, so you may start to see performance bottlenecks depending on usage patterns and disk layout.
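
For reference, both knobs are set at mkfs time (you can't change them on an existing GFS1 filesystem). A sketch only -- the cluster name, lock table, journal count, sizes, and device path below are all placeholders you'd replace for your setup:

```shell
# Sketch -- mycluster:gfs01, journal count, and the device are hypothetical.
# -b sets the block size in bytes; -r sets the resource group size in MB.
mkfs.gfs -p lock_dlm -t mycluster:gfs01 -j 4 -b 1024 -r 32 /dev/vg0/lv_gfs
```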

All in all, I'd be skeptical of claims of large performance gains over time from changing rg size and block size, though modest gains may be had, and some access patterns and filesystem layouts will benefit more than others from such tweaking. I would expect the most significant gains (in GFS1 at least) to come from mount options and tuneables.
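
For example (mount point, device, and values below are illustrative only, not recommendations -- test against your own workload):

```shell
# Illustrative sketch -- paths and the tuneable value are placeholders.
mount -o noatime,nodiratime /dev/vg0/lv_gfs /mnt/gfs01  # avoid atime metadata writes
gfs_tool gettune /mnt/gfs01                             # list current tuneables
gfs_tool settune /mnt/gfs01 glock_purge 50              # e.g. trim unused glocks
```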

Regards,
Adam Drew

----- Original Message -----
From: "juncheol park" <nukejun@xxxxxxxxx>
To: "linux clustering" <linux-cluster@xxxxxxxxxx>
Sent: Tuesday, January 4, 2011 1:42:45 PM
Subject: Re:  GFS block size

I also experimented with a 1k block size on GFS1. Although you can
improve disk usage with a smaller block size, it is typically
recommended to use a block size equal to the page size, which is 4k on
most Linux systems.
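
If you want to confirm the page size on a given box:

```shell
# Print the kernel page size in bytes; 4096 on typical x86-64 Linux.
getconf PAGESIZE
```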

I don't remember all the details of the results. However, for large
files, the overall read/write performance with a 1k block size was
much worse than with a 4k block size. That is to be expected, though.
If you don't care about performance degradation for large files, it
would be fine for you to use 1k.

Just my two cents,

-Jun


On Fri, Dec 17, 2010 at 3:53 PM, Jeff Sturm <jeff.sturm@xxxxxxxxxx> wrote:
> One of our GFS filesystems tends to have a large number of very small files,
> on average about 1000 bytes each.
>
>
>
> I realized this week we'd created our filesystems with default options. As
> an experiment on a test system, I've recreated a GFS filesystem with "-b
> 1024" to reduce overall disk usage and disk bandwidth.
>
>
>
> Initially, tests look very good: single file creates are less than one
> millisecond on average (down from about 5ms each). Before I go very far
> with this, I wanted to ask: Has anyone else experimented with the block
> size option, and are there any tricks or gotchas to report?
>
>
>
> (This is with CentOS 5.5, GFS 1.)
>
>
>
> -Jeff
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster
>



