GFS create file performance

We are using GFS to store session files for our web application.  I've spent some time exploring GFS performance and tuning the software for optimal latency on system calls—we control the software, and the core libraries are written in C.  So I've been following related discussions of e.g. stat() performance with a great deal of interest.

 

I've hit a wall reducing the latency of new file creation. Create times average around 10ms but fluctuate from about 1ms up to 100ms or so. Here's an example:

 

open("/tb2/session/localhost/1800/ac18c/379/905bbc40.ts", O_WRONLY|O_CREAT|O_EXCL, 0660) = 4 <0.015415>

 

The parent directory of this file (379) was created on this node.  Our session storage ensures that no two nodes will attempt to create files in the same directory. I'm also limiting the number of directories we have to create so there is about a 50:1 ratio of files to directories (mkdir performance on GFS is generally awful).

 

Here's a breakdown of the most common system calls made from my test harness:

 

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 91.26    0.046362         228       203           open
  7.87    0.003998        1999         2           mkdir
  0.59    0.000298           0       600         2 stat
  0.19    0.000098           0       200           write
  0.09    0.000045           0       302           read
  0.00    0.000000           0       202           close
 

Note that this report doesn't show wall-clock time (I obtained it with strace -c). Roughly half the calls to open() create new files; the rest open existing files.

 

My questions:

 

- What exactly happens during open()? I'm guessing that at least the journal is flushed to disk. Timings for open() are long and highly variable compared to other filesystems (e.g. ext3). strace only shows system calls made from user space; it would be interesting to see what I/O takes place in kernel space, but I don't have a way to observe that (do I?). Am I network bound or I/O bound here? The latency looks suspiciously like disk seek times to me.

- Is there a strategy I can use to return more quickly from open() on a GFS filesystem?

- Before I spend time migrating to GFS2, is there any reason to believe GFS2 would perform significantly better here?

 

Thanks,

 

Jeff

 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
