Re: definitions for /proc/fs/xfs/stat

OK, I have a simple reproducer.  Try out the following, noting that you'll obviously have to change the directory pointed to by dname:

#!/usr/bin/python -u

import os
import sys
import time
import ctypes
import ctypes.util
from tempfile import mkstemp

libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
falloc = libc.fallocate
# fallocate(int fd, int mode, off_t offset, off_t len); off_t is a long on 64-bit Linux
falloc.restype = ctypes.c_int
falloc.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.c_long, ctypes.c_long]

dname = '/srv/node/disk0/mjs'
fname = 'foo'
fsize = 1024
nfiles = 10000

body = ' ' * fsize    # never actually written; the test only preallocates

time0 = time.time()
for i in range(nfiles):
    file_name = '%s/%s-%s' % (dname, fname, i)
    fd, tmppath = mkstemp(dir=dname)
    # mode 1 is FALLOC_FL_KEEP_SIZE: preallocate blocks without changing the file size
    falloc(fd, 1, 0, fsize)
    os.close(fd)
    os.rename(tmppath, file_name)

elapsed = time.time() - time0
tbytes = fsize * nfiles
rate = tbytes/elapsed/1024/1024    # MB/sec

print "DName: %s" % dname
print "Bytes: %d" % (tbytes/1024/1024)    # note: this actually prints MB
print "Time:  %.2f secs" % elapsed
print "Rate:  %.2f/sec" % rate

When I run it I see this:

segerm@az1-sw-object-0006:~$ sudo ./falloc.py
DName: /srv/node/disk0/mjs
Bytes: 9
Time:  5.84 secs
Rate:  1.67/sec

And while it's running, collectl shows this:

#<----CPU[HYPER]-----><----------Disks-----------><----------Network---------->
#cpu sys inter  ctxsw KBRead  Reads KBWrit Writes   KBIn  PktIn  KBOut  PktOut
   0   0   110    113      0      0      0      0      0      5      0       3
   1   0  1576   2874      0      0 170240    665      0      3      0       2
   4   3  2248   6623      0      0 406585   1596      0      1      0       1
   4   3  2145   7680      0      0 473600   1850      0      1      0       1
   2   1  2200   7406      0      0 456633   1875      0      2      0       1
   4   3  3696   7401      0      0 454606   1823      0      1      0       1
   3   2  3877   7354      0      0 453537   1806      0      1      0       1
   1   0  1610   2764      0      0 163793    684      0      3      0       3

This is the same behavior I'm seeing with Swift.  10K 1KB files, even at a 4KB minimum block size, only comes to about 40MB, which is still far less than the multiple GB of writes being reported (summing the KBWrit column above gives roughly 2.5GB).  Actually, since the whole thing only takes a few seconds and I know a single disk can't write that fast, maybe it's just a bug in the way the kernel reports preallocated blocks as writes and has nothing to do with XFS?  Or is XFS responsible for those stats?

If I remove the fallocate call I see the expected amount of disk traffic.
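
One way to cross-check whether those fallocate-triggered writes ever reach the disk is to diff the sectors-written counter in /proc/diskstats across a run (the xpc line in /proc/fs/xfs/stat carries byte counters too, if I'm reading it right, but diskstats shows what the block device actually completed).  Something along these lines should do it; the default device name 'sdb' is just a placeholder for whatever sits under /srv/node/disk0:

#!/usr/bin/python -u
# Rough sketch: report how many sectors a device actually wrote across a run.

import sys

def sectors_written(device):
    # /proc/diskstats: the 3rd field is the device name and the 10th is
    # sectors written (per Documentation/iostats.txt); sectors are 512 bytes.
    for line in open('/proc/diskstats'):
        f = line.split()
        if f[2] == device:
            return int(f[9])
    raise ValueError('device %s not found in /proc/diskstats' % device)

dev = sys.argv[1] if len(sys.argv) > 1 else 'sdb'
before = sectors_written(dev)
raw_input('run the reproducer, then hit enter... ')
delta = sectors_written(dev) - before
print "%s: %d sectors written, %.2f MB" % (dev, delta, delta * 512.0 / 1024 / 1024)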

-mark



On Sat, Jun 15, 2013 at 8:11 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
On Sat, Jun 15, 2013 at 12:22:35PM -0400, Mark Seger wrote:
> I was thinking a little color commentary might be helpful from the
> perspective of what the functionality is that's driving the need for
> fallocate.  I think I mentioned somewhere in this thread that the
> application is OpenStack Swift, which is a highly scalable cloud object
> store.

I'm familiar with it and the problems it causes filesystems. What
application am I talking about here, for example?

http://oss.sgi.com/pipermail/xfs/2013-June/027159.html

Basically, Swift is trying to emulate Direct IO because python
doesn't support Direct IO. Hence Swift is hacking around that problem
and causing secondary issues that would never have occurred if
Direct IO was used in the first place.
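
For what it's worth, raw O_DIRECT can be driven from python if you do the
plumbing yourself -- page-aligned buffers and write sizes that are a
multiple of the block size -- which is exactly the kind of thing most
applications won't carry.  A minimal sketch, assuming a 4096-byte alignment
requirement and a scratch file on the reproducer's mount:

import os
import mmap

ALIGN = 4096                # assumed to satisfy this device's O_DIRECT rules

buf = mmap.mmap(-1, ALIGN)  # anonymous mmap gives a page-aligned buffer
buf.write('x' * ALIGN)

fd = os.open('/srv/node/disk0/mjs/direct-test',
             os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0600)
try:
    os.write(fd, buf)       # buffer address and length are both aligned
finally:
    os.close(fd)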

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
