Re: definitions for /proc/fs/xfs/stat

Hey guys,

----- Original Message -----
> ok, I have a simple reproducer.  try out the following, noting you'll
> obviously have to change the directory pointed to by dname:
> 
> import ctypes, ctypes.util
> 
> libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
> falloc = getattr(libc, 'fallocate')
> 

This is using the glibc fallocate wrapper.  I have vague memories of an
old libc which used to do per-page buffered writes as a poor man's
implementation of fallocate; maybe that older version/behaviour is
somehow being triggered here.
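
For reference, a fleshed-out version of that kind of ctypes call might look
like the sketch below (the file name, sizes, and the 64-bit off_t assumption
are mine, not taken from Mark's program):

import os
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
falloc = getattr(libc, 'fallocate')
# Assumes a 64-bit off_t; fallocate(fd, mode, offset, len), where mode 0
# is a plain preallocation (no FALLOC_FL_KEEP_SIZE).
falloc.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.c_int64, ctypes.c_int64]

fd = os.open('/tmp/falloc-test', os.O_WRONLY | os.O_CREAT, 0o644)
if falloc(fd, 0, 0, 1024) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))
os.close(fd)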

Running the test case on a RHEL6 box here, you should see patterns like
the attached ("pmchart -c XFSLog" - config attached too), which suggest
log traffic dominates (though I have no stripe-fu setup like you, Mark,
which adds another wrinkle).

> This is the same behavior I'm seeing on swift.  10K 1KB files x 4KB minimum
> block size still comes out to a lot less than the multiple GB of writes
> being reported.  Actually, since this whole thing only takes a few seconds
> and I know a single disk can't write that fast, maybe it's just a bug in the
> way the kernel is reporting allocated/preallocated blocks and nothing to do
> with XFS?  Or is XFS responsible for the stats?

The device-level IOPS and bandwidth stats are accounted in the block layer,
below the filesystem (at that point there is no concept of preallocation
any more; something has submitted actual I/O), whereas the XFS log traffic
accounting is done inside XFS itself.
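
If you want to watch the two side by side without pmchart, a rough sketch
like the following should do (the field positions are from memory and the
device name is a placeholder, so double-check both):

import time

def xfs_log_blocks():
    # The "log" line in /proc/fs/xfs/stat looks roughly like:
    #   log <writes> <blocks> <noiclogs> <force> <force_sleep>
    # where <blocks> counts log I/O in 512-byte basic blocks, I believe.
    with open('/proc/fs/xfs/stat') as f:
        for line in f:
            if line.startswith('log '):
                return int(line.split()[2])
    return 0

def sectors_written(dev='sda'):
    # In /proc/diskstats the 7th counter after the device name
    # (fields[9] here) is sectors written.
    with open('/proc/diskstats') as f:
        for line in f:
            fields = line.split()
            if len(fields) > 9 and fields[2] == dev:
                return int(fields[9])
    return 0

log0, dev0 = xfs_log_blocks(), sectors_written()
time.sleep(5)
log1, dev1 = xfs_log_blocks(), sectors_written()
print('log blocks: %d  device sectors written: %d' % (log1 - log0, dev1 - dev0))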

> If I remove the fallocate call I see the expected amount of disk traffic.
> 
> -mark
> 
> On Sat, Jun 15, 2013 at 8:11 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > On Sat, Jun 15, 2013 at 12:22:35PM -0400, Mark Seger wrote:
> > > I was thinking a little color commentary might be helpful from the
> > > perspective of what the functionality is that's driving the need for
> > > fallocate.  I think I mentioned somewhere in this thread that the
> > > application is OpenStack Swift, which is a highly scalable cloud object
> > > store.
> >
> > I'm familiar with it and the problems it causes filesystems. What
> > application am I talking about here, for example?
> >
> > http://oss.sgi.com/pipermail/xfs/2013-June/027159.html
> >
> > Basically, Swift is trying to emulate Direct IO because Python
> > doesn't support Direct IO. Hence Swift is hacking around that problem

I think it is still possible, FWIW.  One could use Python ctypes (as in
Mark's test program) to get a page-aligned buffer via posix_memalign(3),
and some quick googling suggests O_DIRECT can be passed to open(2) via
os.open() with os.O_DIRECT.
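
Something along these lines seems plausible (untested; it assumes Python 3,
a 4KiB alignment requirement, and a throwaway path):

import os
import mmap

BLOCK = 4096  # assumed alignment needed for O_DIRECT here

# mmap hands back page-aligned memory, which O_DIRECT requires for the
# user buffer; a plain bytes object gives no alignment guarantee.
buf = mmap.mmap(-1, BLOCK)
buf.write(b'x' * BLOCK)

fd = os.open('/tmp/directio-test',
             os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
os.write(fd, buf)   # buffer address, length and file offset all aligned
os.close(fd)

Whether that keeps the filesystem happy on every setup is another question,
but it suggests the pieces are all there.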

cheers.

--
Nathan

Attachment: XFSLog
Description: Binary data

Attachment: xfs-log-vs-fallocate.png
Description: PNG image

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
