RE: Delaylog information enquiry

Indeed, I turned the sync and wsync flags on and, as expected, write performance was terrible (about 1 MB/s), so I turned them back off and got my 100 MB/s write throughput back.
I just wanted to remove as much unnecessary write caching as possible between my VMs and my physical hard drives, since by my count there are up to 8 levels of write cache along the way.
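
For illustration, here is a rough sketch (in Python) of the kind of difference involved; the file path, the sizes and the use of O_DSYNC are my own assumptions for the example, they are not literally what the sync/wsync mount options do:

#!/usr/bin/env python3
# Crude comparison, not a real benchmark: buffered writes vs O_DSYNC
# writes, to show why forcing synchronous IO collapses throughput.
import os, time

PATH = "/tmp/writetest.bin"   # assumed scratch location
BLOCK = b"\0" * (1 << 20)     # 1 MiB per write
COUNT = 256                   # 256 MiB total

def run(flags, label):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | flags, 0o600)
    start = time.monotonic()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
    os.fsync(fd)              # make the buffered case pay its flush cost too
    elapsed = time.monotonic() - start
    os.close(fd)
    print("%s: %.1f MiB/s" % (label, COUNT / elapsed))

run(0, "buffered")            # the page cache absorbs the writes
run(os.O_DSYNC, "O_DSYNC")    # every write waits for stable storage
os.unlink(PATH)
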
I'm getting off the subject a bit, but here is the list as I understand it. This is only my own conclusion, so I may well be wrong (see the small audit sketch after the list).

- Guest page cache.
- Virtual disk drive write cache (off: KVM cache=directsync).
- Host page cache (off: KVM cache=directsync).
- GlusterFS cache (off).
- NAS page cache (?).
- XFS cache (filesystem).
- RAID controller write cache (off).
- Physical hard drive write cache (off).
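
For reference, a small audit sketch (Python again, run on the box that actually owns the disks). The /proc and /sys paths are the usual Linux ones, but the sysfs layout depends on the kernel version, so treat this as a starting point rather than a definitive check:

#!/usr/bin/env python3
# Report a few of the cache-related knobs behind the layers listed above.
# Paths are assumptions based on a stock Linux kernel; adjust as needed.
from pathlib import Path

def show(path):
    p = Path(path)
    value = p.read_text().strip() if p.exists() else "<not present>"
    print("%s: %s" % (path, value))

# Page cache writeback tunables (guest or host, depending on where it runs).
for name in ("dirty_ratio", "dirty_background_ratio", "dirty_expire_centisecs"):
    show("/proc/sys/vm/%s" % name)

# What the block layer believes about the device write cache below it
# ("write back" means a volatile cache is assumed to be present).
for queue in sorted(Path("/sys/block").glob("sd*/queue/write_cache")):
    show(str(queue))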

The main difficulty is that I have to gather information from several different sources (software vendors and hardware manufacturers) to get an overall picture of the cache mechanisms. I need to be sure our databases cannot be corrupted by a failure at any one of those layers.

If you have any suggestions on where to find information, or whom to ask, I would be very grateful.
At least I now have answers for the XFS part.
Thank you very much!

> Date: Wed, 30 Jul 2014 18:18:58 +1000
> From: david@xxxxxxxxxxxxx
> To: neutrino8@xxxxxxxxx
> CC: bfoster@xxxxxxxxxx; frank_1005@xxxxxxx; xfs@xxxxxxxxxxx
> Subject: Re: Delaylog information enquiry
>
> On Wed, Jul 30, 2014 at 07:42:32AM +0200, Grozdan wrote:
> > On Wed, Jul 30, 2014 at 1:41 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > > Note that this does not change file data behaviour. In this case you
> > > need to add the "sync" mount option, which forces all buffered IO to
> > > be synchronous and so will be *very slow*. But if you've already
> > > turned off the BBWC on the RAID controller then your storage is
> > > already terribly slow and so you probably won't care about making
> > > performance even worse...
> >
> > Dave, excuse my ignorant questions
> >
> > I know the Linux kernel keeps data in cache up to 30 seconds before a
> > kernel daemon flushes it to disk, unless
> > the configured dirty ratio (which is 40% of RAM, iirc) is reached
>
> 10% of RAM, actually.
>
> > before these 30 seconds so the flush is done before it
> >
> > What I did is lower these 30 seconds to 5 seconds so every 5 seconds
> > data is flushed to disk (I've set the dirty_expire_centisecs to 500).
> > So, are there any drawbacks in doing this?
>
> Depends on your workload. For a desktop, you probably won't notice
> anything different. For a machine that creates lots of temporary
> files and then removes them (e.g. build machines) then it could
> crater performance completely because it causes writeback before the
> files are removed...
>
> > I mean, I don't care *that*
> > much for performance but I do want my dirty data to be on
> > storage in a reasonable amount of time. I looked at the various sync
> > mount options but they all are synchronous so it is my
> > impression they'll be slower than giving the kernel 5 seconds to keep
> > data and then flush it.
> >
> > From XFS perspective, I'd like to know if this is not recommended or
> > if it is? I know that with setting the above to 500 centisecs
> > means that there will be more writes to disk and potentially may
> > result in wear & tear, thus shortening the lifetime of the
> > storage
> >
> > This is a regular desktop system with a single Seagate Constellation
> > SATA disk so no RAID, LVM, thin provision or anything else
> >
> > What do you think? :)
>
> I don't think it really matters either way. I don't change
> the writeback time on my workstations, build machines or test
> machines, but I actually *increase* it on my laptops to save power
> by not writing to disk as often. So if you want a little more
> safety, then reducing the writeback timeout shouldn't have any
> significant effect on performance or wear unless you are doing
> something unusual....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx
