[PATCH] UnbufferedFile improvements v2

On Monday 21 November 2005 02:15, Artur Skawina wrote:

> the freezing certainly applies to NFS -- it shows clearly if you have

Ok - I see.

> it's a problem even on 100mbit -- while the fileserver certainly can
> accept sustained 10M/s data for several seconds (at least), it's the
> client, i.e. the vdr-box, that does not behave well -- it sits almost
> completely idle for minutes (zero network traffic, no writeback at
> all), and then goes busy for a second or so.

But this very much sounds like an NFS problem - and much less like a VDR
problem ...

> [...] I had
> hoped the extra fadvise every 10M would fix that, but i wanted to get
> the recording and replay cases right first. (the issue when cutting
> is simply that we need to: a) start the writeback, and b) drop the
> cached data after it has hit the disk. The problem is that we don't
> really know when to do b...

That's exactly the problem here ... without special force, my kernel
seems to prefer using memory instead of disk ...

> For low write rates the heuristic seems 
> to work, for high rates it might fail. Yes, fdatasync obviously will
> work, but this is the sledgehammer approach :)

I know. I also don't like this approach. But at least it worked (here). 

> The fadvise(0,0) 
> solution was a first try at using a slightly smaller hammer. Keeping
> a dirty-list and flushing it after some time would be the next step
> if fadvise isn't enough.)

How do you know which pages are still dirty in the case of writes?
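To make the problem concrete, here is a minimal sketch of the two
approaches being discussed - the 10M interval matches the patch, but the
names and structure are my assumption, not the actual patch code.
fdatasync() is the sledgehammer: it blocks until the data has hit the
disk, after which POSIX_FADV_DONTNEED can reliably drop the now-clean
pages:

  /* sketch only: flush-and-drop in the write path, every 10M */
  #define _XOPEN_SOURCE 600
  #include <fcntl.h>
  #include <unistd.h>

  #define FLUSH_CHUNK (10 << 20)

  static off_t pending;   /* bytes written since the last flush */

  /* call after each write(); 'pos' is the file offset after the write */
  static void flush_and_drop(int fd, off_t pos, size_t count)
  {
      pending += count;
      if (pending < FLUSH_CHUNK)
          return;
      fdatasync(fd);                      /* sledgehammer: wait for disk */
      posix_fadvise(fd, pos - pending, pending,
                    POSIX_FADV_DONTNEED); /* pages are clean now, so
                                             this actually drops them */
      pending = 0;
  }

Leave out the fdatasync() and the DONTNEED merely starts asynchronous
writeback; pages that are still dirty at that moment stay cached - which
is exactly the "when to do b" problem, and why a dirty-list flushed after
some delay would be the next step.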

> How does the cache behave when _not_ cutting? Over here it looks ok,
> i've done several recordings while playing back others, and the cache
> was basically staying the same. (as this is not a dedicated vdr box
> it is however sometimes hard to be sure)

With the active read-ahead I even get leaks when only reading - the
non-blocking reads initiated by POSIX_FADV_WILLNEED seem to keep pages
in the buffer cache.

> in v1 i was using a relatively small readahead window -- maybe for a
> slow disk it was _too_ small. In v2 it's a little bigger, maybe that
> will help (i increased it to make sure the readahead worked for
> fast-forward, but so far i haven't been able to see much difference).
> But I don't usually replay anything while cutting, so this hasn't
> really been tested...

My initial intention in trying an active read-ahead was to avoid hangs
even when another disk needs to spin up. On my system I sometimes have
this problem and it is annoying. So a read-ahead of several megabytes
would be needed here - but even without such a huge read-ahead I get
these annoying leaks. For normal operation (replay) they could be
avoided by enlarging the region that gets cleared to at least the size
of the read-ahead (see the sketch below).
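Something like the following sketch is what I mean - the window sizes
are made-up numbers, not values from the patch. Since the patch sets
POSIX_FADV_RANDOM and thereby disables the kernel's own read-ahead,
every page pulled in by WILLNEED has to be dropped explicitly; clearing
everything that trails the read position by at least the read-ahead
size avoids the leak:

  /* sketch only: read-ahead plus trailing drop-behind during replay */
  #define _XOPEN_SOURCE 600
  #include <fcntl.h>

  #define READAHEAD  (4 << 20)  /* several MB, to ride out disk spin-up */
  #define DROPLAG    (8 << 20)  /* must be >= READAHEAD, or pages leak */

  /* call after each read(); 'pos' is the current read offset */
  static void readahead_and_drop(int fd, off_t pos)
  {
      /* start non-blocking reads of the window ahead of us */
      posix_fadvise(fd, pos, READAHEAD, POSIX_FADV_WILLNEED);
      /* drop everything well behind us - this also catches pages
         that only ever got touched by an earlier WILLNEED */
      if (pos > DROPLAG)
          posix_fadvise(fd, 0, pos - DROPLAG, POSIX_FADV_DONTNEED);
  }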
   
> (BTW, with the added readahead in the v2 patch, vdr seems to come
> close to saturating a 100M connection when cutting. Even when _both_
> the source and destination are on the same NFSv3 mounted disk, which
> kind of surprised me. LocalDisk->NFS rate and vice versa seems to be
> limited by the network. I didn't check localdisk->localdisk (lack of
> sufficient disk space). Didn't do any real benchmarking, these are
> estimates based on observing the free-disk-space decrease rate and
> network traffic)

Cool!

> The current vdr behavior isn't really acceptable -- at the very least
> the fsyncs have to be configurable -- even a few hundred megabytes
> needlessly dirtied by vdr is still much better than the bursts of
> traffic, disk and cpu usage. I personally don't mind the cache
> thrashing so much; it would be enough to keep vdr happily running
> in the background without disturbing other tasks.

Depends on the use case. You are absolutely right in the NFS case. In
the "dedicated standalone VDR" case this is different. Throwing away
the inode cache makes using big recording archives uncomfortable - it
takes up to 20 seconds to scan my local recordings directory. That's a
long time when you just want to select a recording ...

> > To be honest - I did not find the place where writes get flushed
> > in your patch. posix_fadvise() doesn't seem to influence flushing
> > at all.
>
> Hmm, what glibc/kernel?
> It works here w/ glibc-2.3.90 and linux-2.6.14.

SuSE 9.1:
GNU C Library stable release version 2.3.3 (20040405)
Kernel 2.6.14

> Here's "vmstat 1" output; vdr (patched 1.3.36) is currently doing a
> recording to local disk:
>
> procs -----------memory---------- ---swap-- -----io---- ...
> [ ... ]
>
> the 'bo' column shows the writeout caused by vdr. Also note the
> 'free' and 'cache' fields fluctuate a bit, but do not grow. Hmm, now i
> noticed the slowly growing 'buff' -- is this causing you problems?

I don't think so - this would not fill my RAM in the next weeks ;) I
usually have 300MB left on the box (yes - it has quite a lot of memory
for just a VDR ... )

> I didn't mind this here, as there's clearly plenty of free RAM
> around. Will have to investigate what happens under some memory
> pressure.

As I said - at least here there is no pressure.

> Are you saying you don't get any writeback activity w/ my patch?

Correct. It starts writing back when memory is filled. Not a single 
second earlier.

> With no posix_fadvise and no fdatasync calls in the write path i get
> almost no writeout, with multi-megabyte bursts every minute (triggered
> probably by ext3 journal commit (interval set to 60s) and/or memory
> pressure).

Using reiserfs here. I remember having configured it for lazy disk
operations ... maybe this is the source of the above results. The idea
was to collect system writes - to avoid spinning up the disks when not
absolutely necessary. But this obviously also collects VDR writes ...
anyway, I think this is a valid case too, at least for dedicated
"multimedia" stations ... A bit more control over VDR IO would be a
great thing to have.

> > It only applies to already written buffers. So the normal write
>
> /usr/src/linux/mm/fadvise.c should contain the implementation of the
> various fadvise modes in a linux 2.6 kernel. It certainly does
> trigger writeback here. Both in the local disk case, and on NFS,
> where it causes a similar traffic pattern.

Will have a look at the code.

> See vmstat output above. Are you sure you have a working
> posix_fadvise?

Quite sure - the current VDR version is performing perfectly well -
within its limits.
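One quick way to verify that is to check the return value -
posix_fadvise() reports failure by returning the error number directly
instead of setting errno, so a non-working implementation (e.g. ENOSYS
from a glibc/kernel mismatch) fails silently if the result is ignored.
A minimal standalone test; the default file name is just a placeholder:

  #define _XOPEN_SOURCE 600
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      int fd = open(argc > 1 ? argv[1] : "test.vdr", O_RDONLY);
      if (fd < 0) {
          perror("open");
          return 1;
      }
      /* returns 0 on success or an error number - NOT -1/errno */
      int err = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
      printf("posix_fadvise: %s\n", err ? strerror(err) : "ok");
      close(fd);
      return err != 0;
  }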

> If not, that would also explain the hang during 
> playback as no readahead was actually taking place... (to be honest,
> i don't think that you need any manual readahead at all in a
> normal-playback situation; especially as the kernel will by default
> do some. It's only when the disk is getting busier that the benefits
> of readahead show up. At least this is what i saw here)

Remember - you switched off the kernel read-ahead with POSIX_FADV_RANDOM
;)

Anyway - it seems the small read-ahead in your patch didn't have the
slightest chance against the multi-megabyte writeback triggered when
the buffer cache was at its limit.

> What happens when you start a replay and then end it? is the memory
> freed immediately?

I will have a look at it again.

Thanks a lot for working on the problem.
Regards
Ralf

-- 
Van Roy's Law: -------------------------------------------------------
       An unbreakable toy is useful for breaking other toys.


