Re: OOM problem caused by fs

On Mon, Jun 07, 2010 at 04:34:16PM +0800, Seth Huang wrote:
> Hello everyone,
> 
> Our group is developing a new file system for Linux, and we have got
> stuck on an out-of-memory problem.
> 
> When creating large files in our fs, the system runs out of memory
> (the kernel starts dumping memory usage repeatedly and the OOM killer
> begins to kill processes) as soon as the amount of data written
> exceeds the amount of free memory, even though the kernel is flushing
> out dirty pages.
> 
> If I understand correctly, when available memory is low, writes should
> block in page cache allocation until some dirty pages have been
> cleaned. I've checked pdflush; it works fine on our system, so dirty
> pages are being flushed out and cleaned in time. However, the writes
> still crash the system, and I have no idea how this could happen.
> 
> Has anyone experienced the same thing? Any advice would be appreciated.
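
One thing worth checking first: the throttling you describe only happens
if something in your write path calls balance_dirty_pages_ratelimited().
Filesystems that go through the generic write helpers get that for free,
but if you copy data into the page cache yourself, nothing ever blocks
the writer, and dirty pages can outrun writeback until the OOM killer
fires.  A rough sketch of the shape such a per-page loop usually takes
(hypothetical my_fs_* naming, assuming a 2.6.3x-era kernel; bounds and
short-copy handling omitted):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
#include <linux/writeback.h>
#include <linux/uaccess.h>

/*
 * Hypothetical per-page copy loop for a filesystem that fills the page
 * cache itself instead of using generic_file_aio_write().  The point is
 * the balance_dirty_pages_ratelimited() call at the end: without it the
 * writer is never throttled against writeback.
 */
static int my_fs_fill_page(struct address_space *mapping, pgoff_t index,
			   const char __user *buf, unsigned int len)
{
	struct page *page;
	void *kaddr;
	int ret = 0;

	/* find or create the page; returned locked with a reference held */
	page = grab_cache_page(mapping, index);
	if (!page)
		return -ENOMEM;

	kaddr = kmap(page);
	if (copy_from_user(kaddr, buf, len))	/* len <= PAGE_CACHE_SIZE assumed */
		ret = -EFAULT;
	kunmap(page);

	if (!ret) {
		SetPageUptodate(page);
		set_page_dirty(page);		/* mark dirty for writeback/accounting */
	}

	unlock_page(page);
	page_cache_release(page);		/* drop grab_cache_page()'s reference */

	/* throttle the writer once too much memory is dirty */
	balance_dirty_pages_ratelimited(mapping);

	return ret;
}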

Do you have a pointer to your source?

Are you using set_page_dirty to dirty the pages?

Are you sure you don't have a refcount leak?
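
On the last two points: the usual pattern in ->write_end is to dirty the
page and drop exactly the reference ->write_begin took, roughly what
simple_write_end() does.  If that release is missing, or an extra
reference is taken somewhere else, the pages stay pinned and reclaim can
never free them even after writeback has cleaned them, which would look
very much like what you describe.  A minimal sketch with hypothetical
naming (short-copy handling omitted):

#include <linux/fs.h>
#include <linux/pagemap.h>

static int my_fs_write_end(struct file *file, struct address_space *mapping,
			   loff_t pos, unsigned len, unsigned copied,
			   struct page *page, void *fsdata)
{
	struct inode *inode = mapping->host;
	loff_t last_pos = pos + copied;

	if (!PageUptodate(page))
		SetPageUptodate(page);	/* assumes the whole range was copied */

	/* extend i_size if the write went past the current end of file */
	if (last_pos > inode->i_size)
		i_size_write(inode, last_pos);

	set_page_dirty(page);		/* mark for writeback */
	unlock_page(page);
	page_cache_release(page);	/* balance ->write_begin's reference */

	return copied;
}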


