Re: OOM's on the Ceph client machine

Hi Ted,

On Tue, 12 Oct 2010, Ted Ts'o wrote:
> On Tue, Oct 12, 2010 at 07:30:48PM -0700, Gregory Farnum wrote:
> > Does this mean you're using cfuse rather than the kernel client?
> > FUSE performance in general is fairly disappointing and our cfuse is
> > probably not as fast as the kernel client even so, though I don't
> > think it should be *that* unhappy in most environments.
> 
> No, I'm using the kernel client (from 2.6.34).  Specifically, I'm
> doing a "modprobe ceph; mount -t ceph 1.2.3.4:/ /mnt"
> 
> Sorry, I should have mentioned that.  I can use a more recent kernel
> (i.e., 2.6.36-rc7) if that's likely to help.

There have been a number of memory leak fixes since then, at least one of 
which may explain your problem (it stemmed from an uninitialized variable 
and rarely triggered in our environment, but may in yours).  Can you retry 
with the latest mainline?  The benchmark completes without problems in my 
test environment.

> > So you have 5 journals running on one spindle? This could be the cause
> > of your slightly low sequential write performance; in the current
> > default configuration writes have to go to the journal before going to
> > the main disk and with multiple OSDs on one journal spindle they could
> > be getting in each other's way.
> 
> Hmm, what do you recommend, then?  The problem is if the journal only
> needs to be a few gigabytes (I used a 5GB file), using an entire 1T or
> 2T disk just so each of the journals can have their own spindle is
> pretty wasteful.

If fsync on a single file in journal-less ext4 doesn't do any extra work, 
I would just put the (preallocated) journal file together with the data on 
each disk.  Usually that's bad news because of the journal flushing, but 
you shouldn't have that problem.  Alternatively, you could use a small 
separate partition on the same spindle.  
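As a rough sketch, the journal-on-the-data-disk layout would look something
like this in ceph.conf (the paths below are illustrative, not from this
thread; "osd journal size" is in MB, so 5120 gives the 5GB journal Ted
mentioned):

```
[osd]
        ; 5 GB journal, preallocated as a file on the data disk
        osd journal size = 5120

[osd.0]
        osd data = /data/osd0
        ; journal file sits next to the data on the same spindle
        osd journal = /data/osd0/journal
```

Preallocating the journal file up front (e.g. with dd or fallocate) avoids
fragmentation and allocation overhead during writes.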

sage
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

