Re: Bluestore RAM usage/utilization

Hello Adam,

On Thu, 16 Jun 2016 23:40:26 -0500 Adam Tygart wrote:

> According to Sage[1] Bluestore makes use of the pagecache. I don't
> believe read-ahead is a filesystem tunable in Linux; it is set on the
> block device itself, so read-ahead shouldn't be an issue.
> 
Thanks for that link, that's very welcome news.
So all that RAM is not going to waste. The equivalent of dir-entries and
inodes lives in RocksDB, I guess, so letting that grow accordingly in RAM
would be a good thing, too.
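
Just to illustrate what I'm hoping for, something along these lines in
ceph.conf would do (the option names below are purely made up, I have no
idea whether bluestore actually exposes anything like them yet):

[osd]
# hypothetical knob: RAM the bluestore onode/buffer cache may use per OSD
bluestore cache size = 4294967296
# hypothetical knob: RAM handed to the RocksDB block cache for metadata
bluestore rocksdb cache size = 1073741824

Something dynamic like the pagecache would obviously be nicer, but even
static knobs like these would let extra RAM be put to use.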

As for read-ahead, take a peek at these:

https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg27674.html
http://www.spinics.net/lists/ceph-devel/msg30010.html

The "We are more dependent on client-side readahead with bluestore since
there is no underlying filesystem below the OSDs helping us out." bit is
what was stuck in my head.
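
On the block device side Adam is right of course; here is a quick sketch
of poking the per-device read-ahead via sysfs (the device name "sdb" and
the 4096 KiB value are just examples):

#!/usr/bin/env python
# Read-ahead is a property of the block device (exposed via sysfs), not
# of the filesystem, so it can be inspected and changed per device.
import sys

def get_readahead_kb(dev):
    # current read-ahead window in KiB
    with open("/sys/block/%s/queue/read_ahead_kb" % dev) as f:
        return int(f.read().strip())

def set_readahead_kb(dev, kb):
    # writing requires root
    with open("/sys/block/%s/queue/read_ahead_kb" % dev, "w") as f:
        f.write(str(kb))

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "sdb"
    print("%s read_ahead_kb = %d" % (dev, get_readahead_kb(dev)))
    # set_readahead_kb(dev, 4096)  # example: 4 MiB read-ahead

If I read that quote right, with bluestore there is no FS below the OSD
doing this kind of read-ahead for us, so that knob on the OSD node is of
little help and it's the client side that has to do it.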

Thanks again,

Christian
> I'm not familiar enough with Bluestore to comment on the rest.
> 
> [1] http://www.spinics.net/lists/ceph-devel/msg29398.html
> 
> --
> Adam
> 
> On Thu, Jun 16, 2016 at 11:09 PM, Christian Balzer <chibi@xxxxxxx> wrote:
> >
> > Hello,
> >
> > I don't have anything running Jewel yet, so this is for devs and people
> > who have played with bluestore or read the code.
> >
> > With filestore, Ceph benefits from ample RAM, both in terms of
> > pagecache for reads of hot objects and SLAB to keep all the
> > dir-entries and inodes in memory.
> >
> > With bluestore not being a FS, I'm wondering what can and will be done
> > for it to maximize performance by using available RAM.
> > I doubt there's dynamic cache allocation a la pagecache present or on
> > the road-map.
> > But how about parameters to grow caches (are there any?) and give the
> > DB more breathing space?
> >
> > I suppose this also cuts into the current inability to do read-ahead
> > with bluestore by itself (not client driven).
> >
> > The underlying reason for this is of course to future-proof OSD storage
> > servers: any journal SSDs will benefit RocksDB and the WAL as well, but
> > if available memory can't be utilized beyond what the OSDs need
> > themselves, it makes little sense to put extra RAM into them.
> >
> > Christian
> > --
> > Christian Balzer        Network/Systems Engineer
> > chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> > http://www.gol.com/
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


