According to Sage [1], Bluestore makes use of the pagecache. I don't
believe read-ahead is a filesystem tunable in Linux; it is set on the
block device itself, so read-ahead shouldn't be an issue. I'm not
familiar enough with Bluestore to comment on the rest.

[1] http://www.spinics.net/lists/ceph-devel/msg29398.html

--
Adam

On Thu, Jun 16, 2016 at 11:09 PM, Christian Balzer <chibi@xxxxxxx> wrote:
>
> Hello,
>
> I don't have anything running Jewel yet, so this is for devs and people
> who have played with bluestore or read the code.
>
> With filestore, Ceph benefits from ample RAM, both in terms of pagecache
> for reads of hot objects and SLAB to keep all the dir-entries and inodes
> in memory.
>
> With bluestore not being a FS, I'm wondering what can and will be done
> for it to maximize performance by using available RAM.
> I doubt there's a dynamic cache allocation à la pagecache present or on
> the road-map.
> But how about parameters to grow caches (are there any?) and give the DB
> more breathing space?
>
> I suppose this also cuts into the current inability to do read-ahead
> with bluestore by itself (not client driven).
>
> The underlying reason for this is, of course, to future-proof OSD
> storage servers: any journal SSDs will be beneficial for RocksDB and the
> WAL as well, but if available memory can't be utilized beyond what the
> OSDs need themselves, it makes little sense to put extra RAM into them.
>
> Christian
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
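
To make the point above concrete, that read-ahead is a block-device
setting rather than a filesystem one, here is a minimal sketch in Python
of inspecting (and optionally changing) the per-device value via sysfs.
The device name "sdb" is only a placeholder; substitute whatever device
backs the OSD in question.

    #!/usr/bin/env python
    # Minimal sketch: read-ahead is a block-layer setting exposed per
    # device under /sys/block/<dev>/queue/read_ahead_kb, independent of
    # any filesystem mount options.
    import sys

    def readahead_path(dev):
        return "/sys/block/%s/queue/read_ahead_kb" % dev

    def get_readahead_kb(dev):
        with open(readahead_path(dev)) as f:
            return int(f.read().strip())

    def set_readahead_kb(dev, kb):
        # Needs root. Equivalent to "blockdev --setra <kb*2> /dev/<dev>",
        # since blockdev counts 512-byte sectors while sysfs counts KiB.
        with open(readahead_path(dev), "w") as f:
            f.write(str(kb))

    if __name__ == "__main__":
        dev = sys.argv[1] if len(sys.argv) > 1 else "sdb"  # placeholder device
        print("%s read_ahead_kb = %d" % (dev, get_readahead_kb(dev)))

The same value can be queried or changed with blockdev --getra/--setra
on the device node, keeping the sectors-vs-KiB difference in mind.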