Re: [NewStore] About PGLog Workload With RocksDB

> On Tue, Sep 8, 2015 at 9:58 PM, Haomai Wang <haomaiwang@xxxxxxxxx> wrote:
> > Hi Sage,
> >
> > I noticed your post on the rocksdb page about making rocksdb aware
> > of short-lived key/value pairs.
> >
> > I think it would be great if a key/value db implementation could
> > support different key types with different storage behaviors. But it
> > looks difficult to me to add this feature to an existing db.

WiredTiger comes to mind.. it supports a few different backing 
strategies (both btree and lsm, iirc).  Also, rocksdb has column families.  
That doesn't help with the write log piece (the log is shared, as I 
understand it), but it does mean that we can segregate the log events or 
other bits of the namespace off into regions that have different 
compaction policies (e.g., rarely/never compact, so that we avoid 
amplification but suffer on reads during startup).

> > So, combining my experience with FileStore, I think letting
> > NewStore/FileStore be aware of these short-lived keys (or just PGLog
> > keys) could be easy and effective. The PGLog is owned by the PG and
> > maintains the history of ops. It is like journal data but only a few
> > hundred bytes per entry; actually a few hundred MB at most is enough
> > to store the pglogs for all pgs. For FileStore, the FileJournal
> > already holds a copy of the PGLog; previously I kept thinking about
> > eliminating the extra copy in leveldb to reduce leveldb calls, which
> > consume lots of cpu cycles, but it would take a lot of work to make
> > FileJournal aware of pglog entries. NewStore doesn't use FileJournal,
> > so it should be easier to implement my idea(?).
> >
> > Actually I think that in the current objectstore implementations,
> > the omap key/value pairs attached to a rados write op hurt
> > performance hugely. Lots of cpu cycles are consumed, and the
> > short-lived keys (pglog) contribute much of that, so it should be an
> > obvious optimization point. On the other hand, pglog is dull and
> > doesn't need a rich key/value api. Maybe a lightweight filejournal
> > to hold pglog keys is also worth trying.
> >
> > In short, I think implementing a pglog-optimized structure to store
> > these keys would be cleaner and easier than improving rocksdb.

I've given some thought to adding a FileJournal to newstore to do the wal 
events (which are the main thing we're putting in rocksdb that is *always* 
short-lived and can be reasonably big--and thus costs a lot when it gets 
flushed to L0).  But it just makes things a lot more complex.  We would 
have two synchronization (fsync) targets, or we would want to be smart 
about putting entire transactions in one journal and not the other.  
Still thinking about it, but it makes me a bit sad--it really feels like 
this is a common, simple workload that the KeyValueDB implementation 
should be able to handle.  What I'd really like is a hint on the key, or 
a predetermined key range that we use, so that the backend knows our 
lifecycle expectations and can optimize accordingly.
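To make the hint idea concrete, here is a minimal sketch (my own invented names -- HintedStore, the "pglog/" prefix -- not existing Ceph or rocksdb code) of routing by a reserved key range: keys under a known short-lived prefix go to a cheap append/trim region that never sees compaction, while everything else takes the normal LSM path:

```cpp
#include <cassert>
#include <deque>
#include <map>
#include <string>
#include <utility>

// Hypothetical sketch: the backend learns lifecycle expectations from a
// reserved key prefix.  Short-lived keys are appended to a log region
// whose only "compaction" is trimming from the front.
class HintedStore {
public:
  static constexpr const char* kShortLivedPrefix = "pglog/";

  void put(const std::string& key, const std::string& val) {
    if (is_short_lived(key))
      log_region_.emplace_back(key, val);   // pure append, no sort/merge
    else
      main_store_[key] = val;               // stands in for the LSM
  }

  bool get(const std::string& key, std::string* out) const {
    if (is_short_lived(key)) {
      // Scan backwards so the newest entry wins; reads here are rare
      // (e.g. only during startup/recovery).
      for (auto it = log_region_.rbegin(); it != log_region_.rend(); ++it)
        if (it->first == key) { *out = it->second; return true; }
      return false;
    }
    auto it = main_store_.find(key);
    if (it == main_store_.end()) return false;
    *out = it->second;
    return true;
  }

  // The entire "compaction" for short-lived keys: drop the oldest
  // entries past the caller's retention point.
  void trim_log(size_t keep_last) {
    while (log_region_.size() > keep_last)
      log_region_.pop_front();
  }

  size_t log_size() const { return log_region_.size(); }

private:
  static bool is_short_lived(const std::string& key) {
    return key.rfind(kShortLivedPrefix, 0) == 0;
  }

  std::deque<std::pair<std::string, std::string>> log_region_;
  std::map<std::string, std::string> main_store_;
};
```

The point of the sketch is only that a prefix (or a predetermined key range) is enough signal for the backend to skip the expensive machinery for these keys entirely.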

I'm hoping I can sell the rocksdb folks on a log rotation and flush 
strategy that prevents these keys from ever making it into L0... that, 
combined with the overwrite change, will give us both low latency and no 
amplification for these writes (and any other keys that get rewritten, 
like hot object metadata).
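For illustration, here is a toy model (not rocksdb code -- a single-level simulation with invented names) of the flush behavior being hoped for: a key written and then deleted or trimmed while still in the memtable contributes zero bytes to L0 when the flush happens:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

// Toy single-level model: deletes of keys that never left the memtable
// simply cancel the write.  Real rocksdb must keep tombstones when the
// key may exist in lower levels; this sketch ignores that.
class ToyMemtable {
public:
  void put(const std::string& k, const std::string& v) { mem_[k] = v; }
  void del(const std::string& k) { mem_[k] = std::nullopt; }

  // Flush only live keys to "L0" and report the bytes written -- dead
  // short-lived keys cost nothing.
  size_t flush(std::map<std::string, std::string>* l0) {
    size_t bytes = 0;
    for (auto& [k, v] : mem_) {
      if (v) {
        (*l0)[k] = *v;
        bytes += k.size() + v->size();
      }
    }
    mem_.clear();
    return bytes;
  }

private:
  std::map<std::string, std::optional<std::string>> mem_;
};
```

With pglog entries trimmed inside one memtable lifetime, a flush would write only the long-lived metadata, which is exactly the "no amplification" outcome described above.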

On Tue, 8 Sep 2015, Haomai Wang wrote:
> Hit "Send" by accident for the previous mail. :-(
> 
> Some points about pglog:
> 1. short-lived but high-frequency
> 2. small, with total volume proportional to the number of pgs
> 3. a typical sequential read/write pattern
> 4. doesn't need a rich structure like an LSM or B-tree behind its api;
> clearly different from user-side/other omap keys
> 5. a simple loopback impl would be efficient and simple
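Point 5 might look something like the following sketch (an in-memory stand-in with invented names; a real impl would back the slots with a preallocated file, the way FileJournal does):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hedged sketch of a "loopback" (ring) pglog store: a fixed number of
// slots, appended in order; once full, the oldest entry is overwritten.
// Since pglog history is bounded and trimmed anyway, losing the oldest
// entries is the intended behavior.
class PgLogRing {
public:
  explicit PgLogRing(size_t max_entries) : slots_(max_entries) {}

  void append(const std::string& event) {
    slots_[next_ % slots_.size()] = event;
    ++next_;
  }

  // Number of entries currently retained.
  size_t size() const { return std::min(next_, slots_.size()); }

  // Read back the last n entries, oldest first (startup replay order).
  std::vector<std::string> tail(size_t n) const {
    n = std::min(n, size());
    std::vector<std::string> out;
    for (size_t i = next_ - n; i < next_; ++i)
      out.push_back(slots_[i % slots_.size()]);
    return out;
  }

private:
  std::vector<std::string> slots_;
  size_t next_ = 0;
};
```

No sorting, no merging, no compaction -- which is the contrast with the LSM path being drawn in the list above.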

It's simpler.. though not quite as simple as it could be.  The pg log 
lengths may vary widely (10000 events could span any amount of time, 
depending on how active the pg is).  And we want to mix all the pg log 
events into a single append stream for write efficiency.  So we still need 
some complex tracking and eventual compaction ... which is part of 
what the LSM is doing for us.
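A sketch of that tracking (names and the 50% garbage threshold are invented, not from Ceph): all PGs append into one shared stream, each PG records its own trim point, and a compaction pass rewrites the stream once enough of it is dead -- which is exactly the kind of bookkeeping the LSM otherwise provides for free:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct LogEntry {
  uint32_t pg;
  uint64_t version;   // per-pg, monotonically increasing
  std::string data;
};

class SharedPgLog {
public:
  void append(uint32_t pg, uint64_t version, std::string data) {
    stream_.push_back({pg, version, std::move(data)});
  }

  // A PG trims its history up to `version`; those entries remain in
  // the stream as garbage until compaction rewrites it.
  void trim(uint32_t pg, uint64_t version) { trimmed_[pg] = version; }

  size_t live() const {
    size_t n = 0;
    for (const auto& e : stream_)
      if (is_live(e)) ++n;
    return n;
  }

  size_t total() const { return stream_.size(); }

  // Rewrite the stream once at least half of it is garbage.
  bool maybe_compact() {
    if (stream_.empty() || live() * 2 > stream_.size()) return false;
    std::vector<LogEntry> keep;
    for (auto& e : stream_)
      if (is_live(e)) keep.push_back(std::move(e));
    stream_ = std::move(keep);
    return true;
  }

private:
  bool is_live(const LogEntry& e) const {
    auto it = trimmed_.find(e.pg);
    return it == trimmed_.end() || e.version > it->second;
  }

  std::vector<LogEntry> stream_;
  std::map<uint32_t, uint64_t> trimmed_;
};
```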

> > PS(off topic): a keyvaluedb benchmark http://sphia.org/benchmarks.html

This looks pretty interesting!  Anyone interested in giving it a spin?  It 
should be pretty easy to wire it into the KeyValueDB interface.

sage
