RE: crimson discussion brain dump

Hi Kefu:
     Got it. Thanks very much!

Thanks!
Jianpeng

> -----Original Message-----
> From: kefu chai [mailto:tchaikov@xxxxxxxxx]
> Sent: Tuesday, November 13, 2018 9:17 PM
> To: Ma, Jianpeng <jianpeng.ma@xxxxxxxxx>; The Esoteric Order of the Squid
> Cybernetic <ceph-devel@xxxxxxxxxxxxxxx>
> Subject: Re: crimson discussion brain dump
> 
> + ceph-devel
> 
> On Wed, Nov 7, 2018 at 3:48 PM Ma, Jianpeng <jianpeng.ma@xxxxxxxxx>
> wrote:
> >
> > Regarding today's meeting, there are a few points I don't fully understand.
> >
> > 1: is there still only a single OSDMap cache?
> 
> in the C/S model, yes. there will be only a single instance of the osdmap cache.
> 
> > 2: other shards decode the OSDMap and send it to shard#0 to handle.
> > This is from your mail. But why not send the message directly to
> > shard#0, like other messages whose pg doesn't belong to the
> > connection's core?
> 
> because i think it would be simpler this way, so we can 1. decouple
> the message processing from the centralized cache and focus on the
> cache design, and 2. have a better test that exercises only the cache.
> 
> >
> > Thanks!
> > Jianpeng
> >
> >
> > > -----Original Message-----
> > > From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-
> > > owner@xxxxxxxxxxxxxxx] On Behalf Of kefu chai
> > > Sent: Tuesday, November 6, 2018 8:56 AM
> > > To: The Esoteric Order of the Squid Cybernetic
> > > <ceph-devel@xxxxxxxxxxxxxxx>; Liu, Chunmei <chunmei.liu@xxxxxxxxx>;
> > > Gregory Farnum <gfarnum@xxxxxxxxxx>; Neha Ojha <nojha@xxxxxxxxxx>
> > > Subject: crimson discussion brain dump
> > >
> > > hi,
> > >
> > > i am trying to note down the discussion on crimson we had this
> > > morning:
> > >
> > > 1. osdmap cache: there are two approaches:
> > >  * C/S model: shard#0 will act as the server. we could follow the
> > > model of the config proxy: let shard#0 be the mini server, and when
> > > a certain shard, shard#n for instance, gets a new osdmap, it will
> > > inform shard#0 by submitting the following message to shard#0 (a
> > > sketch of this flavor follows the list below):
> > >    future<foreign_ptr<osdmap_ref>> add(foreign_ptr<osdmap_ref>&& map);
> > >   and there are two cases here:
> > >   1) shard#0 already has a copy of the map of map->epoch. in this
> > > case, it will just return its own copy of the osdmap as a
> > > foreign_ptr. shard#n will throw the `map` away, and keep the
> > > returned foreign_ptr<> instead in its local map<epoch_t,
> > > foreign_ptr<osdmap_ref>>. but to return a proper *foreign_ptr<>*,
> > > we need to delegate this request to the one who actually *owns*
> > > the osdmap.
> > >   2) shard#0 does not have the map yet. it will add the new osdmap
> > > to its map<epoch_t, foreign_ptr<osdmap_ref>>, and return a
> > > foreign_ptr<osdmap_ref>. shard#n can tell whether it actually owns
> > > the returned map by checking get_owner_shard().
> > > * symmetric model: everyone is the client and everyone is the server.
> > > when shard#n gets an osdmap of epoch m which it does not possess
> > > yet, it will keep it in a map<epoch_t, lw_shared_ptr<>> after
> > > querying all shards for a map of epoch m and getting no non-null
> > > foreign_ptr<> in reply. when shard#n needs the osdmap of epoch m,
> > > it sends the following message to all its peer shards in parallel
> > > (see the second sketch below):
> > >   future<foreign_ptr<osdmap_ref>> get(epoch_t epoch);
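> > >
> > > to make the C/S flavor above concrete, here is a minimal sketch,
> > > assuming a hypothetical OSDMapCache pinned to shard#0 and a toy
> > > OSDMap; the names are illustrative, not the actual crimson code:
> > >
> > >   #include <cstdint>
> > >   #include <map>
> > >   #include <seastar/core/future.hh>
> > >   #include <seastar/core/sharded.hh>
> > >   #include <seastar/core/shared_ptr.hh>
> > >   #include <seastar/core/smp.hh>
> > >
> > >   using epoch_t = uint32_t;
> > >   struct OSDMap { epoch_t epoch; /* ... */ };
> > >   using osdmap_ref = seastar::lw_shared_ptr<OSDMap>;
> > >
> > >   class OSDMapCache {
> > >     // lives on shard#0 only; an entry may still be owned by the
> > >     // shard which first decoded that epoch
> > >     std::map<epoch_t, seastar::foreign_ptr<osdmap_ref>> maps;
> > >   public:
> > >     // runs on shard#0; `map` arrives owned by the submitting shard
> > >     seastar::future<seastar::foreign_ptr<osdmap_ref>>
> > >     add(seastar::foreign_ptr<osdmap_ref>&& map) {
> > >       const epoch_t e = map->epoch;
> > >       auto it = maps.find(e);
> > >       if (it == maps.end()) {
> > >         // case 2: no copy of this epoch yet, so keep the new one
> > >         it = maps.emplace(e, std::move(map)).first;
> > >       }
> > >       // case 1 (and the tail of case 2): hand back shard#0's
> > >       // entry. foreign_ptr::copy() delegates to the owner shard to
> > >       // bump the refcount, which is why add() returns a future
> > >       return it->second.copy();
> > >     }
> > >   };
> > >
> > >   // on shard#n: publish a freshly decoded map, then stash the
> > >   // returned foreign_ptr in the local map<epoch_t, foreign_ptr<>>.
> > >   // (a real service would likely be a seastar::sharded<> service
> > >   // rather than a bare reference)
> > >   seastar::future<seastar::foreign_ptr<osdmap_ref>>
> > >   publish(OSDMapCache& cache, osdmap_ref map) {
> > >     return seastar::smp::submit_to(0,
> > >       [&cache, fp = seastar::make_foreign(std::move(map))] () mutable {
> > >         return cache.add(std::move(fp));
> > >       });
> > >   }
> > >
> > > comparing get_owner_shard() on the returned foreign_ptr with the
> > > local shard id then tells shard#n whether it hit case 1 or case 2.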
> > >
> > > in both modes, we might want to avoid refcounting foreign_ptr<>
> > > locally, and only delete it when we trim that particular osdmap.
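> > >
> > > and a matching sketch of the symmetric flavor, reusing the types
> > > from the sketch above; local_maps/local_get() are hypothetical
> > > stand-ins for the per-shard cache:
> > >
> > >   #include <boost/range/irange.hpp>
> > >   #include <seastar/core/map_reduce.hh>
> > >
> > >   // one instance per reactor thread, i.e. per shard
> > >   static thread_local std::map<epoch_t, osdmap_ref> local_maps;
> > >
> > >   // returns a null foreign_ptr if this shard lacks the epoch
> > >   seastar::foreign_ptr<osdmap_ref> local_get(epoch_t epoch) {
> > >     auto it = local_maps.find(epoch);
> > >     return it == local_maps.end()
> > >       ? seastar::foreign_ptr<osdmap_ref>()
> > >       : seastar::make_foreign(it->second);
> > >   }
> > >
> > >   seastar::future<seastar::foreign_ptr<osdmap_ref>>
> > >   broadcast_get(epoch_t epoch) {
> > >     using result_t = seastar::foreign_ptr<osdmap_ref>;
> > >     return seastar::map_reduce(
> > >       boost::irange<unsigned>(0, seastar::smp::count),
> > >       [epoch] (unsigned shard) {
> > >         // ask every shard in parallel for its copy of `epoch`
> > >         return seastar::smp::submit_to(shard, [epoch] {
> > >           return local_get(epoch);
> > >         });
> > >       },
> > >       result_t{},
> > >       [] (result_t acc, result_t next) {
> > >         // keep the first non-null reply; an all-null result means
> > >         // nobody has it, so shard#n keeps its own copy
> > >         return acc ? std::move(acc) : std::move(next);
> > >       });
> > >   }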
> > >
> > > 2. regarding the interface between bluestore (objectstore) and the
> > > rest of the osd, we could have something like:
> > >     seastar::future<RetVal>
> > >     ObjectStore::submit_transaction(Transaction&& txn)
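> > >
> > > a minimal sketch of that shape, with RetVal narrowed to int and
> > > Transaction left opaque; placeholder names, not the real interface:
> > >
> > >   #include <seastar/core/future.hh>
> > >
> > >   class Transaction;
> > >
> > >   class ObjectStore {
> > >   public:
> > >     virtual ~ObjectStore() = default;
> > >     // the caller chains continuations on the returned future
> > >     // instead of registering completion callbacks as today
> > >     virtual seastar::future<int>
> > >     submit_transaction(Transaction&& txn) = 0;
> > >   };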
> > >
> > > 3. for the CephContext in the ObjectStore, we need a *copy* of the
> > > ConfigValues, registered as an observer and updated by the config
> > > proxy in the Seastar world, so we don't need to worry about reading
> > > a dirty option while it is being updated by a Seastar thread
> > > (sketched below).
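> > >
> > > a hypothetical sketch of that copy-on-update idea; ConfigValues and
> > > the observer here are illustrative stand-ins, not ceph's actual
> > > types:
> > >
> > >   #include <map>
> > >   #include <memory>
> > >   #include <string>
> > >
> > >   struct ConfigValues {
> > >     std::map<std::string, std::string> options;  // illustrative
> > >   };
> > >
> > >   class StoreConfigObserver {
> > >     // the snapshot is replaced wholesale, never mutated in place
> > >     std::shared_ptr<const ConfigValues> snapshot =
> > >       std::make_shared<const ConfigValues>();
> > >   public:
> > >     // called by the config proxy whenever an option changes
> > >     void handle_config_change(const ConfigValues& updated) {
> > >       std::atomic_store(&snapshot,
> > >           std::make_shared<const ConfigValues>(updated));
> > >     }
> > >     // readers, e.g. the objectstore, grab a consistent snapshot
> > >     std::shared_ptr<const ConfigValues> get() const {
> > >       return std::atomic_load(&snapshot);
> > >     }
> > >   };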
> > >
> > >
> > >
> > > --
> > > Regards
> > > Kefu Chai
> 
> 
> 
> --
> Regards
> Kefu Chai



