Re: crimson-osd queues discussion

On Wed, Mar 13, 2019 at 12:16 AM kefu chai <tchaikov@xxxxxxxxx> wrote:
>
> On Wed, Mar 13, 2019 at 2:31 AM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
> >
> > On Fri, Mar 8, 2019 at 6:23 PM kefu chai <tchaikov@xxxxxxxxx> wrote:
> > > Yes, I am also in favor of this. See
> > > https://github.com/ceph/ceph/pull/26697/commits/81e906d82d9e04ebe5b8b230d424b300ebff2f93
> > > and https://github.com/ceph/ceph/pull/26697/commits/d64c7c8022cacfc787231cfa61d9ea0fdcc58013#diff-1449683df2509676ff6b4977eff7e74bR660
> > > for examples. Chaining the producer and consumer in the same place
> > > helps with readability and probably helps performance as well.
> >
> > These links have gone stale in a rebase; can you share the commit titles?
>
> Sure, they are:
> - "crimson/osd: wait osdmap before processing peering evt"
> - "crimson/osd/pg: wait until pg is active"
> in https://github.com/ceph/ceph/pull/26697

Thanks.

>
> > Browsing some of them, I see that "crimson/osd: wait osdmap before
> > processing peering evt" has added an explicit "waiting_peering"
> > application queue that I presume we have to maintain just like all the
> > others that exist in the classic code.
>
> To be specific, it is a map of shared_promise. Ideally, any
> request/event that expects an osdmap which is not yet available should
> wait on a future returned by the promise stored in this map. Once a
> new-enough map is consumed, all waiters on maps older than that map
> are awoken and continue with whatever they were doing. Currently,
> only peering messages use this facility; once there are more
> consumers, we should probably rename it to osdmap_promises.

Right.
Now, in defense of this approach: these do look simpler than the queues
we have right now. I'm just not sure whether that's because there are
fewer queues and interactions, or whether they'll keep that simplicity.
What's the plan going forward?

When I was thinking about this before: in the "classic" code, we have
to be careful in a lot of places to wake up queues explicitly, in the
right order. If we try to do it implicitly with futures, it would
still look a lot like that, but I think we might need (and be able) to
map more explicitly to the blocking operations ahead of us in line,
and then we wouldn't need to worry about ordering wake-ups properly.
I'm not sure whether this current approach takes care of that or not.
That is, in addition to waiting for specific epochs, we might need to
also track futures for things like "the last operation from this
client" or "to this PG" and block on them plus the other conditions of
our own. Does that make any sense?
-Greg
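The ordering Greg raises could be modeled along these lines: each new request captures a future for the previous operation from the same client and waits on it, so wake-up ordering falls out of the dependency chain rather than explicit queue handling. Again a simplified sketch with std::shared_future instead of seastar futures; OpOrder and all names are hypothetical.

```cpp
#include <future>
#include <map>
#include <utility>

// Sketch of per-client op ordering: track the completion future of the
// most recent op per client, and make each new op block on it.
class OpOrder {
  // client id -> completion future of that client's most recent op
  std::map<int, std::shared_future<void>> last_op;

public:
  // Begin a new op for `client`. Returns the future to wait on before
  // running (the op ahead of us in line) and the promise to fulfill
  // when this op completes.
  std::pair<std::shared_future<void>, std::promise<void>> start(int client) {
    std::promise<void> done;
    auto completion = done.get_future().share();

    std::shared_future<void> prev;
    auto it = last_op.find(client);
    if (it != last_op.end()) {
      prev = it->second;            // block on the previous op
    } else {
      std::promise<void> none;      // no op ahead of us: already ready
      none.set_value();
      prev = none.get_future().share();
    }
    last_op[client] = completion;
    return {std::move(prev), std::move(done)};
  }
};
```

An op would then wait on its `prev` future combined with the other gates it needs (for example the osdmap epoch gate above), and fulfill its own promise when it finishes, automatically unblocking the next op in line for that client. The same pattern would apply per-PG.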
