Oh yeah, protocol buffers are probably useful in strongly typed languages
with lots of types: they maintain that type information, so the serialized
content can be parsed based on it. But when we are transferring JSON, we
send the type information in the content itself ({}, [], "", <int>), so we
don't need to know anything else besides the content; we just need to
decide how we are going to transform the content into language-native
structures. This is just an additional thing I've realized, and I am
guessing a little bit here. It might not be relevant. Forgive me if it is
not.
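For example, in Python (the payload here is made up, just to show the point):

```python
import json

# JSON carries its own type information in the content ({}, [], "", <int>),
# so the parser can build language-native structures with no external schema.
payload = '{"name": "copr-build", "ids": [1, 2, 3], "finished": true}'
msg = json.loads(payload)

print(type(msg).__name__)            # dict
print(type(msg["ids"]).__name__)     # list
print(type(msg["ids"][0]).__name__)  # int
```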
On Wed, Aug 15, 2018 at 2:53 PM Michal Novotny <clime@xxxxxxxxxx> wrote:
> Anyway, to summarize, I really really want this to be super easy to use
> and just work. I hope we can improve it further and I'd love to hear
> your thoughts. Do you think my problem statements and design goals are
> reasonable? Given those, do you still feel like sending the schema along
> is worthwhile?

I actually no longer think it is worthwhile.

> As a consumer, I can validate the JSON in a message matches the JSON schema
> in the same message, but what does that get me? It doesn't seem any
> different (on the consumer side) than just parsing the JSON outright and
> trying to access whatever deserialized object I get.

I completely agree with this.

Let's go through the problems you mentioned:

1. Make catching accidental schema changes as a publisher easy.

We can solve this by registering the scheme with the publisher before any
content gets published. Based on the scheme, the publisher instance can
check that the content about to be sent conforms to it, which could catch
some bugs before the content is actually sent. If we require this on the
publisher side, there is no reason to send the schema alongside the
content, because the check has already been done and the consumer knows
the message is alright when it is received. What should be sent, however,
is a scheme ID, e.g. just a natural number. The scheme ID can then be used
to version the scheme, which would be available somewhere publicly, e.g.
in the service docs, the same way GitHub/GitLab/etc. publish the
structures of their webhook messages. It would basically be part of the
public API of a service.

2. Make catching mis-behaving publishers on the consuming side easy.

With the check against the scheme on the publisher side, this shouldn't be
necessary. If someone somehow bypasses the publisher check, at worst the
message won't be parsable, depending on how the message is parsed. If
someone really wants to make sure the message is what it is supposed to
be, they can integrate the schema published on the service site into their
parsing logic, but I don't think that's necessary (I personally wouldn't
do it in my code).

3. Make changing the schema a painless process for publishers and
consumers.

I think the only way to do this is to send both content types
simultaneously for some time, each message being marked with its scheme
ID. It would be good if the consumer always specified which scheme ID it
wants to consume. If a higher scheme ID is available in the message, a
warning could be printed, maybe even to syslog, so that consumers get the
information. At the same time, the change should be communicated on the
service site or by other available means. I don't think it is possible to
make it any better than this.

I fail to see the point of packaging the schemas. If the message content
is JSON, then after receiving the message I would like to be able to just
call json.loads(msg) and work with the resulting structure as I am used
to. Actually, what I would do in Python is make it a munch and then work
with it. Needing to install some additional package and instantiate some
high-level objects just seems clumsy to me in comparison. In other
programming languages, this procedure would be pretty much the same, I
believe, as they all probably provide some JSON implementation.

You mentioned:

> In the current proposal, consumers don't interact with the JSON at all,
> but with a higher-level Python API that gives publishers flexibility
> when altering their on-the-wire format.

Yes, but with the current proposal, if I change the on-the-wire API, I
need to make a new version of the schema, package it, somehow get it to
consumers, and make them use the correct version that parses the new
on-the-wire format and translates it correctly to what the consumers are
used to consuming. That seems like something very difficult to get done.

I also don't quite see the point. I wouldn't alter the on-the-wire format
if it is not actually what users work with and if I needed to go through
all the steps described above. If I need to alter the on-the-wire format
because the application logic has changed, then I would like to make the
changes in the high-level API as well, so again there is no gain there,
except more work packaging new schemas.

My main point here is that trying to package the schemas to provide some
high-level objects seems redundant. I think lots of people would welcome
working with something really simple that is already provided in the
language's standard library. For Python, if I had to install and import
just a single messaging library, tell it what hub, topic, and scheme ID I
want to listen to, and then consume the incoming messages immediately as
munches, I would be super happy.

Actually, it might be the case that the scheme ID is redundant as well and
can just be made part of the topic somehow, in which case the producer
would just produce the content twice on a scheme change, at least for some
time. A "deprecated by <topic>" flag on an incoming message would be nice
then. Of course, the producer would need to register the two schemas and
mark one of them as deprecated. The framework would then send the two
messages simultaneously for him. This might be an even easier solution to
the problem. The exact publisher (producer) interface would need to be
thought through.

> The big problem is that right now the majority of messages are not
> formatted in a way that makes sense and really need to be changed to be
> simple, flat structures that contain the information services need and
> nothing they don't. I'd like to get those fixed in a way that doesn't
> require massive coordinated changes in apps.

In Copr, for example, we are taking this as an opportunity to change our
format. If the messaging framework supports format deprecation, we might
go that way as well to avoid a sudden change. But we don't currently have
many (or maybe any) consumers, so I am not sure it is necessary for us.

I am not familiar with protocol buffers, but that tool seems useful mainly
if you want to send the content in a compact binary form to save as much
space as possible. If we send content that can already be interpreted as
JSON, then building higher-level classes and objects on top of it seems
unnecessary.

I think we could really just take the already existing generic framework
you were talking about (RabbitMQ?) and make sure that we can check the
content against message schemas on the producer side (which is great for
catching little bugs), and that we know how a message format can get
deprecated (e.g. the messaging framework adding a "deprecated_by: <topic>"
field into each message, which should somehow log warnings on the consumer
side). The framework could also automatically transform the messages into
language-native structures: in Python, the munches would probably be the
most sexy ones.

The whole "let's package schemas" thing seems like something we would
typically do (because we are packagers), but not like something that would
solve the actual problems you have mentioned. Rather, it makes them more
difficult to deal with, if I am correct.

I think what you are doing is good, but I think most people will welcome
fewer dependencies and simpler language-native structures. So if we could
push the framework more in that direction, that would be great.

clime

On Tue, Aug 14, 2018 at 10:55 AM Jeremy Cline <jeremy@xxxxxxxxxx> wrote:

On 08/13/2018 10:20 PM, Michal Novotny wrote:
> So I got to know on the flock that fedmsg is going to be replaced?
>
> Anyway, it seems that there is an idea to create schemas for the messages
> and distribute them in packages? And those python packages need to be
> present on producer as well as consumer?
>> JSON schemas
>
>> Message bodies are JSON objects that adhere to a schema. Message schemas
>> live in their own Python package, so they can be installed on the producer
>> and on the consumer.
>
> Could we instead just send the message schemas together with the message
> content always?
I considered this early on, but it seemed to me it didn't solve all the
problems I wanted solved. Those problems are:
1. Make catching accidental schema changes as a publisher easy.
2. Make catching mis-behaving publishers on the consuming side easy.
3. Make changing the schema a painless process for publishers and
consumers.
Doing this would solve #1, but #2 and #3 are still a problem. As a
consumer, I can validate the JSON in a message matches the JSON schema
in the same message, but what does that get me? It doesn't seem any
different (on the consumer side) than just parsing the JSON outright and
trying to access whatever deserialized object I get.
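That equivalence can be sketched in a few lines (a stdlib-only stand-in for
a real json-schema validator; the envelope and field names are invented):

```python
import json

# Hypothetical envelope in which the schema travels with the message.
raw = json.dumps({
    "schema": {"required": ["build_id"]},   # stand-in for a JSON schema
    "body": {"build_id": 42},
})

envelope = json.loads(raw)

# "Validating" the body against the bundled schema...
missing = [k for k in envelope["schema"]["required"] if k not in envelope["body"]]
assert not missing

# ...leaves the consumer exactly where plain parsing would have: holding a
# dict and accessing whatever keys it hopes are there.
build_id = envelope["body"]["build_id"]
print(build_id)  # 42
```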
In the current proposal, consumers don't interact with the JSON at all,
but with a higher-level Python API that gives publishers flexibility
when altering their on-the-wire format.
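A minimal sketch of that kind of higher-level API (the class and field
names here are invented, not the actual proposal): consumers read a stable
property while the publisher is free to rename the on-the-wire field.

```python
class BuildMessage:
    """Hypothetical wrapper that a schema package might provide."""

    def __init__(self, body: dict):
        self._body = body

    @property
    def build_id(self) -> int:
        # The accessor absorbs a wire-format rename: old messages used
        # "id", newer ones use "build_id"; consumers never notice.
        if "build_id" in self._body:
            return self._body["build_id"]
        return self._body["id"]


old_wire = BuildMessage({"id": 7})
new_wire = BuildMessage({"build_id": 7})
assert old_wire.build_id == new_wire.build_id == 7
```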
>
> I would like to be able to parse any message I receive without some
> additional packages installed. If I am about to start listening to a new
> message type, I don't want to spend time looking up what I should
> install to make it work. It should just work. Requiring to have some
> packages with schemas installed on consumer and having to maintain them by
> the producer does not seem that great idea. Mainly because one of the
> raised requirements for fedmsg was that it should be made a generic
> messaging framework easily usable outside of Fedora Infrastructure. We
> should make it easy for anyone outside to be able to listen and understand
> our messages so that they can react to them. Needing to have some python
> packages installed (how are they going to be distributed PyPI + fedora ?)
> seems to be just an unnecessary hassle. So can we send a schema with each
> message as documentation and validation of the message itself?
You can parse any message you receive without anything beyond a JSON
parsing library. You can do that now and you'll be able to do that after
the move. The problem with that is the JSON format might change. The
schema alone doesn't solve the problem of changing formats, it just
clearly documents what the message used to be and what it is now.
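As a sketch of that limitation (the field rename below is invented): even
with the schema in hand, a consumer written against the old format parses
the JSON fine but still has to be updated by hand.

```python
# Two versions of an invented schema: the field was renamed between them.
schema_v1 = {"properties": {"user": {"type": "string"}}}
schema_v2 = {"properties": {"agent": {"type": "string"}}}

message = {"agent": "bodhi"}  # published under the new format

# A consumer written against schema_v1 doesn't crash; the lookup just
# silently comes back empty. The bundled schema documented the change
# without preventing the breakage.
user = message.get("user")
print(user)  # None
```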
I'd love for this to just work and I'm up for any suggestions to make it
easier, but I do think we need to make sure any solution covers the
three problems stated above.
Finally, I do not want to create a generic messaging framework. I want
something small that makes a generic messaging framework very easy to
use for Fedora infrastructure specifically. I'm happy to help develop a
generic framework (like Pika) when necessary, but I don't want to be in
the business of authoring and maintaining a generic framework.
>
> a) it will make our life easier
>
> b) it will allow people outside of Fedora (that e.g. also don't tend to use
> python) to consume our messages easily
>
> c) what if I am doing a ruby app, not python app, do I need then provide
> ruby schema as well as python schema? What if a consumer is a ruby app? We
> should only need to write a consumer and producer parts in different
> languages. The message schemes should not be bound to a particular
> language, otherwise we are just adding us more work when somebody wants to
> use the messaging system in another language than python.
I agree, and that's why I chose json-schema. A different language just
needs to wrap the schema in accessor functions. An alternative (and
something I wanted to propose longer term after the ZMQ->AMQP
transition) is to use something like protocol buffers rather than JSON.
The advantage there is a simplified schema format, it generally pushes
into a pattern of backwards compatibility (thus reducing the need for
a higher level API), and it auto-generates an object wrapper in many
languages. You still need to potentially implement wrappers for access
if you change the schema in a way that isn't additive, though.
You may notice (and it's not an accident) that the recommended
implementation of a Message produces an API that is very similar to the
one produced by a Python object generated by protocol buffers. This
makes it possible to quietly change to protocol buffers without breaking
consumers, assuming they're not digging into the JSON. I'm not saying
we'll definitely do that, but it is still on the table and a transition
_should_ be easy.
The big problem is that right now the majority of messages are not
formatted in a way that makes sense and really need to be changed to be
simple, flat structures that contain the information services need and
nothing they don't. I'd like to get those fixed in a way that doesn't
require massive coordinated changes in apps.
Anyway, to summarize, I really really want this to be super easy to use
and just work. I hope we can improve it further and I'd love to hear
your thoughts. Do you think my problem statements and design goals are
reasonable? Given those, do you still feel like sending the schema along
is worthwhile?
--
Jeremy Cline
XMPP: jeremy@xxxxxxxxxx
IRC: jcline
_______________________________________________
infrastructure mailing list -- infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to infrastructure-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/infrastructure@xxxxxxxxxxxxxxxxxxxxxxx/message/N3IF64YUOZQVIGQVNCZKGIH3O5NJXG32/